EKC2008 Proceedings of the EU-Korea Conference on Science and Technology (Springer Proceedings in Physics)


Springer Proceedings in Physics 124

106 Modern Trends in Geomechanics, Editors: W. Wu and H.S. Yu
107 Microscopy of Semiconducting Materials, Proceedings of the 14th Conference, April 11–14, 2005, Oxford, UK, Editors: A.G. Cullis and J.L. Hutchison
108 Hadron Collider Physics 2005, Proceedings of the 1st Hadron Collider Physics Symposium, Les Diablerets, Switzerland, July 4–9, 2005, Editors: M. Campanelli, A. Clark, and X. Wu
109 Progress in Turbulence II, Proceedings of the iTi Conference in Turbulence 2005, Editors: M. Oberlack, G. Khujadze, S. Guenther, T. Weller, M. Frewer, J. Peinke, S. Barth
110 Nonequilibrium Carrier Dynamics in Semiconductors, Proceedings of the 14th International Conference, July 25–29, 2005, Chicago, USA, Editors: M. Saraniti, U. Ravaioli
111 Vibration Problems ICOVP 2005, Editors: E. Inan, A. Kiris
112 Experimental Unsaturated Soil Mechanics, Editor: T. Schanz
113 Theoretical and Numerical Unsaturated Soil Mechanics, Editor: T. Schanz
114 Advances in Medical Engineering, Editor: T.M. Buzug
115 X-Ray Lasers 2006, Proceedings of the 10th International Conference, August 20–25, 2006, Berlin, Germany, Editors: P.V. Nickles, K.A. Janulewicz

116 Lasers in the Conservation of Artworks, LACONA VI Proceedings, Vienna, Austria, Sept. 21–25, 2005, Editors: J. Nimmrichter, W. Kautek, M. Schreiner
117 Advances in Turbulence XI, Proceedings of the 11th EUROMECH European Turbulence Conference, June 25–28, 2007, Porto, Portugal, Editors: J.M.L.M. Palma and A. Silva Lopes
118 The Standard Model and Beyond, Proceedings of the 2nd International Summer School in High Energy Physics, Muğla, 25–30 September 2006, Editors: T. Aliev, N.K. Pak, and M. Serin
119 Narrow Gap Semiconductors 2007, Proceedings of the 13th International Conference, 8–12 July 2007, Guildford, UK, Editor: B. Murdin
120 Microscopy of Semiconducting Materials 2007, Proceedings of the 15th Conference, 2–5 April 2007, Cambridge, UK, Editor: A.G. Cullis
121 Time Domain Methods in Electrodynamics, Editor: P. Russer
122 Advances in Nanoscale Magnetism, Proceedings of the International Conference on Nanoscale Magnetism ICNM-2007 (June 25–29, Istanbul, Turkey), Editors: B. Aktas and F. Mikailov
123 Computer Simulation Studies in Condensed-Matter Physics XIX, Proceedings of the 19th Workshop, Editors: D.P. Landau, S.P. Lewis and H.-B. Schüttler
124 EKC2008 Proceedings of the EU-Korea Conference on Science and Technology, Editor: S.-D. Yoo

Volumes 81–105 are listed at the end of the book.

S.-D. Yoo (Ed.)

EKC2008 Proceedings of the EU-Korea Conference on Science and Technology

Editorial Board: Ryu-Ryun Kim, Han Kyu Lee, Hannah K. Lee, Hyun Joon Lee, Jeong-Wook Seo


Seung-Deog Yoo, VeKNI, Berliner Allee 29, 22850 Norderstedt, Germany

Editorial Board
Ryu-Ryun Kim, University of Hamburg, Institute for Food Chemistry, Grindelallee 117, 20146 Hamburg, Germany
Han Kyu Lee, University of Hamburg, Center for Molecular Neurobiology Hamburg (ZMNH), Falkenried 94, 20251 Hamburg, Germany
Hannah K. Lee, University of Hamburg, Security in Distributed Systems (SVS), Vogt-Koelln-Str. 30, 22527 Hamburg, Germany
Hyun Joon Lee, University of Hamburg, Center for Molecular Neurobiology Hamburg (ZMNH), Institute for the Biosynthesis of Neural Structures, Falkenried 94, 20251 Hamburg, Germany
Jeong-Wook Seo, University of Hamburg, Division Wood Biology, Department of Wood Science, Leuschnerstrasse 91, 21031 Hamburg, Germany

ISSN 0930-8989
ISBN 978-3-540-85189-9 Springer Berlin Heidelberg New York
Library of Congress Control Number: 2008932954

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media. springer.com

© Springer-Verlag Berlin Heidelberg 2008

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Data prepared by SPS using a Springer LaTeX macro package
Cover: eStudio Calamar, Girona, Spain
Printed on acid-free paper

SPIN: 12043446


Preface

The EU-Korea Conference on Science and Technology, EKC2008, was held in Heidelberg, Germany, from 28 to 31 August 2008. Current research fields in science and technology were presented and discussed at the EKC2008, giving an insight into the interests and directions of researchers in EU countries and Korea. The Korean Scientists and Engineers Association in the FRG (VeKNI) organized the EKC2008 jointly with the Korean Scientists and Engineers Association in the UK (KSEAUK), the Korean Scientists and Engineers Association in France (ASCoF), and the Korean Scientists and Engineers Association in Austria (KOSEA). This conference is dedicated to the 35th anniversary of the foundation of VeKNI.

The EU has been steadily increasing both in the number of its members and in its economic volume. The different economies of the EU countries are becoming more unified and are achieving close cooperation in science and technology. For instance, the EU Research Framework Programme, the world's largest funding programme for research projects, promotes research projects throughout the entire EU community. In the future, the EU will play an increasingly leading role in the world.

In the last decades Korea has experienced a rapid development of its economy and of its science and technology. Korea's economy currently ranks about 12th in the world, and it is a leading country in communication technology, home entertainment and shipbuilding. Despite these achievements, many EU citizens still think of Korea as a minor industrial country. It will be beneficial for both Korea and the EU to get to know each other better, especially in the fields of science and technology. The EKC2008 emerged from this idea, and the success of the conference has clearly shown the interest of both sides in strengthening the relationship between EU and Korean scientists and engineers.

I would like to express my sincere thanks to the members of the international organizing committee, Jinil Kim, Yujin Choi and Man-Wook Han, for their excellent cooperation in arranging this conference. I would also like to thank the members of the editorial board for their contribution to the preparation of these proceedings. I am also grateful to all the sponsors, especially our main sponsors Hyundai, KOFST, LG and Samsung. Many thanks also to Springer-Verlag for publishing the proceedings of the EKC. Finally, I thank all of the participants of the conference.

Seung-Deog Yoo
VeKNI, EKC2008 Co-chair

Contents

Computational Fluid Dynamics (CFD)

A Numerical Study on Rotating Stall Inception in an Axial Compressor
Je Hyun Baek, Minsuk Choi . . . 3

A Numerical Study on Flow and Heat Transfer Analysis of Various Heat Exchangers
Myungsung Lee, Chan-Shik Won, Nahmkeon Hur . . . 19

Application of a Level-Set Method in Gas-Liquid Interfacial Flows
Sang Hyuk Lee, Gihun Son, Nahmkeon Hur . . . 33

Modelling the Aerodynamics of Coaxial Helicopters – from an Isolated Rotor to a Complete Aircraft
Hyo Won Kim, Richard E. Brown . . . 45

State-of-the-Art CFD Simulation for Ship Design
Bettar el Moctar . . . 61

Investigation of the Effect of Surface Roughness on the Pulsating Flow in Combustion Chambers with LES
Balazs Pritz, Franco Magagnato, Martin Gabi . . . 69

Numerical Study on Blood Flow Characteristics of the Stenosed Blood Vessel with Periodic Acceleration and Rotating Effect
Kyoung Chul Ro, Seong Hyuk Lee, Seong Wook Cho, Hong Sun Ryou . . . 77

Analysis and Test on the Flow Characteristics and Noise of Axial Flow Fans
Young-Woo Son, Ji-Hun Choi, Jangho Lee, Seong-Ryong Park, Minsung Kim, Jae Won Kim . . . 85

Implicit Algorithm for the Method of Fluid Particle Dynamics in Fluid-Solid Interaction
Yong Kweon Suh, Jong Hyun Jeong, Sangmo Kang . . . 95

Mechatronics and Mechanical Engineering

EKC 2008: Summary of German Intelligent Robots Research Landscape 2007
Doo-Bong Chang . . . 107

Characteristics of the Arrangement of the Cooling Water Piping System for ITER and Fusion Reactor Power Station
K.P. Chang, Ingo Kuehn, W. Curd, G. Dell'Orco, D. Gupta, L. Fan, Yong-Hwan Kim . . . 113

Development of Integrated Operation, Low-End Energy Building Engineering Technology in Korea
Soo Cho, Jin Sung Lee, Cheol Yong Jang, Sung Uk Joo, Jang Yeul Sohn . . . 123

RF MEMS Switch Using Silicon Cantilevers
Joo-Young Choi . . . 135

Robust Algebraic Approach for Radar Signal Processing: Noise Filtering, Time-Derivative Estimation and Perturbation Estimation
Sungwoo Choi, Brigitte d'Andréa-Novel, Jorge Villagra . . . 143

A New 3-Axis Force/Moment Sensor for an Ankle of a Humanoid Robot
In-Young Cho, Man-Wook Han . . . 153

DGPS for the Localisation of the Autonomous Mobile Robots
Man-Wook Han . . . 163

Verification of Shell Elements Performance by Inserting 3-D Model: In Finite Elements Analysis with ANSYS Program
Chang Jun, Jean-Marc Martinez, Barbara Calcagno . . . 171

Analysis of Textile Reinforced Concrete at the Micro-level
Bong-Gu Kang . . . 179

Product Service Systems as Advanced System Solutions for Sustainability
Myung-Joo Kang, Robert Wimmer . . . 191


Life Prediction of Automotive Vehicle's Door W/H System Using Finite Element Analysis
Byeong-Sam Kim, KwangSoo Lee, Kyoungwoo Park . . . 201

Peak Oil and Fusion Energy Development
Chang Shuk Kim . . . 209

Modelling of a Bubble Absorber in an Ammonia-Salt Absorption Refrigerator
Dong-Seon Kim . . . 215

Literature Review of Technologies and Energy Feedback Measures Impacting on the Reduction of Building Energy Consumption
Eun-Ju Lee, Min-Ho Pae, Dong-Ho Kim, Jae-Min Kim, Jong-Yeob Kim . . . 223

Mechanical Characteristics of the Hard-Polydimethylsiloxane for Smart Lithography
Ki-hwan Kim, Na-young Song, Byung-kwon Choo, Didier Pribat, Jin Jang, Kyu-chang Park . . . 229

ITER Plant Support Systems
Yong Hwan Kim, G. Vine . . . 239

Growth Mechanism of Nitrogen Incorporated Carbon Nanotubes with RAP Process
Chang-Seok Lee, Je-Hwang Ryu, Han-Eol Im, Sellaperumal Manivannan, Didier Pribat, Jin Jang, Kyu-Chang Park . . . 249

Micro Burr Formation in Aluminum by Electrical Discharge Machining
Dal Ho Lee . . . 259

Multi-objective Environmental/Economic Dispatch Using the Bees Algorithm with Weighted Sum
Ji Young Lee, Ahmed Haj Darwish . . . 267

Experimental Investigation on the Behaviour of CFRP Laminated Composites under Impact and Compression After Impact (CAI)
J. Lee, C. Soutis . . . 275

Development of and Research on Energy-saving Buildings in Korea
Hyo-Soon Park, Jae-Min Kim, Ji-Yeon Kim . . . 287

Non-Iterative MUSIC-Type Imaging Algorithm for Reconstructing Penetrable Thin Dielectric Inclusions
Won-Kwang Park, Habib Ammari, Dominique Lesselier . . . 297


Network Based Service Robot for Education
Kyung Chul Shin, Naveen Kuppuswamy, Hyun Chul Jung . . . 307

Design of Hot Stamping Tools and Blanking Strategies of Ultra High Strength Steels
Hyunwoo So, Hartmut Hoffmann . . . 315

Information and Communications Technology

Efficient and Secure Asset Tracking Across Multiple Domains
Jin Wook Byun, Jihoon Cho . . . 329

Wireless Broadcast with Network Coding: DRAGONCAST
Song Yean Cho, Cedric Adjih . . . 333

A New Framework for Characterizing and Categorizing Usability Problems
Dong-Han Ham . . . 345

State of the Art in Designers' Cognitive Activities and Computational Support: With Emphasis on the Information Categorization in the Early Stages of Design
Jieun Kim, Carole Bouchard, Jean-François Omhover, Améziane Aoussat . . . 355

A Negotiation Composition Model for Agent Based eMarketplaces
Habin Lee, John Shepherdson . . . 365

Discovering Technology Intelligence from Document Data in an Organisation
Sungjoo Lee, Letizia Mortara, Clive Kerr, Robert Phaal, David Probert . . . 371

High-Voltage IC Technology: Implemented in a Standard Submicron CMOS Process
J.M. Park . . . 383

Life and Natural Sciences

Electrical Impedance Spectroscopy for Intravascular Diagnosis of Atherosclerosis
Sungbo Cho . . . 395

Mathematical Modelling of Cervical Cancer Vaccination in the UK
Yoon Hong Choi, Mark Jit . . . 405


Particle Physics Experiment on the International Space Station
Chanhoon Chung . . . 413

Effects of Methylation Inhibition on Cell Proliferation and Metastasis of Human Breast Cancer Cells
Seok Heo, Sungyoul Hong . . . 421

Proteomic Study of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes
Sung Ung Kang, Karoline Fuchs, Werner Sieghart, Gert Lubec . . . 429

Climate Change: Business Challenge or Opportunity?
Chung-Hee Kim . . . 437

Comparison of Eco-Industrial Development between the UK and Korea
Dowon Kim, Jane C. Powell . . . 443

On Applications of Semiparametric Multiple Index Regression
Eun Jung Kim . . . 455

Towards Transverse Laser Cooling of an Indium Atomic Beam
Jae-Ihn Kim, Dieter Meschede . . . 463

Heat and Cold Stress Indices for People Exposed to Our Changing Climate
JuYoun Kwon, Ken Parsons . . . 467

The Thrill Effect in Medical Treatment: Thrill Effect as a Therapeutic Tool in Clinical Health Care (Esp. Music Therapy)
Eun-Jeong Lee . . . 477

Status of the Climate Change Policies in South Korea
Ilyoung Oh . . . 485

Trends in Microbial Fuel Cells for the Environmental Energy Refinery from Waste/Water
Sung Taek Oh . . . 495

Cell Based Biological Assay Using Microfluidics
Jung-Uk Shim, Luis Olguin, Florian Hollfelder, Chris Abell, Wilhelm Huck . . . 499

Finite Volume Method for a Determination of Mass Gap in the Two Dimensional O(n) σ-Model
Dong-Shin Shin . . . 503


Understanding the NO-Sensing Mechanism at Molecular Level
Byung-Kuk Yoo, Isabelle Lamarre, Jean-Louis Martin, Colin R. Andrew, Pierre Nioche, Michel Negrerie . . . 517

Environmentally Friendly Approach Electrospinning of Polyelectrolytes from Aqueous Polymer Solutions
Miran Yu, Metodi Bozukov, Wiebke Voigt, Helga Thomas, Martin Möller . . . 525

Author Index . . . 531

Computational Fluid Dynamics (CFD)

A Numerical Study on Rotating Stall Inception in an Axial Compressor

Je Hyun Baek¹ and Minsuk Choi²

¹ POSTECH, Department of Mechanical Engineering, Pohang, South Korea, [email protected]
² Imperial College, Department of Mechanical Engineering, London, UK, [email protected]

Abstract. A series of numerical studies was conducted to analyze the stall inception process and to find the mechanism of rotating stall in a subsonic axial compressor. The compressor used in this study showed different flow behaviors near the stall condition depending on the inlet boundary layer thickness. For the thick inlet boundary layer, the hub-corner-separation grew to become a full-span separation as the load was increased. The initial disturbance was initiated in these separations on the suction surfaces and was then transferred to the tip region. This disturbance grew into an attached stall cell, which adheres to a blade surface and rotates at the same speed as the rotor. Once the attached stall cell reached a critical size, it moved along the blade row and became the rotating stall. For the thin boundary layer, on the other hand, the hub-corner-separation shrank until it was indistinguishable from the rotor wake, and another large separation occurred near the casing. The inlet boundary layer thickness thus affects the size of the stall cells and the initial disturbance causing the rotating stall. The stall cells grew larger with increasing boundary layer thickness, causing a large performance drop during the stall development process. The influence of the number of flow passages on the rotating stall was also investigated by comparing results obtained using four and eight flow passages. The stall inception process was similar in both cases, while the number of stall cells differed because of the size of the computational domain. Based on these results, a minimum number of flow passages is suggested for simulating rotating stall in a subsonic axial compressor.

1 Introduction

The rotating stall in a compressor is a phenomenon in which separations in blade passages advance along the blade row in the circumferential direction. It is generally known to originate in the operating range between normal flow and surge. These moving separation regions, also referred to as stall cells, act as blockages in the blade passages, resulting in a reduced operating range. The rotating stall also changes the pressures on blade surfaces periodically and can break blades. Because this deterioration and damage degrade the reliability of the airplane as well as of the compressor itself, much attention has been paid to the characteristics of the rotating stall in order to establish effective methods for its active control.

Most studies on the rotating stall have been conducted by experiments focused on stall inception [1-6]. Based on these previous results, the rotating stall either follows pre-stall waves such as modal perturbations or originates from spike-type precursors, depending on compressor operating conditions. It is generally known that interaction between the tip leakage flow and other flow phenomena, such as the boundary layer on the end-walls and the passage shock, causes the rotating stall. In recent years, many researchers have been using numerical methods to investigate the cause of the rotating stall [7-14]. Hoying et al. [9] established a relationship between stall inception and the trajectory of the center of the tip leakage flow. Vahdati et al. [14] numerically investigated the effects of rotating stall and surge on the blade load. These numerical results allowed researchers to see the stall inception process in detail and gave an intuitive understanding of the rotating stall. According to these previous studies, it is now recognized that the tip leakage flow plays an important role in stall inception. However, little attention has been paid to the role of the hub-corner-separation in the rotating stall, although it is a common flow feature in an axial compressor operating near the stall point and exerts a large effect on internal flows and loss characteristics. Only Day [3] noted some relations between corner-separations and modal waves in his stall tests, although several researchers [15-17] investigated the structure of corner-separations and their impact on internal flows.

This work summarizes the results of three previous studies [18-20] conducted to find the cause and key factors of the rotating stall. Firstly, the stall inception process was analyzed in detail using numerical results; it was then evaluated how the inlet boundary layer thickness and the number of flow passages affect the rotating stall in numerical simulation.

2 Test Configuration

This work was conducted using the low-speed axial compressor tested by Wagner et al. [17]. Because this compressor not only has a rotor without stator and inlet guide vane but also rotates slowly about its axis at 510 rpm, the maximum pressure ratio between the inlet and the outlet is only about 1.01. The Reynolds number is about 244,000, based on the inlet velocity at the design condition and the blade chord length at mid-span. Unlike other axial compressors with a constant tip clearance, this compressor has a variable tip clearance, as shown in Fig. 1. Detailed geometry specifications are summarized in Table 1.

Fig. 1. Schematic diagram of the single rotor test rig for the thick inlet boundary layer (Wagner et al., 1983)

Wagner et al. [17] changed the inlet boundary layer thickness on the hub and casing in their experiment in order to investigate separations on blade surfaces and secondary flows downstream of the rotor. In Fig. 1, there are five measurement stations which have important meanings in their experiments. The boundary layer thickness was changed at STA.-1 with several screens of different wire diameters and spacing. The inlet and the exit flow conditions were measured at STA.1 and STA.2 respectively. To complete the static pressure rise curve, the upstream and downstream static pressures were measured on the hub and casing at STA.1 and STA.3. There is a small gap on the hub between moving and stationary parts at STA.4. The relative measurement positions are summarized in Table 2, with STA.0 as the reference point. Note that STA.2 is located at 30% axial chord downstream of the rotor for the thick boundary layer but at 10% axial chord downstream of the rotor for the thin boundary layer. However, because exit flow conditions were measured at four locations (10%, 30%, 50% and 110% axial chord downstream of the rotor) for the thin boundary layer, the exit flow conditions in this paper are specified at 30% axial chord downstream of the rotor regardless of the inlet boundary layer thickness.

Table 1. Geometric Specifications
No. of blades: 28
Casing radius: 0.7620 m
Hub radius: 0.6096 m
Chord length: 0.1524 m
Tip clearance: 0.0036 m, 0.0015 m, 0.0051 m
Blade profile: NACA 65
Aspect ratio: 1
Hub/Tip ratio: 0.8
Rotation speed: 510 rpm
Stagger angle: 35.5° (at mid-span)
Inlet flow angle: 59.45° (at mid-span)
Outlet flow angle: 11.50° (at mid-span)

Table 2. Measurement Positions (axial distance from STA.0)
STA.   Thin Boundary Layer   Thick Boundary Layer
-1     -                     -0.102 m
0      0.000 m               0.000 m
1      0.206 m               0.229 m
2      0.498 m               0.498 m
3      0.744 m               0.744 m
4      0.305 m               0.279 m
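As a quick consistency check, the rotor speed and the blade speeds implied by Table 1 can be recomputed directly. The short sketch below assumes that the reference velocity U_m appearing in the coefficients of Section 4 denotes the blade speed at mid-span, which is conventional but not stated explicitly in the paper.

```python
import math

r_hub, r_casing, rpm = 0.6096, 0.7620, 510.0   # geometry from Table 1

omega = rpm * 2.0 * math.pi / 60.0             # rotor angular speed: ~53.4 rad/s
r_mid = 0.5 * (r_hub + r_casing)               # mid-span radius: 0.6858 m

u_tip = omega * r_casing                       # blade tip speed: ~40.7 m/s
u_mid = omega * r_mid                          # mid-span blade speed: ~36.6 m/s
print(f"omega={omega:.1f} rad/s  U_tip={u_tip:.1f} m/s  U_m={u_mid:.1f} m/s")
```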

3 Numerical Methods

Simulations of the three-dimensional flow were conducted using the in-house flow solver TFlow, which has been improved to calculate the internal flow in turbomachinery since its development in the mid-1990s [21]. This flow solver has been validated through a series of calculations of a subsonic axial compressor, a transonic axial compressor and a subsonic axial turbine [22-24]. TFlow uses the compressible RANS equations to describe the viscous flow through a blade row. The governing equations were discretized in space by the finite volume method. The upwind TVD scheme based on Van Leer's flux vector splitting method was used to discretize the inviscid flux terms, and the MUSCL technique was used for interpolation of the flow variables. The second order central difference method was used to discretize the viscous flux terms. The equations were solved using the Euler implicit time marching scheme, with first order accuracy to obtain a steady solution and with second order accuracy to simulate the unsteady flow. The laminar viscosity was calculated by Sutherland's law, and the turbulent viscosity was obtained by the algebraic Baldwin-Lomax model because the flow field was assumed to be fully turbulent.

The computational domain was fixed in the region between STA.1 and STA.3, and a multi-block hexahedral mesh was generated using ICEM-CFD. To capture the motions of stall cells, four and eight blade passages were used for the unsteady simulation, as shown in Fig. 2. Each passage consists of 125 nodes in the stream-wise direction, 58 nodes in the pitch-wise direction, and 73 nodes in the span-wise direction. To capture the tip leakage flow correctly, the region of the tip clearance was filled with an embedded H-type grid, which has 52 nodes from the leading edge to the trailing edge of the blade, 10 nodes across the blade thickness and 16 nodes from the blade top to the casing. After the steady solutions were obtained using a mesh with about 0.5 million nodes in one passage, the simulation of the rotating stall was conducted using the same mesh with four and eight passages. Therefore, the whole computational domain has a total of about 2.1 million nodes with four passages and 4.2 million nodes with eight passages. In these computations the distance of the first grid point from the wall was set to be equal to or less than 5 in wall units.

Fig. 2. Computational domain and grid for stall simulation: (a) grid with four passages; (b) grid with eight passages

In the unsteady simulation of the rotating stall, the computational domain must contain several blade passages of the blade row to show the movement of stall cells. Since an identical grid was used in each passage, each blade passage was assigned to one processor, and the flow variables in contact with other passages were transferred by MPI (Message Passing Interface) libraries.

In the internal flow simulation of turbomachinery, there are four different types of boundaries: inlet, outlet, wall and periodic conditions. The total pressure, total temperature and flow angles were fixed at the inlet condition, obtained using the temperature and pressure of the standard atmosphere and the velocity profiles with different boundary layers on the hub and casing measured by Wagner et al. [17]; the upstream-running Riemann invariant was extrapolated from the interior domain. For the outlet condition, the static pressure on the hub was specified and the local static pressures along the span were given by the SRE (Simplified Radial Equilibrium) equation. Other flow variables such as density and velocities were extrapolated from the interior. On the wall, the no-slip condition was used to calculate the velocity components. The surface pressure and density were obtained using the normal momentum equation and the adiabatic wall condition respectively. Since only a part of the full passage count was calculated, it was necessary to implement a periodic condition between the first and the last passages. The periodic condition was implemented using ghost cells next to the boundary cells, which allows the flow variables to be continuous across the boundary.

The time-accurate unsteady simulation was conducted with the back pressure at the outlet condition, p3/p1, set to 1.008 for the thick inlet boundary layer and 1.007 for the thin inlet boundary layer, which are slightly larger than the outlet static pressures at φ=0.65 in each case. However, no artificial asymmetric disturbances were imposed at the inlet condition.
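The passage-per-processor exchange described above can be pictured with a short sketch. The following is a minimal mpi4py illustration of a pitchwise ghost-cell exchange with periodic wrap-around, not the TFlow implementation; the array shape and the function name are our own, chosen to match the grid dimensions quoted in the text.

```python
import numpy as np
from mpi4py import MPI  # assumption: mpi4py stands in for the solver's MPI calls

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n_passages = comm.Get_size()          # one blade passage per process

# Illustrative state array: (streamwise, pitchwise + 2 ghost layers, spanwise, nvar)
q = np.zeros((125, 58 + 2, 73, 5))

def exchange_pitchwise_ghosts(q):
    """Fill the pitchwise ghost layers from the neighbouring passages.

    Because the neighbour indices wrap around, the same exchange also
    realizes the periodic condition between the first and last passages.
    """
    lower = (rank - 1) % n_passages   # neighbour on the j-min side
    upper = (rank + 1) % n_passages   # neighbour on the j-max side

    recv_from_upper = np.empty_like(q[:, 0])
    recv_from_lower = np.empty_like(q[:, 0])

    # Send own first/last interior layers; receive the neighbours' into ghosts.
    comm.Sendrecv(np.ascontiguousarray(q[:, 1]), dest=lower,
                  recvbuf=recv_from_upper, source=upper)
    comm.Sendrecv(np.ascontiguousarray(q[:, -2]), dest=upper,
                  recvbuf=recv_from_lower, source=lower)

    q[:, -1] = recv_from_upper        # j-max ghost layer
    q[:, 0] = recv_from_lower         # j-min ghost layer
```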

4 Computational Results

A. Performance Curve

The performance of a compressor can be presented by the flow coefficient and the static pressure rise coefficient, which are defined as

\phi = V_x / U_m, \qquad \psi = \frac{p_3 - p_1}{0.5\,\rho U_m^2} \qquad (1)

In the experiments, the latter was calculated using the static pressure increment between STA.1 and STA.3 at the tip.

As shown in Fig. 3, the static pressure rise curve obtained by the steady numerical computation corresponds to the experimental one for flow coefficients from 0.65 to 0.95, regardless of the inlet boundary layer thickness. The static pressure rise coefficients for the unsteady simulation were calculated using the instantaneous flow data, which were saved five times per period. One period here is defined as the time it takes a blade to traverse the computational domain once. The numerical results of the unsteady simulation are in good agreement with the experimental ones until the rotating stall occurs. However, there are some discrepancies after the development of the rotating stall because only a part of the whole blade passages was used in the unsteady simulation. The numerical result for the thick inlet boundary layer predicts the stall point relatively well in the static pressure rise curve, and there is an abrupt drop of the performance in the stall development process between φ=0.58 and φ=0.53. The magnitude of the performance drop matches the experimental result well, although its slope is not steep. However, there is no abrupt performance drop in the static pressure rise curve of either the experimental or the numerical results for the thin inlet boundary layer. The static pressure rise coefficients in both cases are clustered around the experimental value at the early stage of the unsteady calculation since the asymmetric disturbance is small. The rotating stall initiates at φ=0.62 and φ=0.595 for the thick and thin inlet boundary layers respectively, and it scatters the static pressure rise coefficient because of the disturbance of the stall cell. For the thick inlet boundary layer, the performance drops abruptly below φ=0.58 as the rotating stall develops from a part-span stall to a full-span stall. For the thin inlet boundary layer, the stall inception is retarded to a lower flow coefficient because the axial velocity near the casing is large in comparison to that for the thick inlet boundary layer.

Fig. 3. Static pressure rise curve: (a) thick BL; (b) thin BL
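For concreteness, Eq. (1) can be wrapped in two small helper functions. This is a sketch only; the numeric values in the demonstration line are illustrative (a mid-span blade speed recomputed from Table 1, sea-level air density, and a pressure rise consistent with the ~1.01 overall pressure ratio), not data from the paper.

```python
def flow_coefficient(v_x, u_m):
    """phi = Vx / Um, Eq. (1)."""
    return v_x / u_m

def pressure_rise_coefficient(p3, p1, rho, u_m):
    """psi = (p3 - p1) / (0.5 * rho * Um**2), Eq. (1)."""
    return (p3 - p1) / (0.5 * rho * u_m ** 2)

# Illustrative values only, not measurements from the paper.
print(pressure_rise_coefficient(p3=102_100.0, p1=101_325.0, rho=1.2, u_m=36.6))
# -> ~0.96
```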

B. Steady Results

Fig. 4. Inlet velocity profiles at STA.1: (a) thick BL; (b) thin BL

Fig. 5. Rotary total pressure distribution at STA.2: (a) thick BL, φ=0.65; (b) thick BL, φ=0.85; (c) thin BL, φ=0.65; (d) thin BL, φ=0.85

Generally, the steady flow near the stall point has a large effect on the rotating stall, so it is very important to investigate the effects of the inlet boundary layer thickness on the steady flow. For a clear understanding of the two inlet boundary layer thicknesses, denoted "Thick BL" and "Thin BL" (where "BL" abbreviates boundary layer), Fig. 4 shows the inlet axial velocity profiles obtained in the previous study of Choi et al. [22]. As shown in Fig. 4, the simulation results show a good agreement with the experimental ones at each flow condition, meaning that the computation properly reproduces the inlet condition of the experiment of Wagner et al. [17]. To investigate the change of the hub-corner-separation and the tip leakage flow, Fig. 5 shows the coefficient of the rotary total pressure at STA.2. The rotary total pressure is used to remove rotational effects from the relative total pressure and is defined as

p_{t,Rot} = p + \frac{1}{2}\,\rho\,(W^2 - U^2) \qquad (2)

Its coefficient was calculated by using both the area-averaged total pressure at the inlet and the rotary total pressure given in Eq. (2):

C_{pt,Rot} = \frac{p_{t,1} - p_{t,Rot}}{0.5\,\rho U_m^2} \qquad (3)
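Eqs. (2) and (3) translate directly into code. A small sketch follows, with parameter names of our own choosing: W is taken as the relative velocity magnitude, U as the local blade speed, and pt1_avg as the area-averaged inlet total pressure.

```python
def rotary_total_pressure(p, rho, w, u):
    """pt_Rot = p + 0.5 * rho * (W**2 - U**2), Eq. (2)."""
    return p + 0.5 * rho * (w ** 2 - u ** 2)

def c_pt_rot(pt1_avg, p, rho, w, u, u_m):
    """C_pt,Rot = (pt,1 - pt,Rot) / (0.5 * rho * Um**2), Eq. (3)."""
    return (pt1_avg - rotary_total_pressure(p, rho, w, u)) / (0.5 * rho * u_m ** 2)
```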

These results were compared with the experimental ones in the previous study, revealing a good agreement except for the size of the region affected by the tip leakage flow. Although there are no apparent differences in the hub-corner-separation and the tip leakage flow at the design condition between the two cases, large differences can be found at high load depending on the inlet boundary layer thickness. While the hub-corner-separation grows into a large separation on the suction surface for the thick inlet boundary layer, it is diminished until indistinguishable from the rotor wake for the thin inlet boundary layer. Moreover, another corner-separation occurs near the casing due to the interaction between the tip leakage flow and the boundary layer on the suction surface. These corner-separations block the internal flow and result in a large total pressure loss.

C. Effects of the Inlet Boundary Layer Thickness

In order to judge whether the rotating stall occurs or not, a time history of the axial velocity before the leading edge of the rotor was used. According to the number of flow passages, four or eight numerical sensors were installed at 85% span from the hub and 25% of the chord length upstream of the rotor to record the axial velocity, as shown in Fig. 6. These sensors rotate in the counter-clockwise direction as the calculation proceeds because the numerical simulation was conducted in the rotating frame. The sensors read the axial velocity at each position 480 times per period.

Fig. 6. Positions of the fixed numerical sensors used to measure the axial velocity

Figure 7 shows the time history of the axial velocities measured by each numerical sensor with eight flow passages. There is no disturbance at the beginning of the unsteady calculation, but some small disturbance appears in the time history although no artificial asymmetric disturbance was imposed. For the thick inlet boundary layer, the first disturbance occurs near 3.0 periods and moves at the same speed as the rotor, meaning that it adheres to the blade row. The rotating stall is found in the time history of the numerical sensors near 7.0 periods, at which moment the flow coefficient is 0.62. The rotational speed of the stall cell quickly comes down to 75% of the rotor speed, so the cell moves in the direction opposite to the rotor blades in the rotating frame. For the thin inlet boundary layer, the numerical sensors do not catch any signal of a disturbance before 5.0 periods. The first disturbance occurs near 5.0 periods and grows slowly into an attached stall. As the flow rate is reduced, this attached stall turns into a rotating stall at 6.6 periods, when the flow coefficient is about 0.60. This flow coefficient is small in comparison to the case with the thick boundary layer, which means that the large axial velocity near the casing can delay the stall inception. As the flow coefficient is reduced further, the stall cell steadily grows and its rotational speed goes down to 74% of the rotor speed. In both cases, one stall cell is generated first in the blade row and then one or two further stall cells originate.
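The stall-cell speeds quoted above (75% and 74% of the rotor speed) follow from the phase lag of the disturbance between circumferential sensor positions. Below is a minimal numpy sketch of that post-processing, assuming two uniformly sampled traces from adjacent sensors; the function and its arguments are illustrative and not part of TFlow.

```python
import numpy as np

def stall_cell_speed_fraction(sig_a, sig_b, spacing_rev, samples_per_rev):
    """Estimate the stall-cell speed as a fraction of rotor speed from two
    axial-velocity traces taken by adjacent sensors rotating with the rotor.

    spacing_rev: circumferential sensor spacing in rotor revolutions
                 (one blade pitch = 1/28 for this rotor).
    """
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = abs(int(np.argmax(xcorr)) - (len(a) - 1))   # samples between arrivals
    lag_rev = lag / samples_per_rev                   # lag in rotor revolutions
    # In the rotating frame the cell covers one sensor spacing in lag_rev
    # revolutions, i.e. it slips backwards at spacing_rev/lag_rev of the rotor
    # speed; in the absolute frame it therefore rotates at
    return 1.0 - spacing_rev / lag_rev
```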


Figure 8 shows the rotary total pressure distribution at the tip region during the stall inception process for the thick inlet boundary layer. The numerical sensors cannot detect any signal of a disturbance at 1.0 period because the rotary total pressure in the tip region has similar features in all passages. Then a local disturbance can be observed in the tip leakage flow between the eighth and the second blades at 3.0 periods, when the numerical sensors detect some disturbance for the first time. The rotary total pressure shows a somewhat different pattern in each passage, as shown in Fig. 8(b). This disturbance is fixed inside the blade passage, rotates with the rotor at the same speed, and grows into a bigger attached stall cell through the throttling process, as shown in Fig. 8(c). The front line of the tip leakage flow is located behind the leading edge plane until this moment. When the attached stall cell reaches a critical size, the tip leakage flow locally moves around the leading edge of the next blade and starts spilling into the adjacent flow passage because of the blockage of the attached stall cell. A critical size here means the size of the disturbance at which the front line of the tip leakage flow passes over the leading edge of the next blade. The attached stall cell finally changes into a short-length-scale rotating stall through this stall inception process, as shown in Fig. 8(d). Once the rotating stall is generated, it advances along the next blades one by one and grows into a large stall cell, as shown in Fig. 8(e,f). During this period, another two stall cells originate, and three rotating stall cells are found in the blade row. This stall inception process via an attached stall cell has already been reported by Choi et al. [18] using the rotating stall simulation with four passages.

Fig. 7. Time history of the axial velocities of the numerical sensors with eight flow passages: (a) thick BL; (b) thin BL

For the thin inlet boundary layer, the stall inception process is similar to the previous case but has some different features. At 3.0 periods, the rotary total pressure distributions at the tip region are very similar among the blades. The first disturbance appears at 5.0 periods between the fourth and the seventh blades, as shown in Fig. 9(b). At this time, the front line of the tip leakage flow is also located behind the leading edge plane. This disturbance grows into a large attached stall cell as the flow coefficient is reduced, as shown in Fig. 9(c). A high velocity region occurs before the eighth rotor blade; this region is caused by the detour of the inlet flow around the blockage of the attached stall cell. When this attached stall cell reaches a critical size, the low energy flow spills into the next blades around the leading edge of the rotor and through the tip clearance, and the rotating stall finally occurs, as shown in Fig. 9(d). After stall inception, the stall cell moves continuously along the blade row, with the high velocity region leading it. At 8.0 periods, a new stall cell originates at the sixth rotor blade by the same stall inception process.


Fig. 8. Rotary total pressure distribution at the tip region in the stall inception process for the thick inlet boundary layer: (a) 1.0 period; (b) 3.0 periods; (c) 7.0 periods; (d) 7.2 periods; (e) 9.0 periods; (f) 12.0 periods

Fig. 9. Rotary total pressure distribution at the tip region in the stall inception process for the thin inlet boundary layer: (a) 3.0 periods; (b) 5.0 periods; (c) 6.6 periods; (d) 6.8 periods; (e) 8.0 periods; (f) 10.0 periods

Finally, two rotating stall cells exist in the blade row, as shown in Fig. 9(f). In comparison to the previous case, each stall cell is small in the circumferential direction. Unlike the results of Hoying et al. [9], in which the localized disturbance near the casing occurred when the front line of the tip leakage flow propagated forward of the leading edge plane, the first disturbance in this study initiated before the tip leakage flow reached the leading edge plane. Figure 10 shows the rotary total pressure distribution


at STA.2 at 1.0 and 3.0 periods for the thick and the thin inlet boundary layers respectively. At 1.0 period there was no disturbance in the time history of the axial velocities or in the rotary total pressure near the casing, as mentioned above, but the hub-corner-separation shows some asymmetric disturbance at STA.2 for the thick inlet boundary layer. In the same manner, there is a small disturbance in the corner-separation near the casing at 3.0 periods for the thin inlet boundary layer. These disturbances might have been generated by numerical round-off error or by the change of numerical methods in the unsteady simulation. These results lead the authors to conclude that the local disturbance is transferred from the separations on the blade surfaces to the tip leakage flow.

Fig. 10. Rotary total pressure distribution at STA.2 in the unsteady results: (a) thick BL, 1.0 period; (b) thin BL, 3.0 periods

Based on the above numerical results, the different inlet boundary layers affect the steady flow near the stall condition and make the rotating stall different in each case. The hub-corner-separation and the corner-separation near the casing cause the rotating stall for the thick and the thin inlet boundary layers respectively. The attached and rotating stall cells for the thick boundary layer are larger than those for the thin boundary layer. This is because the large axial velocity near the casing in the latter case effectively keeps disturbances from growing, whereas for the thick inlet boundary layer the weak axial velocity near the casing allows a small disturbance to grow quickly into the attached stall cell. Due to this difference in the size of the stall cells, the performance drops abruptly for the thick inlet boundary layer but not for the thin inlet boundary layer.


D. Effects of the Number of Flow Passages

Before using eight passages, the rotating stall simulation was first conducted on the grid with four passages (Fig. 2(a)). Figure 11 shows the static pressure rise curve for the thick boundary layer with four passages. The numerical results with four passages also predict the stall point well and capture an abrupt drop of the performance between φ=0.58 and φ=0.53. Although the pressure drop occurs sharply in this case, it is smaller than in the experiment and in the numerical result with eight passages in Fig. 3(a). In the case with eight passages the pressure drop is large but its slope is not steep. This is because the performance with eight passages is not severely affected by the stalled passages, the computational domain being relatively large, while the performance with four passages is. After the stall inception process, the distribution with four passages is more scattered than that with eight passages because of the size of the computational domain.

Fig. 11. Static pressure rise curve for the thick BL with four passages

The time histories of the axial velocities in Fig. 12 are very similar to the case with eight passages (Fig. 7(a)), and there is no disturbance until 9.0 periods. One period here is the time it takes four blades to traverse the computational domain once, and it is half of that in the eight-passage case. The first disturbance occurs at 9.0 periods and moves at the same speed as the rotor, meaning that it is fixed to the blade row. The rotating stall occurs at 19.5 periods, and the flow coefficient has a value of 0.62, equal to the value in the eight-passage case. The rotational speed of the stall cell quickly decreases to 79% of the rotor speed. In this case only one stall cell is initiated, and it grows as the simulation goes on because the small computational domain keeps other disturbances from originating in the circumferential direction.

Fig. 12. Time history of the axial velocities of the numerical sensors for the thick BL with four passages

The rotary total pressure in the tip region has similar features in each passage at 3.0 periods. Then a local disturbance appears in the tip leakage flow between the first and the third blades near 9.0 periods, as shown in Fig. 13(b). This disturbance rotates at the same speed as the rotor and grows into an attached cell, as shown in Fig. 13(c). The front line of the tip leakage flow is also located behind the leading edge plane until this moment. When the attached stall cell reaches a critical size, it changes to the rotating stall, as shown in Fig. 13(d). After the stall inception process, the stall cell quickly grows into a large stall cell in a short time (Fig. 13(e,f)). This case with four passages has just one stall cell, although the stall inception process is the same as in the case with eight passages.


Fig. 13. Rotary total pressure distribution near the tip region in the stall inception process for the thick BL with four passages: (a) 3.0 periods; (b) 9.0 periods; (c) 19.0 periods; (d) 19.5 periods; (e) 20.0 periods; (f) 20.5 periods

Fig. 14. Rotary total pressure distribution at STA.2 with four passages at 3.0 periods

Figure 14 shows the rotary total pressure distribution at STA.2 at a moment when there was no disturbance in the other indicators. Nevertheless, the hub-corner-separation already shows some asymmetric behavior at STA.2. Choi et al. [18] suggested, using simulation results with four blade passages, that this disturbance in the hub-corner-separation might trigger the disturbance near the casing and cause the rotating stall.

The simulation of the rotating stall is time-consuming work. Most researchers use a few passages instead of the whole annulus in order to capture the motion of stall cells, because it is not easy to simulate the rotating stall with all flow passages due to the limitations of computational time and memory. In this study, the stall inception process, before the rotating stall originates, is nearly the same regardless of the number of blade passages. However, after the rotating stall originated, the flow showed some discrepancies according to the number of passages. How many passages, then, have to be used in the simulation of the rotating stall? It is of course best to use the whole blade row to capture the rotating stall if the computational resources and time are sufficient. If not, the appropriate number of flow passages depends on the focus of the study. If a researcher wants to know the detailed process of the stall inception and to find the cause of the rotating stall (modal waves excepted), it is proper to use the minimum number of passages. However, if a researcher wishes to understand the effect of the fully developed rotating stall on the performance and internal flows, it is better to use as many passages as possible. The maximum number is clearly the whole blade row, but the minimum number cannot be fixed in general. The authors propose that the minimum number should at least make it possible to capture the stall propagation in the circumferential direction. The size of the rotating stall just after initiation was about two and a half blade pitches in this study. If only two or three passages were used in the simulation, the operating condition might jump into surge without rotating stall, because all passages could be blocked simultaneously by the separation. In this study, four and eight passages were used for the rotating stall and properly showed the movement of the stall cell. For reference, the size of the stall cell in the study of Hoying et al. [9] was also about two pitches at the early stage of the stall inception. Therefore, the minimum number of blade passages might be four for a short-length-scale rotating stall in a subsonic axial compressor.

5 Conclusion

In this study, several numerical simulations were conducted to analyze the stall inception process in detail and to investigate the effects of the inlet boundary layer thickness and the number of flow passages on the rotating stall. The first disturbance occurred in separations on the blade surfaces. This disturbance was transferred from the separations to the tip leakage flow, where it grew into an attached stall cell. When this attached stall cell reached a critical size, the rotating stall was initiated.

The inlet boundary layer thickness had a large effect on the flow coefficient at stall inception and on the size of the stall cells. The small axial velocity near the casing allowed the disturbance to grow into a large stall cell for the thick inlet boundary layer, while the large axial velocity for the thin inlet boundary layer kept the disturbance from growing. Therefore, the rotating stall occurred at a lower flow coefficient for the thin boundary layer than for the thick inlet boundary layer. Moreover, the attached and rotating stall cells grew larger with the thick boundary layer. Due to the different sizes of the stall cells, there was an abrupt performance drop for the thick inlet boundary layer but not for the thin boundary layer.

The number of flow passages did not affect the stall inception process but did affect the stall development. The stall inception was similar in the two cases with four and eight flow passages, but the number of flow passages had a large effect on the number of stall cells: only one stall cell originated with four passages, whereas there were three stall cells with eight passages. Therefore, in case researchers want to scrutinize the stall inception process only, it is better to use as small a number of flow passages as possible, but at least four.

Acknowledgments. The authors wish to acknowledge the support of the Agency for Defense Development under the contract UD04006AD.


References

[1] I J Day and N A Cumpsty (1978) Measurement and Interpretation of Flow within Rotating Stall Cells in Axial Compressors. Journal of Mechanical Engineering Science 20:101-114
[2] N M McDougall, N A Cumpsty and T P Hynes (1990) Stall Inception in Axial Compressors. Journal of Turbomachinery 112:116-125
[3] I J Day (1993) Stall Inception in Axial Flow Compressors. Journal of Turbomachinery 115:1-9
[4] T R Camp and I J Day (1998) A Study of Spike and Modal Stall Phenomena in a Low-Speed Axial Compressor. Journal of Turbomachinery 120:393-401
[5] M Inoue, M Kuroumaru, T Tanino, S Yoshida and M Furukawa (2001) Comparative Studies on Short and Long Length-Scale Stall Cell Propagating in an Axial Compressor Rotor. Journal of Turbomachinery 123:24-32
[6] M Inoue, M Kuroumaru, S Yoshida and M Furukawa (2002) Short and Long Length-Scale Disturbances Leading to Rotating Stall in an Axial Compressor Stage with Different Stator/Rotor Gaps. Journal of Turbomachinery 124:376-384
[7] L He (1997) Computational Study of Rotating-Stall Inception in Axial Compressors. Journal of Propulsion and Power 13:31-38
[8] L He and J O Ismael (1997) Computations of Blade Row Stall Inception in Transonic Flows. Proceedings of 13th International Symposium on Airbreathing Engines, Paper ISABE 97-7100, pp 697-707
[9] D A Hoying, C S Tan, Huu Duc Vo and E M Greitzer (1999) Role of Blade Passage Flow Structures in Axial Compressor Rotating Stall Inception. Journal of Turbomachinery 121:735-742
[10] H M Saxer-Felici, A P Saxer, A Inderbitzin and G Gyarmathy (2000) Numerical and Experimental Study of Rotating Stall in an Axial Compressor Stage. AIAA Journal 38:1132-1141
[11] S Niazi (2000) Numerical Simulation of Rotating Stall and Surge Alleviation in Axial Compressors. Ph.D. Dissertation, Aerospace Engineering Dept., Georgia Tech., Atlanta, GA
[12] N Gourdain, S Burguburu, F Leboeuf and H Miton (2004) Numerical Simulation of Rotating Stall in a Subsonic Compressor. AIAA Paper AIAA2004-3929
[13] C Hah, J Bergner and H-P Schiffer (2006) Short Length-Scale Rotating Stall Inception in a Transonic Axial Compressor – Criteria and Mechanisms. ASME Paper GT2006-90045
[14] M Vahdati, G Simpson and M Imregun. Unsteady Flow and Aeroelasticity Behavior of Aeroengine Core Compressors During Rotating Stall and Surge. Journal of Turbomachinery 130, No. 2 [DOI: 10.1115/1.2777188]
[15] C Hah and J Loellbach (1999) Development of Hub Corner Stall and Its Influence on the Performance of Axial Compressor Blade Rows. Journal of Turbomachinery 121:67-77
[16] S A Gbadebo, N A Cumpsty and T P Hynes (2005) Three-Dimensional Separations in Axial Compressors. Journal of Turbomachinery 127:331-339
[17] J H Wagner, R P Dring and H D Joslyn (1983) Axial Compressor Middle Stage Secondary Flow Study. NASA CR-3701
[18] M Choi, J H Baek, S H Oh and D J Ki (in print) Role of the Hub-Corner-Separation on the Rotating Stall in an Axial Compressor. Trans. Japan Soc. Aero. Space Sci. 51, No. 172
[19] M Choi and J H Baek. Influence of the Number of Flow Passages in the Simulation of the Rotating Stall. ISROMAC-12, Paper No. ISROMAC12-2008-20106 (also submitted to Int. J. of Rotating Machinery)



A Numerical Study on Flow and Heat Transfer Analysis of Various Heat Exchangers

Myungsung Lee, Chan-Shik Won, and Nahmkeon Hur

1 Graduate School, Sogang University, Seoul, Korea
2 Department of Mechanical Engineering, Sogang University, Seoul, Korea
[email protected]

Abstract. This paper describes numerical methodologies for the flow and heat transfer analysis of heat exchangers of various types. The heat exchangers considered in the present study include a louver fin radiator for a vehicle, a shell and tube heat exchanger for HVAC, and plate heat exchangers with herringbone and dimple patterns used in waste heat recovery. For the analysis of the louver fin radiator, a 3-D Semi-microscopic Heat Exchange (SHE) method was used. SHE is characterized by a conjugate heat transfer analysis of a domain consisting of the water in a tube, the tube wall, the region where air passes through the louver fin, and the ambient air. It is shown that both the air flow in the louver fin area and the water flow inside the cooling water passages are successfully predicted along with the heat transfer characteristics. A conjugate heat transfer analysis of a shell and tube heat exchanger was also performed. To analyze the entire shell side of the heat exchanger, geometric features such as the tubes, baffles, inlet and outlet were modeled in detail. It is shown from the analysis that a design modification for better flow distribution, and thus for better performance, can be proposed. Finally, an analysis method is proposed for the conjugate heat transfer between the hot flow, the separating plate and the cold flow of a plate heat exchanger. By using periodic boundary conditions for the repeating sections and appropriate inlet and outlet boundary conditions, the heat transfer in plate heat exchangers with herringbone and dimple patterns was successfully analyzed. The present numerical results are in good agreement with available experimental data.

1 Introduction

Heat exchangers are extensively encountered in many engineering applications such as power generation, HVAC, the chemical processing industry and waste heat recovery. Detailed information on the flow and temperature distribution in the heat exchanger is essential for the high performance design of the thermal system. Most previous studies, however, are related to the system performance and/or empirical correlations, and lack the details of the flow and temperature distribution in the heat exchanger. Recent developments in CFD enable us to predict the flow and temperature distribution numerically by modeling the detailed physics and geometry of the heat exchangers. Such detailed study, however, requires time and cost for the modeling and analysis. In the present study, numerical methodologies for the heat transfer analysis of heat exchangers of various types are proposed. The heat exchangers considered in this study are a louver fin radiator for a vehicle application, a shell and tube heat exchanger for HVAC, and plate heat exchangers with herringbone and dimple patterns for waste heat recovery.


A design method for louver fin radiators based on experimental correlations is well documented in Kays and London [1]. Correlation equations for heat transfer and pressure drop are given in Davenport [2]. Chang and Wang [3] also proposed a correlation between heat transfer and pressure drop based on experimental data for 91 types of louver fin radiator. Previous studies, however, considered only the overall heat transfer coefficient, not the local heat transfer characteristics that give the local flow and temperature distribution. In the present study a new method of simulating the louver fin heat exchanger is proposed for analyzing the underhood thermal management of a vehicle. A conjugate heat transfer analysis of a shell and tube heat exchanger is also performed. To analyze the entire shell side, the tubes, baffles, inlet and outlet are modeled in detail, and the effect of design factors such as sealing strips on the shell-side heat transfer performance is examined. To obtain higher heat transfer performance, plate heat exchangers have been developed as compact heat exchangers with various corrugation types of plate surface. Among the variety of groove patterns on the plate surface, the herringbone and dimple types are widely applied in many industrial applications. Most of the earlier heat transfer analyses of the herringbone-type plate heat exchanger considered only a single passage of hot or cold fluid, and studied the effects of geometric parameters such as chevron angle and plate pitch and of flow parameters such as the Reynolds and Prandtl numbers [4, 5, 6]. This gives the overall performance, but not the local flow and temperature distribution. In the present study an analysis method is proposed for the conjugate heat transfer between the hot flow, the separating plate and the cold flow in a repeating section of a herringbone-type plate heat exchanger with appropriate boundary conditions, and the effect of pulsatile flow on the heat transfer enhancement is also numerically investigated. A numerical analysis of a dimple plate heat exchanger was also performed: the heat transfer performance and friction characteristics of the dimple plate heat exchanger were investigated, and correlations for the friction factor and heat transfer coefficient were proposed as functions of a geometric factor, which are very useful for practical design work.

2 Louver Fin Radiator in a Vehicle

The radiator of a vehicle is probably the most important component affecting the efficiency and stable operation of the thermal system, due to its role in exhausting the engine heat to the ambient air. Radiators using louver fins (Fig. 1) are known to give the best heat exchange efficiency among the types of radiators. To analyze underhood thermal management, the radiator is modeled in the VTM module of STAR-CD [7] as a porous medium through which air flows with resistance, gaining heat from the coolant. The flow resistance through the porous medium is modeled from experimental data. The coolant also flows through a porous medium occupying the same space as the air porous medium. This module accurately predicts the temperature distribution of the air passing through the radiator, whereas the coolant temperature distribution is not accurate, since the coolant is modeled as flowing through the porous medium as explained above. In the present study, to predict the coolant temperature distribution accurately, a Semi-microscopic Heat Exchange (SHE) method is developed, where the coolant passage is


Fig. 1. Vehicular louver fin radiator and numerical model

Fig. 2. Composition of louver fin radiator

modeled separately along with the tube wall, and only the louver fin area is modeled as porous media. In this method, two distinct porous media occupying the same space are modeled: one is the same air porous medium as in the previous case, and the other is a solid porous medium in which only heat transfer by conduction is considered (Fig. 2). The amount of heat transfer is computed from the temperature difference between the air and the solid porous media with heat transfer coefficients between them. To model the porous media, the Darcy equation relating mass flow and pressure drop is used as the momentum equation:

(ρ/ε)(∂U_D/∂t + U_D·∇U_D) = −∂P/∂x_i + (μ′/ε)∇²U_D − (μ/K)U_D − (ρC_E/K^(1/2))|U_D|U_D + ρf_i    (1)

where ε is the porosity, μ′ the effective viscosity and C_E the inertia (Ergun) coefficient. Unlike the present model, a one-medium model does not consider the effects


of local heat transfer between the air and the louver fin, since it uses only one porous medium with a predetermined heat source. The energy equations of the present two-medium model, in which the heat source/sink is computed from the local temperature difference between the air and solid porous media, are as follows. Fluid phase:

∂T_f/∂t + (U_D·∇T_f) = (k_fe/(ε(ρC_p)_f) + D_d)∇²T_f + (h_sf a/(ε(ρC_p)_f))(T_s − T_f)    (2)

Solid phase:

∂T_s/∂t = (k_se/((1−ε)(ρC_p)_s))∇²T_s − (h_sf a/((1−ε)(ρC_p)_s))(T_s − T_f)    (3)

where k_fe and k_se are the fluid and solid effective thermal conductivities, h_sf is the interfacial convective heat transfer coefficient and a is the ratio of surface area to volume. In this study, the j-factor correlation proposed by Chang and Wang [3] is used for the interfacial convective heat transfer coefficient.
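To make the coupling between the two media concrete, the sketch below advances Eqs. (2) and (3) by one explicit Euler step on a uniform 2-D grid with periodic boundaries. It is a minimal illustrative reconstruction, not the STAR-CD/SHE implementation used in the paper; all names (she_energy_step, rhoCp_f, Dd, and so on) are hypothetical.

```python
import numpy as np

def she_energy_step(Tf, Ts, ux, uy, dt, dx, eps, k_fe, k_se,
                    rhoCp_f, rhoCp_s, h_sf, a, Dd=0.0):
    """One explicit Euler step of the two-medium energy equations (2)-(3).

    Tf, Ts : temperatures of the air and solid porous media (2-D arrays)
    ux, uy : Darcy velocity components of the air medium
    The term h_sf*a*(Ts - Tf) is the local interfacial heat exchange that
    couples the two media sharing the same space (periodic boundaries).
    """
    def lap(T):                       # 5-point Laplacian
        return (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2

    def ddx(T, ax):                   # central first derivative
        return (np.roll(T, -1, ax) - np.roll(T, 1, ax)) / (2.0 * dx)

    q = h_sf * a * (Ts - Tf)          # heat gained by the air from the fin
    dTf = (-(ux * ddx(Tf, 0) + uy * ddx(Tf, 1))
           + (k_fe / (eps * rhoCp_f) + Dd) * lap(Tf)
           + q / (eps * rhoCp_f))
    dTs = (k_se / ((1.0 - eps) * rhoCp_s)) * lap(Ts) \
          - q / ((1.0 - eps) * rhoCp_s)
    return Tf + dt * dTf, Ts + dt * dTs
```

Because the exchange term appears with opposite signs in the two equations, the pair conserves the exchanged heat by construction, which is the essential property of the two-medium model.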

(a) heat exchanger and cooling fan

(b) section plot of engine room

Fig. 3. Application of SHE method for full automotive underhood model: computational mesh (left) and temperature distribution (right)

Fig. 3 shows a computational mesh for underhood thermal management. An unstructured mesh topology is seen in the figure, along with detailed meshes of the radiator and fan assembly. The temperature distributions obtained with the VTM module of STAR-CD and with the SHE model are compared, and little difference is seen between them. The heat transfer rate is computed as 24.4 kW with the VTM model and 23.9 kW with the SHE model. Therefore, the heat transfer amount and temperature distribution given by the present SHE model are comparable to those by the VTM module of


Fig. 4. Heat transfer rate as a function of geometric factor

Fig. 5. Temperature distribution of louver fin radiator for various geometric factors: (a) j/Re_Lp^−0.49 = 0.0838; (b) 0.1676; (c) 0.3352; (d) 0.5028; (e) 0.6704

STAR-CD. In the present SHE model, the temperature distribution of the louver fin is successfully predicted, and that of the coolant passages as well, which the VTM model cannot predict at all. The SHE model developed in this study can therefore be used for underhood thermal management analysis in cases where accurate temperature distributions of the coolant and louver fin are required without modeling the louver fin geometry in detail. The thermal behavior of the radiator is affected by the geometry of the louver fin: the j-factor and heat transfer coefficient vary as the geometry is changed. Thus, the term j/Re_Lp^−0.49 is a function only of the specific geometry of the louver fin radiator


[8]. Fig. 4 shows the heat exchange rate for various louver fin geometries. The temperature distributions in the coolant passages are shown in Fig. 5 for five radiators with different values of the geometric factor. It is also seen from the figure that the present method can predict the temperature distribution of the coolant passages in detail, which is not possible with conventional methods.
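As a small illustration of the grouping used above, the hypothetical helper below evaluates the geometry-only term j/Re_Lp^−0.49 (equivalently j·Re_Lp^0.49); the exponent is taken from the text, while the function and argument names are assumptions.

```python
def louver_geometric_factor(j, Re_Lp):
    """Geometry-only group j / Re_Lp**(-0.49) = j * Re_Lp**0.49; this is
    constant for a fixed louver fin geometry [8], so it characterises a
    radiator independently of the operating Reynolds number."""
    return j * Re_Lp**0.49
```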

3 Shell and Tube Heat Exchanger

Shell and tube heat exchangers are widely used in HVAC and the process industry. In the present study a shell and tube heat exchanger designed to chill the air on the shell side by evaporating refrigerant on the tube side was analyzed. The heat exchanger consists of 330 circular tubes in a U-shape and 10 baffles that hold the tubes in position and guide the shell-side flow, as in Fig. 6. The detailed mesh structure around the tubes is also shown in the figure. To model the whole shell side of the heat exchanger, around 15 million computational cells were used. As boundary conditions for the computation, the air velocity and temperature are given at the inlet of the shell side, while a constant temperature of 0°C is imposed on the tube walls, since the refrigerant evaporates as it flows through the tubes. The results are given in Fig. 7. The velocity magnitude plot shows that most of the air flows through the gap between the tubes and the shell wall and little through the tube bundle, which results in lower performance; a design modification for higher performance is therefore required. Based on the findings of the present study, a new design was proposed with sealing strips to block the high velocity streams near the shell wall (Fig. 8). The velocity magnitude pattern inside the shell was examined with respect to the location of the sealing strips, as shown in Fig. 9. The most active heat transfer occurs in model 3 and the least in model 1.

Fig. 6. Geometry of shell and tube heat exchanger: (a) shell; (b) tube; (c) end plate; (d) mesh structure

Fig. 7. Section plots of velocity magnitude inside shell


Fig. 8. Geometry inside shell showing sealing strip

Fig. 9. Velocity magnitude with various locations of sealing strips: (a) model 1; (b) model 2; (c) model 3

Fig. 10. Outlet temperature and pressure drop (left), and heat transfer coefficient (right) as functions of size of sealing strips

The size and location of the sealing strips are factors affecting the performance of the heat exchanger. The present results show that when the sealing strips are longer, the outlet temperature decreases, whereas the pressure drop increases, as shown in Fig. 10. The heat transfer coefficient also increases as the size of the sealing strips becomes larger. These results are useful for the optimal design of the shell and tube heat exchanger.

4 Plate Heat Exchanger

4.1 Herringbone-Type Plate Heat Exchanger

Plate heat exchangers are widely used in industrial applications. Among the great variety of possible corrugation patterns of the plate surface, the herringbone type has proved to be a successful design with good heat transfer performance [9]. In a previous study [10], a heat transfer analysis of a herringbone-type plate heat exchanger was performed using a simplified geometry, concentrating on obtaining the heat transfer coefficient: a single flow passage was modeled and the effects of geometric and flow parameters on the heat transfer coefficient were investigated. In the present study, an analysis method is proposed for the conjugate heat transfer between the cold flow passage, the separating plate and the hot flow passage in a repeating section of a herringbone-type plate heat exchanger with appropriate boundary conditions. The


model for the complex flow passages of a real plate heat exchanger, whose parts are shown in Fig. 11, has about 4 million computational cells (see Fig. 12). The geometry used in the present study is the same as that used in the experimental study by Han and Kang [11].

Fig. 11. Plate heat exchanger used in experiment [11]

Fig. 12. Computational mesh of periodic section

Fig. 13. Schematic diagram of a plate heat exchanger

Fig. 14. Boundary conditions

Fig. 13 shows the schematic of a heat exchanger consisting of 10 plates, which form four cold flow and five hot flow passages with alternating herringbone patterns. The boundary conditions used in the present computation are shown in Fig. 14. The outer plates are halved and a periodic condition is imposed (plates 4 and 6 in the figure). The total amount of flow entering the computational domain through inlet (1) in the figure is divided into two streams: one into the flow passage and the other into the next flow passage, depicted as inlet (2). The velocities and temperatures of the flows at


these two boundaries are given. The difference between the flow rates at these two boundaries flows through the plates, becomes mixed with the flow from inlet (3), and flows out. The flow from inlet (3) is the flow through the downstream plates, and its flow rate is the same as that at inlet (2). Since the temperature of the flow at inlet (3) is not known a priori, it is updated as the computation proceeds from the temperature resulting at the outlet of the heat exchange process; in this manner one can obtain the final converged solution.

Fig. 15 shows the temperature distribution in the hot and cold flow passages and on the plate in between. The figure clearly shows that the cold flow entering the passage is heated as it flows through the plates, while the counter-flowing hot flow loses the same amount of heat as the cold flow gains. The temperature distribution on the plate

Fig. 15. Temperature distribution in flow passages and separating plate

Fig. 16. Comparison of heat transfer coefficient with experimental data [11]


Fig. 17. Schematic diagram of experimental equipment [11] and inlet velocity for pulsatile flow

Fig. 18. Comparison of heat transfer rate between pulsatile flow and steady state


is about the average of the temperature distributions of the two flows. It is also seen from the figure that the flow is vigorously mixed in the span-wise direction due to the groove pattern. The heat transfer coefficients obtained from the present computation are compared in Fig. 16 with the experimental data of Han and Kang [11] for flow rates of 0.04 to 0.12 kg/s; in these cases the flow rates are the same in both the hot and cold passages. The results of the present computation agree quite well with the experimental data, verifying the validity of the present numerical methodology for analyzing the heat transfer in a plate heat exchanger with a herringbone pattern. One can thus accurately predict the overall performance of a herringbone-type plate heat exchanger without any empirical correlation for the heat transfer coefficient and, at the same time, obtain the detailed temperature and flow distributions in the flow passages. To evaluate the improvement of heat transfer by pulsatile flow, a 7.5 Hz frequency was imposed at the inlet of the cold flow passage (Fig. 17), and the periodic average was adopted. The calculated heat transfer is shown in Fig. 18: the periodic-averaged heat transfer amount was Q = 549 W, whereas the steady-state value was Q = 532 W. The heat transfer rate thus increased by about 3% with the pulsatile flow, since it created more active mixing than the steady flow [10].

4.2 Dimple-Type Plate Heat Exchanger

Recently, plate heat exchangers with a dimple pattern have been attracting attention in the waste heat recovery industry for their high heat transfer performance and relatively small size. The present study investigated the heat transfer performance and flow friction characteristics of the dimple plate heat exchanger by numerical simulation. The effects of the dimple height (Hd) and the dimple diameter (D) on the heat transfer performance were also investigated.

Fig. 19. Dimple plate heat exchanger and numerical model

Fig. 20. Computational domain of dimple plate heat exchanger: (a) case 1 (flue gas and water); (b) case 2 (flue gas and air)


Fig. 21. Temperature distribution of each flow passage: (a) case 1 (flue gas and water); (b) case 2 (flue gas and air)

The dimple plate heat exchanger and the numerical model are shown in Fig. 19. In the present study, the total number of computational cells is around 10 million. The inlets and outlets are located so as to give a counter-flow pattern between flue gas and water in case 1 and a cross-flow pattern between flue gas and air in case 2 (see Fig. 20). In addition, a 5 mm spacer was inserted between the two separating plates in case 2, as shown in Fig. 20. To predict the heat transfer characteristics of the heat exchanger, the conjugate heat transfer method was used to analyze the heat transfer between the hot flow passage, the separating plate and the cold flow passage. Fig. 21 shows the heat transfer characteristics in the middle section of each channel; the hot flue gas loses heat as the cold fluid gains it. The heat transfer rate obtained from the present study was compared with experimental data provided by the manufacturer of the dimple plate heat exchanger; the numerical results are in good agreement with the experimental data, as shown in Fig. 22. In addition, to analyze the characteristics of heat transfer and flow friction, the Reynolds number was evaluated based on the characteristic length (Lc), which was

Fig. 22. Comparison of heat transfer rate between numerical result and experimental data



Fig. 23. Correlations for (a) flow friction (Fanning f-factor) and (b) heat transfer (Colburn j-factor) as functions of geometric factor

calculated as the fluid volume per unit surface area. The flow friction characteristic was correlated in terms of the Fanning f-factor, defined as follows:

f = ΔP / [(1/2)ρV²(A/a)]    (4)

In this analysis, the heat transfer coefficient was obtained from the heat transfer area, the temperature difference between inlet and outlet, and the heat transfer rate. The Colburn j-factor was thus defined as follows:

j = [h/(ρc_p V(H/L))](μc_p/k)^(2/3),   h = Q/(A·ΔT)    (5)

Simulations were also performed to obtain the effect of the geometric factor (Hd/D), defined as the dimple height divided by the dimple diameter. In this case, flue gas and air were considered as the hot and cold flow sides, respectively. Fig. 23 shows the Fanning f-factor and Colburn j-factor as functions of the geometric factor (Hd/D) for various inlet velocities. In the figure, the f-factor rises as the geometric factor becomes larger, whereas the Colburn j-factor decreases over the same range.
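As a post-processing illustration of Eqs. (4) and (5), the sketch below evaluates both factors from quantities that the simulations report (pressure drop, heat transfer rate, mean velocity). It is a minimal sketch under the definitions stated above, not the authors' code; the argument names are assumptions.

```python
def fanning_f(dP, rho, V, A, a):
    """Fanning f-factor of Eq. (4) from the computed pressure drop dP."""
    return dP / (0.5 * rho * V**2 * (A / a))

def colburn_j(Q, A_ht, dT, rho, cp, V, H, L, mu, k):
    """Colburn j-factor of Eq. (5); the heat transfer coefficient h is
    recovered from the heat transfer rate Q, the heat transfer area A_ht
    and the inlet/outlet temperature difference dT."""
    h = Q / (A_ht * dT)
    Pr = mu * cp / k                 # Prandtl number, (mu*cp/k) in Eq. (5)
    return h / (rho * cp * V * (H / L)) * Pr ** (2.0 / 3.0)
```

Evaluating these two factors over a range of Hd/D and inlet velocities yields the kind of correlation shown in Fig. 23.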

5 Concluding Remarks

In the present paper, numerical methodologies for the heat transfer analysis of four types of heat exchanger were studied: a vehicular louver fin radiator, a shell and tube heat exchanger, a plate heat exchanger with a herringbone pattern and a plate heat exchanger with a dimple pattern. A Semi-microscopic Heat Exchange (SHE) method was proposed to examine the louver fin radiator for vehicle underhood thermal management; the SHE method successfully predicts the flow field and temperature distribution of the air side, and those of the coolant passages as well. A flow simulation method for a shell and tube heat exchanger was also proposed to obtain the overall flow and heat


transfer patterns inside the shell. To improve the heat transfer performance, the effects of various locations and lengths of the sealing strips were investigated. A herringbone-type plate heat exchanger was simulated by conjugate heat transfer analysis between the hot flow, the separating plate and the cold flow with appropriate boundary conditions, giving good agreement with experimental data. Furthermore, pulsatile flow was adopted to improve the heat transfer performance, and it was shown numerically that heat transfer enhancement occurred with a 7.5 Hz pulsatile flow. A numerical analysis of a dimple plate heat exchanger was also performed. From the present numerical studies, one can obtain not only the characteristics of the flow field and temperature distribution, but also correlations for flow friction and heat transfer, which are very useful for the optimal design of heat exchanger systems.

References

[1] Kays WM, London AL (1984) Compact Heat Exchangers, 3rd ed. McGraw-Hill, New York
[2] Davenport CJ (1983) Correlation for heat transfer and flow friction characteristic of louvered fin. AIChE Symp Ser 79(225):19-27
[3] Chang Y, Wang C (1996) Air side performance of brazed aluminum heat exchangers. J Enhanced Heat Transfer 3(1):15-28
[4] Focke WW, Zachariades J, Olivier I (1985) The effect of the corrugation inclination angle on the thermohydraulic performance of plate heat exchangers. Int J Heat Mass Transfer 28(8):1469-1479
[5] Heavner RL, Kumar H, Wannizrachi AS (1993) Performance of an industrial plate heat exchanger: effect of chevron angle. AIChE Symp Ser 89(295):65-70
[6] Martin H (1996) A theoretical approach to predict the performance of chevron-type plate heat exchangers. Chem Eng Process 35:301-310
[7] STAR-CD V3.24 User Guide (1998) Computational Dynamics, LTD
[8] Hur N, Park J-T, Lee SH (2007) Development of a Semi-microscopic Heat Exchange (SHE) method for a vehicle underhood thermal management. Asian Symposium on Computational Heat Transfer and Fluid Flow, ASCHT2007-K04 (in CD-ROM)
[9] Ko TH (2006) Numerical analysis of entropy generation and optimal Reynolds number for developing laminar forced convection in double-sine ducts with various aspect ratios. Int J Heat Mass Transfer 49:718-726
[10] Chin S-M, Hur N, Kang BH (2004) A numerical study on heat transfer enhancement by pulsatile flow in a plate heat exchanger (in Korean). 3rd National Congress on Fluids Engineering, pp 1479-1484
[11] Han SK, Kang BH (2005) Effects of flow resonance on heat transfer enhancement and pressure drop in a plate heat exchanger. The Korean Society of Mechanical Engineers 17:165-172

Application of a Level-Set Method in Gas-Liquid Interfacial Flows

Sang Hyuk Lee1, Gihun Son2, and Nahmkeon Hur2

1 Graduate School, Sogang University, Seoul, Korea
2 Department of Mechanical Engineering, Sogang University, Seoul, Korea
[email protected]

Abstract. In this study, numerical computations of gas-liquid interfacial flows were performed by using a Level Set (LS) method for various applications. The free surface motion from an initially disturbed surface and the drop motion on an inclined wall due to gravity were computed to verify the simulation with the LS method. The binary drop collision was also simulated in the present study; after the collision, the behavior of the drops and the formation of a satellite drop were obtained. Furthermore, the impact of a drop on a liquid film/pool was simulated, and the crown formation and bubble entrapment were successfully observed. The present numerical results showed good agreement with the theoretical and available experimental data, and hence the LS method for interface tracking can be applied to various flow problems with sharp gas-liquid interfaces.

1 Introduction

Gas-liquid interfacial flows are commonly encountered in various natural phenomena and engineering applications. It is important to predict the behavior of drops and bubbles in the formation of falling drops and the evolution of sprays in industrial applications, and experimental and theoretical analyses of drop and bubble dynamics have been performed for this purpose. Ashgriz and Poo [1] analyzed the interaction regimes of the binary drop collision experimentally and proposed boundaries between the interaction regimes: coalescence, reflexive separation and stretching separation. Ko and Ryou [2] proposed theoretical correlations for drop collision among distributed drops. Not only the collision between drops but also the drop impact on a liquid film or pool has been studied: Cossali et al. [3] proposed a correlation for drop splashing and spreading from drop impact on a liquid film, and Oguz and Prosperetti [4] analyzed the bubble entrapment from drop impact on a liquid pool. To analyze the characteristics of gas-liquid interfacial flows, numerical analysis has been pursued alongside experiment. In the numerical simulation of interfacial flow, interface tracking plays an important role, and methodologies for interface tracking have long been developed. Hirt and Nichols [5] introduced the volume of fluid (VOF) method, in which the interface is tracked by the VOF function based on the volume fraction of a particular phase in each cell. The VOF method, widely used in most commercial CFD programs, is


good for mass conservation. However, it may have difficulties in computing sharp interfaces, since the VOF method requires an assumption on how the interface is located inside each computational cell. To overcome these difficulties, Sussman et al. [6] proposed a level set (LS) method based on a smooth distance function. In the LS method, the interface is tracked by the LS function, defined as the distance from the interface. Since the LS function can locate the interface accurately, it has recently been used in the analysis of gas-liquid two-phase flows. In this study, numerical simulation of gas-liquid interfacial flows was performed using the LS method for various applications. To verify the simulation with the LS method, the free surface motion and the drop motion on an inclined wall were simulated, and the binary drop collision and the drop impact on a liquid film or pool were numerically analyzed.

2 Numerical Analysis

2.1 Governing Equations

To analyze the gas-liquid interfacial flow without mixing between the two phases, the mass and momentum equations for incompressible fluids are written as:

∇·u = 0

ρ(∂u/∂t + u·∇u) = −∇p + ∇·μ[∇u + (∇u)ᵀ] + ρg − σκ∇H

where u denotes the velocity vector in Cartesian coordinates, p the pressure, ρ the density, μ the dynamic viscosity, g the gravity force, σ the surface tension coefficient, κ the interface curvature and H the smoothed step function.

2.2 Level Set Method

For a numerical analysis of the interfacial flow, information on the interface between the liquid and gas is needed. In the LS method, the interface separating the two phases is tracked by an LS function φ, defined as the distance from the interface, with negative sign in the gas phase and positive sign in the liquid phase; the interface itself is described by φ = 0. For an incompressible flow, the governing equation for the advection of the LS function is:

∂φ/∂t + ∇·(uφ) = 0
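A minimal discretisation of this advection equation (for incompressible flow, ∇·(uφ) = u·∇φ) might look as follows on a uniform 2-D periodic grid. This first-order upwind sketch is purely illustrative, since the actual computations use an LS formulation for non-orthogonal grids [8]; all names are hypothetical.

```python
import numpy as np

def advect_phi(phi, u, v, dt, dx):
    """One first-order upwind step of d(phi)/dt + u.grad(phi) = 0."""
    bx = (phi - np.roll(phi, 1, 0)) / dx      # backward difference, x
    fx = (np.roll(phi, -1, 0) - phi) / dx     # forward difference, x
    by = (phi - np.roll(phi, 1, 1)) / dx      # backward difference, y
    fy = (np.roll(phi, -1, 1) - phi) / dx     # forward difference, y
    # choose the upwind difference according to the sign of the velocity
    adv = (np.maximum(u, 0.0) * bx + np.minimum(u, 0.0) * fx +
           np.maximum(v, 0.0) * by + np.minimum(v, 0.0) * fy)
    return phi - dt * adv
```

In practice the LS field is also periodically reinitialised so that |∇φ| ≈ 1, the distance-function property that the contact angle treatment of Section 2.3 relies on.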

With the LS function obtained from the advection equation above, the smoothed step function H and the interface curvature κ can be calculated, and the density ρ and viscosity μ in the governing equations are then obtained from the step function in each cell:

H = min{1, max[0, 1/2 + φ/(3h_n) + sin(2πφ/(3h_n))/(2π)]}

κ = ∇·(∇φ/|∇φ|)

ρ = ρ_g + (ρ_l − ρ_g)H,   μ = μ_g + (μ_l − μ_g)H

2.3 Contact Angle Condition

In the LS formulation, a contact angle ϕ is used to evaluate the LS function at the wall. The contact angle varies dynamically between an advancing contact angle ϕ_a and a receding contact angle ϕ_r, and this dynamic contact angle is determined by the speed U_c of the contact line. For a simulation of drop impact on a dry wall, Fukai et al. [7] proposed a contact angle model: when the contact angle lies in the range ϕ_r < ϕ < ϕ_a, the interline does not move, whereas while the contact line moves, the contact angle remains constant at ϕ = ϕ_a for U_c > 0 or ϕ = ϕ_r for U_c < 0. Son and Hur [8] proposed an efficient formulation of Fukai et al.'s contact angle model for implementation into the LS method on non-orthogonal grids. Using |∇φ| = 1 and referring to Fig. 1, the LS function φ_A at the wall can be calculated as

φ_A = φ_B + |d| cos(ϕ + β)

where β = α sign[(d − d_n)·∇φ], d = x_B − x_A and d_n = (d·n_w)n_w, with d, x_A, x_B and n_w denoting vectors.
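The interface-related quantities of Section 2.2 can be evaluated directly from a discrete LS field. The sketch below is a minimal 2-D numpy version using central differences (function names hypothetical), with h_n taken as the grid spacing appearing in the definition of H.

```python
import numpy as np

def step_function(phi, hn):
    """Smoothed step function H(phi), varying over a band of width 3*hn."""
    Hin = (0.5 + phi / (3.0 * hn)
           + np.sin(2.0 * np.pi * phi / (3.0 * hn)) / (2.0 * np.pi))
    return np.minimum(1.0, np.maximum(0.0, Hin))

def properties(phi, hn, rho_l, rho_g, mu_l, mu_g):
    """Density and viscosity blended smoothly across the interface via H."""
    H = step_function(phi, hn)
    return rho_g + (rho_l - rho_g) * H, mu_g + (mu_l - mu_g) * H

def curvature(phi, dx):
    """kappa = div(grad(phi)/|grad(phi)|) by central differences."""
    gx, gy = np.gradient(phi, dx)
    mag = np.sqrt(gx**2 + gy**2) + 1.0e-12    # guard against division by zero
    return (np.gradient(gx / mag, dx, axis=0)
            + np.gradient(gy / mag, dx, axis=1))
```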

Fig. 1. Schematic for implementation of a contact angle condition

3 Gas-Liquid Interfacial Flows

In this study, various gas-liquid interfacial flows were numerically simulated. The gas-liquid flows are influenced by various parameters, which can be


summarized by non-dimensional parameters, defined as follows:

Re = ρ_l U D/μ_l,   We = ρ_l U²D/σ,   Fr = U²/(gD),   Oh = μ_l/√(ρ_l σ D)

where Re denotes the Reynolds number, We the Weber number, Fr the Froude number and Oh the Ohnesorge number; the subscript l pertains to the liquid. The interfacial flows are determined by the effects of these parameters and the initial conditions, and their characteristics were analyzed using the non-dimensional time τ, based on the time scale D₀/U.

3.1 Free Surface Motion

To validate the LS formulation, a numerical analysis was performed for the free surface motion occurring when a heavier liquid is placed under a lighter one [8]. Fig. 2(a) shows the computational grids and the initial conditions. The initial interface between

the liquid and gas is disturbed as y⁰_int = 7 + 2 cos(x/3), and the non-dimensional parameters are ρ_l/ρ_g = 4, μ_l/μ_g = 1, Re = 10 and We = 0.333. The resulting behavior of the free surface is shown in Fig. 2(b). During the early period of the simulation, the interface oscillates due to the force imbalance between gravity and surface tension; as time elapses, the oscillation decays due to viscosity and the interface becomes stationary. The interface shape obtained at the stationary state was compared with the exact solution, y_int = 7, and Fig. 2(b) shows that the numerical solution agrees with the exact solution.

(a) Initial interface

(b) Behavior of the free-surface

Fig. 2. Free-surface motion from initial disturbed surface


3.2 Drop Motion on an Inclined Wall

To validate the LS formulation with the contact angle condition, the drop motion on an inclined wall was also analyzed [8]. The computational grids and initial conditions for the drop motion on the inclined wall are shown in Fig. 3(a). All boundaries of the domain are specified by the no-slip and contact angle conditions. A semicircular drop is initially placed on the left side wall with no gravity force, and the non-dimensional

Fig. 3. Drop motion on the inclined wall I (without gravity): (a) initial condition; (b) steady drop shapes at ϕ = 30° and ϕ = 90°

Fig. 4. Drop motion on the inclined wall II (with gravity): (a) ϕ_adv = ϕ_rec = 90° and Fr = 0.2; (b) ϕ_adv = 90°, ϕ_rec = 30° and Fr = 0.1


parameters are ρ_l/ρ_g = 1000, μ_l/μ_g = 100, Re = 44.7 and We = 1. From this simulation, Fig. 3(b) shows the numerical results for contact angles of ϕ_a = ϕ_r = 30° and ϕ_a = ϕ_r = 90°. The steady-state drop shape is a straight line or a truncated circle satisfying the specified contact angle, and the numerical results at steady state show no difference from the exact solution. The drop motions with gravity are shown in Fig. 4. In these cases, the gravitational force is dominant over the surface tension force holding the drop on the left side wall, and hence the drop slides down the inclined wall; the drop behavior is then determined by the contact angle condition.

3.3 Binary Drop Collision

In this study, the binary drop collision was numerically simulated using the LS method. When a drop collision occurs, the interaction of the drops is influenced by the drop properties, drop velocity, drop size and impact parameter. These can be summarized by the non-dimensional parameters: the Weber number, the Ohnesorge number, the drop-size ratio (Δ = D_s/D_b) and the non-dimensional impact parameter (x = 2X/(D_b + D_s)). Under the effects of these parameters, the collision processes are

generated with complicated phenomena. The drop collision can be classified into four interaction regimes: bouncing, coalescence, reflexive separation and stretching separation. The bouncing regime is observed in hydrocarbon drop collisions, where the gas layer around the drops prevents the coalescence of the two drops; this phenomenon is not observed in the case of water drop coalescence. In this study, simulations of the coalescence, reflexive separation and stretching separation regimes were performed. Fig. 5 shows the schematic of the binary drop collision. The drop collisions were analyzed under two different conditions: 2D axi-symmetric simulations of head-on collisions and 3D simulations of off-center collisions. From these simulations, the behavior of the drops and the formation of satellite drops were obtained. Fig. 6 shows the results of the head-on collision under various conditions. As the Weber number increases, the reflexive energy

Fig. 5. Schematic of binary drop collision


increases. Therefore, reflexive separation occurs more easily at high Weber number, and the size of the satellite drop increases; secondary satellite drops may then be separated from the bigger satellite drop. Fig. 7 shows the results of the off-center collision under various conditions. At low impact parameter, reflexive separation is generated as in Fig. 6. As the impact parameter increases, the stretching energy becomes larger: when the stretching energy is similar to the reflexive energy, coalescence of the two drops occurs, and at high impact parameter stretching separation is generated. From these results, the characteristics of the drop collision were compared with the experimental [1] and theoretical [2] results, as in Fig. 8. The interaction regimes of the drop collision for various Weber numbers and impact parameters are shown in Fig. 8(a), and the numbers of satellite drops for various impact parameters are shown in Fig. 8(b). These numerical results are in good agreement with the previous correlations.

Fig. 6. Drop behavior of head-on collision: (a) We = 23 and Δ = 1.0; (b) We = 40 and Δ = 1.0; (c) We = 56 and Δ = 0.5

Fig. 7. Drop behavior of off-center collision: (a) We = 83, Δ = 1.0 and x = 0.34; (b) We = 83, Δ = 1.0 and x = 0.43


Fig. 8. Comparison of numerical results with experimental and theoretical results: (a) interaction regimes of drop collision; (b) number of satellite drops

3.4 Drop Impact on the Liquid Film

When a drop impacts a surface, the resulting phenomena are determined by the nature of the surface, such as a dry or wetted surface or a liquid pool. In this section, the drop impact on a liquid film was numerically analyzed. The phenomenon of drop impact on a liquid film depends on the drop properties, impact velocity, drop size and liquid film thickness. These parameters can be summarized by two main non-dimensional parameters: the Weber number and the non-dimensional film thickness (δ = h/D₀). Under the effects of these parameters, splashing and crown formation and propagation are generated. In this study, 2D axi-symmetric simulations of the drop splashing and spreading due to a water drop impacting a liquid film were performed at a Weber number of 297. Fig. 9 shows the schematic of the drop impact on a liquid film. A spherical drop with velocity U impacts the liquid film of thickness h. Since the kinetic energy of the impacting drop is reflected by the static liquid film, a crown forms in the contact area between the drop and the liquid film. After the drop impacts the liquid film, splashing and spreading are generated, as shown in Fig. 10. The crown height grows, and secondary drops may be generated by the flow instability; after the crown height reaches its maximum value, it decreases under gravity. Fig. 11(a) shows the evolution of the crown height for various film thicknesses. The computed crown height is in good agreement with the experimental results of Cossali et al. [3]. After the drop impacts the liquid film, the crown spreads outward. Fig. 11(b) shows the evolution of the crown diameter, defined by the outer diameter of

Fig. 9. Schematic of the drop impact on liquid film


Fig. 10. Drop splashing and spreading (We = 297, δ = 0.29)

(a) Crown heights

(b) Crown diameters

Fig. 11. Characteristics of the crown with various film thicknesses (We = 297)

the neck below the crown rim. These numerical results were compared with previous empirical correlations [3, 9]. The correlation of Yarin and Weiss [9] underestimates the crown diameter relative to that of Cossali et al. [3], because Yarin and Weiss [9] do not consider the film thickness. The numerical results of the present study correspond with Cossali et al. [3].

3.5 Bubble Entrapment

In this study, the formation of a bubble due to drop impact was numerically analyzed. When a spherical drop impacts a liquid pool, a bubble can be generated by cavity collapse. This bubble entrapment is influenced by the drop properties, the impact velocity and the gravity force, which can be summarized by two non-dimensional parameters: the Weber number and the Froude number. In the present study, a 2D axi-symmetric simulation of bubble entrapment was performed with a Froude number of 200 and a Weber number of 138. Fig. 12 shows the numerical results. After the drop impacts the liquid pool, the cavity grows downward and outward. The formation of the cavity creates an imbalance between gravity and surface tension and, as time elapses, the interface becomes stationary. The impact velocity and gravity then affect the tendency of the cavity to collapse; in particular, the behavior of the surface at the center of the cavity determines whether a bubble forms. In this case, a bubble was entrapped at the center of the cavity.


Fig. 12. Bubble entrapment (Fr=200, We=138, H=4D)

4 Concluding Remarks

In the present study, a level set method was used to simulate various gas-liquid interfacial flows. By using the LS formulation and a contact angle condition for non-orthogonal grids, it is shown that the LS method can be applied to various complicated sharp-interface flow phenomena. To validate the LS method, the free surface motion from an initially disturbed surface and the drop motion on an inclined wall were simulated; the results with the present LS method were shown to match the exact solutions well. The drop and bubble dynamics were then numerically investigated. From a simulation of the binary drop collision, the behavior of the drops and the formation of satellite drops were successfully predicted. Furthermore, the impact of a drop on a liquid film or pool was simulated: after the drop impact, crown formation is predicted on the liquid film and bubble entrapment in the liquid pool. These numerical results showed good agreement with the theoretical and available experimental data, and hence the LS method for interface tracking can be applied to various flow problems with sharp gas-liquid interfaces.

References

[1] Ashgriz N, Poo J Y (1990) Coalescence and separation in binary collisions of liquid drops. J Fluid Mech 221:183-204
[2] Ko G H, Ryou H S (2005) Modeling of droplet collision-induced breakup process. Int J Multiphas Flow 31:723-738
[3] Cossali G E, Marengo M, Coghe A, Zhdanov S (2004) The role of time in single drop splash on thin film. Exp Fluids 36:888-900
[4] Oguz H N, Prosperetti A (1990) Bubble entrainment by the impact of drops on liquid surfaces. J Fluid Mech 219:143-179
[5] Hirt C W, Nichols B D (1981) Volume of fluid (VOF) method for the dynamics of free boundaries. J Comput Phys 39:201-225


[6] Sussman M, Smereka P, Osher S (1994) A level set approach for computing solutions to incompressible two-phase flow. J Comput Phys 114:146-159
[7] Fukai J, Shiiba Y, Yamamoto T, Miyatake O, Poulikakos D, Megaridis C M, Zhao Z (1995) Wetting effects on the spreading of a liquid droplet colliding with a flat surface. Phys Fluids 7:236-247
[8] Son G, Hur N (2005) A level set formulation for incompressible two-phase flows on non-orthogonal grids. Numer Heat Transfer B 48:303-316
[9] Yarin A L, Weiss D A (1995) Impact of drops on solid surfaces: self-similar capillary waves, and splashing as a new type of kinematic discontinuity. J Fluid Mech 283:141-173

Modelling the Aerodynamics of Coaxial Helicopters – from an Isolated Rotor to a Complete Aircraft

Hyo Won Kim1 and Richard E. Brown2

1 Postgraduate Research Student, Imperial College London, UK (currently at University of Glasgow as a Visiting Researcher), [email protected]
2 Mechan Chair of Engineering, University of Glasgow, UK

Abstract. This paper provides an overview of recent research on the aerodynamics of coaxial rotors at the Rotorcraft Aeromechanics Laboratory of the Glasgow University Rotorcraft Laboratories. The Laboratory’s comprehensive rotorcraft code, known as the Vorticity Transport Model, has been used to study the aerodynamics of various coaxial rotor systems. Modelled coaxial rotor systems have ranged from a relatively simple twin two-bladed teetering configuration to a generic coaxial helicopter with a stiff main rotor system, a tail-mounted propulsor, and a horizontal stabiliser. Various studies have been performed to investigate the ability of the Vorticity Transport Model to reproduce the detailed effect of the rotor wake on the aerodynamics and performance of coaxial systems, and its ability to capture the aerodynamic interactions that arise between the various components of realistic, complex, coaxial helicopter configurations. It is suggested that the use of such a numerical technique not only allows insight into the performance of such rotor systems but might also eventually allow the various aeromechanical problems that often beset new helicopter designs of this type to be circumvented at an early stage in their design.

1 Introduction

The flow field around a helicopter has posed a significant modelling challenge to the computational fluid dynamics (CFD) community due to the dominant and persistent nature of the vortical structures that exist within the wake generated by its rotors. CFD schemes based on the traditional pressure-velocity formulation of the Navier-Stokes equations generally struggle to preserve these vortical structures as the vorticity in the flow is quickly diffused through numerical dissipation. The effect of the artificial viscosity that arises from numerical dissipation can be reduced by increasing the grid resolution, but the computation soon becomes prohibitively expensive. Of course, the problem is exacerbated further when a full helicopter configuration is considered, especially where the interaction between two (or more) geometrically separated components via their wakes acts to modify their aerodynamic loading. The inability to predict the consequences of certain interactional aerodynamics has indeed led to unexpected flight mechanic issues in many prototype helicopters [1-6]. In several cases, such interactions have resulted in significant overrun of development costs.


Modern requirements for high performance call for a new generation of highly innovative rotorcraft that are capable of both heavy-lift and high speed. Several non-conventional helicopter configurations, such as the tilt rotor and compound helicopter, have been put forward as possible solutions to these requirements. One such proposal, Sikorsky Aircraft Corporation's X2 Technology Demonstrator [7], is based on a rigid coaxial rotor platform similar to the Advancing Blade Concept (ABC) rotor of the XH-59A, developed by the same company in the 1970s [8]. The advantage of a coaxial rotor with significant flapwise stiffness is that the effects of stall on the retreating blade can be delayed to higher forward flight speed, as the laterally unbalanced load on one rotor can be compensated for by an equivalent, anti-symmetric loading on the other, contra-rotating rotor. The other limiting factor on the attainable speed, the effect of compressibility at the tip of the advancing blade, can be deferred by using an auxiliary device to augment the propulsive thrust of the main rotor. This allows the main rotor system to be offloaded, thus delaying the effects of compressibility to higher forward speed. In a system of such complexity, it is fair to expect the sub-components to interact aerodynamically, and hence their performance to be quite different when integrated as a complete rotorcraft compared to when analysed in isolation. The aim of the studies surveyed in this paper was to demonstrate that the current state of the art in computational modelling of helicopter aerodynamics has progressed in recent years to the point where the interactive aerodynamic flow field associated with a coaxial rotor system, and hence its performance, can be captured accurately. This survey will demonstrate that high fidelity computational simulations are capable of lending a detailed insight into the interactive aerodynamic environment of a new rotorcraft, even one with as complex a configuration as that of the compounded coaxial helicopter. The hope is that such analyses may soon be integrated early in the development of all rotorcraft, where they might help to avoid some of the costly mistakes that have been committed during the design of this extremely complex type of flying machine in the past.

2 Computational Model

The VTM is a comprehensive code tailored for the aeromechanical analysis of rotorcraft systems. The model was initially developed by Brown [9] and later extended by Brown and Line [10]. Unlike a conventional CFD approach, the governing flow equations are recast into vorticity-velocity form to yield

∂ω/∂t + u·∇ω − ω·∇u = ν∇²ω    (1)

This form of the Navier-Stokes equation allows vorticity to be conserved explicitly. The vorticity transport equation is discretised and solved using a finite volume TVD scheme which is particularly well suited to preserving the compactness of the vortical structures in the rotor wake for long periods of time. In the context of coaxial rotor aerodynamics, this property of the VTM enables the long-range aerodynamic interactions between the twin main rotors and any other geometrically well-separated components of the aircraft to be captured and resolved in detail. The flow is assumed


to be inviscid everywhere except on the solid surfaces immersed in the flow. The generation of vorticity by lifting elements such as the rotor blades or fuselage appendages is then accounted for by treating these components as sources of vorticity, effectively replacing the viscous term in Equation (1) with a source term, S. The aerodynamics of the rotor blades are modelled using an extension of the Weissinger-L version of lifting-line theory in conjunction with a look-up table for the two-dimensional aerodynamic characteristics of the blade sections. The temporal and spatial variation of the bound vorticity, ωb, then yields the source term

S = −dω_b/dt + u_b ∇·ω_b    (2)
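For orientation, the right-hand side of Eq. (1) can be written in discrete form almost verbatim. The sketch below uses simple central differences on a uniform periodic grid purely for illustration; the VTM itself discretises Eq. (1) with a finite volume TVD scheme precisely because naive differencing of this kind diffuses the wake vorticity. All names are hypothetical.

```python
import numpy as np

def vorticity_rhs(omega, u, nu, dx):
    """Central-difference evaluation of the RHS of Eq. (1):
    d(omega)/dt = -u.grad(omega) + omega.grad(u) + nu*lap(omega),
    with omega and u stored as arrays of shape (3, nx, ny, nz)."""
    rhs = np.empty_like(omega)
    for i in range(3):
        gw = np.gradient(omega[i], dx)        # d(omega_i)/dx_j for j = 0..2
        gu = np.gradient(u[i], dx)            # d(u_i)/dx_j for j = 0..2
        advection  = sum(u[j] * gw[j] for j in range(3))
        stretching = sum(omega[j] * gu[j] for j in range(3))
        laplacian  = sum(np.gradient(gw[j], dx, axis=j) for j in range(3))
        rhs[i] = -advection + stretching + nu * laplacian
    return rhs
```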

The aerodynamics of the fuselage is modelled using a vortex panel approach in which the condition of zero through-flow is satisfied at the centroid of each panel. Lift generation by the fuselage is modelled by applying the Kutta condition along pre-specified separation lines on its surface. The viscous wake of the fuselage is not accounted for at present, however. The equations of motion for the blades, as forced by the aerodynamic loading along their span, are derived by numerical differentiation of a pre-specified non-linear Lagrangian for the particular system being modelled. No small-angle approximations are involved in this approach and the coupled flap-lag-feather dynamics of each of the blades are fully represented. The acoustics, where applicable, are computed using the Farassat-1A formulation of the Ffowcs Williams-Hawkings equations. The aerodynamic force contribution from each collocation point along each blade is used to construct a set of point acoustic sources, integration of which over the span of the blades yields the loading noise contribution. The lifting-line approach to the blade aerodynamics assumes an infinitesimally thin blade and hence the thickness contribution to the noise is modelled using a source-sink pair attached to each blade panel. The noise contribution from quadrupole terms as well as that from the fuselage is neglected.

3 Aerodynamics of a Hinged Coaxial Rotor

3.1 Aerodynamic Performance of an Isolated Coaxial Rotor

The reliability of the VTM's representation of coaxial rotor performance has been assessed in Ref. 18 by comparing the predicted power consumption of Harrington's coaxial rotor to that experimentally measured by Harrington [19] in hover and by



Fig. 1. Total power consumption (CP) as a function of thrust (CT) in hover – VTM simulations compared to Harrington’s experiment [18]


Fig. 2. Total power consumption (CP) in steady level flight as a function of forward speed (thrust coefficient CT = 0.0048) – Dingeldein’s experiment compared to VTM simulations with two different drag polars, CD′(α) and CD″(α) [18]

Dingeldein [20] in forward flight (see Figs. 1 and 2). Comparison of the numerical predictions against the experimental data shows that the overall power consumption is particularly sensitive to the model that is used to represent the drag polar for the blade aerofoil sections, and possibly thus to the precise operating conditions of the rotor blade. However, some degree of absolute quantification does appear to be justified when this variability in profile drag, and hence profile power, is removed to reveal the induced component of the power consumption, as in the comparisons presented in the next section of this paper.

3.2 A Rational Approach for Comparing Coaxial and Single Rotors

The relative merit of a twin coaxial rotor over a conventional single rotor in terms of efficiency and performance has long been a point of contention. Comparisons made in the existing literature have often failed to account correctly for the essential differences in the configurations of the two types of rotor and thus on occasion have drawn seemingly conflicting conclusions [21]. Numerical results from the VTM have been used to establish a rational approach to a like-for-like comparison of performance between coaxial and conventional single rotor systems [18]. It should be borne in mind, however, when extrapolating isolated rotor data to full helicopter systems, that the comparisons of performance can be skewed by the additional 5-10% of the main rotor power that is required by the tail rotor of the single rotor platform to trim the yaw moment in the system [22]. This torque compensation is provided inherently within the coaxial system. The numerical results obtained by replicating Harrington's experiment have been used to highlight the potential for misrepresentation of the relative merit of the coaxial rotor when compared to a rotor of more conventional configuration. In the experiment, the performance of one of the constituent rotors of the coaxial system was compared to the performance of the entire coaxial system. Because of its lower solidity,

Fig. 3. Total power consumption (CP) as a function of thrust (CT) in hover – comparison of the coaxial rotor with one of its constituent rotors (‘single’) after normalising by solidity σ [18]

the single rotor is inherently limited in thrust-generating capability by blade stall and hence, even when the numerical results are normalised by solidity, the comparison of the performance of the two systems is misleading (see Fig. 3). It was proposed that the equivalent, conventional single rotor should thus have the same total number of blades as the coaxial system and that the blades of the two systems should be geometrically identical. In Fig. 4 and 5, this definition is shown to yield a fair like-for-like comparison between the two disparate systems. It is seen that the difference in performance of the two types of rotor is actually of the same order as the plausible variability in the profile power of the two systems. The two rotors have the same solidity and thus the lift potential of the two systems is matched. In other words, blade stall cannot obscure the comparison between the two systems. The blades also operate in a comparable aerodynamic environment. The differences in the performance of the two systems are thus induced solely by the fundamental difference in the way that the wake of the two types of rotor interacts with the blades. Definition of the equivalent conventional rotor in this manner thus yields a rational approach to comparing the relative performance of coaxial and single rotor systems.
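For clarity, the solidity normalisation used in Fig. 3 follows the standard definitions (added here for the reader, not reproduced from Ref. 18): for a rotor with $N_b$ rectangular blades of chord $c$ and radius $R$,

\[
\sigma = \frac{N_b\,c}{\pi R}, \qquad
C_T = \frac{T}{\rho A (\Omega R)^2}, \qquad
C_P = \frac{P}{\rho A (\Omega R)^3},
\]

with $A = \pi R^2$ the disc area, so that $C_T/\sigma$ measures the mean lift loading per unit blade area. For the coaxial system the relevant solidity is that of the complete two-rotor system, which is why matching the total blade number and geometry, rather than normalising a single constituent rotor, yields the fair comparison argued for above.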

Fig. 4. Total power consumption (CP) as a function of thrust (CT) in hover – comparison between rotors with identical solidity and blade properties [18]

Fig. 5. Total power consumption (CP) in steady level flight as a function of forward speed (thrust coefficient CT = 0.0048) – comparison between rotors with identical solidity and blade properties [18]


3.3 Comparison of Performance in Steady and Manoeuvring Flight The performance of a coaxial rotor in hover, in steady forward flight, and in a level, coordinated turn has been contrasted with that of an equivalent, conventional rotor defined, as motivated in the previous section, as a single, conventional rotor with the same overall solidity, number of blades and blade aerodynamic properties [18]. Simulations using the VTM have allowed the differences in the performance of the two systems (without undue complication from fuselage and tail rotor effects) to be investigated in terms of the profile, induced and parasite contributions to the overall power consumed by the rotors (see Fig. 6 to 8), and to be traced to the differences in the structure of the wakes of the two systems. In hover, the coaxial system consumes less induced power than the equivalent, conventional system. The wake of the coaxial system in hover is dominated, close to the rotors, by the behaviour of the individual tip vortices from the two rotors as they convect along the surface of two roughly concentric, but distinct, wake tubes. The axial convection rate of the tip vortices, particularly those from the upper rotor, is significantly greater than for the tip vortices of the same rotor operating in isolation. The resultant weakening of the blade-wake interaction yields significantly reduced induced power consumption on the outer parts of the upper rotor that translates into the observed benefit in terms of the overall induced power required by the coaxial system. In steady forward flight, the coaxial rotor again shows a distinct induced power advantage over its equivalent, conventional system at transitional and low advance ratios, but at high advance ratio there is very little difference between the performance of the two systems. At a thrust coefficient, CT, of 0.0048, the maximum forward flight speed of the systems that were simulated was limited to an advance ratio of about 0.28 by stall on the retreating side of the rotors. The rather limited maximum performance of the two systems was most likely related to their low solidity. With the coaxial system, the near-simultaneous stall on the retreating sides of both upper and

Fig. 6. Total power consumption, together with its constituents, in hover as a function of thrust – comparison of the coaxial rotor with the equivalent single rotor [18]

Fig. 7. Total power consumption, together with its constituents, in steady level flight as a function of forward speed (thrust coefficient CT = 0.0048) [18]

Fig. 8. Total power consumption, together with its constituents, in a level, wind-up turn as a function of the load factor of the turn [18]

lower rotors leads to backwards flapping of both discs, although blade strike occurs at the back of the system because the upper rotor stalls more severely than the lower. The structure of the wake generated by the coaxial and conventional systems is superficially similar at all advance ratios, and shows a transition from a tube-like geometry at low advance ratio to a flattened aeroplane-like form as the advance ratio is increased (the transition occurs at an advance ratio of approximately 0.1; see Fig. 9). The formation of the wake of the coaxial rotor at post-transitional advance ratio involves an intricate process whereby the vortices from both upper and lower rotors wind around each other to create a single, merged pair of super-vortices downstream of the system (see Fig. 9). The loading on the lower rotor is strongly influenced by interaction with the wake from the upper rotor, and there is also evidence on both rotors of intra-rotor wake interaction especially at low advance ratio. In comparison, the inflow distribution on the conventional rotor, since the inter-rotor blade-vortex interactions are absent, is very much simpler in structure. Simulations of a wind-up turn at constant advance ratio again show the coaxial rotor to possess a distinct advantage over the conventional system (see Fig. 8) – a reduction in power of about 8% for load factors between 1.0 and 1.7 is observed at an

Fig. 9. Wake structure of coaxial rotor (left) and equivalent single rotor (right) in forward flight at advance ratio μ = 0.12: (a) overall wake geometry; (b) tip vortex geometry (coaxial rotor: upper rotor vortices shaded darker than lower)


advance ratio of 0.12 and a thrust coefficient of 0.0048. As in forward flight, the improved performance of the coaxial rotor results completely from a reduction in the induced power required by the system relative to the conventional rotor. This advantage is offset to a certain degree by the enhanced vibration of the coaxial system during the turn compared to the conventional system. As in steady level flight, the turn performance is limited by stall and, in the coaxial system, by subsequent blade strike, at a load factor of about 1.7 for the low-solidity rotors that were used in this study. The inflow distribution on the rotors is subtly different to that in steady, level flight, and a progressive rearwards shift in the positions of the interactions between the blades and their vortices with increasing load factor appears to be induced principally by the effects of the curvature of the trajectory on the geometry of the wake. The observed differences in induced power required by the coaxial system and the equivalent, conventional rotor originate in subtle differences in the loading distribution on the two systems that are primarily associated with the pattern of blade-vortex interactions on the rotors. The beneficial properties of the coaxial rotor in forward flight and in steady turns appear to be a consequence of the somewhat greater lateral symmetry of its loading compared to the conventional system. This symmetry allows the coaxial configuration to avoid, to a small extent, the drag penalty associated with the high loading on the retreating side of the conventional rotor. It is important, though, to acknowledge the subtlety of the effects that lead to the reduced induced power requirement of the coaxial system, and thus the rather stringent requirements that are imposed on the fidelity of numerical models in order for these effects to be resolved.
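The power decomposition referred to in Figs. 6 to 8 follows the usual breakdown, sketched here in standard momentum-theory notation (illustrative, not quoted from Ref. 18):

\[
C_P = C_{P_i} + C_{P_0} + C_{P_p},
\qquad
C_{P_i} \approx \frac{\kappa\, C_T^{3/2}}{\sqrt{2}} \quad \text{(hover)},
\]

where $C_{P_i}$, $C_{P_0}$ and $C_{P_p}$ are the induced, profile and parasite contributions and $\kappa \geq 1$ is an induced-power factor. The wake-interaction effects discussed above manifest themselves precisely as small changes in $\kappa$, which is why the induced component must be isolated before the coaxial and conventional systems can be compared meaningfully.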

4 Aerodynamics of a Stiffened Hingeless Coaxial Rotor The effects of flapwise stiffness on the performance of a coaxial rotor were studied using a modified form of Harrington’s rotor as described in Ref. 23. The effects of hub stiffness on the natural frequency of blade flapping are introduced into the simulations by modelling the blades of the rotors as being completely rigid, but then applying a spring across each flapping hinge of their articulated hubs. Introduction of flapwise stiffness into the system has a marked effect on the power consumption of the coaxial rotor in forward flight. VTM calculations suggest that the equivalent articulated system has a power requirement that is over twenty percent greater than that of a completely rigid rotor system when trimmed to an equivalent flight condition. Most of this enhanced power requirement can be attributed to a large increase in the induced power that is consumed by the system when the blades of the rotors are allowed to flap freely. Most of the advantage of the rigid configuration is retained if the stiffness of the rotors is reduced to practical levels, but the advantage of the system with finite stiffness over the conventional articulated system deteriorates quite significantly as the forward speed of the system is reduced. In high-speed forward flight, significant further power savings can be achieved, at least in principle, if an auxiliary device is used to alleviate the requirement for the main rotor system to produce a propulsive force component, and, indeed, such an arrangement might be necessary to prevent rotor performance being limited by aerodynamic stall.
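The hinge-spring idealisation maps directly onto a rotating flap natural frequency. For a rigid blade with flapping inertia $I_\beta$ and a spring of stiffness $k_\beta$ across a central flapping hinge, the standard relation (given here for orientation, not taken from Ref. 23) is

\[
\nu_\beta = \sqrt{1 + \frac{k_\beta}{I_\beta\,\Omega^2}} \ \ \text{per rev},
\]

so that $k_\beta = 0$ recovers the articulated rotor ($\nu_\beta = 1$) and increasing spring stiffness moves the system continuously towards the completely rigid limit discussed above.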


Simulations suggest that the unsteady aerodynamic forcing of very stiff rotor systems is relatively insensitive to the actual flapwise stiffness of the system. This implies that simple palliative measures such as structural tailoring may do little to modify the inherent vibrational characteristics of stiff coaxial rotor systems. The principal effect of the aerodynamic forcing of very stiff coaxial systems is to produce an excitation of the system, primarily at the fundamental blade passing frequency, in both pitch and heave, but aerodynamic interference between the rotors may introduce a small component of roll excitation into the vibration of the rotor. The numerical results presented in Ref. 23 lend strong support to the existing contention that the introduction of flapwise stiffness can lead to a coaxial rotor system that possesses clear advantages in performance over the corresponding articulated system, and hence, by comparison against the results of previous studies [18], over the equivalent, conventional, planar rotor system. The results also lend some support, though, to the contention that the advantages of the coaxial rotor configuration in terms of overall performance may need to be offset against fundamentally unavoidable penalties in terms of the vibration, and thus, possibly, the noise that is produced by the lower rotor as a result of its aerodynamic interference with the wake that is produced by the upper rotor. It seems risky to jump to such overarching conclusions in the absence of the further insight that could be gleaned from simulations of rotor systems that are more physically representative of full-scale practice than those tested in Ref. 23, however.

5 Thrust-Compounded Coaxial Helicopter The thrust-compounded coaxial helicopter with stiffened rotors is a particularly plausible contender to meet modern requirements for a new generation of high performance rotorcraft. The VTM has been used to study the aerodynamic interactions between the sub-components of a generic, but representative helicopter with this configuration. These studies [24, 25] are summarised below. 5.1 Helicopter Model The generic helicopter configuration studied in Refs. 24 and 25 comprises a stiffened contra-rotating coaxial rotor system, a tail-mounted propulsor, and a streamlined fuselage featuring a horizontal tailplane at its rear (see Fig. 10).

Fig. 10. Generic thrust-compounded hingeless coaxial helicopter configuration [24, 25]

In the interests of brevity, only a brief description of the configuration is presented here but a more detailed geometric description can be found in Kim et al. [24]. The coaxial system consists of two three-bladed rotors. The stiffness in the rotor system is approximated, somewhat crudely but supported by the results contained in Ref. 23, by assuming the rotor blades and their attachments to the hub to be completely rigid. The propulsor is a five-bladed variable pitch propeller mounted in a pusher configuration to provide auxiliary propulsive thrust,


offloading the main rotor in high speed forward flight. The geometry of the fuselage is entirely fictitious but the compact and streamlined design is chosen to be representative of a realistic modern helicopter with high speed requirements. In line with current design practice to yield sufficient longitudinal stability and control, a rectangular tailplane, just forward of the propulsor, is also incorporated into the design. This configuration was developed specifically to provide a realistic representation of the aerodynamic interactions that might occur in practical semi-rigid, thrust-compounded coaxial helicopter systems. 5.2 Interactional Aerodynamics and Aeroacoustics The aerodynamic environment of the configuration described above is characterised by very strong aerodynamic interactions between its various components (see Fig. 11). The aerodynamic environment of the main rotors of the system is dominated by the direct impingement of the wake from the upper rotor onto the blades of the lower rotor, and, particularly at low forward speed, the thrust and torque produced by the system are highly unsteady. The fluctuations in the loading on the coaxial system occur primarily at the fundamental blade-passage frequency and are particularly strong as a result of the phase relationship between the loading on the upper and lower rotors. A contribution to the loading on the coaxial rotor at twice the blade-passage frequency is a result of the loading fluctuations that are induced on the individual rotors of the system as a result of direct blade overpassage. As shown in Fig. 12, the wake of the main rotor sweeps over the fuselage and tailplane at low forward speed, inducing a significant nose-up pitching moment on the tailplane that must be counteracted by longitudinal cyclic input to the main rotor. This pitch-up characteristic has been encountered during the development of several helicopters and has proved on occasion to be very troublesome to eradicate. Over a broad range of forward flight speeds, the wake from the main rotor is ingested directly into the propulsor, where it induces strong fluctuations in the loading produced by this rotor. These fluctuations occur at both the blade-passage frequency of the main rotor and of the propulsor, and have the potential to excite significant vibration of the

Fig. 11. Visualisation of the wake structure of the thrust-compounded coaxial helicopter using surface contours of constant vorticity at advance ratio μ = 0.15 [25]: (a) bottom view; (b) top view


Fig. 12. Trajectories of the tip vortices of the main rotors and propulsor where they intersect the vertical plane through the fuselage centreline [24]: (a) μ = 0.05; (b) μ = 0.10; (c) μ = 0.15; (d) μ = 0.30

aircraft. VTM calculations suggest that this interaction, together with poor scheduling of the partition of the propulsive thrust between the main rotor and the rear-mounted propulsor with forward speed, can lead to a distinctly non-optimal situation where the propulsor produces significant vibratory excitation of the system but little useful contribution to its propulsion. The propulsor also induces significant vibratory forcing on the tailplane at high forward speed. This forcing is at the fundamental blade-passage frequency of the propulsor, and suggests that the tailplane position in any vehicle where the lifting surface and propulsor are as closely coupled as in the present configuration may have to be considered carefully to avoid shortening its fatigue life. Nevertheless, the unsteady forcing of the tailplane is dominated by interactions with the wake from the main rotor – the fluctuations at the fundamental blade passage frequency that were observed in the pressure distribution on the tailplane are characteristic of the close passage of individual vortices over its surface. Finally, predictions of the acoustic signature of the system presented in Ref. 24 and 25 suggest that the overall noise produced by the system, at least in slow forward flight, is significantly higher than that produced by similar conventional helicopters in the same weight class. The major contribution to the noise produced by the system in the highly objectionable BVI frequency range comes from the lower rotor because of strong aerodynamic interaction with the upper rotor. The propulsor contributes significant noise over a broad frequency spectrum. At the flight condition that was considered, much of this noise is induced by interactions between the blades of the propulsor and the wake of the main rotor and tailplane. The numerical calculations were thus able to reveal many of the aerodynamic interactions that might be expected to arise in a configuration as aerodynamically complex as the generic thrust-augmented coaxial helicopter that formed the basis of the study. It should be acknowledged though that the exact form, and particularly the effect on the loading produced on the system, of these interactions would of


course vary depending on the specifics of the configuration, and that many of the pathological conditions exposed in the study could quite feasibly have been rectified by careful aerodynamic re-design. 5.3 Understanding the Interactions By comparing the aerodynamics of the full configuration of the helicopter to the aerodynamics of various combinations of its sub-components, the influence of the various aerodynamic interactions within the system on its behaviour could be isolated as described in Ref. 25. The traditional approach to the analysis of interactional effects on the performance of a helicopter relies on an initial characterisation of the system in terms of a network of possible interactions between the separate components of its configuration (see Fig. 13). Thus, within the configuration that was studied in Ref. 25, it is possible to identify the effect of the main rotor on the fuselage and propulsor, the distortion of the wake of the main rotor that is caused by the presence of the fuselage and so on. The characteristics of these various interactions and their effects on the performance of the system have been described in detail in Ref. 25.

Fig. 13. Schematic summarising the network of aerodynamic interactions between various components of the simulated configuration [25]

Many of the interactions that were exposed within the aerodynamics of the configuration have exhibited a relatively linear relationship between cause and effect and hence would be amenable to the reductionist approach described above. For instance, the distortion of the wake of the main rotor by the fuselage has a marked effect on the loading generated by the propulsor, but the effect on the propulsor is prevented from feeding back into the performance of the main rotor. This is because of the isolation that is provided by the particular method that was used to trim the vehicle, and also by the inherent directionality of the interaction that results from its physics being dominated by the convection of the wakes of the two systems into the flow behind the vehicle. Several of the interactions that were observed for this helicopter configuration exhibited a less direct relationship between cause and effect, however. These interactions are characterised by strong feedback or closed-loop type behaviour, in certain cases through a path which remains relatively obscure and hidden within the network


of interactions that form the basis of the traditional reductionist type approach. For instance, the load that is induced on the tailplane by the direct impingement of the wake of the main rotor requires, through the requirement for overall trim of the forces and moments on the aircraft, a compensatory change in the loading distribution on the main rotor itself, which then modifies the strength of its wake and hence, in circular fashion, the loading on the tailplane itself. Without this understanding of the strong mutual coupling between the performance of the tailplane and the main rotor, the observed dependence of the acoustic radiation of the aircraft on the presence or not of the tailplane (or, in practical terms, more likely on its design and positioning) may appear to the analyst as a very obscure and possibly even unfathomable interdependence within the system. Thus, although the reductionist, network-based approach to classifying the interactions present within the system is conceptually appealing and simple, it must be realised that the possible presence of feedback loops deep within the interactional aerodynamics, such as the one described above, may cause the approach to miss, obscure or hide the presence of interactions between some of the various subcomponents of the system. The analysis presented in Ref. 25 warns against an overly literal application of this reductive, building-block type approach to the categorisation of the interactions that are present within the system.
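To make the network view of Fig. 13 concrete, the discussion above amounts to asking whether the directed graph of component-to-component influences contains cycles. The sketch below is illustrative only: the edge list is a plausible reading of Fig. 13 together with the tailplane-trim feedback described above, not data from Ref. 25.

```python
# Feedback-loop detection in an interaction network like that of Fig. 13.
# "A -> B" means "A aerodynamically influences B"; the tailplane -> rotor
# edges stand for the trim feedback described in the text.
interactions = {
    "upper rotor": ["lower rotor", "fuselage", "tailplane", "propulsor"],
    "lower rotor": ["fuselage", "tailplane", "propulsor"],
    "fuselage": ["propulsor"],
    "tailplane": ["upper rotor", "lower rotor"],
    "propulsor": ["tailplane"],
}

def find_feedback_loops(graph):
    """Enumerate closed paths (feedback loops) by depth-first search.
    The same loop may be reported several times in this simple sketch."""
    loops, path = [], []

    def dfs(node):
        if node in path:                      # closed a loop
            loops.append(path[path.index(node):] + [node])
            return
        path.append(node)
        for successor in graph.get(node, []):
            dfs(successor)
        path.pop()

    for start in graph:
        dfs(start)
    return loops

for loop in find_feedback_loops(interactions):
    print(" -> ".join(loop))
```

Running this on the hypothetical edge list immediately exposes, for example, the loop upper rotor -> tailplane -> upper rotor, which is exactly the kind of hidden feedback path that the reductionist classification can obscure.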

6 Conclusion The results of a programme of computational study of coaxial rotor aerodynamics conducted using the Vorticity Transport Model at the Rotorcraft Aeromechanics Laboratory of the Glasgow University Rotorcraft Laboratories have been summarised. Analysis of the computational results obtained using the VTM suggests that the differences in performance between a helicopter with a coaxial rotor system and the equivalently defined system with a conventional, single rotor are subtle, and generally result from small differences in the character and strength of the localised interaction between the blades of the rotors and the wakes that they produce. Reliable prediction of these effects is well beyond the scope of simple models and is absolutely dependent on accurate prediction of the detailed structure of the rotor wake. The study lends weight to the assertion though that the state of the art of computational helicopter aerodynamic predictions is advancing to a stage where the use of powerful models such as the VTM may allow useful insight into the likely aeromechanical behaviour of realistic helicopter configurations. Furthermore, it is suggested that there may be no real substitute for detailed simulations of the entire configuration if the effects on the performance of the vehicle of the most deeply hidden interactions within the system are to be exposed. It has been shown that modern numerical techniques are indeed capable of representing the very wide range of aerodynamic interactions that are present within the helicopter system, even one as complex as a compounded coaxial system. This bodes well for the assertion that modern computational techniques may be in a position to help circumvent future repetition of the long history of unforeseen, interaction-induced dynamic problems that have manifested on prototype or production aircraft.


References
[1] Cooper D E (1978) YUH-60A Stability and Control. Journal of the American Helicopter Society 23(3):2-9
[2] Prouty R W, Amer K B (1982) The YAH-64 Empennage and Tail Rotor – A Technical History. American Helicopter Society 38th Annual Forum Proceedings, Anaheim, CA, pp. 247-261
[3] Main B J, Mussi F (1990) EH101 – Development Status Report. Proceedings of the 16th European Rotorcraft Forum, Glasgow, UK, pp. III.2.1.1-12
[4] Cassier A, Weneckers R, Pouradier J (1994) Aerodynamic Development of the Tiger Helicopter. Proceedings of the American Helicopter Society 50th Annual Forum, Washington DC
[5] Eglin P (1997) Aerodynamic Design of the NH90 Helicopter Stabilizer. Proceedings of the 23rd European Rotorcraft Forum, Dresden, Germany, pp. 68.1-10
[6] Frederickson K C, Lamb J R (1993) Experimental Investigation of Main Rotor Wake Induced Empennage Vibratory Airloads for the RAH-66 Comanche Helicopter. Proceedings of the American Helicopter Society 49th Annual Forum, St. Louis, MO, pp. 1029-1039
[7] Bagai A (2008) Aerodynamic Design of the Sikorsky X2 Technology Demonstrator Main Rotor Blade. American Helicopter Society 64th Annual Forum Proceedings, Montréal, Canada
[8] Burgess R K (2004) The ABC Rotor – A Historical Perspective. American Helicopter Society 60th Annual Forum Proceedings, Baltimore, MD
[9] Brown R E (2000) Rotor Wake Modeling for Flight Dynamic Simulation of Helicopters. AIAA Journal 38(1):57-63
[10] Brown R E, Line A J (2005) Efficient High-Resolution Wake Modeling Using the Vorticity Transport Equation. AIAA Journal 43(7):1434-1443
[11] Whitehouse G R, Brown R E (2003) Modeling the Mutual Distortions of Interacting Helicopter and Aircraft Wakes. AIAA Journal of Aircraft 40(3):440-449
[12] Whitehouse G R, Brown R E (2004) Modelling a Helicopter Rotor’s Response to Wake Encounters. Aeronautical Journal 108(1079):15-26
[13] Ahlin G A, Brown R E (2005) Investigating the Physics of Rotor Vortex-Ring State using the Vorticity Transport Model. Paper 89, 31st European Rotorcraft Forum, Florence, Italy
[14] Ahlin G A, Brown R E (2007) The Vortex Dynamics of the Rotor Vortex Ring Phenomenon. American Helicopter Society 63rd Annual Forum Proceedings, Virginia Beach, VA
[15] Brown R E, Whitehouse G R (2004) Modeling Rotor Wakes in Ground Effect. Journal of the American Helicopter Society 49(3):238-249
[16] Phillips C, Brown R E (2008) Eulerian Simulation of the Fluid Dynamics of Helicopter Brownout. American Helicopter Society 64th Annual Forum Proceedings, Montréal
[17] Kelly M E, Duraisamy K, Brown R E (2008) Blade Vortex Interaction and Airload Prediction using the Vorticity Transport Model. American Helicopter Society Specialists’ Conference on Aeromechanics, San Francisco, CA
[18] Kim H W, Brown R E (2006) Coaxial Rotor Performance and Wake Dynamics in Steady and Manoeuvring Flight. American Helicopter Society 62nd Annual Forum Proceedings, Phoenix, AZ
[19] Harrington R D (1951) Full-Scale-Tunnel Investigation of the Static-Thrust Performance of a Coaxial Helicopter Rotor. NACA TN-2318
[20] Dingeldein R C (1954) Wind-Tunnel Studies of the Performance of Multirotor Configurations. NACA TN-3236


[21] Coleman C P (1997) A Survey of Theoretical and Experimental Coaxial Rotor Aerodynamic Research. NASA TP-3675
[22] Leishman J G (2006) Principles of Helicopter Aerodynamics. Second Edition, Cambridge University Press, Cambridge, UK
[23] Kim H W, Brown R E (2007) Impact of Trim Strategy and Rotor Stiffness on Coaxial Rotor Performance. 1st AHS/KSASS International Forum on Rotorcraft Multidisciplinary Technology, Seoul, Korea
[24] Kim H W, Kenyon A R, Duraisamy K, Brown R E (2008) Interactional Aerodynamics and Acoustics of a Propeller-Augmented Compound Coaxial Helicopter. American Helicopter Society Aeromechanics Specialists’ Meeting, San Francisco, CA
[25] Kim H W, Kenyon A R, Duraisamy K, Brown R E (2008) Interactional Aerodynamics and Acoustics of a Hingeless Coaxial Helicopter with an Auxiliary Propeller in Forward Flight. International Powered Lift Conference, London, UK

State-of-the-Art CFD Simulation for Ship Design Bettar el Moctar Germanischer Lloyd, Hamburg, Germany [email protected] Abstract. Nowadays, the work of a classification society associated with design assessment increasingly relies on the support of computer based numerical simulations. Although design assessments based on advanced finite-element analyses have long been part of the services of a classification society, the scope and depth of recently developed and applied simulation methods have expanded so rapidly that we present here a survey of these techniques as well as samples of typical applications. The article focuses on the basics of the techniques and points out progress achieved as well as current improvements. Listed references describe the simulation techniques and the individual applications in more detail.

1 Introduction We observe an increased scope and greater importance of simulations in the design process of ships. Shipyards frequently outsource the associated extensive analysis to specialists. The trend of modern classification society work is also towards simulation-based decisions, both to assess the ship’s design as well as to evaluate its operational aspects. Stability analyses were among the first applications of computers in naval architecture. Today, the naval architect can perform stability analyses in the intact and the damaged conditions. Two other “classical” applications of computer simulations for ships are CFD (computational fluid dynamics) and FEA (finite-element analyses). Both applications have been used for several decades to support ship design, but today’s applications are far more sophisticated than they were 20 years ago. This article reviews different simulation fields as found in the work of Germanischer Lloyd, showing how advanced engineering simulations have moved from research activities to frontier applications.

2 Stern and Bow Flare Slamming, Extreme Motions and Loads Linear approaches, such as standard strip theory methods and panel methods, el Moctar et al. (2006), are appropriate to solve many ship seakeeping problems, and they are frequently applied. These procedures are fast, and thus they allow investigating the effect of many parameters (frequency, wave direction, ship speed, metacentric height, etc.) on ship response. Nonlinear computations, such as simulation procedures based on Reynolds-averaged Navier-Stokes equation (RANSE) solvers, are necessary for the treatment of extreme conditions. However, such simulations are computationally intensive and, consequently, only relatively short time periods can be analyzed. GL developed a numerical procedure based on the combined use of a boundary element method (BEM), a statistical analysis technique using random process theory, and an


extended RANSE solver to obtain accurate responses of ships in a seaway. The approach starts with a linear analysis to identify the most critical parameter combination for a ship’s response and ends up with a RANSE simulation that captures the complex free-surface deformation and predicts the local pressures more accurately. Within the scope of commercial projects, GERMANISCHER LLOYD performed RANSE simulations for a variety of ships to investigate effects of bow flare and stern slamming, water on deck, and wave-impact related slamming loads (Figs. 1-7).
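The BEM-plus-RANSE procedure described above is essentially a screening loop: a fast linear solver sweeps the parameter space, and only the most critical condition is passed to the expensive nonlinear solver. A minimal sketch of that workflow follows; the function names, the toy response formula and the parameter grid are hypothetical placeholders, not GL's actual interfaces.

```python
import itertools
import math

# Hypothetical parameter grid for the fast linear screening stage
frequencies = [0.4, 0.5, 0.6, 0.7]   # wave frequencies [rad/s]
headings = [0, 45, 90, 135, 180]     # wave headings [deg]
speeds = [0, 5, 10]                  # ship speeds [kn]

def linear_response(freq, heading, speed):
    """Toy stand-in for a strip-theory/BEM response operator;
    a real implementation would call the linear seakeeping code."""
    return (math.exp(-(freq - 0.55) ** 2 / 0.01)
            * (1.0 + 0.02 * speed)
            * (1.0 + math.cos(math.radians(heading))) / 2.0)

def ranse_simulation(freq, heading, speed):
    """Placeholder for the expensive nonlinear RANSE run."""
    print(f"running RANSE for freq={freq}, heading={heading}, speed={speed}")

# Stage 1: sweep the whole grid with the cheap linear model
critical = max(itertools.product(frequencies, headings, speeds),
               key=lambda p: linear_response(*p))

# Stage 2: a single high-fidelity simulation of the most critical case
ranse_simulation(*critical)
```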

Fig. 1. Computed Earthrace Trimaran motions in waves

Fig. 2. Wave induced loads in extreme wave conditions

Fig. 3. Computed motions of Catamaran in heavy sea

Fig. 4. Computed pressure distribution (wetdeck slamming)

Fig. 5. Computed influence of wetdeck slamming on bending moment of a Catamaran

Fig. 6. Computed and measured slamming forces acting on a MY bow


Fig. 7. Computed slamming forces acting on a MY stern


Fig. 8. Computed whipping effects on bending moment

3 Whipping Effects Impact-related loads lead to increased vibration related accelerations and, consequently, to higher internal hull girder loads. An accurate assessment of these hydroelastic effects requires an implicit coupling between a RANSE solver and a structural strength code. To this end, GERMANISCHER LLOYD developed a numerical procedure whereby the RANSE solver is coupled either to a Timoshenko beam model or to an appropriate interface between the RANSE solver and the FE code (Fig. 8).
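An implicit partitioned coupling of the kind described above can be sketched as a sub-iteration loop inside each time step: fluid loads are passed to the beam model, the resulting deflection is passed back, and the exchange is repeated until the interface data stop changing. The following is a schematic only; the toy load and compliance functions are hypothetical stand-ins, not GL's implementation.

```python
# Toy stand-ins for the two field solvers: the "fluid" returns a hull
# girder load that depends on the deflection, the "beam" returns the
# deflection caused by that load (a linear compliance). Values are
# hypothetical and serve only to make the sketch executable.
def fluid_loads(deflection):
    return 1.0e6 - 2.0e4 * deflection        # [N]

def beam_deflection(load):
    return load / 5.0e5                      # [m]

def implicit_coupling_step(tol=1e-8, max_iter=50, relax=0.5):
    """One implicit fluid-structure time step solved by fixed-point
    sub-iteration with under-relaxation."""
    w = 0.0                                  # interface deflection guess
    for it in range(1, max_iter + 1):
        w_new = beam_deflection(fluid_loads(w))
        if abs(w_new - w) < tol:             # interface data converged
            return w_new, it
        w += relax * (w_new - w)             # under-relaxed update
    return w, max_iter

w, iters = implicit_coupling_step()
print(f"converged deflection {w:.6f} m after {iters} sub-iterations")
```

The under-relaxation factor stabilises the fixed-point iteration; explicit (one-exchange-per-step) coupling would miss the hydroelastic feedback that whipping analysis is meant to capture.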

Fig. 9. Computed free surface in a cylindrical tank

Fig. 10. Measured free surface in a cylindrical tank

4 Sloshing Sloshing is a strongly nonlinear phenomenon, often featuring spray formation and plunging breakers. Surface-capturing methods can reproduce these features, Sames et al. (2002). el Moctar (2006) validated the RANSE solver Comet for sloshing problems (see


Fig. 11. Sloshing simulation for an LNG tank (a), and computed time history of pressure acting on the LNG tank (b)

Fig. 12. Computed free surface in a prismatic tank

Figs. 9-12). The computed fluid motions also agreed well with videos of the experiments. Extensive experience gathered over the last ten years allows us today to numerically predict with confidence sloshing loads in tanks with arbitrary geometry. A computational procedure to predict sloshing loads was published by el Moctar (2006).

5 Dynamic Stability of Ships To assess the safety of modern ships, it is vital for Germanischer Lloyd to have available numerical tools to investigate the dynamic stability of intact and damaged ships in a seaway. Large amplitude motions may lead to high accelerations. In severe seas, ships may also be subject to phenomena like pure loss of stability, broaching to, and parametric rolling. Linear seakeeping methods are unsuited to predict such phenomena, mainly because they do not account for stability changes caused by passing waves. Furthermore, linear methods are restricted to small amplitude ship motions, and hydrodynamic pressures are only integrated up to the undeformed water surface.


The two simulation tools ROLLSS and GL SIMBEL are available at Germanischer Lloyd to simulate large amplitude ship motions. Depending on the extent of the nonlinearities accounted for, simulation methods tend to be cumbersome to handle and unsuitable for routine application. Therefore, the numerically more efficient method ROLLSS is used to quickly identify regions of large amplitude ship motions, while the fully nonlinear method GL SIMBEL is then employed to yield more accurate motion predictions. To validate these tools and to demonstrate their practical application, extensive simulations were carried out to predict parametrically induced roll motions that were then compared against model test measurements performed at the Hamburg Ship Model Basin, Brunswig et al. (2006), Figs. 13-14.

Fig. 13. Computed cavitation behaviour on rudder (gray areas)

Fig. 14. Computed roll motions in irregular waves

6 Ship Appendages, Cavitation Problems Diagrams to estimate rudder forces were customary in classical rudder design. These diagrams either extrapolate model test results from wind tunnel tests, or they are

Fig. 15. Computed and measured sloshing pressures

Fig. 16. Computed pressure distribution on podded drive


based on potential flow computations. However, the maximum lift is determined by viscous flow phenomena, namely, flow separation (stall). Potential flow models are not capable of predicting stall, and model tests predict stall at too small angles. CFD is by now the most appropriate tool to support practical rudder design. The same approach for propeller and rudder interaction can be applied for podded drives (Fig. 15), el Moctar and Junglewitz (2004). RANSE solvers also allow the treatment of cavitating flows (Fig. 16). The extensive experience gathered in the last five years resulted in a GERMANISCHER LLOYD guideline for rudder design procedures, GL (2005).

7 Room Ventilation HVAC (heating, ventilation, air conditioning) simulations involve the simultaneous solution of fluid mechanics equations and thermodynamic balances, often involving concentrations of different gases. The increasing use of refrigerated containers on ships motivated the application of advanced CFD simulations, Brehm and el Moctar (2004), to some extent replacing simple analytical methods and model tests. Effects such as the difference between pressure and suction ventilation and the influence of natural thermal buoyancy can be reproduced (Fig. 17).

Fig. 17. Computed roll motions

Fig. 18. Computed temperature distribution in a cargo hold

Fig. 19. Computed smoke propagation


8 Aerodynamics of Ship Superstructures and Smoke Propagation Aerodynamic issues are increasingly of interest for ships and offshore platforms. Potential applications include smoke and exhaust tracing, operational conditions for take-off and landing of helicopters, and wind resistance and drift forces. The traditional approach to study aerodynamic flows around ships employs model tests in wind tunnels. These tests are a proven tool supporting design and are relatively fast and cheap. Forces are quite easy to measure, but insight into local flow details can be difficult in some spaces. Computational fluid dynamics (CFD) is increasingly used in related fields to investigate aerodynamic flows, e.g. around buildings or cars. CFD offers some advantages over wind tunnel tests: the complete flow field can be stored, allowing evaluation at any time in the future; there is more control over what to view and what to block out; CFD can capture more flow details; and CFD also allows full-scale simulations. Despite these advantages, CFD has so far rarely been employed for aerodynamic analyses of ships. This is due to a combination of obstacles: the complex geometry of superstructures makes grid generation labor-intensive, and the flows are turbulent and often require unsteady simulations due to large-scale vortex generation. Recent progress in available hardware and grid generation techniques now allows a re-evaluation of CFD for aerodynamic flows around ship superstructures. Hybrid grids with tetrahedral and prism elements near the ship allow partially automatic grid generation for complex domain boundaries. The resulting higher cell count is acceptable for aerodynamic flows because the Reynolds numbers are lower than for hydrodynamic ship flows and thus fewer elements are needed. GERMANISCHER LLOYD performed RANSE simulations for ship superstructures to investigate aerodynamic problems and smoke propagation, el Moctar and Bertram (2002).
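The remark about Reynolds numbers can be checked with a rough estimate (illustrative values, not from the paper). For the air flow over a superstructure, taking $U \approx 20\,\mathrm{m/s}$, $L \approx 30\,\mathrm{m}$ and $\nu_{\mathrm{air}} \approx 1.5\times 10^{-5}\,\mathrm{m^2/s}$,

\[
Re_{\mathrm{air}} = \frac{U L}{\nu_{\mathrm{air}}} \approx \frac{20 \times 30}{1.5\times 10^{-5}} = 4\times 10^{7},
\]

whereas the hydrodynamic hull flow, with $U \approx 10\,\mathrm{m/s}$, $L \approx 200\,\mathrm{m}$ and $\nu_{\mathrm{water}} \approx 1.1\times 10^{-6}\,\mathrm{m^2/s}$, reaches $Re \approx 1.8\times 10^{9}$. The thinner near-wall boundary layers at the higher Reynolds number are what drive up the cell count in hydrodynamic grids, which is why the aerodynamic case tolerates the larger cell counts of hybrid tetrahedral/prism grids.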

9 Fire Simulation The SOLAS regulations allow the consideration of alternative designs and alternative arrangements concerning fire safety. The requirement is to prove (by engineering analysis) that the safety level of the alternative design is equal to that based on prescriptive rules. The main benefit of these regulations is expected for cruise vessels and ferries, as the alternative design approach allows large passenger and car deck spaces beyond what is possible with the prescriptive rules. In principle, ‘engineering analyses’ could also mean fire experiments, but these are too costly and time consuming to support ship design. This leaves computer simulations as a suitable option. At present, zone models and CFD tools are considered for fire simulations in ships. Zone models are suitable for examining more complex, time-dependent scenarios involving multiple compartments and levels, but numerical stability can be a problem for multilevel scenarios, for scenarios with Heating, Ventilation and Air Conditioning (HVAC) systems, and for post-flashover conditions. CFD models can yield detailed information on temperatures, heat fluxes, and species concentrations; however, the time penalty of this approach currently makes CFD unfeasible for long periods of real-time simulation or for large computational domains. After initial validation studies, Bertram et al. (2004) presented more complex applications of fire simulations. While reproducing several typical fire characteristics, fire simulations are not yet mature,


and more progress can be expected in the next decade. For example, results are not grid-independent with the currently employed typical grid resolutions, but finer grids appear out of reach for present computer power and algorithms. Despite such shortcomings, fire simulations appear already suitable as a general support both for fire containment strategies and for design alternatives.

10 Conclusion The technological progress is rapid, both for hardware and software. Simulations for numerous applications now often aid the decision making process, sometimes ‘just’ for qualitative ranking of solutions, sometimes for quantitative ‘optimization’ of advanced engineering solutions. Continuous validation feedback not only improves the simulation tools themselves, but it also builds confidence in their use. However, advanced simulation software alone is not enough. Engineering is more than ever the art of modeling and finding the delicate balance between level of detail and resources (time, man-power). This modeling often requires intelligence and considerable (collective) experience. The true value offered by advanced engineering service providers lies thus not in software or hardware, but in the symbiosis of highly skilled staff and these resources.

References
[1] Bertram V, el Moctar O M, Junalik B, Nusser S (2004) Fire and ventilation simulations for ship compartments. 4th Int. Conf. High-Performance Marine Vehicles (HIPER), Rome, pp. 5-17
[2] Brehm A, el Moctar O (2004) Application of a RANSE method to predict temperature distribution and gas concentration in air ventilated cargo holds. 7th Num. Towing Tank Symp. (NuTTS), Hamburg
[3] Brunswig J, Pereira R, Schellin T (2006) Validation of Numerical Tools to Predict Parametric Rolling. HANSA Journal, September 2006
[4] el Moctar O, Schellin T E, Priebe T (2006) CFD and FE Methods to Predict Wave Loads and Ship Structure Response. 26th Symp. Naval Hydrodyn., Rome
[5] el Moctar O (2006) Assessment of sloshing loads for tankers. Shipping World & Shipbuilder, pp. 28-31
[6] el Moctar O, Junglewitz A (2004) Numerical analysis of the steering capability of a podded drive. Ship Technology Research 51/3, pp. 134-145
[7] el Moctar O, Bertram V (2002) Computation of Viscous Flow around Fast Ship Superstructures. 24th Symposium of Naval Hydrodynamics (ONR), Fukuoka, Japan
[8] Fach K (2006) Advanced simulation in the work of a classification society. 5th Int. Conf. Computer and IT Applications in the Maritime Industries (COMPIT), Oegstgeest
[9] GL (2005) Recommendations for preventive measures to avoid or minimize rudder cavitation. Germanischer Lloyd, Hamburg
[10] Sames P C, Macouly D, Schellin T (2002) Sloshing in Rectangular and Cylindrical Tanks. Journal of Ship Research 46(3)

Investigation of the Effect of Surface Roughness on the Pulsating Flow in Combustion Chambers with LES Balazs Pritz, Franco Magagnato, and Martin Gabi

Institute of Fluid Machinery, University of Karlsruhe, Germany [email protected]

Abstract. Self-excited oscillations often occur in combustion systems due to the combustion instabilities. The high pressure oscillations can lead to higher emissions and structural damage of the chamber. In the last years intensive experimental investigations were performed at the University of Karlsruhe to develop an analytical model for the Helmholtz resonator-type combustion systems [1]. In order to better understand the flow effects in the chamber and to localize the dissipation, Large Eddy Simulations (LES) were carried out. Magagnato et al. [2] describe the investigation of a simplified combustion system where the LES were carried out exclusively with a hydraulic smooth wall. The comparison of the results with experimental data shows the important influence of the surface roughness in the resonator neck on the resonant characteristics of the system. In order to catch this effect with CFD as well, the modeling of surface roughness is needed. In this paper the Discrete Element Method has been implemented into our research code and extended for LES. The simulation of the combustion chamber with roughness agrees well with the experimental results.

1 Introduction For the successful implementation of advanced combustion concepts it is very important to avoid periodic combustion instabilities in combustion chambers of turbines and in industrial combustors [3, 4]. In order to eliminate the undesirable oscillations it is necessary to fully understand the mechanics of feedback of periodic perturbations in the combustion system. The ultimate aim is to evaluate the oscillation disposition of the combustion system already during the design phase. In order to predict the resonance characteristic of Helmholtz resonator-type combustion systems an analytical model has been developed at the Engler-Bunte-Institute, Division for Combustion Technology at the University of Karlsruhe [1]. For the validation of the model a large series of measurements were carried out with variation of the parameters of the geometry (volume of the chamber, length and diameter of the exhaust gas pipe) and of the operation conditions (fluid temperature, mean mass flow rate, amplitude and frequency of the excitation) [1,5,6]. In order to better understand the flow effects in the combustor and to localize the main dissipation Large Eddy Simulations (LES) were carried out. In the first phase the analytical model was developed to describe a combustion chamber (cc) with the exhaust gas pipe (egp) as the resonator neck (single Helmholtz resonator). At the first comparison of the results from the numerical investigation with a set of experimental


data a considerable discrepancy was detected. One possible explanation was that the numerical simulations had been carried out with perfectly smooth wall, which is the standard case for most of the simulations. The LES results were later compared with another set of data, where the agreement was quite good. The only difference between these two data sets consisted in the wall roughness in the exhaust gas pipe. The conclusion is that the surface roughness in the resonator neck plays a more important role in the case of pulsating flows than generally in the case of stationary flows. For the ability to predict the damping correctly for the rough case, a roughness model was needed. The choice fell upon the Discrete Element Method, as explained below, and it was implemented in our research code.

2 Modeling of Roughness It has been well known for many decades that the roughness of a wall has a big impact on the wall stresses generated by a fluid flow over that surface. It is of great importance in many technical applications to take this property into account in the simulations, for example for the accurate prediction of the flow around gas turbine blades. The modeling of wall roughness has traditionally been done using the log-law of the wall [7, 8, 9]. There the wall-law is simply modified to account for the roughness effect. Unfortunately the log-law of the wall is not general enough to be applied for complex flows, especially flows with separations and strong accelerations or decelerations (as the flow in the resonator neck). Another modeling has been proposed and used in the past which is based on a modification of the turbulence model close to the wall [10, 11, 12]. Here for example the non-dimensional wall distance is modified to account for the wall roughness in the algebraic Baldwin-Lomax model or the Spalart–Allmaras one–equation model. These are, however, restricted for Reynolds Averaged Navier-Stokes (RANS) calculations. A more general modeling is the Discrete Element Method first proposed by Taylor et al. [13] which can be used for all types of simulation tools. Since our main focus is on Large Eddy Simulation this approach seems to be most appropriate. The main idea of the method is to model the wall roughness effect by including an additional force term into the Navier-Stokes equations by assuming a force proportional to the height of virtual wall roughness elements. 2.1 The Discrete Element Method The Discrete Element Method models the effect of the roughness by virtually replacing the inhomogeneous roughness of a surface by equally distributed simple geometric elements for example cones. The height and the number of the cones per unit area are the parameters of the model. Since the force of a single cone on the fluid can be approximated by known relations it can be used to simulate the drag of the roughness onto the flow. While Taylor et al. have modeled the influence of the discrete element by assuming blockage effects onto the two-dimensional Navier-Stokes equations,


Miyake et al. [14] have generalized this idea by explicitly accounting for the wall drag force $f_i$ as source terms in the momentum and energy equations:

\[
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_i)}{\partial x_i} = 0 \qquad (1)
\]

\[
\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_j)}{\partial x_j}
= -\frac{\partial p}{\partial x_i}
+ \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\,\delta_{ij}\,\frac{\partial u_k}{\partial x_k}\right)\right] + f_i \qquad (2)
\]

\[
\frac{\partial (\rho E)}{\partial t} + \frac{\partial (\rho u_i E)}{\partial x_i}
= \frac{\partial (q_i - p u_i)}{\partial x_i}
+ \mu\left[\frac{\partial u_i}{\partial x_j}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) - \frac{2}{3}\left(\frac{\partial u_i}{\partial x_i}\right)^2\right] + f_i u_i \qquad (3)
\]

Miyake et al. assume the specific drag force $f_i$ to be proportional to:

\[
f_i = c_D \cdot \frac{1}{2}\,\rho u_i^2 \cdot \frac{A_C}{V} \qquad (4)
\]

Here $A_C$ is the projected surface of the cone in the flow direction, $u_i$ is the velocity component in the appropriate direction, $c_D$ the drag coefficient of the cone and $V$ is the volume of the cell. In the experiments $c_D$ varies between 0.2 and 1.5. We have chosen $c_D = 0.5$. The effect of the roughness onto the wall shear stress must also explicitly be accounted for by:

\[
\tau_{w,i} = \mu\,\frac{\partial u_i}{\partial n} + \frac{f_i \cdot V}{A} \qquad (5)
\]

Here $A$ is the projected surface of the cone onto the wall and $n$ is the normal distance from the wall. The implemented method has been validated and calibrated by Bühler [14] at the flat plate test case using the experimental findings of Aupoix and Spalart [11]. The prediction of the roughness influence on a high pressure turbine blade has been investigated next and compared with experiments of Hummel/Lötzerich [15]. It was found that the method works very satisfactorily when used with the Spalart et al. one-equation turbulence model for RANS.
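A minimal sketch of how the source terms of Eqs. (4) and (5) might enter a finite-volume code cell by cell is given below; the variable names and calling context are hypothetical, and in an actual implementation the force is applied opposite to the local flow direction.

```python
def roughness_source(rho, u, a_c, vol, c_d=0.5):
    """Specific drag force of the virtual roughness cones, Eq. (4).
    rho: density in the wall cell, u: velocity component,
    a_c: projected cone area in the flow direction, vol: cell volume."""
    # Magnitude only; the force opposes the local velocity in practice.
    return c_d * 0.5 * rho * u ** 2 * a_c / vol

def wall_shear(mu, dudn, f_i, vol, a_wall):
    """Wall shear stress augmented by the roughness drag, Eq. (5).
    mu: dynamic viscosity, dudn: wall-normal velocity gradient,
    f_i: drag source from Eq. (4), a_wall: projected cone area on the wall."""
    return mu * dudn + f_i * vol / a_wall
```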

3 Simulated Configuration In order to compute the resonance characteristics of the system influenced by the surface roughness in the exhaust gas pipe the same methodology was chosen as described by Magagnato et al. [2]. The only information from the experiments was that the two different sets of data were achieved from a measurement with an exhaust


gas pipe made of a polished steel tube and of a turned steel tube, respectively. The polished steel tube could be treated as an aerodynamically smooth wall. The roughness height for the turned steel tube could only be roughly estimated at k = 0.01-1.0 mm. Therefore more simulations were carried out with different heights of the virtual cones, each with an excitation frequency of f_ex = 40 Hz (approximately the resonance frequency of the system). The result nearest to the experiment with the turned tube was extended to two other excitation frequencies. In the simulations the mean mass flow rate was ṁ = 61.2 kg/h, the diameter and length of the chamber were d_cc = 0.3 m and l_cc = 0.5 m, respectively, and the diameter and length of the exhaust gas pipe were d_egp = 0.08 m and l_egp = 0.2 m, respectively (see Fig. 1). The rate of pulsation was Pu = 25% and the temperature of the fluid was T = 298 K. 3.1 Numerical Method The simulations were carried out with the in-house developed parallel flow solver called SPARC (Structured PArallel Research Code) [16]. The code is based on the 3D block structured finite volume method and parallelized with the message passing interface (MPI). In the case of the combustor the compressible Navier-Stokes equations are solved. The spatial discretization is a second-order accurate central difference formulation. The temporal integration is carried out with a second-order accurate implicit dual-time stepping scheme. For the inner iterations the 5-stage Runge-Kutta scheme was used. The time step was Δt = 2·10⁻⁵ s. The Smagorinsky-Lilly model was used as the subgrid-scale (SGS) model [17, 18], as this model had also been used for the earlier computations in [2]. The detailed description of the boundary conditions can be found in [2]. Here only a brief listing is given. A pulsating mass flow rate was imposed on the inlet plane.
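For completeness, the Smagorinsky-Lilly subgrid-scale model mentioned above closes the filtered equations with an eddy viscosity of the standard form (textbook notation, not quoted from the paper):

\[
\nu_t = (C_s \Delta)^2\,|\bar S|, \qquad
|\bar S| = \sqrt{2\,\bar S_{ij}\bar S_{ij}}, \qquad
\bar S_{ij} = \frac{1}{2}\left(\frac{\partial \bar u_i}{\partial x_j} + \frac{\partial \bar u_j}{\partial x_i}\right),
\]

with $\Delta$ the filter width (typically the cube root of the cell volume) and $C_s$ usually chosen in the range 0.1-0.2.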

Fig. 1. Sketch of the computational domain and boundary conditions


The outlet was placed in the far field. At the surfaces the no-slip boundary condition and an adiabatic wall are imposed. For the first grid point y+ …

… 30%) and to produce and recover all Tritium required as fuel for D-T reactors (Tritium breeding self-sufficiency). The European Power Plant Conceptual Study (PPCS) [3] has been a study of the conceptual designs of five commercial fusion power plants, with the main emphasis on system integration. The study focused on five power plant models, which are illustrative of a wider spectrum of possibilities. The models are all based on the tokamak concept and they have approximately the same net electrical power output, 1500 MWe. European utilities and industry developed the requirements that a fusion power plant should satisfy to become an attractive source of energy [4]. They concentrated their attention on safety, waste disposal, operation and criteria for an economic assessment. The most important recommendations are summarised as follows:
• There should be no need for an emergency evacuation plan, under any accident driven by in-plant energies or due to the conceivable impact of ex-plant energies.
• No active systems should be required to achieve a safe shut-down state.
• No structure should approach its melting temperature under any accidental conditions.
• “Defence in depth” and, in general, ALARA principles should be applied as widely as possible.
• The fraction of waste which does not qualify for “clearance” or for recycling should be minimised after an intermediate storage of less than 100 years.
• Operation should be steady state with power of about 1 GWe for base load and have a friendly man-machine interface. However, as the economics of fusion power improves substantially with increase in the net electrical output of the plant, the net electrical output of all the PPCS models was chosen around 1.5 GWe.
• Maintenance procedures and reliability should be compatible with an availability of 75-80%. Only a few short unplanned shut-downs should occur in a year.


• Since public acceptance is becoming more important than economics, economic comparison should be made with energy sources with comparable acceptability but including the economic impact of “externalities”.

The fusion power is determined primarily by the thermodynamic efficiency and power amplification of the blankets and by the amount of gross electrical power recirculated, in particular for current drive and coolant pumping. The “improved supercritical Rankine cycle” seems the most promising. A revised configuration of the primary heat transport system leads to closer heat transfer curves between the primary and secondary, maximizing the thermal exchange effectiveness. It results in higher steam temperatures (increase of gross efficiency) and less steam mass flow (increase of net efficiency) compared to the other supercritical cycles. The improvement of the gross efficiency, with respect to the PPCS reference, is about 4 percentage points. As an alternative, independent supercritical CO2 Brayton cycles were considered for the blanket and divertor cooling circuits, in order to benefit from the relatively high operating temperature of the latter. In this case, it is possible to obtain a gross efficiency similar to the one achieved with the supercritical, improved Rankine cycle. In the PPCS models, the favourable inherent features of fusion have been exploited, by appropriate design and material choices, to provide safety and environmental advantages. The following are particularly noteworthy.
• A total loss of active cooling cannot lead to structures melting. This result is achieved without any reliance on active safety systems or operator actions.
• The maximum radiological doses to the public arising from the most severe conceivable accident driven by in-plant energies would be below the level at which evacuation would be considered in many national regulations.
• Material arising from operation and decommissioning will be regarded as non-radioactive or recyclable after one hundred years (recycling of some material could require remote handling procedures, which are still to be validated). An alternative could be a shallow land burial, after a time (approximately 100 years) depending on the nuclides contained in the materials and the local regulations.

The cost of electricity from the five PPCS fusion power plants was calculated by applying the codes developed in the Socio-economics Research in Fusion programme [5]. The calculated cost of electricity for all the models was in the range of estimates for the future costs from other environmentally friendly sources [6]. One important outcome of the conceptual study of a fusion power plant (FPP) is to identify the key issues and the schedule for the resolution of these issues prior to the construction of the first-of-a-kind plant. Europe has elected to follow a “fast track” in the development of fusion power [7], with 2 main devices prior to the first commercial FPP, namely ITER and DEMO. These devices will be accompanied by extensive R&D and by specialised machines and facilities to investigate specific aspects of plasma physics, plasma engineering, materials and fusion technology. The PPCS results for the near-term models suggest that a first commercial fusion power plant will be economically acceptable, with major safety and environmental advantages. These results also point out some of the key issues that should be resolved in DEMO and, more generally, help to identify the physics, engineering and technological challenges of fusion.


Fig. 1. Korean fusion energy development roadmap

In parallel with European fusion energy development, Korea has a very similar schedule for DEMO and a first FPP: DEMO construction is scheduled around 2025, and commercial power plant construction around 2040, as shown in Fig. 1. The market share of future fusion energy was already reported earlier [8]. Fig. 2 summarizes the calculated energy mix in the global electricity supply. Under an environmental constraint that requires energy sources with significantly reduced carbon dioxide emissions, fusion electricity is estimated to take approximately 30% of the global market [9]. In the energy-mix calculation, it is possible to supply the same amount of electricity without fusion energy while limiting carbon dioxide emissions to the same level; the result is then composed of larger fractions of other renewables and of fossil fuels with carbon sequestration. The total cost of electricity of this energy mix without fusion is higher, and this additional expense can be understood as the corresponding benefit of fusion.

Fig. 2. Estimated energy mix under environmental constraints

ITER provides a unique opportunity to test mock-ups in a tokamak environment prior to the construction of DEMO, by means of Test Blanket Modules (TBMs). It is an ITER mission that "ITER should test tritium breeding module concepts that would lead in a future reactor to tritium self-sufficiency, the extraction of high grade heat and electricity production." TBMs have to be representative of a DEMO breeding blanket, capable


of ensuring tritium-breeding self-sufficiency using high-grade coolants for electricity production. The ITER TBM Program is therefore a central element in the plans of all seven ITER Parties for the development of tritium breeding and power extraction technology.

5 Conclusion

Fusion has the potential to be the ultimate clean energy source, and tritium can be recycled in fusion power plants. Fusion is the only known technology capable, in principle, of producing a large fraction of the world's electricity without serious resource or environmental issues. We should burn our oil to develop fusion power plants.

References

[1] www.peakoil.net
[2] www.ITER.org
[3] D Maisonnier et al (2006) Power plant conceptual studies in Europe. 21st IAEA Fusion Energy Conference, Chengdu, China
[4] R Toschi et al (2001) How far is a fusion power reactor from an experimental reactor. Fusion Engineering and Design 56-57:163-172
[5] Socio-economic aspects of fusion power, EUR (01) CCE-FU 10/6.2.2, European Fusion Development Agreement Report, April 2001
[6] D J Ward et al (2005) The economic viability of fusion power. Fusion Engineering and Design 75-79:1221-1227
[7] European Council of Ministers, Conclusions of the fusion fast track experts meeting held on 27 November 2001 on the initiative of Mr. De Donnea, President of the Research Council (commonly called the King Report)
[8] K Tokimatsu et al (2002) Evaluation of economical introduction of nuclear fusion based on a long-term world energy and environment model. 19th IAEA Fusion Energy Conference, Lyon, IAEA-CN-77/SEP/03
[9] P Lako et al (1998) The long-term potential of fusion power in western Europe. ECN-C-98-071

Modelling of a Bubble Absorber in an Ammonia-Salt Absorption Refrigerator

Dong-Seon Kim
Sustainable Energy Systems, arsenal research
Giefinggasse 2, 1210 Vienna, Austria
[email protected]

Abstract. In this study, a one-dimensional model is developed for a vertical bubble tube absorber based on the analogies between momentum, heat and mass transfer in a two-phase co-current upward flow. The model is compared with two-phase flow correlations from the literature, and it is found that a proper choice of pressure-drop and void fraction correlations allows the heat and mass transfer coefficients of a bubble absorber to be predicted with reasonable accuracy.

1 Objective

Ammonia is the second most popular refrigerant used in the field of absorption refrigeration. Although its latent heat is smaller than that of water, ammonia is dominantly used in absorption machines with sub-zero evaporator (heat source) temperatures. Among the many components of an ammonia absorption machine, the absorber is critically important for the energetic and economic performance of the machine. While a falling-film absorber is regarded as inevitable for a water absorption machine because of the need for minimal pressure drop, a bubble-type absorber (a flooded gas-liquid contactor) can be more advantageous for an ammonia machine: an ammonia absorber is relatively insensitive to pressure drop, and therefore the inertial forces of the primary fluids (i.e., solution and vapour) can be exploited to obtain larger heat and mass transfer coefficients. Although two-phase flows have been investigated intensively in the past, heat and mass transfer coefficient data are scanty and largely inconsistent in the flow regime of a bubble absorber. In this study, a one-dimensional bubble absorber model is developed using analogies between momentum, heat and mass transfer in a two-phase co-current upward flow. It is shown that a proper choice of pressure-drop and void fraction correlations may allow the present model to predict the heat and mass transfer coefficients of a bubble absorber with reasonable accuracy.

2 Model Development

2.1 Description of the Problem

Fig. 1 shows a schematic diagram of a bubble absorber. Liquid (ammonia-salt solution in this case) and gas (ammonia vapour) enter the bottom of a vertical tube and flow upward, while cooling water flows downward on the exterior of the tube.


Fig. 1. Control volume of a two-phase co-current upward flow in a vertical tube (cooling water outside; solution and vapour inside the tube)

As a result of the cooling, the vapour is gradually absorbed into the solution, and only liquid comes out at the top of the tube. The main subject in modelling this system is the simultaneous heat and mass transfer process at the vapour-liquid interface, and therefore accurate prediction of the corresponding heat and mass transfer coefficients is very important. In the following, starting from general governing equations, a simple one-dimensional bubble absorber model is developed.

2.2 Governing Equations

Conservation laws for the control element in Fig. 1 give a solution mass balance equation as

$d\Gamma_s / dz = \dot{n}$,  (1)

where Γ is a peripheral mass flux defined as $\Gamma \equiv \dot{m}/(\pi d_h)$, an absorbent mass balance equation as

$d(\Gamma_s x^b) / dz = 0$,  (2)

where x is the mass fraction of the absorbent (salt), a vapour mass balance equation as

$d\Gamma_v / dz = -\dot{n}$,  (3)

where $\dot{n}$ is the mass flux of ammonia being absorbed at the interface, a solution-side energy balance equation as

$d(\Gamma_s h^b) / dz = \dot{n} h^v - \dot{q}_w$,  (4)

a vapour-side energy balance equation as

$d(\Gamma_v h^v) / dz = -\dot{n} h^v - \dot{q}_i^v$,  (5)


an energy balance equation for the cooling water as

$dt / dz = -\dot{q}_w / (\Gamma C_p)_w$,  (6)

and finally the pressure drop (sum of friction, momentum and static losses) across the element as

$-dp/dz = \left[ 8 f_{tp}/(\rho_s d_h^3) + (16/d_h^2)\, dv_{tp}/dz \right] (\Gamma_s + \Gamma_v)^2 + g/v_{tp}$,  (7)

where $v_{tp}$ is a mean specific volume for the two-phase flow and $f_{tp}$ is a two-phase friction coefficient. To ease the solution of the problem, a few equations are simplified as follows using thermodynamic and transport relations. The solution enthalpy term $h^b$ in Eq. (4) is inconvenient because it is often given as a complex function of temperature and concentration. Using the enthalpy relation in [1], Eq. (4) can be written explicitly in terms of the bulk solution temperature as

$d(\Gamma_s C_{p,s} T^b) / dz = \dot{n}\, \Delta h - \dot{q}_w$,  (8)

where Δh is the heat of absorption defined as $\Delta h \equiv h^v - [h^b - (1-x^b)\, \partial h^b/\partial x]$. In the equations above, the wall heat flux $\dot{q}_w$ is defined between the bulk solution and water temperatures by

$\dot{q}_w = U (T^b - t)$,  (9)

where the overall heat transfer coefficient is $1/U \approx 1/\alpha_w + 1/\alpha^b$, and the vapour-to-interface heat flux $\dot{q}_i^v$ is defined by

$\dot{q}_i^v = \alpha_i^v a_{i\text{-}w} (T^v - T^i)$,  (10)

where $\alpha_i^v$ and $a_{i\text{-}w}$ are the vapour-side heat transfer coefficient and the interface-to-wall area ratio (i.e. interface area divided by $\pi d_h$), respectively, and the interface-to-solution heat flux $\dot{q}_i^s$ is defined by

$\dot{q}_i^s = \alpha_i^s a_{i\text{-}w} (T^i - T^b)$,  (11)

and the mass flux $\dot{n}$ by

$\dot{n} = \rho \beta a_{i\text{-}w} (x^b - x^i)$.  (12)

Eqs. (10)~(12) require knowledge of the interface variables, i.e., $T^i$ and $x^i$. These quantities are removed from the equations as follows. Applying the heat and mass transfer analogy (i.e. $\alpha_i^s = \beta \times \rho C_p \mathrm{Le}^m$) in the liquid-side boundary layers near the interface (i.e. Δx and ΔT_s in Fig. 1) and using a first-order Taylor series of an equilibrium equation [e.g. $T^s = f(x, p)$], Eqs. (11) and (12) reduce to

$\dot{n} = \rho \beta a_{i\text{-}w} (c_1 T^b + c_2 x^b + c_3)$,  (13)

where $c_{1\sim 3}$ are defined at a reference concentration $x_o$ as $c_1 = -1/[\partial T^s/\partial x + \Delta h/(C_{p,s} \mathrm{Le}^m)]$, $c_2 = -c_1 (\partial T^s/\partial x)$ and $c_3 = -c_1 [T^s_o - (\partial T^s/\partial x)\, x_o]$.


2.3 Prediction of Heat and Mass Transfer Coefficients

The governing equations in the previous section can be solved when the following values are given:

ε: void fraction
f_tp: two-phase friction coefficient
α_w: water-side heat transfer coefficient at the wall
α^b: solution-side heat transfer coefficient at the wall
α_i^v a_i: vapour-side volumetric heat transfer coefficient at the interface
β a_i: solution-side volumetric mass transfer coefficient at the interface

Among the variables above, ε, $f_{tp}$ and $\alpha_w$ were intensively investigated in the past, and a number of well-developed correlations are available in the literature. For the remaining two-phase transfer coefficients, however, literature correlations are scanty and largely inconsistent. For this reason, a consistent set of simple two-phase heat and mass transfer correlations is developed in the following, assuming analogies between momentum, heat and mass transfer. Assuming that conditions for such analogies exist at the two boundaries of the system, i.e., the heat exchanger wall and the vapour-liquid interface, the heat transfer coefficient α at a boundary can be related to the corresponding shear stress as

$\alpha = C_{p,s} \mathrm{Pr}_s^{-m} \times \tau \Delta u^{-1}$  (14)

and similarly the mass transfer coefficient β is given as

$\beta = \rho^{-1} \mathrm{Sc}_s^{-m} \times \tau \Delta u^{-1}$,  (15)

where Δu is a driving potential for the momentum transfer at the corresponding interface. Note that Eqs. (14) and (15) have already been used to obtain Eq. (13). Then αb is given by

$\alpha^b = C_{p,s} \mathrm{Pr}_s^{-m} \times (-dp/dz)_f \times (u^s_{avg})^{-1}$,  (16)

$\alpha_i^v a_i$ is

$\alpha_i^v a_i = C_{p,s} \mathrm{Pr}_s^{-m} \times \varepsilon\, (-dp/dz)_f \times (u^v_{avg} - u^i)^{-1}$  (17)

and $\beta a_i$ is

$\beta a_i = \rho^{-1} \mathrm{Sc}_s^{-m} \times \varepsilon\, (-dp/dz)_f \times (u^i - u^s_{avg})^{-1}$,  (18)

where $a_i$ is the volumetric interface area density and m = 2/3 according to the Colburn analogy. Eqs. (16)-(18) show clearly that calculation of the transfer coefficients requires knowledge of the frictional pressure loss $(dp/dz)_f$, the void fraction ε and three characteristic velocities, i.e., the average vapour velocity $u^v_{avg}$, the interface velocity $u^i$ and the average liquid velocity $u^s_{avg}$. From the definition of ε, the average solution velocity is given by

$u^s_{avg} = 4\Gamma_s / [\rho_s d_h (1-\varepsilon)]$,  (19)


the average vapour velocity is given by

$u^v_{avg} = 4\Gamma_v / (\rho_v d_h \varepsilon)$  (20)

and, assuming the volumetric interface area density is $a_i = \varepsilon^{1/2}$ (i.e. laminar annular flow), the interface velocity is given by

$u^i = \dfrac{u^v_{avg} (\mu_v/\mu_s)(1-\varepsilon)/\varepsilon + u^s_{avg}}{1 + (\mu_v/\mu_s)(1-\varepsilon)/\varepsilon}$.  (21)
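To make the closure concrete, the sketch below evaluates the characteristic velocities of Eqs. (19)-(21) and then the analogy-based coefficients of Eqs. (16)-(18), given a frictional pressure gradient and a void fraction from any chosen correlation. The property values and example inputs are placeholders for illustration, not the conditions used in the paper:

```python
# Evaluate Eqs. (16)-(21): characteristic velocities, then the analogy-based
# transfer coefficients.  Property values below are illustrative placeholders.

def transfer_coefficients(gamma_s, gamma_v, dpdz_f, eps,
                          rho_s=1000.0, rho_v=3.5,      # densities, kg/m3
                          mu_s=1.0e-3, mu_v=1.0e-5,     # viscosities, Pa s
                          cp_s=3000.0,                  # solution heat capacity, J/kg K
                          Pr_s=7.0, Sc_s=500.0, m=2.0/3.0,  # m = 2/3 (Colburn)
                          d_h=0.015):                   # tube diameter, m
    """gamma_s, gamma_v: peripheral mass fluxes [kg/m s]; dpdz_f: frictional
    pressure gradient [Pa/m]; eps: void fraction from a chosen correlation."""
    u_s = 4.0 * gamma_s / (rho_s * d_h * (1.0 - eps))            # Eq. (19)
    u_v = 4.0 * gamma_v / (rho_v * d_h * eps)                    # Eq. (20)
    r = (mu_v / mu_s) * (1.0 - eps) / eps
    u_i = (u_v * r + u_s) / (1.0 + r)                            # Eq. (21)
    alpha_b    = cp_s * Pr_s**(-m) * dpdz_f / u_s                # Eq. (16)
    alpha_v_ai = cp_s * Pr_s**(-m) * eps * dpdz_f / (u_v - u_i)  # Eq. (17)
    beta_ai = Sc_s**(-m) / rho_s * eps * dpdz_f / (u_i - u_s)    # Eq. (18)
    return alpha_b, alpha_v_ai, beta_ai

# Example call with plausible magnitudes (quality ~ 0.1):
print(transfer_coefficients(gamma_s=0.1, gamma_v=0.01, dpdz_f=500.0, eps=0.7))
```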

The transfer coefficient values in Eqs. (16)-(18) will obviously depend on the choice of the $(dp/dz)_f$ and ε models. The operating range of a bubble absorber in an absorption machine is typically below 0.2 in vapour quality. Unfortunately, however, this is the region where void fraction models from the literature disagree, as shown in Fig. 2.

Fig. 2. Void fraction models: Smith [4], Chisholm [2], Zivi [3] (ρ_v = 3.5, ρ_s = 1000 kg/m³)

Fig. 3. Pressure drop models compared against Lockhart-Martinelli [9]: Cicchitti [8], McAdams [10], Dukler [11], Hibiki [6], Friedel [5], Lee [7] (quality = 0.1, x^b = 0.55, d_h = 0.015 m, Γ_s = 0.03~0.4 kg/m·s; ±20% bounds shown)

All three models in Fig. 2 are in good agreement for vapour qualities above 0.1. However, the disagreement is substantial in the low quality range that is of interest in this study. Among the models, Chisholm [2] predicts the largest void fractions and Zivi [3] the smallest, with Smith [4] in between. On the other hand, a few pressure drop correlations from the literature are shown in Fig. 3. One separated flow model (Hibiki [6]) and one homogeneous model (McAdams [10]) agree with Lockhart-Martinelli [9] within a 20% error band under the given conditions.

3 Results

As shown in the previous section, the $(dp/dz)_f$ and ε models from the literature are not in good agreement. Although disputable, the $(dp/dz)_f$ model of Lockhart-Martinelli [9] and the ε model of Smith [4] are used in the following without further justification.
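For readers wishing to reproduce the marching solution, the outline below integrates the simplified balances, Eqs. (1), (3), (6) and (8), closed with Eq. (13), along the tube. It is a structural sketch only: the stubbed closure values, the equilibrium coefficients and the inlet state are invented placeholders, and the correlations of [4] and [9] together with Eqs. (16)-(18) must be substituted for them:

```python
# Structural sketch of the 1-D absorber solution: Eqs. (1), (3), (6), (8), (13).
# All numbers below are placeholders; substitute Smith [4] for eps,
# Lockhart-Martinelli [9] for (dp/dz)_f and Eqs. (16)-(18) for the coefficients.
from scipy.integrate import solve_ivp

CP_S = 3000.0          # solution heat capacity, J/kg K (placeholder)
GCP_W = 500.0          # (Gamma*Cp)_w of the cooling water, W/m K (placeholder)
DH_ABS = 2.0e5         # heat of absorption, J/kg (placeholder)
C1, C2, C3 = -0.01, 1.0, -0.1   # Eq. (13) coefficients (placeholders)

def closures(gamma_s, gamma_v, Tb):
    """Return (rho*beta*a_iw, U) from the chosen correlations; stubbed here."""
    return 0.05, 100.0

def rhs(z, y):
    gamma_s, gamma_v, Tb, t = y
    rb_ai, U = closures(gamma_s, gamma_v, Tb)
    x_b = 0.55 * 0.1 / gamma_s                  # Eq. (2): Gamma_s * x_b = const
    n_dot = max(rb_ai * (C1 * Tb + C2 * x_b + C3), 0.0)   # Eq. (13)
    q_w = U * (Tb - t)                          # Eq. (9)
    dTb = (n_dot * DH_ABS - q_w) / (gamma_s * CP_S)  # Eq. (8), chain-rule term dropped
    return [n_dot,                              # Eq. (1)
            -n_dot,                             # Eq. (3)
            dTb,
            -q_w / GCP_W]                       # Eq. (6)

# Inlet state [Gamma_s, Gamma_v, T_bulk, t_water] at z = 0 (placeholders):
sol = solve_ivp(rhs, (0.0, 1.0), [0.1, 0.01, 40.0, 30.0], max_step=0.01)
print(sol.y[:, -1])   # outlet state after a 1 m tube
```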

Fig. 4. Comparison of Eqs. (16) and (18) with empirical correlations from the literature (x^b = 0.55, d_h = 0.015 m, m_s = 0.002~0.025 kg/s; ±30% bounds shown): (a) heat transfer coefficients, Eq. (16) versus Shah [12], Akers [13] and Groothuis [14]; (b) mass transfer coefficients, Eq. (18) versus Keiser [15], Banerjee [16], Kasturi [17] and Ferreira [18]

Fig. 4 compares the heat and mass transfer coefficients from Eqs. (16) and (18) with some empirical correlations from the literature. The coefficients in Fig. 4 are average values for a 1 m-long, 15 mm-I.D. vertical tube, with ms varied over 2-25 g/s of 55% LiNO3 solution and inlet quality varied over 0.4-12%. They were averaged over the void fraction profiles obtained from numerical solution of the equations in Section 2.2, using the corresponding empirical correlation. Transport properties were assumed constant to avoid the influence of varying properties. In Fig. 4a, the present model is in particularly good agreement with Shah [12]; disagreement with [13, 14] is significant. The present model and [12] agree better with the experimental data of Infante Ferreira [18], where α^b = 0.52 kW/m²K is reported under similar conditions. For the mass transfer in Fig. 4b, the present model is found to be in good agreement with Infante Ferreira [18], Keiser [15] and Banerjee [16]; Kasturi [17] predicts the smallest values over the entire range. Note that the interface area density of Hibiki and Ishii [19] has been used with [16, 17] to calculate volumetric coefficients. Recall that the results in Fig. 4 are based on the particular $(dp/dz)_f$ and ε models adopted here without justification, and should therefore be considered only as reference comparisons. Nevertheless, they indicate that the present model is able to make realistic predictions and may be further improved by elaborating some of the details described in the previous sections.

4 Conclusions

A simple one-dimensional bubble absorber model has been developed based on the analogies between momentum, heat and mass transfer in a two-phase co-current upward flow. It is shown that the model, with a proper choice of pressure drop and void fraction correlations, can provide realistic heat and mass transfer coefficients for a bubble tube absorber. The model may be further improved by elaborating some of its details.


References

[1] Haltenberger W Jr (1939) Enthalpy-concentration charts from vapour pressure data. Ind Eng Chem 31:783-786
[2] Chisholm D (1972) An equation for velocity ratio in two-phase flow. NEL Report 535
[3] Zivi SM (1964) Estimation of steady-state steam void-fraction by means of the principle of minimum entropy production. ASME J Heat Transfer 86:247-252
[4] Smith SL (1969) Void fractions in two-phase flow: a correlation based on an equal velocity head model. Proc Instn Mech Engrs 184:647-664
[5] Friedel L (1979) Improved friction pressure drop correlations for horizontal and vertical two-phase pipe flow. Paper E2, European Two-Phase Group Meeting, Ispra, Italy
[6] Hibiki T, Hazuku T, Takamasa T, Ishii M (2007) Some characteristics of developing bubbly flow in a vertical mini pipe. Int J Heat Fluid Flow 28:1034-1048
[7] Lee K, Mudawar I (2005) Two-phase flow in high-heat-flux micro-channel heat sink for refrigeration cooling applications: Part I. Int J Heat Mass Transfer 48:928-940
[8] Cicchitti A, Lombardi C, Silvestri M, Soldaini G, Zavattarelli R (1960) Two-phase cooling experiments: pressure drop, heat transfer and burnout measurements. Energia Nucleare 7:407-425
[9] Lockhart RW, Martinelli RC (1949) Proposed correlation of data for isothermal two-phase two-component flow in pipes. Chem Eng Prog 45:39-48
[10] McAdams WH (1954) Heat transmission, 3rd ed. McGraw-Hill, New York
[11] Dukler AE, Wicks M III, Cleveland RG (1964) Frictional pressure drop in two-phase flow: A. A comparison of existing correlations for pressure loss and holdup. AIChE Journal 10:38-43
[12] Shah MM (1977) A general correlation for heat transfer during subcooled boiling in pipes and annuli. ASHRAE Trans 83:205-215
[13] Akers WW, Deans HA, Crosser OK (1959) Condensation heat transfer within horizontal tubes. Chem Eng Prog Symp Ser 55:171-176
[14] Groothuis H, Hendal WP (1959) Heat transfer in two-phase flow. Chem Eng Sci 11:212-220
[15] Keiser C (1982) Absorption refrigeration machines. Dissertation, Delft University of Technology
[16] Banerjee A, Scott DS, Rhodes E (1970) Studies on cocurrent gas-liquid flow in helically coiled tubes. Part II. Canadian J Chem Eng 48:542-551
[17] Kasturi G, Stepanek JB (1974) Two-phase flow - IV. Gas and liquid side mass transfer coefficients. Chem Eng Sci 29:1849-1856
[18] Infante Ferreira CA (1985) Vertical tubular absorbers for ammonia-salt absorption refrigeration. Dissertation, Delft University of Technology
[19] Hibiki T, Ishii M (2001) Interfacial area concentration in steady fully-developed bubbly flow. Int J Heat Mass Transfer 44:3443-3461

Literature Review of Technologies and Energy Feedback Measures Impacting on the Reduction of Building Energy Consumption

Eun-Ju Lee, Min-Ho Pae, Dong-Ho Kim¹, Jae-Min Kim², and Jong-Yeob Kim³

¹ Integrated Simulation Unit, DASS Consultants Ltd., Seoul 143-834, Korea
² Energy System Research Unit, University of Strathclyde, UK
³ Building Environment & Energy Research Unit, Korea National Housing Corporation, 463-704, Korea
[email protected]

Abstract. In order to reduce energy consumption in buildings, there are a number of available technologies and measures that can be adopted. Energy feedback measures enable energy endusers (e.g. households) to recognize the need for energy reduction and change their behaviour accordingly. The effects of energy feedback measures have been reported on in most North American and European industrialized countries, though little research has been conducted in Korea. This paper presents case studies of energy feedback measures and their effectiveness on the basis of a literature review of academic papers, technical reports and website sources. Energy feedback measures can be as effective (10-20% reduction rate) as innovative energy systems which require substantial capital investment. In this paper, the design strategy of universal human interfaces is also discussed in support of energy feedback measures. Keywords: Energy Feedback, Information System, Building Energy Consumption, Literature review.

1 Introduction

Various technologies and measures are adopted for energy saving in buildings. Among them, energy feedback measures enable energy end-users (e.g. households) to recognize the need for energy reduction and to change their energy-use behaviour; an energy reduction strategy based on feedback thus aims at energy savings through an indirect route. The effects of energy feedback have been reported since the mid-1970s [1]. In this study, the measures and effects of energy feedback are reviewed on the basis of publications available in academic journals and on the World Wide Web. This paper also proposes a design strategy for universal human interfaces, based on the literature review, in support of energy feedback measures.

2 Literature Review

2.1 Technologies and Measures for Building Energy Saving in Korea

In a previous study [2], academic papers associated with energy saving technologies and measures in buildings were retrieved with the keywords 'energy', 'saving (or


reduction)' and 'buildings' from the publications of Korean professional societies, including the Society of Air-conditioning and Refrigerating Engineers of Korea (SAREK) and the Architectural Institute of Korea (AIK). As shown in Table 1, 318 papers published from 1970 to 2008 were selected. According to the results of the search, building materials, refrigeration, air-conditioning systems and building automation systems are popular subjects. While most studies are technology-oriented, there are few studies on end-user policy measures. The studies associated with end-user policy are mostly introductions to the concept and methods of energy feedback (Moon (2006) [3], Lee (2004) [4]). Bae and Chun (2008) [5] investigated how residents' environmental awareness and behavioural change affect indoor environmental conditions. Their study reports that providing simple information, such as temperature and humidity, made occupants react more to their environmental conditions and control indoor air quality actively. They stressed that it is important to provide residents with education and information regularly in order to sustain the positive effect of education, since it tends to decrease over time. Although that study does not deal with an energy reduction effect directly, it achieves an information feedback effect through real monitoring and can serve as a reference for future energy feedback studies. The study of Kim (2007) [6] was carried out for the purpose of developing a web-based program for energy-use information, and an information system was constructed to acquire energy expenditure monitoring data. A follow-up study is planned on the saving effect as a function of energy usage statistics and the degree of user participation, surveying users' attitudes while managing the energy information website [7].

Table 1. Cases on building energy reduction (number of papers)

Energy reduction factor | 1980's | 1990's | 2000's
Insulation | 2 | 2 | 25
Building materials | - | - | 39
Refrigeration | 1 | 2 | 35
Outdoor air cooling system | - | - | 2
Lighting system | 2 | 3 | 10
Building automatic control system | - | 2 | 21
Solar energy | 3 | 1 | 1
Geothermal system | - | 1 | 1
Cogeneration system | - | - | 2
Air conditioning system | - | 1 | 37
Shading devices | - | 1 | 13
Windows and doors system | - | 1 | 12
Double façade system | - | 3 | 14
Ecological architecture | - | - | 22
Environmental cost | - | - | 6
Greenhouse system | 1 | 5 | 4
Construction management | - | - | 29
User policy | - | - | 4
Total | 9 | 22 | 297

2.2 Review of Studies on Energy Feedback Overseas

Studies on reducing energy through end-user policy are rare in Korea, but such research has been carried out in North America and Europe for a long time. The effectiveness


of energy reduction through user policy can be grasped from the literature review of Darby (2006), which summarizes the results. Darby classified the measures that provide energy information to energy users into direct feedback and indirect feedback as effective means of reducing building energy use; the reported effects are summarized in Table 2. For direct feedback, most of the reported cases show reductions of 5 to 14 percent, while the majority of the indirect feedback studies lie between 0~4 percent and 10~14 percent. In addition, three cases of direct feedback with savings of over 20 percent were reported, showing how much can be expected from the voluntary energy-saving behaviour of users.

Table 2. Effect of energy feedback: number of studies per band of reported savings (0~4%, 5~9%, 10~14%, 15~19%, 20%+ and unknown) for direct feedback studies (1987~2000) and indirect feedback studies (1975~2000)

2.2.1 Direct Feedback

Direct feedback, as defined by Darby, is feedback that the user can obtain immediately by looking at an energy meter at any time (a display installed where the energy is used, a personal computer, or the Web) [8]. In one case of real-time energy feedback at Ontario Hydro in Canada, each of 25 households reduced its total electricity consumption by 13 percent [9]. In Japan, an average of 12 percent of total energy use was saved, which suggests that a bigger effect can be expected when feedback is applied in combination with other measures [10]. Research on such combined feedback was done by Harrigan and Gregory in the United States in 1994 [11]. When weatherization support was applied to improve the energy efficiency of poorly insulated American buildings, about 14 percent of heating gas use was saved; when an energy conservation education programme was implemented in addition, to raise awareness of energy saving policies, heating gas consumption fell by 26 percent. When the education programme, weatherization and energy feedback were all applied, savings of more than 26 percent could be achieved. In recent research performed in Canada in 2006 [12], users carried a portable monitor providing real-time information on power use and CO2 emissions on an hourly, periodic and cumulative basis; by separating the thermal load from the other energy loads, a maximum saving of 16.7 percent was shown. In Europe there is research on web-based user monitoring systems exploiting the Internet infrastructure, but progress reports have been delayed because there are still not enough study sites. Upland Technologies of the U.S. provides the portable and fixed monitoring devices of Fig. 1 and a web interface for energy usage statistics. The Wattson, made by DIY Kyoto, has one of the most effective visual interfaces for energy feedback [13].


Fig. 1. Energy viewer (Upland Technologies)

Fig. 2. Wattson (DIY Kyoto Ltd)

Fig. 2 shows the device: electricity use and cost are indicated by colour and brightness. Seasonal electricity-use data can be transferred to a PC through a USB connector, but the device cannot be applied to other energy sources.

2.2.2 Indirect Feedback

In 1979, a study reported that supplying energy bills to end-users 6 days a week resulted in reductions of energy use of up to 18% [14]. A Norwegian study [15] showed that delivering energy bills several times during the year gave rise to a change in the pattern of end-users' energy consumption. The energy bill used in the study contained a comparison analysis report between the current season and the same season of the previous year; the effect of the energy feedback was about a 12% reduction. The study had a major impact on establishing regulations under which accurate energy bills must be delivered to end-users every 3 months. A study was conducted in the UK in 1999 [16] to see the effect of varying the content of energy feedback. In the study, 120 people were divided into 6 groups and provided with the following information:

Group 1: comparison of energy use against other houses
Group 2: comparison between current and last season's energy use
Group 3: comparison of the cost of energy use
Group 4: an educational programme on the environmental crisis
Group 5: information on energy saving technologies
Group 6: a software program to check energy data on demand

The study lasted for 9 months and showed that the most effective group in terms of energy savings was Group 6 (the software program users), followed by Group 1 and then Group 2. The study also concluded that, in addition to the general content, the visual design of the energy report was important for increasing the energy feedback effect.

3 User Interface Design for Energy Feedback

According to the literature review, direct and indirect feedback measures achieve an average energy saving rate of about 10-15%. The effects of energy feedback are, however, dependent on the type of energy information provided and on the design of the


user interface and energy report. When energy feedback measures are combined with appropriate education, the energy saving effects increase. When considering the development of an information and communication system in support of energy feedback, appropriate hardware and software systems are required to provide informative content. In addition to the content, user interfaces should be carefully designed to increase human interactivity with energy systems. We propose a design guideline for human interfaces for effective energy feedback based on the findings of the literature review. Fig. 3 illustrates the schematic design guideline, which incorporates 4 principles, namely that the interface be informative, understandable, usable and multi-functional. Recent information and communication technologies allow energy end-users to have ubiquitous energy management systems with which energy use can be monitored at the appliance level. The authors have developed a prototype energy monitoring viewer. Fig. 4 shows an example of an energy monitoring viewer displaying energy and environment factors in real time and giving warning messages to users when unnecessary appliance energy use is detected. The human interface also provides emotional image icons according to the status of the appliance energy-use pattern. Fig. 5 is an example of an energy monitoring interface implemented in a website. The status of individual appliances is shown with metered consumption data, and graphic icons represent the home appliances currently in use. The web interface is designed to make an emotional impression on energy end-users, as well as to give quantitative information according to their energy-use patterns. An example of energy bills with informative reports (e.g. comparison analysis) is shown in Fig. 6.
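As an illustration of the kind of rule such a viewer might apply, the sketch below flags unnecessary appliance use when the metered power stays above a standby threshold while the dwelling is marked unoccupied. The appliance names, thresholds and occupancy flag are hypothetical; the detection logic of the prototype is not specified in this paper:

```python
# Hypothetical warning rule for an energy monitoring viewer: flag appliances
# drawing more than their standby power while nobody is at home.
# Appliance names, thresholds and the occupancy flag are illustrative only.

STANDBY_W = {"TV": 15.0, "air_conditioner": 30.0, "boiler": 40.0}

def warnings(metered_w, occupied):
    """metered_w maps appliance name -> current power draw in watts."""
    if occupied:
        return []
    return [f"{name}: {power:.0f} W while home is empty"
            for name, power in metered_w.items()
            if power > STANDBY_W.get(name, 0.0)]

print(warnings({"TV": 120.0, "air_conditioner": 5.0}, occupied=False))
# -> ['TV: 120 W while home is empty']
```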

Fig. 3. Design guideline of user interfaces for energy feedback

Fig. 4. An example of an energy monitoring viewer

Fig. 5. An example of an energy monitoring website

Fig. 6. An example of energy bills


4 Conclusion

Energy costs are rising and global warming is being driven by increasing CO2 emissions, so energy saving technologies and systems must be developed. According to the literature, efficient energy feedback yields a saving rate of about 10~15%, but the rate varies with the content and visual type of the feedback provided. This study therefore considered display types and systems and proposed a design guideline intended to inspire users' own desire to save energy. To realize energy feedback, first, monitoring technology for indoor conditions and energy consumption must be in place; second, the feedback type must be defined with respect to the patterns of energy consumption and the specifically Korean situation; in addition, the content needs substantial further development. Finally, the various aspects must be connected in order to develop energy feedback technology through a variety of approaches.

References

[1] Hayes SC, Cone JD (1977) Reducing residential electrical energy use: payments, information, and feedback. Journal of Applied Behavior Analysis 3:425-435
[2] Pae MH (2008) Literature review of technologies and energy feedback measures impacting on the reduction of building energy consumption. KIAEBS 08, 04:125-130
[3] Moon HJ (2006) Research trend about building energy savings and house performance analysis in U.S.A. Housing & Urban 91:120-131
[4] Lee CH (2004) Power-saving program, which can follow much participation of consumers, should be designed. Energy Management 338:33-35
[5] Bae NR (2008) Changes of residents' indoor environment control behavior as a result of provided education and environmental information. Architectural Institute of Korea 232:285-293
[6] Kim JY (2007) Development of a web-based building energy information system. 2007 Housing and Urban Research Institute Proceeding 183-200
[7] http://www.energysave.or.kr Accessed 3 July 2008
[8] Darby S (2006) Making it obvious: designing feedback into energy consumption. Environmental Change Institute, University of Oxford
[9] Dobson JK, Griffin JDA (1992) Conservation effect of immediate electricity cost feedback on residential consumption behavior. ACEEE 1992 Summer Study on Energy Efficiency in Buildings, American Council for an Energy-Efficient Economy, Washington D.C.
[10] Ueno T, Inada R, Saeki O, Tsuji K (2005) Effectiveness of displaying energy consumption data in residential houses. ECEEE 2005 Summer Study on Energy Efficiency in Buildings (6), European Council for an Energy Efficient Economy, Brussels:1289
[11] Harrigan MS, Gregory JM (1994) Do savings from energy education persist? Alliance to Save Energy, Washington DC
[12] Mountain D (2006) The impact of real-time feedback on residential electricity consumption: the Hydro One pilot. Mountain Economic Consulting and Associates Inc., Ontario
[13] http://www.DIYkyoto.com Accessed 3 July 2008
[14] Bittle, Valesano, Thaler (1979) The effects of daily cost feedback on residential electricity consumption. Behavior Modification 3(2):187-202
[15] Wilhite H, Ling R (1995) Measured energy savings from a more informative energy bill. Energy and Buildings 22:145-155
[16] Brandon G, Lewis A (1999) Reducing household energy consumption: a qualitative and quantitative field study. Journal of Environmental Psychology 19:75-85

Mechanical Characteristics of the Hard-Polydimethylsiloxane for Smart Lithography

Ki-hwan Kim¹,², Na-young Song¹, Byung-kwon Choo¹, Didier Pribat², Jin Jang¹, and Kyu-chang Park¹

¹ Department of Information Display, Kyung-Hee University, Dongdaemoon-gu, Seoul, Korea
[email protected]
² Laboratoire de Physique des Interfaces et Couches Minces, École Polytechnique, 91128 Palaiseau CEDEX, France

Abstract. This paper studies the mechanical characteristics of hard-polydimethylsiloxane (h-PDMS); we compared and analyzed the physical properties of h-PDMS and of polydimethylsiloxane (PDMS) in non-photolithographic patterning processes. As next-generation (or advanced) patterning processes, various low-cost, non-photolithographic soft lithography methods are being actively researched; in particular, the use of PDMS is increasing because its adhesion and conformability are superior. Recently, a new process has been reported in which h-PDMS, which improves on the low Young's modulus that is the mechanical weak point of PDMS, gives good quality for nano-scale patterning. Varying the composition ratio of h-PDMS, we measured the crack density per unit length as a function of the radius of curvature, and also measured the strain, at a given radius of curvature, at which cracks begin to form owing to the hardness of h-PDMS. With these experiments we showed that the mechanical characteristics of hard PDMS can be controlled through the composition ratio, that the pattern collapse and twisting that are weak points of PDMS can be improved, and that h-PDMS of a desired hardness can be fabricated.

1 Introduction

For the past several decades, the photolithography process that has been used dominantly in the semiconductor and information display industries has been continuously developed to lower costs in semiconductors and to increase the size of display panels. However, improved lithography technologies such as ink-jet printing [1, 2], soft lithography [3], imprinting technology [4, 5], self-assembly [6] and atom lithography [7] have been brought to public attention as candidates to challenge the conventional photolithography process, in order to lower process cost and minimize material waste. In particular, soft lithography is a very promising technology owing to its lower cost compared with photolithography and its higher processability [3]. Using polydimethylsiloxane (PDMS) has several merits, such as the possibility of multiple printing, precise patterning, and improved patterning quality through surface modification, with technologies like FLEPS (flexible letterpress stamping) [8] and μCP (micro contact printing) among the soft lithography methods; on the other hand, problems such as pairing, sagging, swelling and shrinking of PDMS can occur owing to the intrinsic properties of PDMS [10, 11, 12]. Furthermore, because


patterns made on the substrate using a PDMS mold can suffer from collapse of the mold, hard PDMS, which complements the mechanical characteristics of PDMS, has been newly introduced [13]. Because the surface of hard PDMS is much harder than that of soft PDMS, which is made of Sylgard 184A and Sylgard 184B, the probability of roof collapse or lateral collapse is much lower, and precise patterning is possible [14]. Therefore, hard PDMS can be used as an alternative in the soft lithography method; moreover, it may emerge as a most promising route for substituting the photolithography process. In this paper, we report a study on the mechanical characteristics of hard PDMS for feasible applications, comparing it with soft PDMS when specific patterns are made on the substrate. We also studied surface modification methods, usually applied to soft PDMS, on hard PDMS.

2 Experiment

Fig. 1 shows a comparison of the fabrication processes of soft PDMS and hard PDMS. The fabrication of hard PDMS is more complicated than that of soft PDMS. First of all, we mixed 4 types of materials: a vinyl PDMS prepolymer (VDT-731, Gelest Corp., www.gelest.com), a Pt catalyst (platinum divinyltetramethyldisiloxane, SIP 6831.1, Gelest Corp.), a modulator (1,3,5,7-tetravinyl-1,3,5,7-tetramethylcyclotetrasiloxane, SIT-7900, Gelest Corp.) and a hydrosilane prepolymer (HMS-301, Gelest Corp.) in a certain composition ratio. VDT-731 plays a role like that of Sylgard 184A, SIP 6831.1 acts as the reaction agent, SIT 7900.0 as an adhesion promoter, and HMS-301 plays a role like that of Sylgard 184B. Here, we fixed the contents of three of the materials at 9 μL of SIP 6831.1, 0.1 g of SIT 7900.0 and 0.5 g of HMS-301, respectively, but varied the amount of VDT-731 in order to study the relationship between the VDT-731 content and the crack density of h-PDMS.

Fig. 1. Fabrication process of soft PDMS & hard PDMS


After mixing all of the components, we coated the mixed composite on a cleaned master stamp by spin coating at 1000 rpm for 1 minute, and then cured it in a thermal heater at 60°C for 30 minutes. After that, we used the soft PDMS stamp fabrication process: first we mixed Sylgard 184A and Sylgard 184B at a ratio of 10:1 (here, Sylgard 184B plays the role of the curing agent), poured the mixture onto the hard PDMS coated film, and cured it in a thermal heater at 60°C for 2 hours. We thereby obtained a hard PDMS & soft PDMS composite stamp. In fact, this stamp is composed of 2 PDMS layers: the soft PDMS acts as the substrate and the hard PDMS is the actual surface layer. Therefore, when analyzing this composite stamp, we have to focus on the mechanical characteristics of the hard PDMS. To characterize the mechanical properties of hard PDMS, we first measured the crack density of hard PDMS for various VDT-731 content ratios. Because VDT-731 plays the role of Sylgard 184A in soft PDMS, the hardness of hard PDMS depends on the VDT-731 content ratio, so we studied the radius-of-curvature-dependent crack creation for different VDT-731 contents. In this study, the dimensions of the composite stamp made of hard PDMS and soft PDMS were fixed at 1 cm × 1 cm × 3 mm. There are two types of film stress, as shown in Fig. 2: tensile stress and compressive stress. In this study, because we used soft PDMS as the substrate and hard PDMS as the target film, the stress is applied to the hard PDMS. The film stress was measured using an FLX-2320-S (Toho Technology Corporation). The principle of the measurement is that when a film is made on a substrate, the physical coefficients of the substrate and the thin film differ, so a stress is created on the substrate. This stress is revealed by the bending of the substrate and can be defined through the variation of the radius of curvature. The radius of curvature can be calculated from the angle of reflection of a laser beam off the substrate, so the variation of the radius of curvature is obtained by taking the difference between the radii of curvature measured before and after coating of the thin film.

R is the variation of the radius of curvature, R1 is the radius of curvature before coating of the thin film, and R2 is the radius of curvature after coating of the thin film; the variation is obtained from 1/R = 1/R2 - 1/R1.
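As an illustration of how such curvature data translate into a film stress, the sketch below evaluates Stoney's equation, the standard relation used by curvature-based stress instruments of this kind. The substrate and film parameters are placeholder values, not those of the samples in this work:

```python
# Stoney's equation: film stress from the curvature change of the substrate.
# Substrate/film parameters are placeholders (generic glass), not the paper's.

def stoney_stress(R1_m, R2_m, E_s=70e9, nu_s=0.22, t_s=0.7e-3, t_f=10e-6):
    """R1_m, R2_m: radii of curvature before/after coating [m];
    E_s, nu_s, t_s: substrate modulus [Pa], Poisson ratio, thickness [m];
    t_f: film thickness [m].  Returns film stress in Pa (positive = tensile)."""
    inv_R = 1.0 / R2_m - 1.0 / R1_m          # curvature change, 1/R
    return E_s * t_s**2 * inv_R / (6.0 * (1.0 - nu_s) * t_f)

# Example: nearly flat before coating, R2 = 20 m after coating.
print(stoney_stress(R1_m=1e9, R2_m=20.0) / 1e6, "MPa")
```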

Fig. 2. Two types of Film stress on the substrate


Fig. 3. Computational film stress

For a sample fabricated as in Fig. 3, the strain of the substrate can be extracted with the corresponding equation [14]. After that, we fabricated samples for studying the tensile strength of the hard PDMS with UV-O3 treatment. There are several methods to modify the surface energy of a substrate, for example by creating a self-assembled monolayer (SAM) using chemical materials such as TMCS (trimethylchlorosilane, Sigma Aldrich) [15], OTS (octyltrichlorosilane, Sigma Aldrich) [16] or Teflon AF [17], or by changing the surface energy using an O2 plasma treatment; in this study, however, we changed the surface energy of the hard PDMS surface by using a UV-O3 treatment [18]. For the UV-O3 treatment, we set up the equipment shown in Fig. 4; the experiments were carried out in atmosphere at room temperature, and we supplied O2 gas but not N2 gas. The experiments proceeded with no treatment and with treatments of 1, 3, 6, 9, 12 and 15 minutes. The hard and soft PDMS composite stamps treated by the UV-O3 method were fabricated and measured by the KS M 6158 method proposed by the Korea Environment & Merchandise Testing Institute (KEMTI).

Fig. 4. Schematic of UV-O3 equipment


Having analyzed the mechanical characteristics of hard PDMS with the above methods, we studied the actual difference from soft PDMS in patterning, namely whether collapse occurs or not. For these experiments, when fabricating the composite stamp of hard PDMS and soft PDMS, the hard PDMS was formed on the target master stamp, and then the mixture of Sylgard 184A and Sylgard 184B was poured on to make the soft PDMS substrate. For the soft PDMS stamp, the soft PDMS was formed directly on the target master stamp. Then, for patterning by the μCP (micro contact printing) method, Novolac resin and PGMEA (propylene glycol methyl ether acetate) were mixed and spin-coated at 4500 rpm for 60 seconds on each mold, and patterning was done on the substrate. The formed patterns were examined with an optical microscope.

3 Results and Discussion

We measured the variation of the radius of curvature in order to count the cracks on the hard PDMS surface as the VDT-731 content ratio of the fabricated composite stamp was changed. For convenience in calculating the crack density, the surface area of the samples was fixed at 1 cm × 1 cm. Table 1 gives examples of samples fabricated with the contents of SIP 6831.1, SIT 7900.0 and HMS 301 fixed at 9 μL, 0.1 g and 0.5 g, respectively, and with various VDT-731 content ratios.

Table 1. Hard PDMS fabrication examples with varied VDT-731 content ratio

Fig. 5. Crack density of the h-PDMS with decreasing radius of curvature, as a function of VDT wt.% in the hard PDMS mold

Fig. 5 shows the relationship between the radius of curvature and the crack density for changing VDT-731 content ratios. In general, the more VDT-731 the material contains, the fewer cracks are created as the radius of curvature decreases; indeed, hard PDMS containing a large amount of VDT-731 feels as soft as soft PDMS to the touch. In the general case, cracks are created because of the surface hardness of the hard PDMS, but in the composite stamp, if the VDT-731 content ratio is high, the layer shows flexibility like soft PDMS, the stress between the hard PDMS and soft PDMS in the stamp is relatively low, and fewer cracks are created. Measuring the strain of the hard PDMS at the moment cracks are created gives the result shown in Fig. 6. In this experiment, samples were created by coating the hard PDMS on a 15 cm × 15 cm glass substrate, and we measured the cases in which the stress applied to the coated hard PDMS film was tensile and compressive. As the graphs show, under compressive stress no cracks exist, but under tensile stress cracks are created once ε_surface exceeds 39.8%.

Fig. 6. Strain of the hard PDMS with increasing negative and positive radius of curvature. When the radius of curvature goes in the negative direction, compressive stress exists on the surface of the hard PDMS; when it goes in the positive direction, tensile stress exists on the surface.

From the above experimental results, because the surface of hard PDMS is relatively hard, pairing occurs less often when the mold is fabricated, and situations in which neighbouring patterns also touch the substrate owing to collapse of the PDMS happen less often. Because the Young's modulus of soft PDMS is relatively low (3 MPa) [12], the probability of collapse at the intaglio of the fabricated mold is high when patterns are made, so, as shown in Figs. 7a and 7b, other patterns also touched the substrate. In the hard PDMS case, which has a relatively higher Young's modulus (9 MPa) than soft PDMS [12], the probability of collapse at the mold is lower, so, as shown in Figs. 7c and 7d, few unintended patterns touched the substrate. In Figs. 7a and 7c, because the mold was fabricated from a master stamp with a 5 μm pattern trench, only a relatively small area touches the substrate even if collapse happens; in Figs. 7b and 7d, the mold was fabricated from a master stamp with a 2 μm pattern trench, so a relatively large area touches the substrate. Therefore, the quality in the cases of Figs. 7b and 7d is lower than in those of Figs. 7a and 7c [19]. From these experiments, we confirmed that in patterning a mold fabricated with hard PDMS suffers fewer problems than one fabricated with soft PDMS.
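To relate the bend radius to the reported crack-onset strain, the sketch below uses the common thin-film approximation that the surface strain of a film on a bent substrate is roughly the total thickness divided by twice the bend radius. This simplified form ignores the stiffness-mismatch corrections of [14], and the thickness values are assumptions for illustration:

```python
# Thin-film approximation for the surface strain of a film on a bent substrate:
# eps_surface ~ (t_substrate + t_film) / (2 * R).  Stiffness-mismatch
# corrections (cf. [14]) are ignored; thicknesses are placeholder values.

def surface_strain(R_m, t_s=3.0e-3, t_f=50e-6):
    """R_m: bend radius [m]; t_s, t_f: substrate and film thicknesses [m]."""
    return (t_s + t_f) / (2.0 * R_m)

def cracks_expected(R_m, tensile=True, eps_crit=0.398):
    """Crack onset per the reported 39.8% tensile threshold; none in compression."""
    return tensile and surface_strain(R_m) >= eps_crit

for R in (0.5, 0.01, 0.003):    # bend radii in metres
    print(R, surface_strain(R), cracks_expected(R))
```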


Fig. 7. Optical images after patterning using soft PDMS and hard PDMS: (a) 5 μm trench pattern, soft PDMS; (b) 2 μm trench pattern, soft PDMS; (c) 5 μm trench pattern, hard PDMS; (d) 2 μm trench pattern, hard PDMS

However, to use hard PDMS in the field of soft lithography in the same way as soft PDMS, it must be possible to treat its surface for modification, as with soft PDMS. In previous reports, there are chemical treatments with trimethylchlorosilane (TMCS, Sigma Aldrich) [15], octyltrichlorosilane (OTS, Sigma Aldrich) [16] and Teflon AF [17]; a few reports also used the O2 plasma method to change the surface energy of substrates or thin films. In this work, we changed the surface energy of the thin film by UV-O3 treatment [18]. When the UV-O3 treatment is applied to the PDMS surface, SiO2 is created on the surface [20], so the surface energy increases and, as a result, the PDMS becomes hydrophilic. The desired resist then coats well and good patterns can be made; for this reason, surface modification of the mold is required in the soft lithography method. Among the several treatments established for soft PDMS, we applied the UV-O3 treatment to the hard PDMS surface for surface modification. As shown in Fig. 8, the resulting variation of the surface energy of the hard PDMS surface is similar to that of soft PDMS, so we confirmed that hard PDMS can undergo surface modification like soft PDMS. Furthermore, we studied the variation of the surface hardness of hard PDMS under UV-O3 treatment. To study the difference in hardness, we used two kinds of samples for comparison: a UV-O3 non-treated sample and a sufficiently UV-O3-treated sample. Here, we fabricated composite stamps of soft PDMS and hard PDMS as the samples, measured by the method proposed by KEMTI.

Fig. 8. UV-O3 treatment time dependent variation of the surface energies of hard PDMS and soft PDMS


Because this study concerns the variation of the surface hardness of hard PDMS, the exposed surface of the stamp was fabricated with hard PDMS. As a result, the UV-O3 non-treated composite stamp showed 3 MPa and the UV-O3-treated composite stamp showed 8 MPa. The reason for this difference appears to be that the surface hardness increases because the amount of SiO2 on the surface grows with increasing UV-O3 treatment time. With these experiments, we found that hard PDMS can receive surface treatments as in the soft PDMS case, and that surface modification by UV-O3 treatment also increases the surface hardness, so we confirmed that it is possible to prevent collapse.

4 Conclusion

In this paper, we report a study on the mechanical characteristics of hard PDMS for feasible applications. Because the surface hardness changes with the content ratio of VDT-731, the constituent of hard PDMS that plays a role like that of Sylgard 184A in PDMS, we made clear that the composition ratios of hard PDMS have to be optimized for practical use, and we studied when cracks are created on the hard PDMS surface under applied compressive and tensile stress. As a result, no cracks were created under compressive stress on the hard PDMS surface, but under tensile stress cracks are created once the strain exceeds 39.8%. Moreover, owing to the hardness of the hard PDMS surface, we confirmed that precise patterning is possible because collapse happens less often than when soft PDMS is used for patterning. Furthermore, to establish the feasibility of the surface modification that is indispensable for practical application, as with soft PDMS, we verified that the surface energy can be changed by UV-O3 treatment, and we confirmed that the hardness of the hard PDMS surface increases with increasing treatment time. In actual fabrication processes, a composite stamp formed of hard PDMS and soft PDMS can be used; the mechanical characteristics of the composite stamp are much better because the stamp surface is hard PDMS. Hence, when the composite stamp is used in applications, much better patterns can be formed.

Acknowledgments. This work was supported by the Seoul Research and Business Development program (Grant no. CR 070054). We also thank all those who contributed to EKC2008 for their assistance and support.

References

[1] P Calvert (2001) Inkjet printing for materials and devices. Chemistry of Materials 13(2):3299-3305
[2] WS Wong, S Ready, R Matusiak, SD White, J-P Lu, J Ho, RA Street (2002) Jet-printed fabrication of a-Si:H thin-film transistors and arrays. Journal of Non-Crystalline Solids 299-302(2):1335-1339


[3] Y Xia, G M Whitesides (1998) Soft lithography. Angewandte Chemie International Edition 37(5):550-575
[4] S Y Chou, P R Krauss, P J Renstrom (1995) Imprint of sub-25 nm vias and trenches in polymers. Applied Physics Letters 67(21):3114-3116
[5] S Y Chou, P R Krauss, P J Renstrom (1996) Imprint lithography with 25-nanometer resolution. Science 272(5258):85-87
[6] A Kumar, G M Whitesides (1993) Features of gold having micrometer to centimeter dimensions can be formed through a combination of stamping with an elastomeric stamp and an alkanethiol "ink" followed by chemical etching. Applied Physics Letters 63(14):2002-2004
[7] M Mutzel, S Tandler, D Haubrich, D Meschede, K Peithmann, M Flaspohler, K Buse (2002) Atom lithography with a holographic light mask. Physical Review Letters 88(8):083601
[8] S M Miller, S M Troian, S Wagner (2003) Photoresist-free printing of amorphous silicon thin-film transistors. Applied Physics Letters 83(15):3207-3209
[9] E Delamarche, H Schmid, B Michel, H Biebuyck (1997) Stability of molded polydimethylsiloxane microstructures. Advanced Materials 9(9):741-746
[10] A Bietsch, B Michel (2000) Conformal contact and pattern stability of stamps used for soft lithography. Journal of Applied Physics 88(7):4310-4318
[11] T W Lee, O Mitrofanov, J W P Hsu (2005) Pattern-transfer fidelity in soft lithography: the role of pattern density and aspect ratio. Advanced Functional Materials 15(10):1683-1688
[12] H Schmid, B Michel (2000) Siloxane polymers for high-resolution, high-accuracy soft lithography. Macromolecules 33(8):3042-3049
[13] T W Odom, J C Love, D B Wolfe, K E Paul, G M Whitesides (2002) Improved pattern transfer in soft lithography using composite stamps. Langmuir 18(13):5314-5320
[14] H Gleskova, S Wagner, Z Suo (1999) Failure resistance of amorphous silicon transistors under extreme in-plane strain. Applied Physics Letters 75(19):3011-3013
[15] B K Choo, K H Kim, K C Park, J Jang (2007) Surface modification of thin film using trimethylchlorosilane. IMID 2007 proceedings
[16] B K Choo, J S Choi, G J Kim, K C Park, J Jang (2006) Self-organized process for patterning of a thin-film transistor. Journal of the Korean Physical Society 48(6):1719-1722
[17] B K Choo, J S Choi, S W Kim, K C Park, J Jang (2006) Fabrication of amorphous silicon thin-film transistor by micro imprint lithography. Journal of Non-Crystalline Solids 352(9-20):1704-1707
[18] B K Choo, N Y Song, K H Kim, J S Choi, K C Park, J Jang (2008) Ink stamping lithography using polydimethylsiloxane stamp by surface energy modification. Journal of Non-Crystalline Solids 354(19-25):2879-2884
[19] W Zhou, E Menard, N R Aluru, J A Rogers, A G Alleyne, Y Huang (2005) Mechanism for stamp collapse in soft lithography. Applied Physics Letters 87(25):251925
[20] B Schnyder, T Lippert, R Kotz, A Wokaun, V M Graubner, O Nuyken (2003) UV-irradiation induced modification of PDMS films investigated by XPS and spectroscopic ellipsometry. Surface Science 532-535:1067-1071

ITER Plant Support Systems

Yong Hwan Kim and G. Vine
ITER Organization, Joint Work Site, Cadarache, France
[email protected]

Abstract. Fusion energy features essentially limitless fuel available all over the world, without greenhouse gases, with intrinsic safety, without long-lived radioactive waste, and with the possibility of large-scale energy production. The overall objective for ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. A unique feature of ITER is that almost all of the machine will be constructed through in-kind procurement from the Parties (CN, EU, IN, JA, KO, RF, US). The long-term objective of fusion research and development is to create power plant prototypes demonstrating operational safety, environmental compatibility and economic viability. ITER is not an end in itself: it is the bridge toward a first plant, DEMO, which will demonstrate the large-scale production of electrical power. In this paper, the main features of the ITER plant support systems are introduced: the Tritium Plant, Vacuum Systems, Fuelling and Wall Conditioning, Cryoplant and Distribution, Electrical Power Supply, Cooling Water Supply, Radwaste Management System, and Hot Cell Facility.

1 Introduction

As one of the few options for large-scale, non-carbon, future supply of energy, fusion has the potential to make an important contribution to sustainable energy supplies. Fusion can deliver safe and environmentally benign energy, using abundant and widely available fuel, without the production of greenhouse gases or long-term nuclear waste. ITER is an international research project with the programmatic goal of demonstrating the scientific and technological feasibility of fusion energy for peaceful purposes, an essential feature of which is achieving sustained fusion power generation. In terms of its design, it is a unique international collaboration among seven participating teams, namely China, the EU, India, Japan, Korea, the Russian Federation and the United States of America. The partners of the project will contribute in kind to the various subsystems, which will finally be constructed and commissioned on site by the international team in collaboration with the member countries. ITER means "the way" in Latin. It is an intermediate step between the experimental studies presently known in plasma physics and the electricity-producing fusion power plants of the future. It is an effort to build the first fusion science experiment capable of producing a self-sustaining fusion reaction, the "burning plasma".

2 Main Features of ITER

ITER is a tokamak, a type of magnetic confinement device in which strong magnetic fields confine a torus-shaped fusion plasma consisting of a hot ionized gas of hydrogen isotopes (hydrogen, deuterium, tritium).


The fuel, a mixture of deuterium and tritium, two isotopes of hydrogen, is heated to temperatures in excess of 100 million degrees, forming a hot plasma. The plasma is kept away from the walls by strong magnetic fields produced by superconducting coils surrounding the vessel and by an electrical current driven in the plasma. The ITER machine will have a major radius of 6.2 m and a minor radius of 2.0 m. The toroidal magnetic field of 5.3 T will be created with the help of a set of large superconducting coils encompassing the ultra-high-vacuum vessel.

Fig. 1. Three-dimensional cutaway view of ITER showing the main elements of the tokamak core

Table 1. Main Parameters of ITER

Total fusion power                             500 MW
Additional heating power                       50 MW
Q (fusion power / additional heating power)    >= 10
Average 14 MeV neutron wall loading            >= 0.5 MW/m²
Plasma inductive burn time                     300-500 s *
Plasma major radius (R)                        6.2 m
Plasma minor radius (a)                        2.0 m
Plasma current (Ip)                            15 MA
Toroidal field at 6.2 m radius (BT)            5.3 T

* Under nominal operating conditions.


This vessel will have a volume of more than 800 m³. The plasma will carry a maximum current of 15 MA and will be shaped with the help of a set of superconducting poloidal field coils. All these components will be housed within a high-vacuum chamber, the cryostat, which will be 28 meters in diameter and about 26 meters in height. To raise the plasma temperature further to the conditions necessary for fusion, multi-megawatt heating systems in the form of highly energetic neutral beams and RF waves will be introduced through special openings in the inner vacuum vessel. The plasma conditions will be studied and measured using numerous highly sophisticated diagnostic systems. The entire facility will be controlled and protected with the help of a complex central control system. The device is designed to generate 500 megawatts of fusion power for periods of 300-500 seconds with a fusion power multiplication factor, Q, of at least 10 (Q >= 10).
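As a quick consistency check on the figures in Table 1, the fusion gain follows directly from the design powers:

Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}} = \frac{500\ \text{MW}}{50\ \text{MW}} = 10,

and a nominal 400 s inductive burn then corresponds to roughly 500 MW x 400 s = 200 GJ of fusion energy per pulse.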

3 Plant Support Systems

3.1 ITER Organization

The ITER Central Engineering and Plant Support Department is responsible for all activities related to plant engineering, fuel cycle engineering, electrical engineering, and CAD/engineering design. Specifically, it is responsible for assuring that the designs of all of these plant systems are completed, that they satisfy the requirements, that the hardware and systems interface correctly and satisfy technical specifications (particularly quality), and that they are assembled, installed and tested correctly in accordance with the integrated on-site installation and assembly plan.

3.2 Tritium Plant

ITER is the first fusion machine fully designed for operation with equimolar deuterium-tritium mixtures. The tokamak vessel will be fuelled through gas puffing and Pellet Injection (PI), and the Neutral Beam (NB) heating system will introduce deuterium into the machine. The ITER tritium systems constitute a rather complex chemical plant. As shown schematically in Fig. 2, pure deuterium and tritium are introduced into Storage, where tritium is kept in metal hydride beds for immediate use. Through the Fuelling system, DT is fed into the tokamak vessel. Off-gases from the torus collected in the cryopumps, together with tritiated gases from diagnostics or first-wall cleaning, are moved by roughing pumps into exhaust processing ("Detritiation, Hydrogen Isotope Recovery"). Recovered hydrogen isotopes are transferred to the Isotope Separation system, while the remaining waste gas is sent to the Detritiation Systems for decontamination before release into the environment. The inner fuel loop is closed by the return of unburned deuterium and tritium from Isotope Separation to Storage. The systems are designed to process considerable and unprecedented deuterium-tritium flow rates with high flexibility and reliability. Multiple barriers are essential for the confinement of tritium within its respective processing components, and the Detritiation Systems are crucial elements in this concept.



Fig. 2. Outline flow diagram of the ITER fuel cycle

The basis of detritiation is the catalytic oxidation of tritium and tritiated species into water, followed by transfer to Water Detritiation for recovery of the tritium and release of a decontaminated gas stream. The recovery of tritium from tritiated water collected in the Detritiation Systems is a unique feature of ITER tritium confinement. ITER operation is divided into four phases: before achieving full deuterium-tritium (DT) operation, which itself is split into two phases, ITER is expected to go through a hydrogen phase and a deuterium phase for commissioning of the entire plant.

3.3 Vacuum Systems

ITER has one of the largest and certainly the most complex vacuum systems ever to be built. Among its large vacuum volumes is the cryostat, with a volume of 8,500 m³.

Multi-objective EED Using the Bees Algorithm with Weighted Sum

J.Y. Lee and A.H. Darwish

In the weighted sum approach, the multiple objectives are combined into a single function to be minimised,

\min F(x) = \sum_{i=1}^{n} w_i f_i(x), \quad \text{where} \quad \sum_{i=1}^{n} w_i = 1, \; w_i > 0, \; i = 1, 2, \ldots, n.

It is easy to prove that the minimiser of this combined function is Pareto optimal, and it is up to the user to choose appropriate weights. This approach gives an idea of the shape of the Pareto surface and provides the user with more information about the trade-off among the various objectives.

2.2 Intelligent Swarm-Based Optimisation

Swarm-based optimisation algorithms (SOAs) mimic nature's methods to drive the search towards the optimal solution. Many such methods, observed in species that form swarms, have been developed into SOAs such as the Genetic Algorithm (GA), Ant Colony Optimisation (ACO) and Particle Swarm Optimisation (PSO). The GA is based on natural selection and genetic recombination; it works by choosing solutions from the current population and applying genetic operators, such as mutation and crossover, to create a new population. The ACO algorithm mimics the behaviour of real ants, which are capable of finding the shortest path from a food source to their nest using a chemical substance called a pheromone: the pheromone is deposited on the ground as the ants move, and the probability that a passing stray ant will follow this trail depends on the quantity of pheromone laid. PSO is an optimisation procedure based on the social behaviour of groups or organisations, such as a flock of birds or a school of fish: individual solutions are represented by "particles" which evolve or change their positions over time in a search space according to their own experience and that of their neighbours, thus combining local and global search. The main difference between SOAs and direct search algorithms such as hill climbing and random walk is that SOAs maintain a population of candidate solutions in every iteration, whereas a direct search algorithm works with only a single solution.
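To make the weighted-sum formulation above concrete, here is a minimal sketch (illustrative only, not the authors' code; the two toy objective functions are invented for demonstration) that sweeps the weight w1 and minimises the combined function at each setting, tracing out points on the Pareto front:

```python
import numpy as np
from scipy.optimize import minimize

# Two competing toy objectives of a single decision variable x in [0, 1]
f1 = lambda x: x[0] ** 2                # smallest at x = 0
f2 = lambda x: (x[0] - 1.0) ** 2        # smallest at x = 1

pareto_points = []
for w1 in np.linspace(0.01, 0.99, 11):  # weights kept strictly positive
    w2 = 1.0 - w1                       # weights sum to one
    F = lambda x, w1=w1, w2=w2: w1 * f1(x) + w2 * f2(x)
    res = minimize(F, x0=[0.5], bounds=[(0.0, 1.0)])
    pareto_points.append((f1(res.x), f2(res.x)))

# Each (f1, f2) pair is Pareto optimal for its weight combination
for p in pareto_points:
    print(p)
```

Varying the weights in this way is exactly what produces the spread of trade-off solutions discussed above.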


3 The Bees Algorithm

Bees are well known as social insects with well-organized colonies, and many researchers have therefore studied their behaviour, such as foraging, mating and nest site location, to solve many difficult problems. The Bees Algorithm is inspired by the honey bees' foraging behaviour and was developed by D. T. Pham in 2006 [7, 8].

3.1 Bees in Nature

In a colony, the foraging process is conducted by scout bees, which are sent to flower patches to search for a food source. When they return to the hive, they deposit their nectar or pollen and then go to the "dance floor" to perform the "waggle dance", which conveys important information about the flower patch they have found. This information helps worker bees to go precisely to the most promising flower patches without guides or maps. More worker bees are sent to the more promising patches and fewer bees to the less fruitful sites. The colony therefore gathers food more quickly and more efficiently.

3.2 Proposed Bees Algorithm

As mentioned, the Bees Algorithm is an optimisation algorithm for finding the optimal solution. Fig. 1 presents pseudo code for the Bees Algorithm.

1. Initialise population with random solutions
2. Evaluate fitness of the population
While (stopping criterion not met) // forming new population
3. Select patches for neighbourhood search
4. Recruit bees for selected patches (more bees for the best patches) and evaluate their fitness
5. Select the fittest bee from each patch
6. Assign remaining bees to search randomly and evaluate their fitness
7. End while

Fig. 1. Pseudo code for the Bees Algorithm

The algorithm requires six parameters to be set, namely: 1) the number of scout bees (n); 2) the number of flower patches (m) selected among those visited by the scout bees; 3) the number of best (elite) patches (e) among the m; 4) the number of recruited bees (n2) for the e elite patches; 5) the number of recruited bees (n1) for the remaining (m-e) selected patches; and 6) the initial patch size (ngh). The algorithm starts with n scout bees sent randomly into the search space. After evaluating fitness, the bees with the highest fitness scores are chosen as "selected bees" and their patches as "selected patches" for the neighbourhood search. When the neighbourhood search is conducted among the selected patches, more bees are assigned to the best patches, which represent the more promising solutions. This differential recruitment, together with scouting, is the key operation of the Bees Algorithm. After finishing a neighbourhood search, only the single bee with the highest fitness in each selected patch is chosen to form the next bee population. Although there is no such restriction in nature, the Bees Algorithm introduces it to reduce the number of points to be explored. The


remaining bees in the population will be randomly assigned in the search space, in case new potential solutions are scouted. All these steps are repeated until a stopping criterion is met.
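These steps can be condensed into a short program. The sketch below is an illustrative reconstruction from the description above and the pseudo code of Fig. 1, not the authors' implementation; the sphere function merely stands in for a real fitness function:

```python
import random

def bees_algorithm(fitness, dim, bounds, n=50, m=8, e=3, n2=5, n1=3,
                   ngh=0.1, iterations=200):
    """Minimise `fitness` following the steps of the pseudo code in Fig. 1."""
    lo, hi = bounds
    random_site = lambda: [random.uniform(lo, hi) for _ in range(dim)]

    # Steps 1-2: initialise the scout bees randomly and evaluate fitness
    population = [random_site() for _ in range(n)]
    for _ in range(iterations):
        population.sort(key=fitness)          # best (lowest) first
        next_population = []
        # Steps 3-5: neighbourhood search with differential recruitment:
        # n2 bees for each of the e elite patches, n1 for the other m - e
        for rank, site in enumerate(population[:m]):
            recruits = n2 if rank < e else n1
            patch = [[min(hi, max(lo, x + random.uniform(-ngh, ngh)))
                      for x in site] for _ in range(recruits)]
            # only the fittest bee of each patch joins the new population
            next_population.append(min(patch + [site], key=fitness))
        # Step 6: the remaining bees scout new random sites
        next_population += [random_site() for _ in range(n - m)]
        population = next_population
    return min(population, key=fitness)

# Example: minimise the 6-dimensional sphere function
sphere = lambda x: sum(v * v for v in x)
best = bees_algorithm(sphere, dim=6, bounds=(-1.0, 1.0))
print(best, sphere(best))
```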

4 Environmental/Economic Dispatch

The EED problem involves the conflicting objectives of minimising emission and minimising fuel cost, and it is formulated as described below.

4.1 Objective Functions

Fuel Cost Objective. The economic dispatch problem in power generation is to minimise the total fuel cost while satisfying the total required demand. The optimal combination is found by minimising [1, 4, 5, 6]

C = \sum_{i=1}^{n} \left( a_i + b_i P_{Gi} + c_i P_{Gi}^{2} \right) \ \text{[\$/hr]} \qquad (2)

where C is the total fuel cost ($/hr); a_i, b_i, c_i are the fuel cost coefficients of generator i; P_{Gi} is the power generated (p.u.) by generator i; and n is the number of generators.

NOx Emission Objective. The total NOx emission caused by fossil fuel is expressed as

E_{NO_x} = \sum_{i=1}^{n} \left( a_i^{N} + b_i^{N} P_{Gi} + c_i^{N} P_{Gi}^{2} + d_i^{N} \exp(e_i^{N} P_{Gi}) \right) \ \text{[ton/hr]} \qquad (3)

where a_i^{N}, b_i^{N}, c_i^{N}, d_i^{N} and e_i^{N} are the NOx emission coefficients of the i-th generator.

4.2 Constraints

The optimisation problem is bounded by the following constraints:

Power balance constraint. The total power generated must supply the total load demand and the transmission losses:

\sum_{i=1}^{n} P_{Gi} - P_D - P_L = 0 \qquad (4)

where P_D is the total load demand (p.u.) and P_L the transmission losses (p.u.). For this paper, P_L is taken as 0.


Maximum and minimum limits of power generation. The power generated P_{Gi} by each generator is constrained between its minimum and maximum limits:

P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max} \qquad (6)

where P_{Gi}^{\min} is the minimum and P_{Gi}^{\max} the maximum power generated by generator i.

Fig. 2. Single-line diagram of IEEE 30-bus test system

4.3 Multi-objective Formulation

The multi-objective Environmental/Economic Dispatch optimisation problem is therefore formulated as:

\text{Minimise } [C, E_{NO_x}] \qquad (7)

subject to:

\sum_{i=1}^{n} P_{Gi} - P_D = 0 (power balance), and

P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max} (generation limits)

Table 1. Fuel cost coefficients

Unit i    ai    bi     ci     PGi min   PGi max
1         10    200    100    0.05      0.50
2         10    150    120    0.05      0.60
3         20    180    40     0.05      1.00
4         10    100    60     0.05      1.20
5         20    180    40     0.05      1.00
6         10    150    100    0.05      0.60


4.4 System Parameters

Simulations were performed on the standard IEEE 30-bus 6-generator test system using the Bees Algorithm. The power system is interconnected by 41 transmission lines, and the total system demand for the 21 load buses is 2.834 p.u. Fuel cost and NOx emission coefficients for this system are given in Tables 1 and 2, respectively.

Table 2. NOx emission coefficients

Unit i    aiN        biN         ciN        diN      eiN
1         4.091e-2   -5.554e-2   6.490e-2   2.0e-4   2.857
2         2.543e-2   -6.047e-2   5.638e-2   5.0e-4   3.333
3         4.258e-2   -5.094e-2   4.586e-2   1.0e-6   8.000
4         5.326e-2   -3.550e-2   3.380e-2   2.0e-3   2.000
5         4.258e-2   -5.094e-2   4.586e-2   1.0e-6   8.000
6         6.131e-2   -5.555e-2   5.151e-2   1.0e-5   6.667

Table 3. Parameters for the Bees Algorithm

Parameter                                                   Value
n : Number of scout bees                                    50
m : Number of selected patches                              8
e : Number of elite patches                                 3
ngh : Initial patch size                                    0.1
n2 : Number of bees allocated to elite patches              5
n1 : Number of bees allocated to other selected patches     3

Table 4. Parameters for the Weighted Sum technique

Parameter                       Value
W1 : weight for fuel cost       1
W2 : weight for NOx emission    1, 50, 100, 500, 1,000, 5,000, 10,000, 50,000, 100,000, 500,000


For all simulations, the parameters given in Tables 3 and 4 were used, with solutions computed to an accuracy of 1.0e-5.
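As an illustration of how one candidate dispatch would be scored under this setup, the sketch below evaluates Eqs. (2) and (3) with the coefficients of Tables 1 and 2 and combines them with a weighted sum. The dispatch vector and the penalty term used to enforce the power balance are assumptions of this sketch, not details taken from the paper:

```python
import math

# Fuel cost coefficients (a, b, c) from Table 1
COST = [(10, 200, 100), (10, 150, 120), (20, 180, 40),
        (10, 100, 60), (20, 180, 40), (10, 150, 100)]
# NOx coefficients (aN, bN, cN, dN, eN) from Table 2
NOX = [(4.091e-2, -5.554e-2, 6.490e-2, 2.0e-4, 2.857),
       (2.543e-2, -6.047e-2, 5.638e-2, 5.0e-4, 3.333),
       (4.258e-2, -5.094e-2, 4.586e-2, 1.0e-6, 8.000),
       (5.326e-2, -3.550e-2, 3.380e-2, 2.0e-3, 2.000),
       (4.258e-2, -5.094e-2, 4.586e-2, 1.0e-6, 8.000),
       (6.131e-2, -5.555e-2, 5.151e-2, 1.0e-5, 6.667)]
DEMAND = 2.834  # total system demand (p.u.); losses taken as zero

def fuel_cost(pg):                       # Eq. (2), $/hr
    return sum(a + b * p + c * p * p for (a, b, c), p in zip(COST, pg))

def nox_emission(pg):                    # Eq. (3), ton/hr
    return sum(a + b * p + c * p * p + d * math.exp(e * p)
               for (a, b, c, d, e), p in zip(NOX, pg))

def weighted_fitness(pg, w1=1.0, w2=1000.0, penalty=1e6):
    # Weighted-sum scalarisation plus an assumed penalty for any
    # violation of the power balance constraint, Eq. (4)
    return (w1 * fuel_cost(pg) + w2 * nox_emission(pg)
            + penalty * abs(sum(pg) - DEMAND))

# Arbitrary illustrative dispatch for the six generators (p.u.)
pg = [0.11, 0.30, 0.54, 1.00, 0.53, 0.354]   # sums to 2.834
print(fuel_cost(pg), nox_emission(pg), weighted_fitness(pg))
```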

5 Results

Fig. 3 shows a good diversity in the Weighted Sum solutions obtained by the Bees Algorithm after 200 iterations. Tables 5 and 6 show the best fuel cost and best NOx emission obtained by the Bees Algorithm as compared to Linear Programming (LP), the Multi-Objective Stochastic Search Technique (MOSST), the Nondominated Sorting Genetic Algorithm (NSGA and NSGA-II), the Niched Pareto Genetic Algorithm (NPGA) and the Strength Pareto Evolutionary Algorithm (SPEA). In Table 5, the best fuel cost, 600.1320 $/hr, is achieved by the Bees Algorithm, with a corresponding NOx emission of 0.2212 ton/hr. Although the best emission found by the Bees Algorithm in Table 6 is the same as that of SPEA and NSGA-II, the corresponding fuel cost, 636.0475 $/hr, is much lower than the others. It is quite evident that the proposed approach gives superior results: the Bees Algorithm achieves minimum fuel cost with minimal NOx emission compared to the previous approaches.

Fig. 3. Weighted Sum solutions for the EED problem using the Bees Algorithm (NOx emission [ton/hr] plotted against fuel cost [$/hr])

Table 5. Best fuel cost

                    LP        MOSST     NSGA      NPGA      SPEA      NSGA-II   Bees Algorithm
PG1                 0.1500    0.1125    0.1567    0.1080    0.1062    0.1059    0.1132
PG2                 0.3000    0.3020    0.2870    0.3284    0.2897    0.3177    0.2977
PG3                 0.5500    0.5311    0.4671    0.5386    0.5289    0.5216    0.5372
PG4                 1.0500    1.0208    1.0467    1.0067    1.0025    1.0146    1.0024
PG5                 0.4600    0.5311    0.5037    0.4949    0.5402    0.5159    0.5284
PG6                 0.3500    0.3625    0.3729    0.3574    0.3664    0.3583    0.3549
Best cost           606.314   605.889   600.572   600.259   600.15    600.155   600.1320
Corresp. emission   0.22330   0.22220   0.22282   0.22116   0.2215    0.22188   0.221239

Table 6. Best NOx emission

                    LP        MOSST     NSGA      NPGA      SPEA      NSGA-II   Bees Algorithm
PG1                 0.4000    0.4095    0.4394    0.4002    0.4116    0.4074    0.3886
PG2                 0.4500    0.4626    0.4511    0.4474    0.4532    0.4577    0.4495
PG3                 0.5500    0.5426    0.5105    0.5166    0.5329    0.5389    0.5376
PG4                 0.4000    0.3884    0.3871    0.3688    0.3832    0.3837    0.3986
PG5                 0.5500    0.5427    0.5553    0.5751    0.5383    0.5352    0.5395
PG6                 0.5000    0.5142    0.4905    0.5259    0.5148    0.5110    0.5199
Best emission       0.19424   0.19418   0.19436   0.19433   0.1942    0.19420   0.1942
Corresp. cost       639.600   644.112   639.231   639.182   638.51    638.269   636.0475

6 Conclusion

In this paper, the multi-objective Environmental/Economic Dispatch problem was solved using the Bees Algorithm with a Weighted Sum. The algorithm was tested on the standard IEEE 30-bus system, and the minimum cost and minimum emission solutions found are better than those found by previous approaches. The Bees Algorithm thus confirms its potential to solve the multi-objective EED problem; while offering substantial financial savings, it also contributes to the reduction of greenhouse gases in the atmosphere.

References

[1] R T F A King, H C S Rughooputh and K Deb, Evolutionary Multi-Objective Environmental/Economic Dispatch: Stochastic vs. Deterministic Approaches
[2] M A Abido (2000) A New Multiobjective Evolutionary Algorithm for Environmental/Economic Power Dispatch. IEEE
[3] M A Abido (2003) Environmental/Economic Power Dispatch Using Multiobjective Evolutionary Algorithms. IEEE Transactions on Power Systems 18(4)
[4] R T F A King (2006) Stochastic Evolutionary Multiobjective Environmental/Economic Dispatch. IEEE Congress on Evolutionary Computation, Canada
[5] A Farag, S Al-Baiyat and T C Cheng (1995) Economic Load Dispatch Multiobjective Optimisation Procedures Using Linear Programming Techniques. IEEE Transactions on Power Systems 10(2)
[6] R Yokoyama, S H Bae, T Morita and H Sasaki (1988) Multiobjective Optimal Generation Dispatch Based on Probability Security Criteria. IEEE Transactions on Power Systems 3(1)
[7] D T Pham, A Ghanbarzadeh, E Koc, S Otri, S Rahim and M Zaidi (2006) The Bees Algorithm, A Novel Tool for Complex Optimisation Problems. Proc 2nd Int Virtual Conf on Intelligent Production Machines and Systems (IPROMS 2006), Oxford: Elsevier, 454-459
[8] D T Pham, E Koc, J Y Lee and J Phrueksanat (2007) Using the Bees Algorithm to Schedule Jobs for a Machine. Laser Metrology and Performance VIII:430-439

Experimental Investigation on the Behaviour of CFRP Laminated Composites under Impact and Compression After Impact (CAI)

J. Lee and C. Soutis*

Aerospace Engineering, The University of Sheffield, Mappin Street, Sheffield, S1 3JD
* [email protected]

Abstract. Understanding the response of structural composites to impact and CAI is essential for developing analytical models for impact damage and CAI strength predictions. This paper presents experimental findings from quasi-static lateral load tests, low-velocity impact tests, CAI strength tests and open-hole compressive strength tests on 3mm thick composite plates ([45/-45/0/90]3s – IM7/8552). It is concluded that the damage areas for the quasi-static lateral load and impact tests are similar, and that the curves of several drop-weight impacts with varying energy levels (between 5.4J and 18.7J) follow the static curve well. In addition, at a given energy the peak force is in good agreement between the static and impact cases. From the CAI strength and open-hole compressive strength tests, it is identified that the failure behaviour of the specimens was very similar to that observed in laminated plates with open holes under compression loading. The residual strengths are in good agreement with the measured open-hole compressive strengths when the impact damage site is treated as an equivalent hole. These experimental findings suggest that simple analytical models for the prediction of impact damage area and CAI strength can be developed on the basis of the observed failure mechanisms.

Keywords: Quasi-static lateral load, low velocity impact, CAI and open hole compressive strength.

1 Introduction

The damage caused to carbon fibre composite structures by low velocity impact, and the resulting reduction in compression-after-impact (CAI) strength, has been well known for many years [1,2] and is of particular concern to the aerospace industry, both military and civil. Typically the loss in strength may be up to 60% of the undamaged value, and traditionally industrial designers cope with this by limiting compressive strains to the range of 0.3% to 0.4% (3000 to 4000 microstrain). Provided buckling is inhibited by good design of the compression panels, the material is capable of withstanding more than double these values. This punitive reduction in design allowables is also a result of the fact that impact damage cannot yet be reliably simulated. Testing coupons will not reproduce the behaviour of larger realistic structures because their dynamic response to low velocity impact may be quite different, and it is not economic to perform impact tests on relatively large panels in order to evaluate impact behaviour and damage development. Thus there is a clear need for a


modelling tool which avoids such blanket limitations and addresses the real nature of the damage and the physics of the failure mechanisms when a realistic structure is loaded in compression. An impact damage site in a composite laminate contains delaminations, fibre fracture and matrix cracking. A model to predict the damage area taking into account all these factors would be complex and would take considerable time and funding to develop. It was recognised that the problem could be simplified by making some assumptions about the nature of the impact damage. In the present work, experimental findings are presented to support the future development of simple analytical models for impact damage and CAI strength predictions, based on the failure mechanisms observed in the quasi-static lateral load tests, impact tests and compression-after-impact (CAI) tests. It will be shown that low velocity impact can be modelled as a quasi-static lateral load problem [3], and a measure of how good this approximation is will be provided by comparing the quasi-static response with the dynamic response. In addition, the failure behaviour of composite specimens under static compressive loading after impact will be compared to the compressive behaviour of open-hole specimens. Instrumented drop-weight impact tests are carried out on quasi-isotropic circular plates made from the IM7/8552 composite system to simulate real-life dropped-tool events, i.e. the low velocity impact condition. Quasi-static lateral load tests are also performed with the same indenter and jig used in the impact tests. The test results are presented and discussed. Finally, compression-after-impact (CAI) strength tests and open-hole compressive strength tests are performed using a Boeing compression rig. The details are described in the following sections.

2 Experimental

2.1 Materials and Lay-Up

The material is IM7/8552, supplied by Hexcel Composites Ltd. as a roll of pre-impregnated tape of epoxy matrix (8552) reinforced by continuous intermediate-modulus unidirectional carbon fibres (IM7). The roll was 350mm wide and about 0.125mm thick. The prepreg was cut into 500mm wide by 500mm long sheets using a metal template and laid up in the quasi-isotropic stacking sequence ([45/-45/0/90]3s), giving a total thickness of about 3mm. The in-plane stiffness and strength of the IM7/8552 unidirectional laminate are given in Table 1. These parameters were obtained by BAE Systems from material strength tests.

Table 1. Stiffness and strength properties for the IM7/8552 composite system

E11 (GPa)   E22 (GPa)   G12 (GPa)   v12   σ11T/σ11C (MPa)   σ22T/σ22C (MPa)   τ12 (MPa)
155         10          4.6         0.3   2400/1300         50/250            85

(σ11T/σ11C are longitudinal tensile and compressive strength, σ22T/σ22C are transverse tensile and compressive strength and τ12 is in-plane shear strength).


Table 2. Impact, CAI and OHC test results ([45/-45/0/90]3s – IM7/8552)

Impact results:
  Incident energy (J)     17.8    18.2    18.7
  Peak force (kN)         9.7     10.1    10.3
  a/W                     0.13    0.17    0.18

Compressive failure strengths (MPa):
  CAI                     280     243     242
  Open hole               271     -       229
  Unimpacted              685

(a = width of impact damaged area; W = laminate width, 100mm)

2.2 Quasi-static Lateral Load Tests

Quasi-static lateral load tests were carried out to measure the maximum deflection at the centre of a circular plate and the strains on the top and bottom surfaces of the plate. For the tests, 150mm diameter circular plates were cut with a diamond saw and their edges were carefully machined. All specimens were first C-scanned before testing in order to check for damage. The bolts of the fully clamped jig, whose internal diameter is 102mm, were tightened using a torque wrench. A series of tests was performed using a flat-nosed loading rod: a single plate was loaded in a number of increments until ultimate failure. The load was transferred from the cross-head to the plates through a flat nose with a diameter of 12mm, using a screw-driven Zwick 1488 universal testing machine with a load capacity of 200kN. The cross-head speed used in this study was 0.5mm per minute. The central deflection of the plate was measured with an LVDT displacement transducer. The experimental setup and specimen jig are shown in Fig. 1.


Fig. 1. (a) Setup of quasi-static lateral load tests with (b) a jig on circular plates

2.3 Low Velocity Impact Tests

Circular plates cut from a 3mm thick IM7/8552 multidirectional laminate ([45/-45/0/90]3s) were subjected to low-velocity impact using a drop-weight test rig with two different impactor masses (1.58kg and 5.52kg). The specimen dimensions and jig are the same as those used in the quasi-static lateral load test (see Fig. 1). The test setup is shown schematically in Fig. 2. An impactor with a flat-ended nose of 12mm in diameter is instrumented with a strain-gauged load cell providing a record of force-time


history. The impactor was dropped at the centre of the specimen from a selected height and was captured after the first rebound. The velocity of the impactor carriage is measured by means of a ruled grid attached to the impactor side, which passes a photo-emitter/photo-diode device mounted on the fixed channel guides to give a pulse of output every time a dark line is crossed. Knowing the spacing of the grid lines and the time for each to pass, the velocity can be calculated before and after the impact event.

Fig. 2. Schematic diagram of the impact test rig

The output of the device is recorded by the Microlink data capture unit, which provides an interface between the rig instrumentation and a PC-compatible computer. The data capture unit has its own internal timer and up to eight channels for data collection, i.e. the impactor velocity, contact force and six strain histories. The maximum sampling rate of the unit is 250 samples/millisecond. The software provided on the computer allows the collection parameters of the data capture unit to be varied, as well as the viewing and storing of the test data. The test data is converted by the software into a Lotus-format file and transferred to a personal computer, where a Microsoft Excel spreadsheet using the physics of motion detailed in the following section is used for further data reduction and analysis. After impact, each specimen was visually inspected for surface damage. The examination of interior damage was carried out primarily using ultrasonic C-scan and occasionally by X-ray radiography.

2.4 CAI Strength Tests

Post-impact tests were performed to determine CAI strengths for a range of impact levels which induce fibre damage. Various methods have been used for this test [11]; a side-supported fixture developed by the Boeing Company for compression residual strength tests was used in the current study. The fixture does not need any special instruments and is easy to use. Fig. 3 (a) and (b) show the fixture and the specimen geometry. The fixture was placed on a fixed compression platen in a screw-driven Zwick 1488 universal testing machine with a load capacity of 200kN. The specimens were loaded until failure at a rate of 0.5mm/min. Alongside the compression of the impacted specimens, several plain specimens which had not been impacted were tested in compression to provide baseline undamaged data, using the modified ICSTM fixture employed in the static compressive test [16]. The plain specimens measure 30mm x 30mm in gauge length and width. In addition, open-hole compression tests were carried out in accordance with the specimen dimensions for the CAI test. Hole diameters were taken from X-ray radiographs by measuring the size of the darkest region of the impacted specimens (see Fig. 7 (b)). These data effectively show what the compressive strength would be if the stiffness of the damaged region were zero.
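The velocity measurement and force-time integration described in Section 2.3 can be sketched numerically as follows (an illustrative reconstruction, not the authors' spreadsheet; the half-sine force pulse is synthetic, while the 5.52 kg mass, the 250 samples/ms rate, the ~10.3 kN peak force and the ~2.6 m/s velocity corresponding to 18.7 J are taken or derived from the text):

```python
import numpy as np

G = 9.81  # m/s^2

def impact_velocity(grid_pitch, line_times):
    """Velocity from the ruled grid: line spacing over time between pulses."""
    dt = np.diff(line_times)
    return grid_pitch / dt.mean()

def force_to_displacement(t, force, mass, v0):
    """Integrate the force-time record twice to obtain displacement.

    Taking downward as positive, the net acceleration of the impactor is
    a = g - F/m (the contact force acts upward); velocity and displacement
    then follow by the trapezoidal rule.
    """
    a = G - force / mass
    v = v0 + np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2 * np.diff(t))))
    x = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2 * np.diff(t))))
    return x

# Synthetic half-sine pulse: 5 ms contact sampled at 250 samples/ms
t = np.linspace(0.0, 5e-3, 1250)
force = 10.3e3 * np.sin(np.pi * t / t[-1])   # peak force ~10.3 kN
# v0 = 2.6 m/s since 0.5 * 5.52 * 2.6**2 is approximately 18.7 J
x = force_to_displacement(t, force, mass=5.52, v0=2.6)
print(x.max())                                # peak deflection (m)
```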


Fig. 3. (a) Boeing fixture and (b) specimen geometry for CAI strength test

3 Results and Discussion

3.1 Quasi-static Lateral Load Tests

Clamped circular specimens were loaded at the centre by a normal load. Fig. 4 shows a typical force-deflection curve for a circular plate, where the deflection was measured by an LVDT at the centre of the plate. During the testing, the first load drop was observed around 10kN with an audible acoustic event (see Fig. 4). The force gradually recovered up to around 11.5kN. The load then fell again but did not recover to 11.5kN. This clearly indicates that a maximum contact force is reached once fibre fracture occurs, and also illustrates the large amount of energy that is lost as the fibres fracture and the loading rod penetrates the plate. Visual and C-scan inspections of the specimens were carried out to examine specimen damage at applied loads from 6kN to 11.5kN in increments of 1kN. No visual damage was found on the top and bottom surfaces of the specimen up to an applied load of 10kN. Significant internal damage was, however, detected by C-scan examination at an applied load of 10kN, as shown in Fig. 5 (a); the damage can be considered a combination of delamination and matrix cracks in the specimen. At an applied load of 11kN, no damage was visually observed on the bottom surface, but some circular damage on the top surface due to the contact force of the loading rod was present, with tiny cracks around the contact area. Finally, much more severe damage was observed around 11.5kN just prior to ultimate failure, caused by tensile fibre fracture on the back face of the specimen allowing the flat-nosed loading rod to eventually penetrate the specimen (see Fig. 5 (b) and (c)). Fig. 5 (a) and (b) show C-scan images taken at applied loads of 10kN and 11.5kN, and Fig. 5 (c) presents photographs taken from the loading face (showing penetration) and from the back face (showing fibre fracture) after an applied load of 11.5kN.

Fig. 4. Typical static load-deflection curve for the circular plate with a diameter of 102mm ([45/-45/0/90]3s – IM7/8552)


Fig. 5. Static damage taken by C-scan at applied loads of (a) 10 kN and (b) 11.5 kN, and (c) photographs of the loading face and back face after an applied load of 11.5 kN ([45/-45/0/90]3s – IM7/8552)

3.2 Low Velocity Impact Tests

Two sets of impact tests were performed with different impactor masses. Firstly, 3mm thick circular plates of the IM7/8552 system were subjected to low-velocity impact with an impactor mass of 1.58kg over the range of incident energy between 5J and 11J. Secondly, impact tests were carried out with an impactor mass of 5.52kg over the range of incident energy between 16J and 19J. The impact response of laminated plates is commonly described in terms of force-time and incident kinetic energy-time traces [4]. This allows the ability of a material to resist impact damage, and the absorbed kinetic energy, to be assessed. In particular, the shape of these curves usually indicates the onset of damage and its propagation. If the response of the plate were purely elastic, with no form of damage or dissipation, and if the plate mass were very small compared with the impactor, then a pure fundamental simple harmonic response would be expected, with sinusoidal force and displacement histories.


Fig. 6. (a) Force-time and (b) force-displacement curves for representative impacts on quasi-isotropic ([45/-45/0/90]3s) IM7/8552 laminates, fully clamped with a 102 mm diameter, at incident energies of 10.85 J and 18.7 J

Fig. 6 (a) shows a typical force versus time history for a damaging impact, here for a 3mm thick circular plate with clamped edges under an incident energy of 18.7J (impactor mass: 5.52kg). Fig. 6 (a) exhibits fluctuation around the peak force, making


it possible to identify the onset of damage. A slower recovery also indicates a decrease in the structural stiffness due to damage. By integrating the force-time curve, the displacement can be calculated. On the basis of the displacement data, the force-displacement curve can be plotted, as shown in Fig. 6 (b) for the test of Fig. 6 (a). Once the impactor energy is exhausted, the load starts to drop and reaches zero at a permanent displacement value, as shown in Fig. 6 (b). These force-displacement curves will be compared with the curve measured from the quasi-static lateral load test in the next section.

After the drop tests, all the circular specimens were inspected using ultrasonic C-scan to assess the extent of internal damage. The specimens subjected to low-velocity impact with the impactor mass of 1.58kg did not show any damage. However, the specimens impacted with the impactor mass of 5.52kg showed significant internal damage, with or without visible damage on the specimen surface depending on the amount of incident energy. Fig. 7 (a) and (b) present the impact damage of the specimen from the test of Fig. 6 (incident energy: 18.7J) using ultrasonic C-scan and X-ray radiography, respectively. For an incident energy of 17.88J, damage on the top and bottom plies was visually observed: tiny cracks around the impact area on the top surface, but severe damage on the bottom plies, including matrix cracking, delamination, fibre splitting and fibre fracture. The intensity of the dark region shown in Fig. 7 (b) is a measure of the extent of severe damage in the specimen; sectioning/polishing and de-ply studies [5,6] revealed that in the very dark area fibre breakage and delaminations exist at almost all interfaces through the thickness of the laminate. The lighter region in Fig. 7 (b) corresponds to the splitting and delamination of the back face rather than internal damage. The circled dark region will be used to replace the impact damage with an equivalent open hole for the compression-after-impact (CAI) strength prediction and compared to the estimated impact damage area in future study.

Fig. 7. Impact damage of a quasi-isotropic ([45/-45/0/90]3s) IM7/8552 laminate, taken from (a) C-scan and (b) X-ray radiograph, with an incident energy of 18.7 J

3.3 Comparison of Quasi-static Lateral Load and Low Velocity Impact Test Results

Impact events that involve a high mass impacting a relatively small target at low velocities can usually be thought of as quasi-static events [3, 4, 7-10]. The most direct way to determine whether an impact event can be considered quasi-static is to compare the two cases experimentally. Figure 8 shows the static and dynamic responses of 3mm thick quasi-isotropic IM7/8552 panels. In the figure, the force-displacement curves of several drop-weight impacts with varying energy levels (between 5.4J and 18.7J) are superposed on a static deflection test. The quasi-static deflection test was carried out to complete perforation of the specimen. The loading paths of all the impact events follow the static curve quite well. Obviously, the dynamic response contains


vibrations, but the major features are still clearly distinguishable. The vibrations are caused by the inertia forces in the early portion of the impact. The amplitude of these vibrations depends on the velocity and the mass of the plate; increasing the velocity will therefore increase the amplitude of the vibration. Because of these vibrations, we would expect more scatter in a dynamic test than in a quasi-static test. The difference between the static and dynamic responses shown in Figure 8 is within the scatter of the data.

Fig. 8. Comparison of the force-displacement curves of impact tests at increasing energy levels with the curve from a continuous quasi-static lateral loading test for circular plates of a 102mm diameter ([45/-45/0/90]3s – IM7/8552)

Fig. 9. Peak contact force against impact energy for tests on clamped circular plates of 102 mm diameter ([45/-45/0/90]3s – IM7/8552)

In Fig. 9, the quasi-static response is again indicated for the same static and impact events as shown in Fig. 8, where the peak contact force for each test is plotted as a function of impact energy. This is compared with the force against energy absorbed for a quasi-static test, the energy absorbed being calculated by integrating the area under the load-displacement curve. It can clearly be seen that at a given energy the peak force is in good agreement between the static and impact cases. The first fall in load, which occurs at an energy of approximately 14J on the static curve, is due to the interior damage (delamination and matrix cracks) of the specimen without visual damage on either surface, as shown in Fig. 5 (a). The third fall in load, at an energy of approximately 26J on the static curve, is due to the initiation of tensile fibre fracture on the back face of the plate caused by the onset of penetration under the indentor (see Fig. 5 (b)). The impact data are in good agreement with the static curve for impact energies of 15J - 17J, where only interior damage is detected, without visual damage on either surface of the plate (see Fig. 10). At the higher impact energy of 18.7J (peak force 10.3kN), the data are also in reasonable agreement with the quasi-static curve (peak force 11.5kN), where fibre fracture on the bottom face is clearly visible. This is a very good indication that the damage mechanisms, from the point at which damage first initiates to the point at which the indentor has penetrated the plate, are similar for the quasi-static and impact loading considered in this study. In addition, the good agreement between the impact peak forces and the static load as a function of impact energy suggests that the damage areas should be in good agreement between the static and dynamic test cases (see Fig. 10). Figure 10 shows C-scan images taken


from the static and impact tests at a peak force of approximately 10kN. It can be seen that the damage areas are similar regardless of the test method.

Fig. 10. C-scan images taken from (a) static and (b) impact test at a peak force of approximately 10 kN for the circular plates of 102 mm diameter ([45/-45/0/90]3s – IM7/8552)

These results were also identified by Sjoblom, P. and Hartness, J. [7], who performed quasi-static and impact tests on 48-ply quasi-isotropic AS-4/3502 circular plates of 123mm in diameter. Both the statically tested and the impacted specimens showed a similar conical shape of damage under the indenter and similar damage areas. They suggested that rate effects on the failure behaviour are minor, and that any effect of elastic wave propagation through the thickness of the specimen is totally negligible for the typical contact times experienced in tests simulating dropped tools or runway debris kicked up during takeoff or landing.

3.4 CAI Strength Tests

Each circular plate was impacted with a known energy level of between 5 and 19J. An energy level between 5 and 16.8J was too insignificant for the test-piece to fail at the impact site; most data for fibre breakage were obtained between 17 and 19J. As explained in Section 3.2, impact damage induced in the large plates has both local and global effects. The former consists of matrix cracking, fibre-matrix debonding and surface microbuckling, all surrounding the slightly dented impact contact area, whereas the latter consists of extensive internal delamination and fibre breakage. The damage results in a lower matrix resin stiffness of the composite material and local changes in fibre curvature, which may contribute to the initiation of local compression failure by shear with a kink band. During the CAI testing, clear cracking sounds were heard around the damaged area due to matrix cracking, fibre-matrix debonding, delamination and fibre breakage. As the applied load was increased, damage in the form of local buckling, like a crack, grew laterally from the impact damage region. In addition, delaminated regions continued to propagate, first in short discrete increments and then rapidly at the failure load. Examination of the failed specimens removed from the test fixture confirmed that the local delaminations extended completely across the specimen width but only a short distance in the axial direction, with a kink shear band through the laminate thickness (see Fig. 11 (b)). This pattern of damage growth [12-15] is similar to that observed in specimens with open holes under uniaxial compression, as described in Reference [16]. Fig. 11 (a) and (b) show a typical impacted specimen before and after the CAI strength test, imaged by X-ray radiography. Specimen failure after the CAI strength test shows a fibre kink-band shear through the thickness. The residual compressive strengths and impact results are summarized in Table 2. In the table, the extent of damage caused by the impact was obtained from X-ray radiographs, as shown in Fig. 7 (b). The compressive strengths of unimpacted plain specimens and open-hole specimens, measured to provide reference values, are also


(a) X-ray before CAI strength test

(b) X-ray after CAI strength test

Fig. 11. Impacted CAI specimen (a) before and (b) after CAI strength test showing compression failure with a kink-band shear ([45/-45/0/90]3s – IM7/8552)

included in the table. For the open-hole specimens, the observed impact damage is replaced with an equivalent open hole. It can be seen that the residual strengths are reduced by up to 64% of the unimpacted compressive strength between energy levels of 17.8J and 18.7J. For the equivalent open-hole specimens, the failure strengths (OHC) are in good agreement with the residual strengths (CAI); the difference is less than 10%. Soutis et al. [12, 15, 17] have also applied this strategy, which considers the impact damage site as an equivalent hole, to predict the CAI strength of different composite systems and lay-ups, using the damage width measured from X-ray radiographs. Their theoretical predictions are in good agreement with the experimental measurements.
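For example, the largest knock-down follows directly from the values in Table 2 as

\frac{685 - 242}{685} \times 100\% \approx 64.7\%,

consistent with the figure of about 64% quoted above.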

4 Concluding Remarks

A series of quasi-static and dynamic tests was performed using 3mm thick circular plates ([45/-45/0/90]3s - IM7/8552) with a flat-ended impactor to compare the static response with the dynamic response and to identify the damage patterns of each. In the dynamic tests, two different impactor masses were used with varying impact energy levels: an impactor mass of 1.58kg over the range of incident energy between 5J and 11J, and 5.52kg between 16J and 19J. During the quasi-static and impact testing, the development of damage was monitored using C-scan and X-ray radiography. Significant interior damage was detected at similar applied peak loads, 10kN in the quasi-static test and 9.8kN in the impact test, prior to the initiation of tensile fibre damage on the tensile face of the plates under the indenter. In addition, it has been found that the damage areas for both tests are similar (see Fig. 10). From the investigation of damage patterns performed by Sjoblom, P. and Hartness, J. [7] using microscopy, it is identified that both the statically tested and the impacted circular plate specimens have a similar conical shape of damage under the indenter. Comparing the force-displacement responses obtained from both tests, it was confirmed that the curves of several drop-weight impacts with varying energy levels (between 5.4J and 18.7J) follow the static curve quite well (see Fig. 8). In addition, the peak contact force-impact energy graph, plotted for comparison with the force against energy absorbed for a static test (see Fig. 9), showed that at a given energy the peak force is in good agreement between the static and impact cases.


Finally, CAI tests were conducted to determine the residual compressive strength of the plates impacted at energy levels between 17J and 19J. The failure behaviour of the specimens was very similar to that observed in laminated plates with open holes under compression loading. The residual strengths for impact energies between 17J and 19J varied from 280MPa to 242MPa, a reduction of up to 64% of the unimpacted compressive strength. The measured open-hole compressive strengths were in good agreement with the residual strengths when the impact damage site was considered as an equivalent hole, the size of the hole being determined from X-radiograph images. The experimental results above indicate that the low-velocity impact response of the plates tested in this study is close to quasi-static behaviour. This means that inertia effects are negligible and hence the plate response is the fundamental, or statically deflected, mode. It is also indicated that the impact damage site can be modelled as an equivalent hole for CAI strength. On the basis of these experimental findings, simple analytical models will be developed in future work to predict the impact damage area, the reduced elastic properties due to the impact load and the CAI strength, and the results obtained in this study will be compared to the predicted results.

Acknowledgments. This work was carried out with the financial support of the Structural Materials Centre, QinetiQ, Farnborough, UK. The authors are grateful for many useful discussions with Professor G. A. O. Davies of the Department of Aeronautics, Imperial College London and Professor P. T. Curtis of the Defence Science and Technology Laboratory (DSTL), UK.

References

[1] Whitehead R S (1985) ICAF National Review, Pisa, 10-26
[2] Greszczuk L B (1982) Damage in Composite Panels due to Low Velocity Impact. Impact Dynamics, Ed. Zukas J A, J. Wiley
[3] Dahsin L (1988) Impact-induced Delamination: a View of Bending Stiffness Mismatching. Journal of Composite Materials 22:674-692
[4] Zhou G and Davies G A O (1995) Impact Response of Thick Glass Fibre Reinforced Polyester Laminates. International Journal of Impact Engineering 16(3):357-374
[5] Guynn E G and O'Brien T K (1985) The Influence of Lay-up and Thickness on Composite Impact Damage and Compression Strength. Proc. 26th Structures, Structural Dynamics, Materials Conf., Orlando, FL, 187-196
[6] Hitchen S A and Kemp R M (1994) The Effect of Stacking Sequence and Layer Thickness on the Compressive Behaviour of Carbon Composite Materials: Impact Damage and Compression after Impact. Technical Report 94003, Defence Research Agency, Farnborough
[7] Sjoblom P O and Hartness J T (1988) On Low-Velocity Impact Testing of Composite Materials. Journal of Composite Materials 22:30-52
[8] Delfosse D and Poursartip A (1997) Energy-Based Approach to Impact Damage in CFRP Laminates. Composites Part A 28(7):647-655
[9] Watson S A (1994) The Modelling of Impact Damage in Kevlar-Reinforced Epoxy Composite Structures. PhD Thesis, University of London
[10] Hou J (1998) Assessment of Low Velocity Impact Induced Damage on Laminated Composite Plates. PhD Thesis, University of Reading


[11] Hodgkinson J (2000) Mechanical Testing of Advanced Fibre Composites. Woodhead Publishing Ltd
[12] Soutis C and Curtis P T (1996) Prediction of the Post-Impact Compressive Strength of CFRP Laminated Composites. Composites Science and Technology 56(6):677-684
[13] Zhou G (1996) Effect of Impact Damage on Residual Compressive Strength of Glass-Fibre Reinforced Polyester (GFRP) Laminates. Composite Structures 35(2):171-181
[14] Davies G A O, Hitchings D and Zhou G (1996) Impact Damage and Residual Strengths of Woven Fabric Glass/Polyester Laminates. Composites Part A 27(12):1147-1156
[15] Hawyes V J, Curtis P T and Soutis C (2001) Effect of Impact Damage on the Compressive Response of Composite Laminates. Composites Part A 32(9):1263-1270
[16] Soutis C, Lee J and Kong C (2002) Size Effect on Compressive Strength of T300/924C Carbon Fibre-Epoxy Laminates. Plastics, Rubber and Composites 31(8):364-370
[17] Soutis C, Smith F C and Matthews F L (2000) Predicting the Compressive Engineering Performance of Carbon Fibre-Reinforced Plastics. Composites Part A 31(6):531-536

Development of and Research on Energy-Saving Buildings in Korea

Hyo-Soon Park 1, Jae-Min Kim 2, and Ji-Yeon Kim 3

1 Energy Efficiency Research Department, Korea Institute of Energy Research, Daejeon, Korea, [email protected]
2 ESRU, Department of Mechanical Engineering, University of Strathclyde, Glasgow, United Kingdom
3 Architectural Engineering Department, Inha University, Incheon, Korea

Abstract. Korea has sparse energy reserves, and over 97% of the total energy it consumes is imported. Furthermore, fossil fuels comprise more than 80% of the total energy consumed in Korea, resulting in the emission of greenhouse gases like carbon dioxide, which contribute to global warming. The building sector is one of the major energy-consuming sectors in Korea: the energy consumption of buildings represents about 24% of the total energy consumption of the whole nation, and it is rising with the continued growth of the Korean economy. The energy use of buildings depends on a wide variety of interacting features that define the performance of the building system. Many different research buildings utilizing several kinds of energy conservation technologies were constructed at KIER, in Taeduk Science Town, to provide feedback regarding the most effective energy conservation technologies. This paper introduces the energy conservation technologies of new residential houses, passive and active solar houses, super-low-energy office buildings, "green buildings," and "zero-energy houses," whose utilization will help protect the quality of the environment and conserve energy.

1 Introduction

In Korea, the amount of energy consumed, as shown in Table 1, was 173,584 ktoe as of 2006. Building energy consumption accounts for 22.9% and transportation for 21.0% of the total national energy consumption. The most recently developed energy technologies have the potential to promote the efficient use of energy, to enhance the country's energy self-sufficiency, and to prevent environmental pollution due to energy consumption. Since its establishment in 1977, the Korea Institute of Energy Research (KIER), a non-profit scientific research institute supported by the government, has been committed to conducting research in the areas of energy conservation policy, namely the thermal insulation standard (K-value), typical-energy-consumption criteria, the ESCO (Energy Service Company) business, and the building energy rating system for the reduction of energy consumption, as well as development in the energy conservation areas (i.e., alternative energy and various environmental technologies related to fossil energy use). With its accumulated technological capacity and professionalism, KIER will continue to contribute to addressing the country's many difficult energy-related problems, and will also concentrate its efforts on the research and development of new energy technologies for the 21st century.

Table 1. Final energy consumption by demand

                     Total      Industry   Building   Transportation
Consumption [ktoe]   173,584    97,235     39,822     36,527
Share                (100%)     (56.0%)    (22.9%)    (21.0%)

2 Energy Conservation in Residential Houses

2.1 Retrofitting of Existing Detached Houses

Existing detached houses are being retrofitted for energy conservation. It is important to evaluate the energy-saving effect of retrofitting a non-insulated house and to show the importance of retrofitting to home owners by presenting reliable data to them.

Fig. 1. Perspective of an existing detached test house
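The cost-benefit analysis reported below reduces to a simple payback ratio; a minimal sketch of that arithmetic follows (the retrofit cost and annual heating cost here are hypothetical placeholders, and only the measured 51.9% saving is taken from this study):

```python
def simple_payback(retrofit_cost, annual_heating_cost, saving_fraction):
    """Years needed for the energy savings to repay the retrofit cost."""
    annual_saving = annual_heating_cost * saving_fraction
    return retrofit_cost / annual_saving

# Hypothetical figures: a 2,600,000 KRW retrofit on a house spending
# 900,000 KRW/year on heating, with the measured 51.9% saving
print(simple_payback(2_600_000, 900_000, 0.519))  # ~5.6 years
```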

The energy performance and consumption of the house were measured and analyzed after retrofitting, and a cost-benefit analysis was conducted for each of the retrofitting measures. The following results were obtained from the experiments conducted on the test house: the total reduction in the heating energy requirement achieved by retrofitting was found to be 51.9%, and the payback period of the initial investment for retrofitting was estimated to be five to six years.

Table 2. Before-and-after comparison

                     Pre-Retrofit [%]   Post-Retrofit [%]   Improvement [%]
Boiler efficiency    79.8               85.2                5.4
Heating efficiency   66.8               75.2                8.4
Piping losses        13.1               10.0                3.1
Boiler losses        20.2               14.8                5.4

2.2 New Residential Houses

The energy conservation problems of residential houses have been viewed macroscopically and microscopically in every detailed field related to the design, construction,


Fig. 2. New residential houses (the former: A-type; the latter: B-type)

auditing, and maintenance of houses. In particular, studies on energy conservation in a newly constructed house clearly emphasize the need to consider energy savings from the initial stage of house design. The solar-saving fractions of the house are as follows:

- 43~46% (indoor temperature set at 18 ℃); and
- 39~42% (indoor temperature set at 20 ℃).

The modified heating load of the A-type experimental model house was reduced by 5.55% compared with the B-type house, due to its heavier insulation. With the insulated door attached, the difference between the indoor and outdoor temperatures was decreased by 1.58 ℃ compared with the case without an insulated door, and the difference between the daytime and nighttime indoor temperatures was smaller in the former case than in the latter. The indoor temperature on the southern side was 1.29~1.71 ℃ higher than that on the northern side.

Table 3. K-value comparison of test houses (k-values in kcal/m²h℃)

Insulation part   A-Type insulation (thickness)           k-value   B-Type insulation (thickness)           k-value
Ceiling           Styrofoam (50mm) + Glass wool (50mm)    0.246     Styrofoam (25mm) + Glass wool (25mm)    0.399
Wall              Styrofoam (100mm)                       0.277     Ureafoam (50mm)                         0.465
Floor             Ureafoam (100mm)                        0.338     Styrofoam (50mm)                        0.346
Window            Pair glass                              2.483     Pair glass                              2.483
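The effect of the two insulation packages can be illustrated with a plain steady-state conduction estimate, Q = k·A·ΔT, using the k-values of Table 3 (a sketch only: the component areas and the 20 ℃ temperature difference are assumptions for illustration, not figures from the study):

```python
# k-values [kcal/m2.h.degC] from Table 3; areas [m2] and dT are assumed
K_A = {"ceiling": 0.246, "wall": 0.277, "floor": 0.338, "window": 2.483}
K_B = {"ceiling": 0.399, "wall": 0.465, "floor": 0.346, "window": 2.483}
AREA = {"ceiling": 60.0, "wall": 90.0, "floor": 60.0, "window": 12.0}
DELTA_T = 20.0  # assumed indoor-outdoor temperature difference [degC]

def conduction_loss(k_values):
    """Envelope conduction loss, Q = sum of k * A * dT, in kcal/h."""
    return sum(k * AREA[part] * DELTA_T for part, k in k_values.items())

q_a, q_b = conduction_loss(K_A), conduction_loss(K_B)
# With these assumed areas the A-type envelope loses roughly 23% less heat
print(q_a, q_b, 1 - q_a / q_b)
```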







Table 4. Heating loads & energy saving in each house (2-story house, area: 121.86 m²)

                               Non-Insulated House   New Residential House (A-Type)   New Residential House (B-Type)
Hourly heating load [kcal/h]   22,675                7,447.7                          8,591.7
Energy saving rate [%]         100 (reference)       67.1                             62.1
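As a cross-check, the saving rates follow directly from the hourly loads: 1 - 7,447.7/22,675 ≈ 67.2% for the A-type and 1 - 8,591.7/22,675 ≈ 62.1% for the B-type house, matching the quoted rates to within rounding.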

3 Development of a Model House Using Energy-Efficient Design Methods

Since 1980, the results of research have been used to establish energy-saving methods in residences. Model energy-efficient housing plans were prepared to demonstrate energy-efficient design methods in residences to architects, clients, and constructors, and for nationwide dissemination.

Fig. 3. Model houses (left: A-type; right: B-type)

The objective annual heating consumptions in dwellings and the thermal-comfort criteria of indoor environments in Seoul were also estimated. Model designs of energy-efficient residences, and their specifications, were made after investigating the applicability of the current energy-saving methods in dwellings. After this, the annual heating loads and annual heating energy consumptions, and the costs of the construction of these buildings, were estimated using the DOE-2 building energy analysis computer program for the Seoul climatic region. The objective annual heating load of the model houses for the Seoul climatic region (100 Mcal/m2․y) can be achieved even with a 50 mm insulation thickness in each building envelope. In addition, a thermal insulating material should be attached to the basement wall to prevent surface condensation in summer. The case study that was conducted in this research showed annual heating loads of 117.4 Mcal/m2․y and 116.0 Mcal/m2․y for the single-story and two-story residences, respectively.


Table 5. Comparison of annual heating loads

  Type                                                      82.5 m² (A-Type)   132 m² (B-Type)
  Non-Insulated House [Mcal/m²·y]                           151.5              145.5
  Model House by the Energy-Efficient Design Methods
  [Mcal/m²·y]                                               117.5              116.0
  Energy Saving Rate [%]                                    22.5               20

4 Development of a Solar House

4.1 Development of a Passive Solar House

A passive solar house was developed, with priority on the development and application of passive technologies for energy conservation in houses. The principles of the passive system (Trombe-Michel wall) are as follows:

– The thermal storage mass for the building is a south-facing wall of masonry or concrete with a glazed surface to reduce heat losses to the outside. Solar radiation falls on the wall, is absorbed by it, and is transferred through it by conduction; from the inner surface it radiates, and heat is transferred by convection to the living spaces.
– Through the openings or vents at the top and bottom of the storage mass, hot air rises and enters the living space, drawing cooler room air through the lower vents back into the collector air space.
– The solar wall may be used as a solar chimney in summer, when the continued air movement exhausts hot air from the solar wall and draws in cooler air from the north side of the house for ventilation.

Fig. 4. A passive solar house

The thermal performances of passive solar systems were evaluated through actual experiments, and the problems that can arise in relation to their implementation were discussed. Computer simulation programs were also developed for the theoretical performance prediction of passive buildings. The criteria were prepared by examining all the existing design schemes and synthesizing the performance evaluation methods that have been developed up to the present time. The solar-saving fraction was found to be 27.3%.

Table 6. Description of a passive solar house

  No    Item                        Description of Installation
  1-1   Direct Gain System          Living Room
        Trombe-Wall System          Bed Room
        Day Lighting                Hall
  1-2   Insulation Thickness        Ceiling: 200 mm; Wall: 100 mm; Floor: 100 mm
  2     Heating Area                97.6 m² (105 ft²)
  3     Building Structure          Masonry (Storage Material: Cement Brick)
  4     Auxiliary Heating System    Oil Boiler + Floor Radiant Heating
  5     Pay-Back Period             6~10 years

4.2 Development of an Active Solar House

The concepts of active solar systems are well known to the public. Past experience reveals, however, that this technology is not yet ready for massive commercialization in Korea, since it is not yet economical (it has a high initial investment) and poses difficult maintenance problems. The 1981 project "Development and Improvement of the Active Solar-Heating System" was conducted as basic work to develop low-cost solar-heating systems, which are expected to make solar utilization economical as a low-temperature thermal source of energy for use in space or water heating. To improve the chances of attaining this objective, the work was divided into two parts: the software and the hardware aspects.

Fig. 6. Active solar house

The scope of these studies includes the following four major works:

– performance evaluation and formulation of a design method for liquid-type flat-plate solar collectors;
– utilization of a computer simulation program for designing active solar-heating systems;


– construction of a small-scale water-heating system and improvement of the existing active solar space-heating system for demonstration and performance evaluation; and
– preparation of a test procedure, an evaluation scheme, and criteria for installation/performance.

Table 7. Description of an Active Solar House

  No   Item                           Description of Installation
  1    Heating Area                   127 m² (136 ft²)
  2    Use                            Space Heating & Water Heating
  3    Collector Type                 Flat-Plate Type Liquid Collector
  4    Collector Area                 24 m²
  5    Working Fluid                  Water (Drain-Down System)
  6    Storage Tank Volume            2.4 m³
  7    Auxiliary Heating System       Oil Boiler
  8    Annual Solar Saving Fraction   50~60%

5 Super Low Energy Office Building

Residential buildings represent more than 70% of all the buildings in Korea. From this point of view, the spread of newly developed technologies and the carrying out of a demonstration project to prove energy savings, among other parameters, are essential for this study. The project was carried out from July 1994 to December 1998. The focus of this study was the demonstration of developed technologies that are not yet used commercially in Korea, in order to promote them. The scope of the study was divided into two categories: research work to provide detailed data for the design, and discussion of the design and construction work with the project managers of six subprojects.

Fig. 7. Super-Low-Energy Office Building

The contents of the study were as follows:

– the design of a double-skin structure;
– a seasonal thermal storage system;
– cool-tube-related technologies;
– a co-generation system;
– a vacuum-tube solar collector for heating and cooling; and
– a PV system for building application.

The super-low-energy office building was constructed in November 1998, with three stories and an underground floor. It is a reinforced-concrete structure with a total floor area of 1,102 m², and it is now undergoing test operation. A total of 74 kinds of elementary technologies were applied in the building: 23 kinds of energy-load-reduction methods through architectural planning, 35 kinds of mechanical-system-related technologies, and 16 kinds in electrical-system fields. The following are brief constructional descriptions of six of these major technologies.

Table 8. Comparison of major energy-saving tools

  Major Technologies                       Energy Savings [Mcal/m²·y]   Notes
  Design of Double Skin Structure          22.9                         Width: 1.5 m, Height: 10.8 m
  Cool Tube Related Technologies           13.0                         Length: 70 m, Buried Depth: 3 m, Pipe Diameter: 30 cm
  Small Scale Co-generation System         56.4                         Effective Area: 265 m²
  Vacuum Tube Solar Collector for
  Heating and Cooling                      26.0                         Effective Area: 50 m²
  Total                                    106.3

6 Development of the Green Building

A "green building" is a building that is designed, constructed, operated, and eventually demolished in such a way as to have a minimal impact on the global and internal environment.

Fig. 8. Exterior and interior of the green building


Sustainable development is the challenge of meeting growing human needs for natural resources, industrial products, energy, food, transportation, shelter, and effective waste management while conserving and protecting environmental quality and the natural-resource base essential for future life and development. The concept recognizes that long-term human needs cannot be met unless the earth's natural physical, chemical, and biological systems are also conserved. Adopting green-building technologies will not only decrease the energy consumption of buildings but also improve their environmental conditions. These technologies include:

– a double envelope on the south façade;
– an atrium for daylighting and natural ventilation;
– movable shading devices on the west façade;
– a gray-water recycling system;
– a rainwater collection system;
– an energy-efficient HVAC system;
– environmentally friendly building products;
– solar collectors on the roof; and
– solar cells.

7 Development of the Zero Energy House

Both energy conservation and alternative-energy technologies must be applied in the construction of an energy-self-sufficient, zero-energy house. Zero-energy and solar technologies must be developed to overcome the energy crisis in the near future. This cannot be realized by the separate application of unit technologies, because a building is an integrated system of several energy-conservation technologies. As such, related energy technologies, including solar energy, must be developed and gradually adopted, considering the installation cost. The objective of this project was to develop a demonstration house with 78% net-energy self-sufficiency within 3 years (January 2001 to December 2003), to commercialize it within 6 years, and to reach 100% self-sufficiency within 10 years. Starting in January 2001, we began to research the integration of technologies such as super-insulation and air-tightness, solar collectors, and a heat-recovery ventilation system. The most important factors in super-insulated thermal envelope design are low heat transfer, air leakage, and moisture damage. The heat transmission coefficient of the thermal envelope is under 0.15 kcal/m²h℃, and an exterior insulation system was selected for the Zero Energy House (Fig. 9). The air/vapor barrier should be installed


inside the insulation, and the best material is a 0.03~0.05 mm polyethylene film. At the joint between two sheets of polyethylene, the sheets should be overlapped by 50~150 mm.

Fig. 9. Zero Energy House
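As a back-of-the-envelope check of the 0.15 kcal/m²h℃ target, one can estimate the single-layer insulation thickness it implies. The conductivity used in the Python sketch below is a typical handbook value for polystyrene-type insulation, not a figure from this project, and all other wall layers and surface resistances are ignored.

    lam = 0.034 * 0.8598   # assumed conductivity 0.034 W/m·K, converted to kcal/m·h·degC
    U_target = 0.15        # envelope target from the text [kcal/m^2·h·degC]
    d = lam / U_target     # required thickness in metres (d = lambda / U)
    print(round(d * 1000)) # about 195 mm

That this comes out near 200 mm is of the same order as the insulation thicknesses quoted earlier for the experimental houses.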

8 Conclusion

The energy consumption of the building sector in Korea has been rising with the growth of the Korean economy. At the same time, the demand for comfortable indoor environments is also increasing. It is thus very important to consider not only building energy conservation but also indoor air quality (IAQ). Finally, by constructing, investigating, surveying, and testing experimental and model buildings, this study established criteria and measures for energy conservation in existing and new buildings, and it suggested the construction of "green buildings" to protect environmental quality and the country's natural-resource base, which are essential for future life and development.


Non-iterative MUSIC-Type Imaging Algorithm for Reconstructing Penetrable Thin Dielectric Inclusions

Won-Kwang Park¹, Habib Ammari², and Dominique Lesselier¹

¹ Département de Recherche en Electromagnétisme, Laboratoire des Signaux et Systèmes (CNRS-Supélec-Univ. Paris Sud 11), 3 rue Joliot-Curie, 91192 Gif-sur-Yvette cedex, France
[email protected]
² Centre de Mathématiques Appliquées (CNRS-Ecole Polytechnique), 91128 Palaiseau cedex, France

Abstract. We consider a non-iterative MUSIC-type imaging algorithm for reconstructing thin, curve-like penetrable inclusions in a two-dimensional homogeneous space. It is based on an appropriate asymptotic formula for the scattering amplitude. Operating at a fixed nonzero frequency, it yields the shape of the inclusion from the scattered fields, in addition to an estimate of the length of the supporting curve. Numerical implementation shows that it is a fast and efficient algorithm.

1 Introduction

From a physical point of view, an inverse scattering problem is the problem of determining unknown characteristics of an object (shape, internal constitution, etc.) from scattered field data. Throughout the literature, various algorithms for reconstructing an unknown object have been suggested, most based on Newton-type iteration schemes. Yet, for the successful application of these schemes, one needs a good initial guess, close enough to the unknown object; without one, large computational costs may ensue. Moreover, iterative schemes often require regularization terms that depend on the specific problem at hand. Hence, many authors have suggested non-iterative reconstruction algorithms, which could at least provide good initial guesses. Related works can be found in [2, 3, 4, 9, 11, 12]. In this contribution, we consider a non-iterative MUSIC-type algorithm for retrieving the shape and estimating the length of thin inclusions whose electric permittivities differ from that of the embedding (homogeneous) space. The application foreseen is the imaging of cracks (seen as thin screens) from electromagnetic measurements, formulated as an inverse scattering problem for the two-dimensional Helmholtz equation. In general, cracks have electrical parameters different from their surroundings, and it is not necessary to obtain their exact values; the useful information about cracks might simply be their shape, with length as a by-product. The contribution is organized as follows. An asymptotic expansion of the scattering amplitude associated with penetrable thin inclusions is introduced by approaches in


harmony with those recently developed for small bounded inclusions in electromagnetics, e.g., [3] and [4]. This enables us to derive a MUSIC-type non-iterative imaging algorithm. Numerical simulations then illustrate its pros and cons. Let us note that we consider a pure dielectric contrast between inclusions and embedding medium in the Transverse Magnetic (TM) polarization, yet the analysis can be extended to a pure magnetic case and to combination cases as well, in both TM and TE (Transverse Electric) polarizations [12].

2 Asymptotic Expansion of the Scattering Amplitude

Let us consider the two-dimensional electromagnetic scattering by a thin penetrable inclusion, Γ, in a homogeneous space R². The inclusion, illustrated in Fig. 1, is curve-like, i.e., it lies in the neighborhood of a curve: Γ = {x + η(x) : x ∈ σ, η ∈ (−h, h)}, where the supporting σ is a simple, smooth curve of finite length, and where h is a positive constant which gives the thickness of the inclusion. All materials involved are non-magnetic (permeability μ ≡ 1) and characterized by their dielectric permittivity at the frequency of operation ω; we define the piecewise constant permittivity …

Fig. 1. Illustration of a two-dimensional thin penetrable inclusion

• If gv(τ) > 0, then Cv(τ) = α gv(τ), where α is some constant.
• Otherwise, the node stops sending encoded packets until gv(τ) becomes > 0,

where

  gv(τ) = max over neighbors u (with v ∈ Hu) of (Dv(τ) − Du(τ)) / |Hu|.

This heuristic has some strong similarities with the previous heuristics IRON and IRMS. Consider the local received rate at one destination v: DRAGON ensures that every node will receive a total rate at least equal to the average gap between the node and its neighbors, scaled by α; that is, the local received rate at time τ verifies

  C(V \ {v}) ≥ α ( (1/|Hv|) Σ_{u∈Hv} Du(τ) − Dv(τ) ),

which would ensure that the gap is closed in a time on the order of 1/α if the neighbors did not receive new innovative packets.

4.3 Termination Protocol

A network coding protocol for broadcast requires a termination protocol in order to decide when retransmissions of coded packets should stop. Our precise terminating condition is as follows: when a node (a source or an intermediate node) itself and all its known neighbors have sufficient data to recover all source packets, the transmission stops. This stop condition requires information about the status of neighbors, including their ranks. Hence, each node manages a local information base to store one-hop neighbor information, including their ranks.


Algorithm 3. Brief Description of Local Info Base Management Algorithm

1. Nodes' local info notification scheduling: Nodes start notifying their neighbors of their current rank and lifetime when they start transmitting vectors. The notification can generally be piggybacked in data packets if the node transmits a vector within the lifetime interval.
2. Nodes' local info update scheduling: On receiving a notification of rank and lifetime, the receivers create or update their local information base by storing the sender's rank and lifetime. If the lifetime of a node's information in the local information base expires, the information is removed.

In order to keep up-to-date information about neighbors, every entry in the local information base has a lifetime. If a node does not receive a notification for update before the lifetime of an entry expires, the entry is removed. Hence, every node needs to provide updates to its neighbors. To do so, each node notifies its current rank with a new lifetime. The notification is usually piggybacked in an encoded data packet, but can be delivered in a control packet if a node has no data to send during its lifetime. A precise algorithm to organize the local information base is described in Algorithm 3.

The notification of rank has two functions: it acts both as a positive acknowledgement (ACK) and as a negative acknowledgement (NACK). When a node has sufficient data to recover all source packets, the notification works as an ACK; when a node needs more data to recover all source packets, the notification functions as a NACK. In the latter case, a receiver of the NACK could have already stopped transmission, and thus detects and acquires a new neighbor that needs more data to recover all source packets. In this case, the receiver restarts transmission. The restarted transmission continues until the new neighbor notifies that it has enough data, or until the entry of the new neighbor expires and is therefore removed.
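As a minimal illustration of how the rate rule of Section 4.2 and this termination condition might fit together at a single node, consider the following Python sketch. It assumes, beyond what is specified above, that each notification also carries the sender's neighborhood size |Hu| (needed to evaluate gv); all names and structures are illustrative, not part of the DRAGONCAST specification.

    from dataclasses import dataclass

    @dataclass
    class NeighborInfo:
        rank: int          # D_u: rank last notified by neighbor u
        degree: int        # |H_u|: assumed to be piggybacked with the rank
        expires_at: float  # lifetime of this entry

    class DragonNode:
        def __init__(self, alpha, generation_size):
            self.alpha = alpha
            self.M = generation_size   # number of source packets
            self.rank = 0              # D_v: rank of this node's decoding matrix
            self.neighbors = {}        # neighbor id -> NeighborInfo

        def _live(self, now):
            # Purge expired entries, as in the local info base management.
            self.neighbors = {i: n for i, n in self.neighbors.items()
                              if n.expires_at > now}
            return list(self.neighbors.values())

        def gap(self, now):
            # g_v(tau) = max over known neighbors u of (D_v - D_u) / |H_u|
            live = self._live(now)
            return max(((self.rank - n.rank) / n.degree for n in live),
                       default=0.0)

        def send_rate(self, now):
            # C_v = alpha * g_v if g_v > 0; otherwise the node stops sending.
            g = self.gap(now)
            return self.alpha * g if g > 0 else 0.0

        def should_terminate(self, now):
            # Stop condition: this node and all known neighbors can decode.
            return (self.rank >= self.M and
                    all(n.rank >= self.M for n in self._live(now)))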

5 Theoretical Analysis of DRAGON

5.1 Overview

In effect, DRAGON performs a feedback control which intends to equalize the rank of one node with the rank of its neighbors by adapting the rates of the nodes. However, notice that a precise control-theoretic analysis of DRAGON is complicated by the fact that the rank gap does not behave like a simple "physical" output, and that we have the following properties for two neighbor nodes u, v:

• If Du > Dv, then every transmission of u received by v will increase the rank of v (is innovative).
• If Du ≤ Dv, then a transmission of u received by v may or may not increase the rank of v; both cases may occur.

As a result, there is some uncertainty in the control about the effect of increasing the rate of a neighbor that has a lower rank than another node. A refined dynamic approach would make detailed statistics about the innovation rate and use this information in


the control; however, the approach used in DRAGON is simpler and more direct. If a node has a lower rank than all its neighbors, it will stop sending packets; this amounts to pessimistically estimating that transmissions from nodes with higher rank are non-innovative. Although this tends to make the rates less stable over time, intuitively this property might allow DRAGON to be more efficient.

5.2 Insights from Tandem Networks

Although the exact modeling of the control is complex, some insight may be gained from approximation. Consider one path (s = v0, v1, ..., vn) from the source s to a given node vn, as seen in Fig. 2 (Line Network). Denote Dk = Dvk and Ck = Cvk. Assume now that the ranks in the nodes verify Dk+1(τ) < Dk(τ) for k = 0, ..., n−1; a fluid approximation then yields the following equations:

  dDk+1(τ)/dτ = Ck(τ) + Ik(τ),   Ck(τ) = αk ( Dk(τ) − Dk+1(τ − δk(τ)) ),   with αk = α / |Hvk+1|,   (1)

where Ik(τ) is the extra innovative packet rate at vk+1 from neighbors other than vk, and δk(τ) is the delay for node k+1 to get information about the rank of its neighbor k. If the rank is piggybacked on each transmitted packet, then δk(τ) ≈ 1/Ck(τ). If we neglect the delays δk(τ) and consider a linear network composed exclusively of the path, then αk = α/2 and Ik(τ) = 0. Let β = α/2; then (1) yields a sequence of first-order equations for the Dk, solvable with standard resolution methods:

  Dk(τ) = Mτ − (kM/β)(1 − e^(−βτ)) − e^(−βτ) Pk(τ),

where Pk(τ) is a polynomial in τ (Pk(0) = 0). This result shows that, for a line network, when τ → ∞ the ranks in the nodes v0, v1, ..., vk are such that the dimension gap between two neighbors is M/β, and this occurs after a time on the order of magnitude of 1/β = 2/α: the rank of the nodes decreases linearly from the source to the edge of the network.
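The closed form above can be checked numerically. The short Python script below, a minimal sketch, integrates the simplified fluid equations dDk+1/dτ = β(Dk − Dk+1) with D0(τ) = Mτ (delays and cross-traffic neglected, as above); all parameter values are illustrative assumptions, not taken from the simulations of Section 6.

    import numpy as np

    alpha, M, n = 1.0, 10.0, 6      # gain, source rate, path length (assumed)
    beta = alpha / 2.0
    dt, T = 1e-3, 60.0              # Euler step and time horizon

    D = np.zeros(n + 1)             # D[k] = fluid rank at node v_k
    for s in range(int(T / dt)):
        D[0] = M * (s * dt)         # source rank grows at rate M
        D[1:] += dt * beta * (D[:-1] - D[1:])

    print(D[:-1] - D[1:])           # every gap approaches M/beta = 20

In agreement with the analysis, the printed gaps converge to M/β, and nodes farther from the source settle later, on a time scale of roughly 1/β per hop.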

6 Experimental Results

6.1 Evaluation Metrics

6.1.1 Metric for Energy Efficiency

To evaluate the performance of our broadcasting protocol DRAGONCAST, we measured the energy efficiency of the method using Eref−eff, the ratio between Ecost and Ebound, where Ecost is the total number of transmissions needed to broadcast one data packet to the entire network and Ebound is a lower bound on the possible values of Ecost. The metric for efficiency, Eref−eff, is always larger than one and may approach one only when the protocol becomes close to optimality (the opposite is false). As


indicated previously, Ecost, the quantity appearing in the expression of Eref−eff, is the average number of packet transmissions per broadcast of one source packet. We compute Ecost directly as:

• Ecost = total number of transmitted packets / number of source packets

The denominator of Eref−eff, Ebound, is a lower bound on the number of transmissions needed to broadcast one unit of data to all N nodes, and we compute it as N/Mavg−max, where Mavg−max is the average of the maximum number of neighbors. The value of Ebound comes from the assumption that a node has at most Mavg−max neighbors, so that one transmission can provide new and useful information to at most Mmax nodes. Notice that the maximum number of neighbors (Mmax) evolves in a mobile network; hence we compute the average of Mmax, Mavg−max, over the whole broadcast session after measuring Mmax at periodic intervals.

6.1.2 Energy Efficiency Reference Point for Routing

In order to obtain a reference point for routing, we use the upper bound on efficiency without coding (Ebound−ref−eff) of Fragouli et al. [12]. Their argument works as follows: consider the broadcasting of one packet to an entire network, and consider one node in the network which retransmits the packet. To propagate the packet further through the network, another connected neighbor must receive the forwarded packet and retransmit it, as seen in Fig. 3 (Ebound without coding). Considering the inefficiency due to the fact that any node inside the shared area receives duplicated packets, a geometric upper bound for routing can be deduced.

6.2 Simulation Results

In this section, we start the analysis of the performance of the various heuristics by considering their efficiency in terms of Ecost. The simulations in Fig. 4 (Ebound comparison of different heuristics) were performed on several graphs with default parameters but M = 10 (relatively sparse networks) and with three rate selections: optimal rate selection, IRMS, and DRAGON. In addition to the default parameters of NS-2 (version 2.31), we used the following simulation parameters: number of nodes = 200; range defined by M, the expected number of neighbors; positions of the nodes: random uniform i.i.d.; generation size = 500; field Fp with p = 1078071557; α = 1 (for DRAGON).

6.2.1 Theory and Practice

The first two bars in Fig. 4 represent the gap between theory and practice. The first bar (label: opt(th)) is the optimal Ecost as obtained directly from the linear program solution, without simulation. The second bar (label: opt) is the ratio of the actual measured


Ecost in NS-2 simulations to the theoretical optimal Ecost. The comparison of these two values shows that the impact of the physical and MAC layers, as simulated by NS-2 (with 802.11, two-ray ground propagation, omni-antenna), is limited, at ≈20%.

6.2.2 Efficiency of Different Heuristics

The third and last bars in Fig. 4 represent the efficiency of DRAGON and IRMS, respectively. As one may see in this scenario, the ratio between the optimal rate selection and DRAGON is around 1.5; while it does not reach this absolute optimum, DRAGON still offers significantly superior performance to IRMS. The gain in performance comes from the fact that the rate selection of IRMS has a lower maximum broadcast rate (in some parts of the network) than the actual targeted one, and hence than the actual source rate. As a result, in the parts with lower min-cut, the rate of the nodes is too high compared to the innovation rate, whereas with DRAGON such phenomena should not occur for prolonged durations. This is one reason for its greater performance. The fourth bar (E(no-coding)) represents the bound without coding explained in section 6.1.2. The relative efficiency of DRAGON is higher than the bound without coding. This result experimentally confirms that DRAGON outperforms routing on some representative networks.

6.3 Closer Analysis of the Behavior of DRAGON

6.3.1 Impact of α

In DRAGON, one parameter of the adaptation is α, which is connected to the speed at which the rates adapt. Table 1 indicates the total number of transmissions in simulations with the default parameters described in section 6.2 for the reference graph of Fig. 1; DRAGON is simulated with different values of α.

Table 1. Impact of α with the graph in Fig. 2

  α       1        5        10       IRMS
  Ecost   32.166   32.544   35.468   128

As one might see, first, the efficiency of IRMS on this graph is rather low (1/4 of that of DRAGON): indeed, the topology exemplifies properties found in networks where IRMS was found to be less efficient, namely two parts connected by one unique node. For the various choices of α, it appears that the performance of DRAGON decreases when α increases. This evidences the usual tradeoff between speed of adaptation and performance.

Fig. 5. Propagation vs. distance

6.3.2 Comparison with Model

Fig. 5 represents the information propagation speed of DRAGON in the network of Fig. 1 for α = 1, 5, 10. The x-coordinate is the distance of the node from the source, and the y-coordinate is the time at which the node has received exactly half of the generation size. Hence, the y-coordinate yields the propagation of the


coded packets (new information) from the source. First, we see that there is a large step near the middle of the graph: this is the effect of the center node, which is the bottleneck and obviously induces further delay. Second, the linear segments show that a node farther from the source needs more time to get the same amount of information, and the required time increases linearly. This result confirms the intuition given by the model in section 5.2 of a linear decrease of the rank (amount of new information) in the nodes from the source to the edge of the network. Finally, we can measure the difference in the time to get the information between the node nearest to the source and the farthest node. It is around 35 for α = 1, 6 for α = 5, and 4 for α = 10: roughly, it is inversely proportional to α, as expected from our model in section 5.2. In addition, we find that with higher values of α (greater reaction to gaps), the impact of the bottleneck at the center node is dramatically reduced.

7 Conclusion

We have introduced a wireless broadcast protocol with a simple heuristic for performing network coding: DRAGON. The heuristic is based on the idea of selecting the rate of each node, and this selection is dynamic. It operates as a feedback control whose target is to equalize the amount of information in neighbor nodes, and hence indirectly in the network. The efficiency properties of DRAGON are inherited from static algorithms that are constructed with a similar logic. Experimental results have shown the excellent performance of the heuristics. Further work includes the addition of congestion-control methods.

References

[1] R Ahlswede, N Cai, S-Y R Li, R W Yeung (2000) "Network Information Flow", IEEE Trans. on Information Theory, 46(4)
[2] T Ho, R Koetter, M Médard, D Karger, M Effros (2003) "The Benefits of Coding over Routing in a Randomized Setting", International Symposium on Information Theory (ISIT 2003)
[3] D S Lun, M Médard, R Koetter, M Effros (2007) "On Coding for Reliable Communication over Packet Networks", Technical Report #2741, MIT LIDS
[4] Z Li, B Li, D Jiang, L C Lau (2005) "On Achieving Optimal Throughput with Network Coding", Proc. INFOCOM
[5] Y Wu, P A Chou, S-Y Kung (2005) "Minimum-Energy Multicast in Mobile Ad Hoc Networks using Network Coding", IEEE Trans. Commun. 53(11):1906-1918
[6] D S Lun, N Ratnakar, M Médard, R Koetter, D R Karger, T Ho, E Ahmed, F Zhao (2006) "Minimum-Cost Multicast over Coded Packet Networks", IEEE/ACM Trans. Netw. 52(6)
[7] P A Chou, Y Wu, K Jain (2003) "Practical Network Coding", Forty-third Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL
[8] S Chachulski, M Jennings, S Katti, D Katabi (2007) "Trading Structure for Randomness in Wireless Opportunistic Routing", SIGCOMM'07
[9] S-Y R Li, R W Yeung, N Cai (2003) "Linear Network Coding", IEEE Transactions on Information Theory
[10] Y Wu, P A Chou, S-Y Kung (2005) "Minimum-Energy Multicast in Mobile Ad Hoc Networks using Network Coding", IEEE Trans. Commun. 53(11):1906-1918


[11] R Koetter, M Médard (2003) "An Algebraic Approach to Network Coding", IEEE/ACM Transactions on Networking, 11(5)
[12] C Fragouli, J Widmer, J-Y L Boudec (2006) "A Network Coding Approach to Energy Efficient Broadcasting", Proceedings of INFOCOM 2006
[13] C Fragouli, J-Y L Boudec, J Widmer (2006) "Network Coding: An Instant Primer", ACM SIGCOMM Computer Communication Review 36(4)
[14] P A Chou, Y Wu, K Jain (2003) "Practical Network Coding", Forty-third Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL
[15] A Dana, R Gowaikar, R Palanki, B Hassibi, M Effros (2006) "Capacity of Wireless Erasure Networks", IEEE Trans. on Information Theory, 52(3):789-804
[16] D S Lun, M Médard, R Koetter, M Effros (2005) "Further Results on Coding for Reliable Communication over Packet Networks", International Symposium on Information Theory (ISIT 2005)
[17] B Clark, C Colbourn, D Johnson (1990) "Unit Disk Graphs", Discrete Mathematics, 86(1-3)
[18] C Fragouli, E Soljanin (2004) "A Connection between Network Coding and Convolutional Codes", IEEE International Conference on Communications (ICC), 2:661-666
[19] S Deb, M Effros, T Ho, D Karger, R Koetter, D Lun, M Médard, R Ratnakar (2005) "Network Coding for Wireless Applications: A Brief Tutorial", IWWAN'05
[20] J S Park, M Gerla, D S Lun, Y Yi, M Médard (2006) IEEE Personal Communications 13(5):76-81
[21] J S Park, D S Lun, F Soldo, M Gerla, M Médard (2006) "Performance of Network Coding in Ad Hoc Networks", Proc. IEEE Milcom
[22] S Y Cho, C Adjih (2007) "Heuristics for Network Coding in Wireless Networks", Proc. Wireless Internet Conference (WICON)
[23] C Adjih, S Y Cho, P Jacquet (2007) "Near Optimal Broadcast with Network Coding in Large Sensor Networks", Proc. Workshop on Information Theory for Sensor Networks (WITS)

A New Framework for Characterizing and Categorizing Usability Problems

Dong-Han Ham

School of Computing Science, Middlesex University, London, UK
[email protected]

Abstract. It is widely known that usability is a critical quality attribute of IT-based systems. Many studies have developed methods for finding usability problems and metrics for measuring the several dimensions underlying the usability concept. Usability professionals have emphasized that usability should be integrated into the development life cycle in order to maximize the usability of systems at minimal cost. To achieve this, it is essential to classify usability problems systematically and to connect them to the activities of designing user interfaces and tasks. However, there is a lack of frameworks or methods for these two tasks, which thus remain challenging research issues. As an initial study, this paper proposes a conceptual framework for addressing these two issues. We first summarize usability-related studies to date, including usability factors and evaluation methods. Second, we review seven approaches to identifying and classifying usability problems. Based on this review and on the opinions of usability engineers in industry, the paper proposes a framework comprising three viewpoints, which can help usability engineers characterize and categorize usability problems.

1 Introduction

As one of the critical quality attributes of IT systems, usability has been studied extensively in recent decades. It is well known that systems showing a high degree of usability can ensure tangible and measurable business benefits [14]. Usability can be defined as 'the capability of IT systems to be understood, learned, used and be attractive to the user, when used under specified conditions' [17]. Important usability research topics include: usability factors, usability evaluation methods and metrics, user interface design principles and guidelines, usability problem classification, and user-centred design methodology [18]. Many studies have examined the several factors characterizing the concept of usability, which makes it difficult to give an absolute definition of usability. For example, ISO/IEC 9241 [8] specifies three dimensions: effectiveness, efficiency, and satisfaction. Nielsen [13] gives another example of such factors: learnability, efficiency of use, memorability, errors, and satisfaction. Usability factors can be categorized into two groups, objective and subjective. The objective factors are concerned with assessing how users perform their tasks, whereas the subjective factors attempt to evaluate how users actually perceive the usability of the system [2]. There are many usability evaluation methods and techniques, and several ways of classifying them, but they are usually divided into three groups: usability testing, usability inquiry, and usability inspection [20]. Usability testing makes users conduct a set of tasks using a system or a prototype and then evaluates how the users perform their tasks. Co-discovery learning, question-asking protocol, and


the shadowing method are typical examples of usability testing. Usability inquiry observes how users use a system in real work and asks them questions in order to understand their feelings about the system and their information needs. Field observation, focus groups, and questionnaire surveys are categorized as inquiry methods. Usability inspection methods examine usability-related aspects in an analytic manner; typical methods are cognitive walkthrough, pluralistic walkthrough, and heuristic evaluation. As there is no absolute best method for all situations, it is necessary to choose an appropriate one, taking into account the evaluation purposes, the available time, the measures to be collected, and so on. Although several design features influence the degree of usability, the user interface is probably the most important design factor affecting it. For this reason, many user interface design principles and guidelines have been developed to support interface designers' development activities. Design principles are high-level design goals that hold true irrespective of task or context, whereas design guidelines are more specific rules that serve as means of implementing design principles, depending on task and context [18]. Consistency is one example of a design principle, and one corresponding guideline is 'always place the home button at the top left-hand corner'. Usability engineering is an organized engineering process and a set of methods that specify and measure usability quantitatively throughout the development lifecycle [14]. It emphasizes that usability characteristics should be clearly specified from the very early stages, and that usability should lie at the centre of other development activities. There is no doubt that all usability engineering activities are important and should be integrated under a single, unified framework; in other words, usability should be integrated into the whole systems development lifecycle. A key activity to make this possible, however, is to diagnose the causes of usability problems [4]. Influenced by the software engineering community, which developed software defect classification schemes, the usability engineering community has proposed several usability problem classification schemes, which are reviewed in the next section.

2 Usability Problem Classification Studies

COST Action 294-MAUSE (towards the MAturation of information technology USability Evaluation) is a European consortium organized to study usability evaluation methods and usability problems in a more scientific manner (www.cost294.org) [11]. MAUSE provides a good overview of the problems of usability evaluation studies and of usability problem classification schemes. This section draws mainly on the studies carried out by MAUSE.

2.1 Problems of Usability Evaluation Studies

MAUSE identified several significant problems related to usability evaluation studies, listed below. From this list, we can understand how important it is to classify usability problems systematically and to connect them to the design process in order to improve the quality of IT systems.

– A lack of a sound theoretical framework to explain the phenomena observed;
– A lack of a set of empirically based and widely accepted criteria for defining usability problems;


– A lack of a standard approach to estimating values of key usability test parameters;
– A lack of effective strategies to manage systematically the user/evaluator effect;
– A lack of a thoroughly validated defect classification system for analyzing usability problems;
– A lack of widely applicable guidelines for selecting tasks for a scenario-based usability evaluation;
– A lack of a sophisticated statistical model to represent the relationships between usability and other quality attributes such as reliability;
– A lack of a clear understanding of the role of culture in usability evaluation.

2.2 Usability Problem Classification Schemes

We present seven different schemes for classifying and organizing usability problems. Some of these are originally defect classification schemes developed in the software engineering community, but they appear to be used flexibly for usability problem classification and are thus included here.

– Orthogonal Defect Classification (ODC) [5]
– Root Cause Analysis (RCA) [12]
– Hewlett Packard Defect Classification Scheme (HP-DCS) [7]
– Usability Problem Taxonomy (UPT) [9]
– User Action Framework (UAF) [1]
– Classification of Usability Problems (CUP) scheme [19]
– Usability problem classification using the Cycle of Interaction (CI) [16]

ODC was developed by IBM in order to give software developers meaningful feedback on the progress of the current project [5]. It aims to bridge the gap between statistical defect models and causal analysis; thus it strives to find well-defined cause-effect relationships between the software defects found and their effects on development activities. ODC provides a basic capability to extract signatures from defects and infer the health of the development process. The classification of defects is based on the objective findings about a defect, such as its Defect Type or Trigger (explained below), not on subjective opinions regarding where it was injected. It has eight dimensions, or factors, to describe the meaning of defects. These factors are organized according to the two process steps at which defect classification data are collected [6]. The process step Open is carried out when a defect is found and a new defect report is opened in the defect tracking system, whereas the process step Close is performed when the defect has been corrected and the defect report is closed. The step Open has three factors: Activity (when did you detect the defect?), Trigger (how did you detect the defect?), and Impact (what would the user have noticed if the defect had escaped into the field?). The step Close consists of five factors: Target (what high-level entity was fixed?), Source (who developed the target?), Age (what is the history of the target?), Defect Type (what had to be fixed?), and Defect Qualifier (an indication of whether the defect was an omission, a commission, or extraneous). Each factor has its own set of values; for example, the values of Defect Type include assignment, checking, algorithm, function, timing, interface, and relationship. All the factors are


necessary to provide the exact semantics of a defect; however, the two factors Defect Type and Trigger play the most significant role in ODC.

RCA is a classification scheme that was used for the retrospective analysis of the defect modification requests discovered while building, testing, and deploying a release of a network element of an optical transmission network [12]. In order to capture the semantics of a defect from multiple points of view, RCA has five categories: Phase Detected, Defect Type, Real Defect Location, Defect Trigger, and Barrier Analysis. Phase Detected refers to a phase in the development life cycle, comprising ten phases beginning with system definition and ending with deliveries. Defect Type divides defects into three classes (implementation, interface, and external), each of which is again composed of several defect types. Real Defect Location specifies where a defect was located, using three values: document, hardware, and software. Defect Trigger captures the actual root causes; RCA takes the position that there may be several underlying causes rather than just one, and four inherently non-orthogonal classes of root causes are provided: phase-related, human-related, project-related, and review-related. Barrier Analysis suggests measures for ensuring earlier defect detection and for preventing defects as well.

HP-DCS was developed to improve the development process by minimizing the number of software quality defects over time [7]. It has three descriptors for characterizing a defect: Origin (the first activity in the lifecycle where the defect could have been prevented, not where it was actually found), Type (the area, within a particular origin, which is responsible for the defect), and Mode (a designator of why the defect occurred). Each descriptor is composed of several factors or factor groups, whose combination classifies defects. The Origin has six factors: specification/requirements, design, code, environment/support, documentation, and other. The Type has six factor groups; one example is a group comprising logic, computation, data handling, module interface, and so on. The Mode explains the reason for defects with five factors, which are concerned with whether information was missing, unclear, wrong, changed, or could have been done in a better way. One important point is that the choice of a factor for the Origin constrains the possible set of factors for the Type.

UPT is a taxonomic model in which usability problems detected in graphical user interfaces with textual components are classified from two perspectives: artefact and task [9]. The UPT was developed on the basis of a systematic review of 400 usability problems collected in real industry projects and is made up of three hierarchical levels. At the top level, the UPT has an artefact component and a task component. The artefact component focuses on difficulties observed when users interact with individual interface objects, whereas the task component is concerned with difficulties encountered when users conduct a task. The two components are divided into five primary categories: the artefact component comprises three categories (visualness, language, and manipulation), and the task component has two (task mapping and task facilitation). Each primary category is again composed of multiple subcategories. For example, visualness consists of five subcategories: object (screen) layout, object appearance, object movement, presentation of information/results, and non-message feedback.

UAF is an interaction model-based structure for organizing usability concepts, issues, design features, usability problems, and design guidelines [1]. It thus aims to be

A New Framework for Characterizing and Categorizing Usability Problems

349

an integrated framework for usability inspection, usability problem reporting, usability data management, and the effective use of design guidelines. Another main purpose of the UAF is to support consistent understanding and reporting of the underlying causes of usability problems. Usability problem classification in the UAF uses the interaction design cycle, which adapts and extends Norman's 'stages of action' model. The interaction cycle is all about what users think (cognitive actions), do (physical actions), and see (perceptual actions) during a cycle of interaction with the computer. It is composed of four activities: Translation (determining how to do it with physical actions), Planning (determining what to do), Assessment (determining, via feedback, whether the outcome was favourable), and Physical Action (doing it). This cycle model provides the high-level organization and the entry points to the underlying structure for classifying usability problems. Finding the correct entry point for a usability problem amounts to determining the part of the interaction cycle where the user is affected. Examples of relating usability problems to the relevant part of the interaction cycle are: 'unreadable error message' and Assessment; 'user does not understand master document structure' and Planning; 'user cannot directly change a file name in an FTP program' and Translation; and 'user clicks on wrong button' and Physical Action.

CUP was developed, on the basis of a collective review of previous defect classification schemes, for the purpose of classifying usability problems further so as to give developers better feedback on how to correct them [19]. CUP specifies 10 attributes to characterize a usability problem: Identifier (ID), Description, Defect Removal Activity, Trigger, Impact, Expected Phase, Failure Qualifier, Cause, Actual Phase, and Error Prevention. As in the other schemes, most of these attributes have a set of values of their own; for example, the Cause has five values: Personal, Technical, Methodological, Managerial, and Review.

Ryu and Monk [16] developed a classification scheme based on a cyclic interaction model, which is similar to the interaction cycle in the UAF. It is aimed at examining low-level interaction problems, and they also developed a simple walkthrough method for it. The cyclic interaction model strives to model recognition-based interaction between users and systems by considering three paths in an interaction cycle: the action-effect path, the effect-goal path, and the goal-action path. These three paths give rise to three kinds of usability problems: action-effect problems, effect-goal problems, and goal-action problems. The action-effect problems are deeply related to mode problems, in which the same action leads to different system effects; in general, mode problems can be classified into three groups: hidden mode problems, partially hidden mode problems, and misleading mode signals. The effect-goal problems are concerned with the goal reorganization process; ineffective or improper goal reorganization can be explained by four types: missing cues for goal construction, misleading cues for goal construction, missing cues for goal elimination, and misleading cues for goal elimination. The goal-action problems occur when users must perform unpredictable actions to achieve a goal, which can be explained in terms of affordance; two typical unpredictable actions are a weak affordance of a correct action and a strong affordance of an incorrect action.
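To make the CUP attribute list concrete, the following Python sketch shows one possible representation of a CUP-style problem record. The Cause values are the five given above; the other fields are kept as free text because the scheme's full value sets are not reproduced here, and the example record is hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class Cause(Enum):
        # The five Cause values specified by CUP.
        PERSONAL = "Personal"
        TECHNICAL = "Technical"
        METHODOLOGICAL = "Methodological"
        MANAGERIAL = "Managerial"
        REVIEW = "Review"

    @dataclass
    class CupProblem:
        # The ten CUP attributes, in the order listed in the text.
        identifier: str
        description: str
        defect_removal_activity: str
        trigger: str
        impact: str
        expected_phase: str
        failure_qualifier: str
        cause: Cause
        actual_phase: str
        error_prevention: str

    # Hypothetical example record:
    problem = CupProblem(
        "UP-042", "Error message unreadable at default font size",
        "usability test", "task completion", "minor delay",
        "design", "wrong", Cause.TECHNICAL, "implementation",
        "add a readability check to the UI review checklist")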
Classification schemes originating in the software engineering community, such as ODC and HP-DCS, tend to view usability problems within a broad development context and from the developers' side. However, it seems that classification


schemes based on an interaction model or on users' cognitive task models tend to view usability problems from the point of view of users. From the review of existing classification schemes, we find that there is not yet a systematic framework or process to link the usability problems found to the development context and activities.

3 Proposed Framework

The comparative review of existing classification schemes and the opinions of usability engineers in industry indicated a set of requirements that serves as the conceptual basis for the proposed framework. First, the scope of the usability concept should be defined precisely, depending on the purpose of the usability problem classification. Traditionally, the usability concept does not include aspects related to system functions (usefulness), but there is a trend to incorporate usefulness, as well as satisfaction, into the concept of usability. If the purpose of classification is to improve the design process, the broad concept of usability should be adopted; however, if the purpose is just usability testing of certain interface features and interface-based operation, the narrow concept is better. Second, a usability problem should be characterized in terms of the basic 5W1H questions as follows:

– What are the phenomena resulting from the usability problem?
– Who experienced the usability problem?
– When did users meet the usability problem?
– How did users experience the usability problem?
– Which part of the user interface (Where) is related to the usability problem?
– Why did the usability problem occur?

Third, usability problems mainly resulting from design deficiencies need to be differentiated from those specific to a particular user (group). As the causes of the latter type are likely to be related to the subjective nature of the user (group) rather than to design features, their implications for the design process should be considered from different perspectives. Fourth, related to the second requirement, it should be noted that usability problems can be categorized by several criteria. The correspondence between the criteria and the basic questions can be: system function criteria (What), user criteria (Who), task criteria (When), interaction step criteria (How), interface object/feature criteria (Where), and design principle criteria (partly Why). For example, if the usability problems of a word processor are classified in terms of task, some can be related to editing tasks while others affect formatting tasks; if they are categorized by interface object criteria, some can be regarded as menu-related problems and others as information-architecture-related problems. Fifth, if we want usability problems to be used more meaningfully during the design process, design activities and usability problems should be described in the same language or modelling concept. As the proposed framework suggests, the most recommendable concept is the abstraction level of design knowledge.


Taking into account the requirements above, we propose a conceptual framework in which existing classification schemes can be compared and usability problems can be interpreted in a new way (Fig. 1).

Fig. 1. Framework for characterizing and classifying usability problems: a 'Design Knowledge' perspective (AH model), a 'Design Activities' perspective (FBS model), and a 'Context of Use' perspective (AUTOS model), used respectively to relate problems to activities, diagnose usability problems, and prevent usability problems

The framework is composed of three perspectives: Context of Use, Design Knowledge, and Design Activities. Within each perspective, it is recommended to use a modelling tool to address usability problems. The Context of Use perspective helps capture the broad semantics of usability problems. For this, the Artefact-Users-Tasks-Organization-Situation (AUTOS) model [3] can be used effectively, as it makes it possible to consider several criteria (task, user, interface object, interaction step) simultaneously. The other two criteria (system function and design principle) are concerned with the Design Knowledge perspective. In order to interpret usability problems in terms of designed system functions and their relevant design principles, the abstraction hierarchy (AH) model [15] is a good modelling tool. The Design Activities perspective, together with the Design Knowledge perspective, allows us to improve the quality of design activities by referring to usability problems. As the Function-Behaviour-Structure (FBS) model is effective for reasoning about and explaining the nature of design activities, it is the recommended modelling tool in this perspective. The FBS model assumes that there are eight abstract engineering design processes (e.g., formulation, analysis, and documentation) linking five elements (function, expected behaviour, exhibited behaviour, structure, and design description). From the FBS model, we can also assume that the cause of a usability problem is highly related to one of the eight processes. Interestingly, the relationship between the five elements of the FBS model can


be reasonably interpreted from the perspective of the abstraction levels of design knowledge. This gives an insight into how to bridge the two perspectives (Design Knowledge and Design Activities), which can provide a more scientific method of linking usability problems to the design process.

4 Conclusion

A key element in enhancing the usability of IT systems is to diagnose the causes of the usability problems detected and to give meaningful feedback to the design process in association with those problems. In order to help usability engineers assess usability problems more systematically, this paper proposed a framework consisting of three viewpoints (a context-of-use view, a design knowledge view, and a design activity view). This framework does not itself offer specific procedures for usability engineers to follow, but serves as a thinking tool for dealing with usability problems. Therefore, more specific methods and guidelines for categorizing usability problems are being studied. In particular, the area of linking usability problems to design activities, which are characterized by transformations between abstraction levels of design knowledge, is of primary concern to the author's ongoing study.

Acknowledgments. The author wishes to acknowledge the assistance and support of all those who contributed to EKC 2008.

References

[1] Andre T, Hartson H, Belz S, McCreary F (2001) The user action framework: a reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies 54(1):107-136
[2] Bevan N (1999) Quality in use: Meeting user needs in quality. The Journal of Systems and Software 49(1):89-96
[3] Boy G (1998) Cognitive function analysis. Ablex Publishing Corporation
[4] Card D (1998) Learning from our mistakes with defect causal analysis. IEEE Software 15(1):56-63
[5] Chillarege R, Bhandari I, Chaar J, Halliday M, Moebus D, Ray B, Wong M (1992) Orthogonal defect classification: A concept for in-process measurements. IEEE Transactions on Software Engineering 18(11):943-956
[6] Freimut B (2001) Developing and using defect classification schemes (IESE-Report No. 072.01/E). Fraunhofer IESE
[7] Huber J (1999) A comparison of IBM's Orthogonal Defect Classification to Hewlett Packard's Defect Origins, Types, and Modes. Hewlett Packard Company
[8] ISO/IEC 9241 (1998) Ergonomic requirements for office work with visual display terminals. Part 11: Guidance on usability
[9] Keenan S, Hartson H, Kafura D, Schulman R (1999) The usability problem taxonomy: A framework for classification and analysis. Empirical Software Engineering 4(1):71-104
[10] Kruchten P (2005) Casting software design in the function-behaviour-structure framework. IEEE Software 22(2):52-58
[11] Law E (2004) Proposal for a New COST Action 294: Towards the Maturation of IT Usability Evaluation. COST Office


[12] Leszak M, Perry D, Stoll D (2002) Classification and evaluation of defects in a project retrospective. The Journal of Systems and Software 61(3):173-187
[13] Nielsen J (1993) Usability engineering. AP Professional
[14] Peuple JL, Scane R (2004) User interface design. Crucial
[15] Rasmussen J (1986) Information processing and human-machine interaction: An approach to cognitive engineering. Elsevier Science
[16] Ryu H, Monk A (2004) Analysing interaction problems with cyclic interaction theory: low-level interaction walkthrough. PsychNology Journal 2(3):304-330
[17] Schoeffel R (2003) The concept of product usability: a standard to help manufacturers to help consumers. ISO Bulletin, March:5-7
[18] Te'eni D, Carey J, Zhang P (2007) Human-computer interaction: Developing effective organizational information systems. John Wiley
[19] Vilbergsdottir S, Hvannberg E, Law E (2006) Classification of usability problems (CUP) scheme: Augmentation and exploitation. Proceedings of NordiCHI 2006, Oslo, Norway, 281-290
[20] Zhang Z (2003) Overview of usability evaluation methods. http://www.usabilityhome.com. Accessed 3 July 2008

State of the Art in Designers' Cognitive Activities and Computational Support: With Emphasis on the Information Categorization in the Early Stages of Design

Jieun Kim, Carole Bouchard, Jean-François Omhover, and Améziane Aoussat

New Product Design Laboratory (LCPI), Arts et Metiers ParisTech, Paris, France
jieun.kim@paris.ensam.fr

Abstract. Interest in the analysis of designers' cognitive activities is growing [2][10], and the topic has become a major interdisciplinary concern not only in design science but also in cognitive psychology, computer science and artificial intelligence. In this context, the analysis of designers' cognitive activity aims to describe more clearly the mental strategies used to solve design problems and to develop computational tools that support designers in the early stages of design. Insofar as much design research depends on findings from empirical studies [11], the results of these studies have neglected the cognitive aspects involved in the construction, categorization and effective application of design information for idea generation in the early stages of design, which remain relatively implicit. In this paper we therefore provide the state of the art in designers' cognitive activities and computational support, with emphasis on information categorization in the early stages of design, through a review of a wide range of literature. We also present a case study of the 'TRENDS' system, which is positioned at the core of these research tendencies. Finally, we discuss the limitations of current research and perspectives for further work.

1 General Introduction

The paradigm of 'design as a discipline' in the 1980s led to a vigorous discussion of the view that design has its own things to know and its own ways of knowing them [1]. While the design research community has in the past focused on the former (related to products), interest is now growing in the analysis of designers' cognitive activities [2-10]. In fact, the view of designers' activities as primarily cognitive was put forth by European researchers in the 1950s and 1960s. At that time it did not widely appeal to researchers in design science, owing to the lack of communication between the two disciplines, cognitive psychology and design science, as well as the lack of reference to European work [11, 12]. Research on design cognition has steadily increased since then, and it has become a major interdisciplinary topic not only in design science but also in cognitive psychology, computer science and artificial intelligence. In this context, the aim of the research is to describe more clearly the mental strategies for solving design problems and to develop computational tools to support designers in the early stages of design. The early stages of design, also called 'conceptualization', are characterized by information processing and idea generation [4, 8]. They are also among the most cognitively intensive stages of the whole design


process [13]. But some phases of the early stages of design remain incompletely understood. A full understanding of the designers' cognitive activities underlying the early stages of design is therefore of great interest, both to formalize the designer's cognitive process and to embed computational support in the early stages of design. In this paper, we provide the state of the art on the study of designers' cognitive activities (Part II) and also take into account a worldwide body of work on computational support for designers' activity, through a review of a wide range of literature (Part III). Following this, we present a case study of the 'TRENDS' system (Trends Research Enabler for Design Specifications), to which the authors contributed and which was developed within the 6th Framework of a European Community project (FP6) [14]. In Part IV, we discuss the limitations of current research and perspectives for further work.

2 The Nature of Designers' Cognitive Activity in the Early Stages of Design

2.1 Cognitive Aspect of Design as an Information Processing Activity

According to Jones (1970) [15], the designer has been described as a 'black box', because it was thought that designers generated creative solutions without being able to explain or illustrate how the solutions came about. However, early studies developed tools and techniques to capture and exploit empirical data related to designers' cognitive activities [11]. The dominant research interest was 'what and where' designers retrieve and collect inspirational sources, and 'how' they represent their ideas using physical representations, such as in sketching activities. This is very important both for the design research community and for developing computational support in the early stages of design. In this respect, many researchers agree that designers use various levels of information, reducing abstraction by integrating more and more constraints in the early stages of design [5, 16]. The designer's cognitive activity can thus be seen as an information processing activity. As Figure 1 shows, this activity can be described as an informational cycle which consists of informative, generative and decision-making (evaluation-selection) phases, whose outcome is an intermediate representation. This informational cycle is iterated as the design evolves [4].

Fig. 1. Description of an informational cycle [4]

2.2 Information Categorization Toward Bridging between Informative and Generative Phases

Design information can be divided into external information, such as visual information conveyed by photos and images, and mental representations of design. The former comes from designers collecting inspirational information; the latter is structured by cognitive mechanisms [2, 17, 39]. Both types of information interact with each other in generating ideas and are considered very important in the activity of


expert designers [18]. Insofar as much design research depends on findings from empirical studies [11], it has focused on specific activities such as the collection of information [10, 19-21] and sketching [22-24]. By contrast, the results of these studies show that information categorization phases have been neglected, although they play an important role in bridging the above two activities, being relatively implicit. Information categorization is the way in which design information comes to be externalized in the sketching activity, i.e. the cognitive tasks of construction, categorization and application of design information for idea generation in the early stages of design [6, 25]. According to Howard's studies [26, 27] comparing the 'engineering design process' and the 'creative design process', the 'creative process' is defined as 'a cognitive process culminating in the generation of an idea'. This view on creativity brings insight into designers' cognitive activities, particularly for bridging the informative and generative phases in the early stages of design. In the literature on creativity from cognitive psychology, there is the well-known Wallas (1926) [28] four-stage model of the creative process: preparation, incubation, illumination and verification. The middle phases, how designers incubate information and how they attain creative insight, remain incompletely understood as regards design in practice [29, 30]. This raises the same questions as those mentioned for information processing activities. In this respect, we believe that integrating cognitive theories related to 'creativity' with the results of observation in design practice can bring clearer explanations of designers' cognitive activities in categorizing information, which might bridge the informative and generative phases in the early stages of design.

2.3 Description of Conventional Methods for Information Categorization

As described above, it is still difficult to understand the cognitive mechanisms of information categorisation. However, in observing designers' activities, we find that designers try to discover a 'new' or 'previously hidden' association between a certain piece of information and what they want to design [31] through information categorization activities, especially with visual information such as images and photos drawn from specialised magazines, exhibitions, the web, and other areas. Visual information categorization is a natural and meaningful activity and is considered one of the major stages in designers' activity, especially for expert designers [19, 21]. In design practice, even though the purpose of information categorisation may differ depending on the context of application, the visual information categorisation stage provides a unique opportunity to see, in images, how the designer's needs for visual information are shaped by the visual information already accessed [6]. It is also very specific inasmuch as it includes the ability both to diverge, generating new categories, and to converge, classifying image resources into existing categories. In addition, visual information categorization is based on the use of attributes ranging from low levels, such as formal, chromatic and textural attributes, to high-level descriptors, i.e. semantic adjectives, for instance 'warm colors' to represent colors from the red series [40]. The use of semantic adjectives to link words with images, and vice versa, imposes a much greater cognitive load than low-level attributes [20]. Here, we present the major visual information categorization methods in design


fields: (A) KJ (Kawakita Jiro) clustering (1975), (B) MDS (multidimensional scaling) [32, 33], and (C) trend boards [3-5]; a small sketch of (B) follows Fig. 2.

A. KJ (Kawakita Jiro) clustering – Used for clustering images relative to each other and assigning keywords to image clusters. Limitations of this method include an insufficient surface for a large number of images and difficulty in revealing relationships between groups [34].

B. MDS (Multi-dimensional scaling) – Its purpose is to find trends between images and to visualize the image distribution under specified conditions [34, 35]. However, it is hard to decide on semantic words for the fixed axes [36] and to measure the scale value between images [34].

C. Trend boards – To finalize the informative phase, designers build trend boards that determine colours, finishings, textures and forms, bringing the desired universe closer to the perceived feeling [3-5]. Trend boards stimulate creativity and collaborative communication [10] and are useful for apprehending the physical aspect of the future product [35]. However, the delimitation of parameters is subjective and the method is time-consuming [37].

Fig. 2. (From left to right) KJ clustering [34] (A), MDS positioning (B), Trend board (C)
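As a concrete illustration of method (B), the following minimal Python sketch positions a handful of images on a 2-D map from a pairwise dissimilarity matrix; the image names and dissimilarity values are invented, and the snippet illustrates the general MDS technique rather than any tool reviewed here.

# MDS positioning of images from a made-up dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

images = ["sofa_red", "sofa_blue", "chair_wood", "lamp_steel"]  # hypothetical
# Symmetric dissimilarities (0 = identical), e.g. from keyword or colour
# comparisons; the values below are illustrative only.
D = np.array([
    [0.0, 0.2, 0.7, 0.9],
    [0.2, 0.0, 0.6, 0.8],
    [0.7, 0.6, 0.0, 0.5],
    [0.9, 0.8, 0.5, 0.0],
])

# Project onto two dimensions while preserving the dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for name, (x, y) in zip(images, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")

The resulting coordinates form the kind of image map designers inspect for trends; note that the difficulty mentioned above, attaching semantic words to the axes, is untouched by the algorithm itself.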

3 Current Computational Support

3.1 Upstream Tendency to Digitalize the Design Process

Nowadays, with the penetration of Information Technology (IT), there is a growing trend towards using computational tools and the internet in designers' activities. Designers tend to build their own digital design databases and give them more and more importance within their activities [6, 7, 10]. In this respect, computational support encompassing the design process is very important. However, as shown in Figure 3, the evolution of computational support has run in the reverse direction of the design process. In contrast to the later execution phases, which primarily involve prototyping technologies such as CAM (Computer-Aided Manufacturing) and CAD (Computer-Aided Design), computational support to help idea generation and to explore the designer's creativity in the early stages of design (conceptualisation) is relatively undeveloped [4, 37]. Commercial image retrieval websites such as 'Google', 'Getty Images' and 'Flickr' make it easy for designers to gather large numbers of images, but retrieving image sources from the web is laborious and the results are inadequate as inspirational sources for designers. Moreover, given the growing size of databases, structuring the design information is increasingly difficult. The current need is therefore for computational tools that support designers' cognitive activities in the early stages of design, especially from seeking inspirational information to reusing it through categorization and visualization [5, 10, 34].

Fig. 3. Evolution of the computational support

Table 1. Computational support for visual information categorization as a generative approach

– Nakakoji et al. (1999) [29], EVIDII: supports collective creativity; shows associations between images, words and persons.
– Nakakoji et al. (1999) [29], IAM-eMMa: uses knowledge-based rules; the system orders images in its library according to physical attributes of images.
– Stappers & Pasman (2001) [30], Product World: expressive possibilities of dynamic and interactive MDS.
– Büsher (2004) [7], ACSA: aesthetic categorisation support for architecture; annotations can be added and modified.
– Oshima & Harada (2003) [39], 3DDG Display: individual system for recollecting past work and displaying it through a three-axis structural model of memory.
– Keller (2005) [10], CABINET: tangible interaction with digital images; database collection for source inspiration.
– Jung et al. (2007) [34], I-VIDI: folksonomy-based collaborative tagging.
– TRENDS consortium [8, 14], TRENDS: based on the Conjoint Trends Analysis (CTA); integrates Kansei-Based Image Retrieval (KBIR); auto-generates image mapping.
3.2 Current Computational Support for Visual Information Categorization as a Generative Approach

When designers categorize visual information, they naturally identify images not only by low-level attributes, such as formal, chromatic and textural ones, but also by high-level descriptors, i.e. semantic adjectives. The emerging field called Kansei engineering in Asia has similarly developed many computational systems focused on defining and evaluating the subjective characteristics of products with semantic adjectives. The knowledge and technology coming from Kansei engineering is very useful for enriching the study of designers' cognitive activities. However, we should make clear that the purpose of the computational support shown in Table 1 is to be dedicated to designers and to support visual information categorization for idea generation; it should also enable designers to explore their creativity. In addition, these computational tools emphasize the importance of graphical interfaces because,


to support the designer's cognitive activities, a graphical interface should better match the visual cognitive style [38] and offer visual exploration of these huge databases in a more intuitive, more "fun" and more interactive way [4, 10, 29, 30]. Table 1 shows the state of the art in computational support for visual information categorization as a generative approach. Each computational tool has been developed according to a different usage context and the evolution of technology. We discuss the limitations of current research and perspectives for further computational support in Part IV.

3.3 Case Study: TRENDS - A Content-Based Information Retrieval System for Designers

TRENDS (Trends Research Enabler for Design Specifications) was a project in the 6th Framework of a European Community programme (FP6) [14]. The TRENDS content-based information retrieval system aims at improving designers' access to web-based resources by helping them to find appropriate materials, structure these materials in a way that supports their design activities, and identify design trends [8]. The TRENDS system was developed on the basis of the Conjoint Trends Analysis method [3] and digitalizes this manual method (Fig. 4). The CTA method enables the identification of design trends through the investigation of specific sectors of influence and the formalization of trend boards and pallets. These pallets put together specific attributes linked to particular datasets (e.g., common properties of images in a database) so that they can be used to inspire the early design of new products. CTA results in the production of trend boards that can represent sociological, chromatic, textural, formal, ergonomic and technological trends [3, 8]. To develop the TRENDS tool interface, we used a specific methodological approach combining a highly user-centred approach with creative collaborative thinking. The interface and functionalities of the TRENDS prototype are currently being assessed [14].

Fig. 4. Digitalization of the CTA method [14]
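To make the content-based side of such retrieval concrete, here is a minimal sketch comparing two images by colour-histogram intersection; the pixel data and function names are invented for illustration and do not reflect the actual TRENDS/KBIR implementation, which also covers higher-level (semantic) descriptors.

# Chromatic similarity between two images via histogram intersection.
import numpy as np

def colour_histogram(pixels, bins=8):
    # Normalised joint RGB histogram of an (N, 3) array of 0-255 pixels.
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def similarity(h1, h2):
    # Histogram intersection: 1.0 for identical colour distributions.
    return float(np.minimum(h1, h2).sum())

# Two toy 'images' as random pixel arrays, one biased red, one biased blue.
rng = np.random.default_rng(0)
reddish = rng.integers(0, 256, (1000, 3))
reddish[:, 0] |= 128
bluish = rng.integers(0, 256, (1000, 3))
bluish[:, 2] |= 128
print(similarity(colour_histogram(reddish), colour_histogram(bluish)))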

4 Discussion and Conclusions

In this paper, research related to designers' cognitive activities and computational support, with emphasis on information categorization in the early stages of design, has been reviewed. The above state of the art raises the following three issues:

1. Limitations of current computational support for designers' cognitive activities. Using computational tools for information categorization


allows designers to communicate more easily and to achieve creative inspiration through restructuring and categorizing activities for idea generation in the early stages of design. However, current computational tools still do not overcome the limitations of the conventional methods mentioned in Part II-3. One reason may be the ambiguity of the process, stemming from the fact that information categorization is a largely mental and subjective task [7, 34-36] and is therefore difficult to anticipate; current methods do not take into account the impact of the designer's mental representations, which shape the development of the design and relate to external representations. In short, this is due to a lack of understanding of designers' cognitive activities [5, 17, 39]. The other reason is the holistic nature of visual information, which includes multidimensional data. Designers use high-level descriptors, semantic adjectives, both for characterizing images and as a source of creativity. These high-level descriptions of images are semantically related to emotional impact, which raises the problem of the semantic gap between designers' semantic descriptors and the descriptors that computer algorithms can compute [40].

2. Perspectives for further research on designers' cognitive activities. We examined limitations derived from the nature of designers' cognitive activities and of design information in the early stages of design. The early stages of design are among the most cognitively intensive phases of the whole design process [13] and serve explorative, creative activities. It is therefore very important to understand designers' cognitive activities and to work towards formalizing the cognitive design process through the extraction of design knowledge, rules and skills [5]. Also, building on theoretical accounts from cognitive psychology, we need to translate design rules into design algorithms in order to develop computational tools. In this respect, further research should focus on formalising the information categorisation process so as to bridge the informative and generative phases. At the same time, this could bring useful insights into the uncertain creative process, especially the incubation and illumination phases.

3. Computational tools as a further opportune area. The analysis of designers' cognitive activities has been studied in order to develop new computational tools, even though their evolution has moved upstream along the design process. Many researchers agree that computational tools are useful in the early stages of design and are becoming more and more important within designers' activities. Meanwhile, even though evolving technology allows computational tools to resolve most limitations of the conventional methods, arguments remain about whether such support is really useful for exploring idea generation and stimulating the designer's creativity. That is why, in many pioneering research communities, there are many questions about creativity and the possibility of supporting it with Artificial Intelligence (AI), or about AI and Design. In addition, further computational tools should take account of the importance of graphical interfaces that better match the visual cognitive style [38] and allow more interactive visual exploration [4, 10, 29, 30]. In this way, the analysis of designers' cognitive activities and computational support are recognized as a major interdisciplinary topic not only in design science, but also in cognitive psychology, computer science and artificial intelligence. Further research can also benefit from the various insights of all these areas and communities.


Acknowledgments. This study refers to the TRENDS project, funded by the European Commission through the 6th Framework Programme for Information Society Technologies (FP6-27916), which ran from January 2006 to December 2008. www.trendsproject.org

References

[1] Cross N (2007) Forty years of design research. Design Studies 28:1-4
[2] Eckert C, Stacey M (2000) Sources of inspiration: A language of design. Design Studies 21:99-112
[3] Bouchard C, Aoussat A (2002) Design process perceived as an information process to enhance the introduction of new tools. International Journal of Vehicle Design 31.2:162-175
[4] Bouchard C, Lim D, Aoussat A (2003) Development of a Kansei Engineering system for industrial design: identification of input data for Kansei Engineering Systems. Journal of the Asian Design International Conference 1:12
[5] Bouchard C, Omhover JF, Mougenot C, Aoussat A et al (2008) TRENDS: A content-based information retrieval system for designers. In: Gero JS, Goel A (eds) Design Computing and Cognition DCC'08, pp 593-611
[6] Restrepo J (2004) Information processing in design. Delft University Press, The Netherlands
[7] Büsher M, Fruekabeder V, Hodgson E et al (2004) Designs on objects: imaginative practice, aesthetic categorisation and the design of multimedia archiving support. Digital Creativity
[8] Stappers PJ, Sanders EBN (2005) Tools for designers, products for users? In: 2005 International Conference on Planning and Design: Creative Interaction and Sustainable Development
[9] McDonagh D, Denton H (2005) Exploring the degree to which individual students share a common perception of specific trend boards: observations relating to teaching, learning and team-based design. Design Studies 26:35-53
[10] Keller AI (2005) For Inspiration Only - Designer interaction with informal collections of visual material. Ph.D. Thesis, Delft University of Technology, The Netherlands
[11] Coley F, Houseman O, Roy R (2007) An introduction to capturing and understanding the cognitive behaviour of design engineers. Journal of Engineering Design 311-325
[12] Hubka V, Eder W (1996) Design Science. Springer-Verlag, London
[13] Nakakoji K (2005) Special issue on 'Computational Approaches for Early Stages of Design'. Knowledge-Based Systems 18:381-382
[14] TRENDS Website (2008) http://www.trendsproject.org/ Accessed 07 July 2008
[15] Jones JC (1992) Design Methods. 2nd ed. Van Nostrand Reinhold, New York
[16] Bonnardel N, Marmèche E (2005) Towards supporting evocation processes in creative design: A cognitive approach. International Journal of Human-Computer Studies, special issue on computer support for creativity, 63:422-435
[17] Eastman CM (2001) New directions in design cognition: Studies on representation and recall. In: Eastman CM, McCracken WM, Newstetter WC (eds) Design Knowing and Learning: Cognition in Design Education, pp 79-103. Elsevier Science Press, Amsterdam
[18] Gero JS (2002) Towards a theory of designing as situated acts. The Science of Design International Conference, Lyon
[19] Casakin H, Goldschmidt G (1999) Expertise and the use of visual analogy: implications for design education. Design Studies 20:153-175


[20] Pasman G (2003) Designing with Precedents. Ph.D. Thesis, Delft University of Technology, The Netherlands
[21] Goldschmidt G, Smolkov M (2006) Variances in the impact of visual stimuli on design problem solving performance. Design Studies 27:549-569
[22] Goldschmidt G (1991) The dialectics of sketching. Creativity Research Journal 4:123-143
[23] Goldschmidt G (1994) On visual design thinking: the vis kids of architecture. Design Studies 15:158-174
[24] Yi-Luen Do E (2005) Design sketches and sketch design tools. Knowledge-Based Systems, Computational Approaches for Early Stages of Design, 18:383-405
[25] Bilda Z, Gero JS (2007) The impact of working memory limitations on the design process during conceptualization. Design Studies 28:343-367
[26] Howard T, Culley SJ, Dekoninck E (2007) Creativity in the engineering design process. International Conference on Engineering Design, ICED'07
[27] Howard TJ, Culley SJ, Dekoninck E (2008) Describing the creative design process by the integration of engineering design and cognitive psychology literature. Design Studies 29:160-180
[28] Wallas G (1926) The Art of Thought. Jonathan Cape, London
[29] Nakakoji K, Yamamoto Y, Ohira M (1999) A framework that supports collective creativity in design using visual images. In: Creativity and Cognition, pp 166-173. ACM Press, New York
[30] Pasman G, Stappers PJ (2001) "ProductWorld", an interactive environment for classifying and retrieving product samples. Proceedings of the 5th Asian Design Conference, Seoul
[31] Sharples M (1994) Cognitive support and the rhythm of design. In: Dartnall T (ed) Artificial Intelligence and Creativity, pp 385-402. Kluwer Academic Publishers, The Netherlands
[32] Kruskal JB, Wish M (1978) Multidimensional Scaling. Sage Publications, Beverly Hills
[33] Borg I, Groenen P (1997) Modern Multidimensional Scaling: Theory and Applications. Springer Verlag
[34] Jung H, Son MS, Lee K (2007) Folksonomy-based collaborative tagging system for classifying visualized information in design practice. CHI, Beijing
[35] Maya Castano J (2007) What user product experiences is it currently possible to integrate into the design process? International Conference on Engineering Design, ICED'07
[36] Schütte S (2005) Engineering Emotional Values in Product Design - Kansei Engineering in Development. Ph.D. Thesis, Linköping Studies in Science and Technology
[37] Soublé L, Mougenot C, Bouchard C (2006) Elaboration d'un outil interactif de recherche d'information pour designers industriels. Actes de CONFERE 2006
[38] Stappers PJ, Hennessey JM (1999) Computer supported tools for the conceptualization phase. Proceedings of the 4th International Design Thinking Research Symposium on Design Representation, pp 177-188
[39] Oshima N, Harada A (2003) Design methodology which recollects memory in creation process. 6th Asian Design Conference, Japan
[40] Wang XJ, Ma WY, Li X (2004) Data-driven approach for bridging the cognitive gap in image retrieval. ICME '04, 2004 IEEE International Conference, pp 2231-2234

A Negotiation Composition Model for Agent Based eMarketplaces

Habin Lee 1 and John Shepherdson 2

1 Brunel Business School, Brunel University West London, Uxbridge, Middlesex, UK
[email protected]
2 Intelligent Systems Research Centre, BT Group CTO, Ipswich, Suffolk, UK

Abstract. Organizations that find partners and form virtual organizations (VOs) with them in open eMarketplaces are involved in many different negotiation processes during the lifecycle of the VOs. As a result, support for negotiation is one of the major requirements for information systems that facilitate the formation of VOs. This paper proposes a component-based approach to the problem, where negotiation processes are implemented as software components that can be dynamically installed and executed by software agents. This approach allows autonomous software agents to participate in various VO formation and execution processes that incorporate negotiation protocols that they were previously unaware of. A component-based approach has advantages in the management of negotiation processes in terms of version control and dynamic switching of negotiation processes.

1 Introduction

Organizations that find partners and form virtual organizations (VOs) [1] with them in open eMarketplaces are involved in many different negotiation processes during the lifecycle of the VOs. As a result, support for the negotiation process is one of the major issues for Information Systems (ISs) that support VOs in open eMarketplaces. Furthermore, due to the complexity involved in handling exceptions [2] within the inter-organizational workflows (IOWs) [3] that underpin VOs, the ISs for an eMarketplace need negotiation strategies that can effectively handle such exceptions. This paper proposes a component-based approach to solve these issues. Negotiation processes are implemented as software components that can be dynamically installed and executed by software agents. The approach allows autonomous software agents to participate in various VO formation and execution processes that use negotiation protocols that the agents have not encountered previously. A component-based approach has advantages in the management of negotiation processes in terms of version control and dynamic switching of negotiation processes. An NCM (negotiation composition model) is proposed to define the sequences of negotiation components and their (data and control) relationships in order to handle exceptions in the middle of IOW executions. An illustrative example of an IOW is used to show the usefulness of the NCM in the eMarketplace context. The organization of this paper is as follows. The next section details the negotiation composition model, and section 3 applies the model to an illustrative example. Finally, section 4 discusses the novelty of the paper and concludes.


2 NCM: A Negotiation Composition Model

NCM takes a component-based approach for negotiation service composition. A negotiation service is implemented as a software component which can be plugged and played within software agents.

Fig. 1. The structure of a negotiation component (a) and the internal architecture of an agent that enables plug and play of the negotiation component (b)

Figure 1 shows the internal architecture of a negotiation component, called a C-COMnego, and of an agent that uses the component on a plug and play basis. The main feature of a C-COMnego is that it enables agents to communicate using interaction protocols that were previously unknown to them. This is achieved by the agents dynamically installing one or more role components, provided by the C-COMnego, which implement the new interaction protocol. A C-COMnego is divided into two or more role components, each of which is of type Initiator or Respondent. An Initiator component can be plugged dynamically into an agent. From an agent's point of view, the Initiator component is a black box which hides the details of the interaction process (with its designated Respondent component) from the agent that installed it, exposing only an interface which specifies the input and output data needed to execute the role component. A negotiation composition model is a process model in which a negotiation service can be considered as a service and the links between services as process control transitions. In formal terms, an NCM is defined as follows.


Definition 1 (NCM). A negotiation composition model is a 4-tuple NCM = (O, N, P, L) where:

O = {o1, o2, …, om} is a finite set of ontology items,
N = {n1, n2, …, nn} is a set of negotiation services,
P = {p1, p2, …, pk} is a set of places that are used to contain ontology items and routing rules,
L ⊆ (P × N) ∪ (N × P) ∪ (P × P) is a set of links between negotiation services and places.

A negotiation service is activated when the control of the process reaches it, and is realised by the execution of a C-COMnego. A place contains a set of ontology items that are produced as a result of the execution of C-COMnego(s) or provided from an external source. Each place is described by its place type, the ontology items that can be contained in it, and routing rules that define how process control is routed to its successors. A routing rule is defined over the ontology items produced in the place. A place is classified as a 'trigger', 'mediator' or 'terminator'. There is exactly one trigger place and one terminator place in an NCM; there is no restriction on the number of mediator places. An NCM can be created dynamically by human users when an unknown type of exception occurs, so that an agent can handle the exception. Therefore an NCM is defined so that it can be interpreted by a software agent.
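To make Definition 1 concrete, the following is a minimal Python sketch of an NCM as a data structure, under the assumption that ontology items are plain string identifiers; the class and field names are our own illustration, not the authors' implementation.

# Sketch of the NCM 4-tuple (O, N, P, L) from Definition 1.
from dataclasses import dataclass, field

@dataclass
class Place:
    # A place holds ontology items plus routing rules.
    name: str
    kind: str                                  # 'trigger', 'mediator' or 'terminator'
    items: set = field(default_factory=set)    # ontology items contained here
    route: dict = field(default_factory=dict)  # ontology item -> next node name

@dataclass
class NegotiationService:
    # A negotiation service realised by executing a C-COMnego.
    name: str
    protocol: str                              # e.g. 'Contract-Net'

@dataclass
class NCM:
    ontology: set                              # O
    services: dict                             # N, keyed by service name
    places: dict                               # P, keyed by place name
    links: set                                 # L as (source, target) name pairs

# Fragment of the missed-flight example below: a trigger place routes
# control to a 'find alternative transportation' service under Contract-Net.
ncm = NCM(
    ontology={"o1", "o2", "o3", "o4"},
    services={"find_transport": NegotiationService("find_transport",
                                                   "Contract-Net")},
    places={"p1": Place("p1", "trigger", {"o1"}, {"o1": "find_transport"})},
    links={("p1", "find_transport")},
)

An interpreting agent would walk such a structure from the trigger place, executing each service's role component and applying the routing rules of the places it reaches until the terminator place is met.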

3 An Illustrative Example

Suppose an eMarketplace exists where travel-related organizations form VOs to provide flexible travel services to customers. Customers can submit their travel schedules (including destinations, arrival and departure times, and maximum and/or minimum budgets) to a travel agency. On receiving the order, the travel agency transforms the schedule into an IOW schema and advertises the business opportunities for the roles in the IOW on the eMarketplace, to find hotels, flights, and tickets for entertainments for each leg of the trip schedule that fulfil the customers' requirements. Organizations interested in the roles can submit their offers to the travel agency. The hotels and flight companies selected after the negotiations (for example, a reverse auction [4]) with the travel agency form a VO to provide the travel service to the customer. The execution of the IOW is realized as the customers progress through their schedule. For example, once the customers check out of a hotel, the instance of the IOW progresses to the next service and the relevant information is passed to the organization which provides that service. Suppose now that the customers miss their flight from London to Paris. In this case, the customers may want to cancel their plan to have a group dinner and the hotel reservation for that night. If so, the travel agency has to start a new negotiation process to re-arrange the trip by finding alternative transportation from London to Paris, cancelling the schedule for that day in Paris, and modifying the booking of the hotel in Paris. Each activity requires negotiation with existing organizations (for changing


Fig. 2. NCM diagram for the travel support IOW

a pre-contract) or with new organizations (for a new service). Even though this kind of exception (missing a flight) is predictable, the way to handle it differs from case to case and cannot be modelled in advance. Furthermore, considering the time limit for finding alternative transportation to Paris, the travel agency would need to use a different protocol, such as Contract-Net, rather than a reverse auction, which typically takes longer. Figure 2 shows an NCM diagram that specifies the order of execution of C-COMs to handle the exception described above.

4 Discussion and Conclusion

The novelty of this paper is the proposal of a design that enables ad-hoc composition of negotiation services in a VO context. To our knowledge, this is the first such approach in the literature. The core of the design is the implementation of a negotiation service as a dynamically pluggable software component, together with an NCM which can also be provided to an agent dynamically. By referring to the provided NCMs, an agent can handle unexpected exceptions whilst executing IOWs. A future research direction is to design a mechanism to verify the soundness of dynamically provided NCMs, to guarantee that their execution will always result in a sound state (resolving the exception or concluding that it is not resolvable). One possible anomaly is that an agent cannot reach the next state due to missing ontology items or a 'communication timed out' message.


References

[1] Strader TJ, Lin FR, Shaw MJ (1998) Information infrastructure for electronic virtual organization management. Decision Support Systems 23:75-94
[2] Strong DM, Miller SM (1995) Exceptions and exception handling in computerized information processes. ACM Transactions on Information Systems 13(2):206
[3] Van der Aalst WMP, Kumar A (2003) XML-based schema definition for support of interorganizational workflow. Information Systems Research 14(1):23-46
[4] Wagner SM, Schwab AP (2004) Setting the stage for successful electronic reverse auctions. Journal of Purchasing and Supply Management 10(1):11-26

Discovering Technology Intelligence from Document Data in an Organisation

Sungjoo Lee, Letizia Mortara, Clive Kerr, Robert Phaal, and David Probert

University of Cambridge, Department of Engineering, Cambridge, UK
[email protected]

Abstract. Technology Intelligence (TI) systems provide a mechanism for identifying business opportunities and threats. Until now, despite the increasing interest in TI, little attention has been paid to the analysis techniques and procedures that support TI, especially tools which focus on the data stored in unstructured documents. With the dramatic growth in the number of documents captured and stored by organisations, an application framework for building mining tools that provide purposeful intelligence would be extremely timely, and detailed guidelines on how to analyse the data and develop such tools to meet the specific needs of decision makers are urgently needed. This research therefore aims to understand how companies can extract TI systematically from a large body of documents. Guidance will be developed to support intelligence operatives in finding suitable techniques and software for getting value from document mining.

1 Introduction

With intensifying global competition and the need for continuous innovation, industrial interest in developing effective capabilities for technology intelligence (TI) is increasing [1]. TI is the collection and delivery of information about new technologies to support the decision-making process within a company; central to the search for TI is thus to identify, gather, organise and analyse information related to technologies [2]. Especially as companies capture increasingly more data about their customers, suppliers, competitors and business environment, it has become critical for them to structure large and complex data sets to meet their information needs [3]. Nevertheless, such data tend to accumulate in companies' database systems without being used effectively, because a large part of them take the form of documents, which are multi-attribute and unstructured in nature and thus hard to analyse. As a result, voluminous documents may pile up in databases even though they have great potential to provide technology intelligence. To address the need to extract value from these documents, intelligence applications have been suggested. Knowledge extraction and data visualisation tools constitute one form of technique that presents information to users in a manner that supports technology decision-making processes. However, companies still have difficulty selecting the appropriate data and techniques to support a specific decision.


This study takes a broader perspective of TI in organisations, encompassing all data about markets, products and technologies, and attempts to develop an overall framework for extracting TI from the large body of documents produced in an organisation. The specific purposes are to:

• identify internal and external sources of TI in the form of documents
• review techniques for data extraction, analysis and visualisation for documents
• develop a relational table between documents and techniques that tells what intelligence could be provided by each combination of technique and document
• give guidance to companies on how to use their documents more effectively

The remainder of this paper is organised as follows. First, the theoretical and methodological background of this research is briefly explained in section II. The development process of the document-mining framework is then described in section III, and the way to apply the framework follows in section IV. Finally, limitations and further research directions are discussed in section V.

2 Background

2.1 TI and Data Mining

TI provides a mechanism for capturing business and technology opportunities and preparing for threats through the effective delivery of relevant information to a company [4], which can be a critical factor in its decision-making process [5]. In order to sustain growth, a company should respond to both market-pull and technology-push forces, by deploying sustaining technologies in its product range and by adopting radical innovation strategies to counter threats from disruptive technologies [6]. In this process, TI activities can support the strategic decision-making of a firm, and these activities can be improved greatly by using a Data-Mining (DM) approach. DM enables a company to turn large volumes of data into information, and eventually into knowledge, and thus helps TI activities by connecting complementary pieces of information across different domains [7] or by providing a broad picture of technical themes in terms of their connections, related institutions, or related people [8], which opens possibilities for technology fusion, early warnings of technological change, and knowledge about networks. Despite the increasing interest in TI, however, little attention has been paid to the analysis techniques and procedures for discovering it. Since many companies face the challenge of interpreting large volumes of document data, which can be an effective source of TI if well analysed, detailed guidelines on how to analyse those data to meet specific TI needs are urgently required. With the dramatic increase in the amount of data being captured by organisations, this is the time to develop an application framework for mining them for practical use.

2.2 DM Activities

DM is defined as a process of exploration and analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns


and rules [9]. With the rapid advance of technology, the volume of technological data and the need for analysis to obtain technological intelligence are both increasing. These phenomena facilitate the use of DM techniques in various areas [10] and have led to a number of methods for identifying patterns in data to provide insights and support decision-making [11]. In general, the techniques for mining unstructured documents fall into two categories: the first transforms unstructured documents into a structured form, and the second finds implications in the structured documents. For the former, text mining and co-word analysis are frequently used. Text mining is intended to discover useful patterns from textual data [12]; recently, it has been actively applied to information retrieval from intellectual property data [8, 12, 13, 14]. Co-word analysis uses the frequency of co-occurrence of keywords or classification codes in the literature to derive the relationships between them [15]. An advantage of co-word analysis is that it does not rely on the data's own classification system, allowing analysis results to be interpreted in various ways. The technique has been applied to domain analysis in fields such as chemistry and artificial intelligence, to technology cartography [16], and in data retrieval systems [15]. For the latter, information visualisation techniques are applied to create insightful representations [17]. Most formal models of information visualisation are concerned with presentation graphics [18, 19] and scientific visualisation [20, 21, 22, 23]. Among them, we use multi-dimensional visualisation techniques that represent data in a two- or three-dimensional visual graphic [24, 25, 26, 27], because the results of text mining or co-word analysis generally take the form of a keyword vector or co-occurrence matrix with multi-dimensional attributes. Apart from these techniques, data reduction techniques such as PCA (Principal Component Analysis) and SOFM (Self-Organising Feature Map), clustering techniques, and network analysis techniques are also useful in visualisation. Until now, a number of DM techniques have been suggested, addressing the techniques themselves, their algorithms, and system development. Another research stream has sought significant implications in DM results by applying the techniques to specific domains. Building on those two streams of study, one promising but not yet addressed issue is how to apply these techniques to the large volumes of data in a company to obtain meaningful TI, taking a holistic approach. The research results can facilitate the effective use of data and DM techniques, and will thus be of practical help to those in charge of strategic planning as well as managers in site operations.
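As an illustration of the co-word analysis described above, the following minimal Python sketch counts keyword co-occurrences over a toy corpus; the documents and keywords are invented, and a real system would first extract the keywords by text mining.

# Co-word analysis: keyword co-occurrence counts over a toy corpus.
from itertools import combinations
from collections import Counter

docs = [
    ["fuel cell", "catalyst", "membrane"],
    ["fuel cell", "membrane", "durability"],
    ["catalyst", "nanoparticle"],
]

cooc = Counter()
for keywords in docs:
    # Count each unordered keyword pair appearing in the same document.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooc[(a, b)] += 1

for (a, b), n in cooc.most_common():
    print(f"{a} -- {b}: {n}")

The resulting co-occurrence matrix is exactly the multi-dimensional input that the visualisation techniques above (MDS, SOFM, network maps) reduce to a two- or three-dimensional picture.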

3 Development of the Document-Mining Framework

3.1 Overall Research Process

Document mining is defined as a process of exploration and analysis, by automatic or semi-automatic means, of large quantities of document data in order to discover meaningful patterns and rules. It is not a process of searching for and listing documents of concern according to their relevance. This project aims to understand how companies can extract TI systematically from a large set of document data. The overall research process is described in Fig. 1.


Fig. 1. Overall research process

To develop the document-mining framework, the research involves a large volume of literature review, a series of interviews, and in-depth case studies with organisations that have an established TI system. The aim of this exercise is to capture the implicit challenges associated with establishing a TI analysis framework, sorting and communicating relevant information, and the feedback mechanisms needed to ensure that the framework operates effectively. From the literature review and web search, a draft of the document analysis needs within a firm and of the available techniques for mining documents is identified. Based on these, the document-mining framework is developed and its feasibility is verified with application scenarios. The final goal is to suggest guidelines for document mining customised to each organisation's needs.

3.2 Document Analysis Needs

To identify document analysis needs in an organisation, we first identify the strategic purposes of document analysis and then list possible data sources that can be used for those purposes.

• Step 1: Identify Strategic Purpose of Document Analysis

Document data can be used for various purposes. Generally, they can be used to extract intelligence on customers, markets, technologies and business processes [28]. At the functional level, they are useful for visualisation, knowledge representation, semantic networks, human resource management, project management, knowledge engineering, information retrieval, personalisation, and lessons-learned systems [29]. Specifically for technology management, they provide information about partners or intellectual property rights [30], and can also be used in technology trend analysis, infringement risk analysis, technologist search, R&D trend analysis, researcher search, literature survey, information gathering, forecasting and planning [3]. In marketing, they help in understanding sales effectiveness, improving support and warranty analysis, and relating customer relationship management to profitability [31]. Table 1 summarises the strategic purposes of document analysis based on its possible applications.


Table 1. Strategic purpose of document analysis

– CEO/business planning: business trend, partners information, competitors information, regulations, investment status
– R&D planning: technology trend, technology forecasting, new technology idea, R&D redundancy, R&D network
– Product planning: product trend, new product idea, warranty analysis, product defaults, competitor products
– Market planning: market trend, market forecasting, customer relations, customer needs, competitors trend

• Step 2: List Data Sources for Document Analysis

Potential data sources for document analysis have been investigated and applied in previous research and practical projects. For example, Kerr et al. (2006) mentioned

the following as important sources of intelligence: for internal sources, past projects and previous intelligence reports, employees' expertise and past experience, and employees' knowledge derived from networks of contacts; for external sources, industry and cross-industry organisations, printed or online material such as trade magazines, academic journals and newspapers, technology-related events including conferences, trade fairs and seminars, patent records and analysis, commercially prepared intelligence reports, collaboration with universities, other companies or government, and futures studies [1]. In a similar way, Heinrichs and Lim (2003) identified internal and external data sources: for internal sources, e-commerce systems, sales transaction systems, financial and accounting systems, human resource systems, plant operation systems, and market research databases such as customer satisfaction information and quality performance information; for external sources, web-based business intelligence organisations, trade associations, industry organisations, and government agencies for competitors' sales and market share, regional demographics, and industry trends [32]. Without such a grouping, Cody et al. (2002) listed several data sources, such as business documents, e-mail, news and press articles, technical journals, patents, conference proceedings, business contracts, government reports, regulatory filings, discussion groups, problem report databases, sales and support notes, and the Web [31]. Some researchers analyse document data directly, including patent documents, one of the most frequently utilised data sources [33][34], research literature [35] and product manuals [36]. Based on this review, potential sources can be summarised as in Table 2 according to the two criteria of data source (external/internal) and planning objective (market/product/technology).

Table 2. Potential data sources for document analysis

External
– Market: industry trend reports, news and press articles, regulatory filings
– Product: product manuals, discussions on products, customer complaints
– Technology: patent documents, academic journals, published TRMs

Internal
– Market: business contracts, sales and support notes, marketing reports
– Product: new product proposals, failure reports, customer surveys
– Technology: unpublished TRMs, R&D proposals, past R&D projects


3.3 Document Analysis Tasks

To identify feasible document analysis tasks, we review the available DM techniques and then design DM tasks for document analysis based on combinations of those techniques.

• Step 1: List Available DM Techniques

Though various techniques have been suggested, the general ones include decision trees (DT), case-based reasoning (CBR), neural networks (NN), market basket analysis (MBA), k-means clustering, principal component analysis (PCA) [9], multi-dimensional scaling (MDS) [37], self-organising maps [38] and hierarchical clustering [39]. Co-word analysis [15, 40], word-counting [17], network analysis, semantic analysis, text mining [12] and database tomography [8] are specialised techniques for text analysis. These techniques can be classified according to their functionality, as shown in Table 3.

Table 3. Available DM techniques for document analysis

– Extraction: keyword extraction (KE), word-counting (WC), summary abstraction (SA), summary extraction (SE)
– Analysis: decision tree (DT), neural network (NN), market basket analysis (MBA), k-means clustering (KMC), case-based reasoning (CBR), co-word analysis (CA), network analysis (NA), hierarchical clustering (HC), basic statistics (BS)
– Visualisation: principal component analysis (PCA), multi-dimensional scaling (MDS), self-organizing map (SOM), hyper-linked (HL), 2-dimensional mapping (2DM)

• Step 2: Design DM Tasks for Document Analysis

We define a DM task as a process of applying a series of DM techniques to obtain a meaningful output. Many researchers have designed useful DM tasks in the area of document mining, applied for various purposes such as information retrieval [41], summarisation [3, 42], technology trend analysis [33, 34, 43] and automated classification [44, 45]. In detail, Berry and Linoff (1997) described six basic tasks (classification, estimation, prediction, market basket analysis, clustering and description) [9], while Kopanakis and Theodoulidis (2003) explained three more advanced ones (association rules, relevance analysis and classification) [46]. In many studies, however, the focus of analysis has been patent documents. Tseng et al. (2007) suggested text segmentation, summary extraction (summary and abstraction), keyword extraction, term association, and cluster generation (document clustering, term clustering, topic identification and information mapping) as a method for patent analysis [41]. Yoon et al. (2008) then extended the scope of analysis from patents to technological documents in general, suggesting more generalised tasks including summarisation, information extraction for ontology generation or trend analysis, clustering, and navigating [3]. Similar efforts have been made in marketing: information presentation, knowledge evocation and analytic capabilities have been suggested as possible DM tasks [28], which can be concretised as clustering, taxonomy building, classification, information extraction, summarisation,


expertise location, knowledge portals, customer relationship management, and bioinformatics [31]. These tasks can be summarised into 17 document-mining tasks based on the DM techniques they use at each stage (extraction, analysis, visualisation) from among the techniques listed in Table 3. The summarised tasks, with their references, are presented in Table 4; a small sketch of one such pipeline follows the table.

Table 4. DM tasks and related techniques for document analysis (extraction → analysis → visualisation)

1. Summarisation [42]: SA/SE
2. Retrieval [41]: CBR
3. Navigating [3]: HL (visualisation)
4. Association-term [40]: KE → CA-NA
5. Association-document [34]: KE-WC → CA-NA
6. Taxonomy [3]: KE-WC → KMC
7. Clustering-multi stage [42]: KE-WC → HC
8. Ontology [3]: KE-WC → HC/KMC
9. Clustering-title [47]: KE-WC → KMC-BS → MDS
10. Topic mapping [42]: KE-WC → HC → SOFM
11. Trend analysis [3]: KE-WC → BS → MDS
12. Portfolio analysis [48]: KE → MDS
13. Classification [45]: KE-WC → DT/NN → 2DM
14. Affinity grouping [9]: MBA
15. Prediction [9]: DT
16. Estimation [9]: NN
17. Description [9]: DT, MBA
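As a minimal sketch of one row of Table 4, the Trend analysis pipeline (KE, then WC, then BS), the snippet below runs crude keyword extraction, word counting and a basic-statistics comparison over a toy corpus grouped by year; the corpus, stop-word list and labels are invented for illustration.

# Trend analysis pipeline: KE -> WC -> BS on a toy year-grouped corpus.
from collections import Counter

corpus = {
    2005: ["the new battery cell", "battery cost analysis"],
    2006: ["fuel cell stack", "battery cell durability"],
    2007: ["fuel cell catalyst", "fuel cell membrane cost"],
}
STOP = {"the", "new"}

def keywords(text):
    # KE: crude keyword extraction by lower-casing and stop-word filtering.
    return [w for w in text.lower().split() if w not in STOP]

# WC: per-year keyword counts.
counts = {yr: Counter(w for t in texts for w in keywords(t))
          for yr, texts in corpus.items()}

# BS: compare earliest and latest year to label terms emerging/declining.
first, last = min(counts), max(counts)
for term in sorted(set(counts[first]) | set(counts[last])):
    delta = counts[last][term] - counts[first][term]
    label = "emerging" if delta > 0 else "declining" if delta < 0 else "stable"
    print(f"{term}: {label} ({delta:+d})")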

3.4 Document-Mining Framework

Once the potential data sources for document analysis have been identified and the possible tasks designed, a relational table between documents and techniques can be developed by identifying the intelligence expected from each combination of a specific task and a specific document. The framework enables an organisation to understand what kinds of intelligence can be extracted from documents, and thus what kinds of documents are needed and what kinds of tasks should be applied to obtain the intelligence required. The detailed process is described in the next section.

4 Application of Document-Mining Framework

4.1 Intelligence from Documents

If an organisation is interested in TI, especially TI based on external technological documents, the relational table can be developed based on three documents - patents,


academic journals, and published roadmaps (see Table 2) - and the 17 available DM tasks (see Table 4). Table 5 is an example of the TI that can be extracted from the three documents and 9 of the 17 tasks. If, among the intelligence that can be expected from those combinations, there is any meaningful TI worth analysing to meet its needs, the organisation starts document mining. For example, if the table is developed for R&D planning and the organisation is especially interested in technology trend analysis, a Trend Analysis task on the three documents would be meaningful, and for the analysis KE-WC-BS could be conducted in series (see Table 4; a minimal sketch of this chain follows Table 5).

Table 5. Sample relational table to extract TI based on three documents and nine tasks

Summarisation - Patents: summary of technology; summary of an invention; keywords in an invention. Academic journals: summary of research; keywords in research. Roadmap (published): summary of a roadmap; keywords in a roadmap.

Retrieval - Patents: retrieval of patents of concern; extract of a part in a patent. Academic journals: retrieval of articles of concern; extract of a part in an article. Roadmap: retrieval of a TRM of concern; extract of a part in a TRM.

Association - Patents: relations between technologies/patents/competitors; technology/competitor relations; infringement risk. Academic journals: relations between technologies/articles/research groups/authors; technology relations; research groups with similar research concerns. Roadmap: relations between technologies/organisations; technology relations; organisations with similar view.

Ontology - Patents: technology ontology (hierarchy) of available technology. Academic journals: technology ontology (hierarchy) of research areas. Roadmap: technology ontology (hierarchy) of to-be-developed technology.

Clustering - Patents: technology/patent/competitor groups based on similarity in contents. Academic journals: technology/article/research group/author groups based on similarity in contents. Roadmap: technology/publisher groups based on similarity in contents.

Trend analysis - Patents: emerging and declining technologies (described by terms or areas). Academic journals: emerging and declining research areas (described by terms or areas). Roadmap: development plan and realisation point of technologies.

Portfolio analysis - Patents: importance of available technologies. Academic journals: importance of research areas. Roadmap: importance of to-be-developed technologies.

Classification - Patents: patent assignment to a specific technology area. Academic journals: article assignment to a specific journal; technology assignment to a specific research group. Roadmap: technology assignment to a specific organisation.

Affinity grouping - Patents: principal and subordinate relations between technologies/patents/competitors. Academic journals: principal and subordinate relations between technologies/journals/research groups/authors. Roadmap: principal and subordinate relations between technologies/publishers.
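To make the KE-WC-BS chain for trend analysis concrete, the following sketch applies keyword extraction (KE), word counting (WC), and a basic statistic (BS: the slope of a term's yearly frequency) to a tiny hypothetical corpus. Everything here - the corpus, the stopword list and the slope statistic - is an illustrative assumption, not the authors' actual tooling.

from collections import Counter
import re
import statistics

# hypothetical corpus of document titles tagged with publication years
corpus = {
    2005: ["fuel cell electrode design", "fuel cell membrane test"],
    2006: ["battery electrode coating", "fuel cell stack control"],
    2007: ["battery management system", "battery electrode material"],
}

def keywords(text, stopwords={"the", "of", "and"}):
    # KE: extract candidate keywords from free text
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stopwords]

# WC: count keyword occurrences per year
yearly = {year: Counter(w for doc in docs for w in keywords(doc))
          for year, docs in corpus.items()}

def trend(term):
    # BS: least-squares slope of the term's yearly frequency
    counts = [yearly[y][term] for y in sorted(yearly)]
    mean_i = statistics.mean(range(len(counts)))
    mean_c = statistics.mean(counts)
    num = sum((i - mean_i) * (c - mean_c) for i, c in enumerate(counts))
    den = sum((i - mean_i) ** 2 for i in range(len(counts)))
    return num / den  # positive: emerging term, negative: declining term

print(trend("battery"), trend("fuel"))  # -> 1.0 -1.0 on this toy corpus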

4.2 Document Analysis Examples

Once the documents and tasks for mining are decided, the analysis follows, using the techniques listed in Table 4. The following figures show examples of document-mining analysis.

Fig. 2. Document-mining application to product data [49]

Fig. 3. Document-mining application to technology data [48]

Fig. 2 describes the results of applying document mining to product data. The left part shows a customer-needs map developed by applying association to customer-complaint documents, while the right part shows the product specification giving the best satisfaction to customers, obtained by applying classification to customer-survey documents. Fig. 3 shows the results of applying document mining to technology data: applying portfolio analysis and association to patent documents yields emerging and declining technology areas and technology relations, respectively.

5 Conclusion

TI provides an effective delivery of relevant information. Extracting explicit intelligence from an internal repository and monitoring the development of new technologies identified as relevant for the future would be a very powerful competency, but organisations face the challenge of interpreting a large volume of data, especially in the form of documents that are hard to analyse. To address this challenge, this research aims to understand how organisations can extract TI systematically from a large set of document data and, finally, to develop a document-mining framework. To this end, organisational needs regarding document data were collected and sources of TI in the form of documents were identified. At the same time, techniques for data search, extraction, analysis, and visualisation, particularly for documents, were reviewed, and DM tasks were designed based on the reviewed techniques. The techniques were then applied to the data sources to develop the document-mining framework, which takes the form of a relational table between documents and tasks telling what intelligence can be provided when a specific task is applied to a specific document. Finally, the tasks were customised according to organisational needs, and application scenarios were developed to verify the framework. The results are expected to give guidance to intelligence operatives and thus to support the effective use of document data, helping managers establish and operate document mining within their own organisations. With all these implications, this study is nevertheless subject to several limitations, so further research is needed. Firstly, a qualitative approach such as interviews or case studies is required to develop a more practical framework; organisational needs for document analysis and potential data sources should be identified not only from the literature but also from the perspective of companies. Secondly, there are a number of commercial software packages for document analysis whose functionality needs to be reviewed. Finally, this research is mostly based on literature review, so a real case study should follow to verify its feasibility.

References

[1] Kerr C, Mortara L, Phaal R et al (2006) A conceptual model for technology intelligence. International Journal of Technology Intelligence and Planning 1(2):73-93
[2] Mothe J, Chrisment C, Dkaki T et al (2006) Combining mining and visualization tools to discover the geographic structure of a domain. Computers, Environment and Urban Systems 30:460-484
[3] Yoon B, Phaal R, Probert D (2008) Structuring technological information for technology roadmapping: data mining approach. In: WSEA Conference, Cambridge
[4] Lichtenthaler E (2003) Third generation management of technology intelligence processes. R&D Management 33(4):361-375
[5] Dou H, Dou J-M (1999) Innovation management technology: experimental approach for small firms in a deprived environment. International Journal of Information Management 19:401-412
[6] Husain Z, Sushil Z (1997) Strategic management of technology - a glimpse of literature. International Journal of Technology Management 14(5):539-578
[7] Smalheiser N (2001) Predicting emerging technologies with the aid of text-based data mining: the micro approach. Technovation 21:689-693
[8] Kostoff R, Toothman D, Eberhart H et al (2001) Text mining using database tomography and bibliometrics: a review. Technological Forecasting and Social Change 68(3):223-253
[9] Berry M, Linoff G (2000) Mastering data mining. John Wiley & Sons, NY


[10] Porter A, Watts R (1997) Innovation forecasting. Technological Forecasting and Social Change 56:25-47
[11] Shaw M, Subramaniam C, Tan G et al (2001) Knowledge management and data mining for marketing. Decision Support Systems 31(1):127-137
[12] Losiewicz P, Oard D, Kostoff R (2000) Textual data mining to support science and technology management. Journal of Intelligent Information Systems 15:99-119
[13] Feldman R, Dagan I, Hirsh H (1998) Mining text using keyword distributions. Journal of Intelligent Information Systems 10(3):281-300
[14] Zhu D, Porter A (2002) Automated extraction and visualization of information for technological intelligence and forecasting. Technological Forecasting and Social Change 69:495-506
[15] Ding Y, Chowdhury G, Foo S (2001) Bibliometric cartography of information retrieval research by using co-word analysis. Information Processing and Management 37(6):817-842
[16] Engelsman E, van Raan A (1994) A patent-based cartography of technology. Research Policy 23(1):1-26
[17] Keim D (2002) Information visualization and visual data mining. IEEE Transactions on Visualization and Computer Graphics 7(1):100-107
[18] Bertin J (1983) Semiology of graphics: diagrams, networks, maps. University of Wisconsin Press, Madison
[19] Mackinlay J (1986) Automating the design of graphical presentations of relational information. ACM Transactions on Graphics 5(2):110-141
[20] Beshers C, Feiner S (1993) AutoVisual: rule-based design of interactive multivariate visualizations. IEEE Computer Graphics and Applications 13(4):41-49
[21] Beshers C, Feiner S (1994) Automated design of data visualizations. In: Rosemblum L et al (ed) Scientific visualization - advances and applications. Academic Press
[22] Hibbard W, Dryer C, Paul B (1994) A lattice model of data display. Proc. IEEE Visualization '94:310-317
[23] Roth S, Mattis J (1990) Data characterization for intelligent graphics presentations. Proc. Human Factors in Computing Systems Conf. (CHI '90):193-200
[24] Card S, Mackinlay J, Shneiderman B (1999) Readings in information visualization. Morgan Kaufmann
[25] de Oliveira M, Levkowitz H (2003) From visual data exploration to visual data mining: a survey. IEEE Transactions on Visualization and Computer Graphics 9(3):378-394
[26] Spence B (2000) Information visualization. Pearson Education Higher Education Publishers, UK
[27] Ware C (2000) Information visualization: perception for design. Morgan Kaufmann
[28] Lim J, Heinrichs J, Hudspeth L (1999) Strategic marketing analysis: business intelligence tools for knowledge based actions. Pearson Custom Publishing, Needham Heights
[29] Liao S (2003) Knowledge management technologies and applications - literature review from 1995 to 2002. Expert Systems with Applications 25:155-164
[30] Jermol M, Lavrač N, Urbančič T (2003) Managing business intelligence in a virtual enterprise: a case study and knowledge management lessons learned. Journal of Intelligent & Fuzzy Systems 14:121-136
[31] Cody W, Kreulen J, Krishna V et al (2002) The integration of business intelligence and knowledge management. IBM Systems Journal 41(4):697-713

High-Voltage IC Technology: Implemented in a Standard Submicron CMOS Process
J.M. Park
Process Development, austriamicrosystems AG, A-8141 Schloss Premstaetten, Austria
[email protected]

Abstract. This paper describes a high-voltage IC technology. Various novel lateral high-voltage device concepts, which can be efficiently implemented in a submicron CMOS process, are explained and analyzed. It is essential for lateral high-voltage devices to show the best trade-off between specific on-resistance Rsp and breakdown voltage BV, and super-junction devices offer the opportunity to achieve the best Rsp-BV trade-off for BV over 100V. Key issues for the monolithic integration of high-voltage devices and low-voltage CMOS are reviewed. Finally, the hot-carrier (HC) behaviour of a high-voltage 0.35μm lateral DMOS transistor (LDMOSFET) is presented. It is shown that self-heating effects during HC stress have to be taken into account in the HC stress analysis. Together with TCAD simulations and measurements, one can clearly explain the self-heating effects on the HC behaviour of an LDMOSFET.

1 Introduction

High-voltage (HV) ICs implemented in a standard low-voltage (LV) CMOS platform have attracted much attention in a wide variety of applications [1]. Monolithic integration of HV and LV devices has offered efficient protection components, simple drive characteristics, and good control dynamics, together with a direct interface to the signal-processing circuitry on the same chip. As a result, the information and telecommunication fields have greatly benefited from advances in high-voltage IC (HVIC) technology. The integration of HVICs into standard logic CMOS by exploiting RESURF principles often requires dedicated implantations and additional processing. The main issues in the development of HVICs are to obtain the best trade-off between Rsp and BV, and to shrink the feature size without degrading device characteristics. New concepts such as the super-junction (SJ) are studied and extended to improve the characteristics of lateral HV devices. Section 2 provides a review of recent developments in lateral HV device technologies. Process development and qualification of an HV CMOS process increase cost and complexity; by introducing innovative technologies, it is essential to minimize the fabrication cost while keeping the best device performance [2]. State-of-the-art HVIC and LV CMOS process integration concepts are described in Section 3. Device simulation with TCAD (technology computer-aided design) tools has proven to play an important role for design engineers and researchers in analyzing, characterizing, and developing new devices [3]. Two- and three-dimensional device simulations are performed to study new device concepts. Long-term and short-term reliability issues such as HC behaviour [4, 5], NBTI (negative bias temperature instability), and ESD (electrostatic discharge) are also key issues for the practical


application of HVICs. High electric fields in an HVIC generate hot carriers, which cause device degradation through interface trap generation and charge trapping in the oxide. A new analysis of self-heating effects on hot-carrier-induced degradation is described in this paper.

2 Standard CMOS Compatible Lateral HV Devices

Commonly used CMOS-compatible HV devices are LDMOSFETs and lateral insulated gate bipolar transistors (LIGBTs) implemented in bulk silicon or SOI (Silicon on Insulator) [6, 7]. Over the past decade a variety of new HV devices has been suggested and commercialized. New structures such as SJ devices [8, 9], the lateral trench gate [10], and folded gate LDMOS transistors (FG-LDMOSTs) [11] have been proposed to improve the performance of conventional HV devices. SJ devices assume complete charge balance in the drift region, which results in a significant reduction of Rsp. FG-LDMOSTs have been suggested to increase the channel area without consuming more chip area. A lateral trench gate LDMOSFET uses narrow trenches as channels. Contrary to conventional vertical trench MOSFETs, where the current flows in the vertical direction, the lateral trench gate is formed laterally on the side wall of a trench and the channel current flows in the lateral direction through the trench side walls. This gives an increased channel area compared to that of conventional LDMOSFETs.

2.1 LDMOS and LIGBT

Fig. 1 (a) shows the cross section of the LDMOSFET. Normally, LDMOSFETs can be made in an optimized n-type epitaxial layer, and deep p-type diffused regions are used to isolate them. The standard n-type and p-type source and drain regions are used for contacting source/drain and body, respectively. The major limitation of LDMOSFETs is their relatively high Rsp due to the majority-carrier conduction mechanism. The IGBT is a relatively new HV device designed to overcome the high on-state loss of power MOSFETs. The device is essentially a combination of a pnp bipolar transistor, which provides high current handling capability, and an n-channel MOSFET, which gives high-impedance voltage control over the bipolar base current. It can be fabricated both as a high-power discrete vertical IGBT and as a low-power lateral IGBT, the latter of which presents interesting possibilities for integration together with control circuitry. Fig. 1 (b) shows a cross section of the LIGBT.

Fig. 1. Cross section of (a) the LDMOS and (b) the LIGBT

The LIGBT is similar to the LDMOSFET; the gate


is also formed by double diffusion. The main difference between the LIGBT and the LDMOSFET is that the LIGBT has a p+ anode instead of the n+ drain of the LDMOSFET. Holes from the anode are injected into the n-drift region and electrons flow into the drift region from the source through the channel. If the hole concentration exceeds the background doping level of the n-drift region, the device characteristics are similar to those of a forward-biased pin-diode. As a result, the LIGBT can be operated at a higher current density than conventional LDMOSFETs.

2.2 SJ LDMOSFET

Ron and BV are inversely related to each other, and reducing Ron while maintaining the BV rating has been the main issue for HV devices. Although much effort has been put into the reduction of Rsp while maintaining the desired BV, it has been understood that there is a limit [12]. Recently, the SJ concept was suggested and studied; it achieves a significant improvement in the trade-off between on-resistance and BV compared to conventional devices. Assuming complete charge balance between the n- and p-columns of the drift region of the SJ structure, the drift doping can be increased drastically.

Fig. 2. Rsp versus BV of different device structures and their theoretical limits [12]

Fig. 2 shows the Rsp versus BV of four different device structures and their theoretical limits. Note that these theoretical limits are only one part of estimating device performance, which is mainly related to the cost of the devices. From this figure, SJ devices have the best trade-off between Rsp and BV (generally, for voltage ratings over 100V), and a lower Rsp is obtained with a smaller column width. The doping concentration rises with decreasing column width; however, small columns become more and more difficult to produce. Fig. 3 (a) shows the potential distribution of the SJ pn-diode, which is the basic structure for all SJ devices, at a reverse voltage of 300V. Potential lines are uniformly distributed throughout the drift region. Fig. 3 (b) shows the electric field distribution of the SJ pn-diode. It shows a rather high electric field along the pn-junction (and the p+n- and n+p-junctions) compared to that

(a) Potential distribution at 300V

(b) Electric field distribution at 300V

Fig. 3. SJ pn diode


(a) SJ SOI-LDMOSFET

(b) Current flow iso-line.

Fig. 4. SJ SOI-LDMOSFET [3]

at each side of the device, but the electric field distribution is nearly square-shaped throughout the drift region. Standard SJ SOI-LDMOSFETs can be made by introducing extra p-columns in the drift region (Fig. 4). It is assumed that the charge in the n- and p-columns of the drift layer is exactly balanced. To further increase the on-state conduction area in the drift region, an unbalanced SJ SOI-LDMOSFET [3], which has a larger n-column width than p-column width, was suggested. Generally, the unbalanced structure shows a lower on-resistance than conventional SJ devices by increasing the current-path area and by suppressing mobility degradation, although the doping concentration in the active region is lower than that of conventional SJ devices. The BV of SJ devices strongly depends on the charge-balance condition. In practical manufacturing it is difficult to achieve perfect charge balance; generally, it is assumed that the doping can be controlled within ±10% of the nominal charge. Therefore, it is important to reduce the sensitivity of the BV to charge imbalance.
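As a numerical illustration of the charge-balance condition, the sketch below checks Nd·Wn = Na·Wp for assumed column dopings and widths and shows how a ±10% doping deviation translates directly into a residual charge imbalance. The values are illustrative, not from a real process.

# Minimal numeric sketch of the SJ charge-balance condition (illustrative values).
# Perfect charge balance requires Nd * Wn = Na * Wp for the n- and p-columns.
Nd = 1e16                       # n-column doping (cm^-3), assumed
Wn = 2e-4                       # n-column width (cm), i.e. 2 um, assumed
Wp = 2e-4                       # p-column width (cm), assumed
Na_balanced = Nd * Wn / Wp      # p-column doping for perfect balance
Na_actual = 1.1 * Na_balanced   # +10% process deviation (worst case assumed in the text)
imbalance = (Na_actual * Wp - Nd * Wn) / (Nd * Wn)
print(f"residual charge imbalance: {imbalance:+.0%}")  # -> +10%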

3 High-Voltage and Low-Voltage Process Integration

3.1 Process Integration

HVICs have always used design rules and technologies that are less efficient than those used for ULSI and VLSI devices. When ULSI devices used submicron IC design rules, HVIC devices were fabricated with 1.5 or 2µm design rules. This difference was essentially linked (i) to the more complex fabrication that must be taken into account - isolation and the combination of different kinds of devices (CMOS, DMOS, bipolar) - and (ii) to the intensive development of VLSI and ULSI devices driven by a large market. Recently, design rules for HVICs have gone down from 0.8 to 0.13µm. In addition, it is not trivial to implement HV devices and LV CMOS monolithically on a single chip. As shown in Fig. 5, state-of-the-art LV CMOS has a negligible thermal budget (to keep the channel length short), shallow junction depths, small chip-area consumption, extremely thin gate oxides (e.g. 3.5nm for a 1.8V gate voltage), low-voltage operation, and shallow trench isolation (STI), and channel engineering is becoming important to maintain the high LV CMOS performance. On the other hand, HV devices have a large thermal budget, deep junction depths, large chip-area consumption, thick gate oxides (e.g. 50nm for a 20V gate voltage), and high-voltage operation, and drift-region engineering is becoming important to achieve the best Rsp-BV trade-off.

Fig. 5. Demands for LV CMOS and HV devices. LV CMOS: negligible thermal budget, shallow junction depth, small chip area, thin gate oxide, low-voltage operation, shallow trench isolation, channel engineering. HV devices: large thermal budget, deep junction depth, large chip area, thick gate oxide, high-voltage operation, field oxide (FOX), drift region engineering. LV CMOS + HV devices: optimum LV CMOS and HV device performance, minimum number of additional masks, minimum chip size to reduce the cost, both thin and thick gate oxides, isolation between LV and HV devices (junction isolation, SOI), reliability (oxide, NBTI, HC, ESD, latch-up, …)

(a) STI corner of the HV device

(b) CVD + thermal oxide

Fig. 6. STI corner issues of the HV device

Considering the application fields, chip cost, and process complexity, one has to decide on a process integration concept. A highly doped buried layer gives good isolation between the device and the substrate together with robust ESD performance, but it increases the process complexity. For design rules below 0.25µm, STI has to be introduced instead of FOX. As shown in Fig. 6, the STI corner shape and the oxide thickness at the STI top corner should be carefully controlled, because they directly affect the electrical performance of the HV devices. One solution for adjusting the STI corner is to introduce "CVD + thermal oxide", but this may degrade the oxide quality. In general, high-energy and high-dose implantations for the LV wells are used to minimize the thermal budget. It is then necessary to introduce dedicated low-dose wells to form the drift region of the HV devices. Therefore, well sharing between LV and HV devices is also one of the key issues in reducing the number of masks.

3.2 TCAD Simulations

Device simulation with TCAD tools has proven to play an important role for design engineers and researchers in analyzing, characterizing, and developing new devices. It saves time and lowers the cost of designing devices compared to the experimental approach. It greatly helps to evaluate the validity of LV CMOS and HV device


process integration without a real silicon process. In addition, it allows one to see physical effects in semiconductor devices clearly when studying new device concepts. A clear understanding of the TCAD tools used for the analysis of HV semiconductor devices is essential to obtain accurate and reliable simulation results. A great deal of effort has been put into the development of stable and powerful TCAD tools. In this study MINIMOS-NT [13] and DESSIS [14], general-purpose device simulators, are used for the simulations. There are several physical models for numerical device simulation, ranging from Boltzmann's transport equation and the Monte Carlo method to the hydrodynamic and the simple drift-diffusion (including thermodynamic) models. Depending on the type of devices under investigation and the desired accuracy of the simulations, one of the physical models mentioned above can be chosen.
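As a flavour of the kind of quantity such simulators compute self-consistently, the following sketch uses the textbook depletion approximation of an abrupt pn-junction to estimate the depletion width and peak electric field under reverse bias. The doping values and bias are assumed for illustration, and the approach is far simpler than the drift-diffusion or hydrodynamic models mentioned above.

import math

q = 1.602e-19                 # elementary charge (C)
eps_si = 11.7 * 8.854e-14     # permittivity of silicon (F/cm)
kT = 0.0259                   # thermal voltage at 300 K (V)
ni = 1.0e10                   # intrinsic carrier density of Si at 300 K (cm^-3)

Na, Nd = 1e17, 1e15           # p- and n-side doping (cm^-3), assumed
Vbi = kT * math.log(Na * Nd / ni**2)   # built-in potential (V)
Vr = 100.0                    # applied reverse bias (V), assumed

# depletion width and peak field of an abrupt junction (depletion approximation)
W = math.sqrt(2 * eps_si * (Vbi + Vr) / q * (1 / Na + 1 / Nd))
Emax = 2 * (Vbi + Vr) / W
print(f"W = {W * 1e4:.2f} um, Emax = {Emax:.2e} V/cm")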

4 Self-heating Effects on the HC Behaviour of LDMOSFETs

A high electric field near the bird's beak region is believed to be a major cause of HC generation. One can see a peak electric field at the bird's beak region (circle between gate and field oxide in Fig. 7 (a)) at high drain voltage under low gate voltage. With increased gate voltage the peak electric field moves from the bird's beak towards the drain side, and HC generation at the bird's beak region is suppressed at high gate voltages. HCs generate defects in the oxide and at the Si/SiO2 interface. Interface traps (or charge trapped in the oxide) created by this process collect charge over time and directly affect the device operation. Extensive research has been undertaken to minimize the degradation effects of hot-carrier injection, and several fabrication steps have been suggested to minimize hot-carrier effects in LDMOSFETs. Hot-carrier generation can be minimized by using various RESURF techniques to reduce the electric field at the interface of the device. Moving the current path away from the high electric field and the impact-ionization region, deep into the silicon instead of at the surface, can help to reduce the hot-carrier degradation of the device. The substrate current is an indirect way to see the amount of impact ionization caused by hot-carrier generation. As shown in Fig. 7 (b), by moving the p-body/n-drift junction from (A) to (B) one can see a suppressed substrate current.

Fig. 7. (a) High electric field regions in the LDMOSFET. (b) Substrate current versus p-body/n-drift junction of an LDMOSFET

Two-dimensional device simulations together with measurements are performed to explain clearly the HC behaviour of an HV LDMOSFET including self-heating effects. The devices used in this study were fabricated in a 0.35μm CMOS-based HV technology.


A gate oxide with a thickness of 48nm was formed by thermal oxidation. HV p-channel LDMOSFETs (Fig. 8 (a)) for 50V applications, with a gate length of 0.8μm and a width of 20μm, were used in the study. Fig. 8 (b) shows the potential near the Si/SiO2 interface at gate voltages VG = -10V and -25V, respectively. The direction of the potential (compared to the applied gate voltage) changes in the bird's beak region for the low gate voltage (VG = -10V) and in the middle of the drift region for the high gate voltage (VG = -25V). Near the bird's beak region at low gate voltage (VG = -10V), there are two adjacent regions near the Si/SiO2 interface which show electric field components normal to the current flow vector, with a positive and a negative sign, respectively. Consequently, both electron and hole injection have to be considered for the hot-carrier degradation behaviour. Hot-carrier stress experiments (under gate voltage VG = -20V and drain voltage VD = -50V) and NBTI stress (under VG = -20V with source and drain grounded, at 423K) were performed for device reliability evaluation.

(a) Schematic view

(b) Potential at the Si surface.

Fig. 8. 50V p-channel LDMOSFET for HC and NBTI stress

Fig. 9. (a) Temperature distribution at VG = -20V and VD = -50V. (b) Percent change of the threshold voltage Vth (Del_Vth, %) versus stress time (sec) under HC stress (VG = -20V, VD = -50V) and NBTI stress (423K)

Fig. 9 (a) shows the temperature distribution of the p-channel LDMOSFET at VG = -20V and VD = -50V. The temperature rise is highest in the middle of the drift region, and it decreases towards the bottom of the substrate. The channel region also shows a high temperature, over 400K. In our devices Vth shifts were found to be negative (the absolute Vth increased) under hot-carrier stress, which corresponds to a net positive charge build-up at the Si/SiO2 interface. Because of the increased phonon scattering with temperature rise, hot-carrier generation is generally suppressed in an


n-channel LDMOSFET. However, NBTI in a p-channel LDMOSFET is greatly enhanced with increasing temperature. Fig. 9 (b) shows good agreement of ΔVth between hot-carrier stress and NBTI stress at 423K. This proves that the temperature rise during hot-carrier stress causes a large amount of degradation in the channel region purely due to NBTI degradation.

5 Conclusion

The state of the art of HVIC technology was described together with various lateral HV devices that can be implemented in a submicron LV CMOS process. The novel SJ concept was studied with 2D device simulations, showing that SJ devices have the best trade-off between Rsp and BV (generally, for voltage ratings over 100V). Finally, the HC behaviour of an HV 0.35μm LDMOSFET was presented. TCAD simulations showed a large temperature increase due to self-heating during HC stress; therefore, self-heating effects have to be taken into account in the HC stress analysis. Together with TCAD simulations and measurements, one can clearly explain the self-heating effects on the HC behaviour of a p-channel LDMOSFET.

References

[1] T Efland (2003) Earth is Mobile - Power. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 2-9
[2] V Vescoli, J M Park, S Carniello, and R Minixhofer (2008) 3D-Resurf: The integration of a p-channel LDMOS in a standard CMOS process. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 123-126
[3] J M Park, R Klima, and S Selberherr (2002) Lateral Trench Gate Super-Junction SOI-LDMOSFETs with Low On-Resistance. Proc. European Solid-State Device Research Conference 283-286
[4] J M Park, H Enichlmair, and R Minixhofer (2007) Hot-Carrier Behaviour of a 0.35μm High-Voltage n-channel LDMOS Transistor. 12th Intl. Conference on Simulation of Semiconductor Devices and Processes, Springer 369-372
[5] V Vescoli, J M Park, H Enichlmair, K Martin, R Georg, M Rainer, S Martin (2006) Hot Carrier Degradation Behaviour of High-Voltage LDMOS Transistors. 8th International Seminar on Power Semiconductors 79-84
[6] S Merchant, E Arnold, H Baumgart, S Mukherjee, H Pein, and R Pinker (1991) Realization of High Breakdown Voltage (>700V) in Thin SOI Devices. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 31-34
[7] B Murari, C Contiero, R Gatiboldi, S Sueri, and A Russo (2000) Smart Power Technologies Evolution. Proc. Industry Applications Conference 1:10-19
[8] T Fujihira (1997) Theory of Semiconductor Superjunction Devices. Jpn. J. Applied Physics 36(10):6254-6262
[9] M Saggio, D Fagone, and S Musumeci (2000) MDmesh™: Innovative Technology for High Voltage Power MOSFETs. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 65-68


[10] Y Zhu, Y Liang, S Xu, P Foo, and J Sin (2001) Folded Gate LDMOS Transistor With Low On-Resistance and High Transconductance. IEEE Trans. Electron Devices 48(12):2917-2928
[11] Y Kawaguchi, T Sano, and A Nakagawa (1999) 20V and 8V Lateral Trench Gate Power MOSFETs with Record-Low On-resistance. Proc. Intl. Electron Devices Meeting 197-200
[12] J M Park (2004) Novel Power Devices for Smart Power Applications. Dissertation, IUE, TU Vienna
[13] (2002) MINIMOS-NT 2.0 User's Guide, IUE, TU Vienna
[14] (2007) DESSIS User's Guide, Synopsys

Life and Natural Sciences

Electrical Impedance Spectroscopy for Intravascular Diagnosis of Atherosclerosis
Sungbo Cho
Biohybrid Systems, Fraunhofer IBMT, St. Ingbert
[email protected]

Abstract. The goal of this article is the conception, development, and evaluation of micro-system-based impedance spectroscopy for the diagnosis of atherosclerosis. For this, it was first investigated, using a single-cell model, how changes of tissue parameters at the cellular level affect the measured impedance; it was shown that cellular growth and distribution affect the impedance of tissues. Based on a cell layer model, it was found that the cellular alteration induced by atherosclerotic pathology is well reflected in the measured impedance of cell assemblies. For the intravascular impedance measurement, a balloon impedance catheter with integrated flexible microelectrodes was developed. In an in situ test with an animal model, it was successfully demonstrated that aortas containing atherosclerotic fatty plaques can be distinguished from normal aortas by intravascular impedance measurement.

1 Introduction

Chronic diseases, led by cardiovascular disease, are the largest cause of death in the world: more than 17 million people a year die, mainly from heart disease and stroke [1]. The main cause of heart attack or stroke is atherosclerosis, a chronic disease affecting the arterial blood vessel and forming multiple plaques within the arteries [2]. The rupture of a plaque in a blood vessel, with subsequent thrombus formation, frequently causes acute coronary syndromes (ACS). However, most ACS are triggered by the rupture of plaques showing non-critical stenoses in typical X-ray angiography or intravascular ultrasound (IVUS) [3, 4]. Hence, new methods are required to characterize the plaques in vessels for a more precise diagnosis of atherosclerosis. Electrical impedance spectroscopy (IS), a form of electrochemical analysis, has the potential to characterize the plaques in vessels non-destructively and quantitatively. IS is a method to measure the frequency-dependent electrical properties, conductivity and permittivity, of materials [5]. For the intravascular impedance diagnosis of atherosclerotic plaques, an impedance catheter with an array of 5 annular voxels has been developed and used to detect the impedance of disk-shaped plastic droplets representing fatty lesions in a human iliac artery in vitro [6]. For a more sensitive electrical characterization of atherosclerotic plaques in vessels, the use of a balloon impedance catheter (BIC), which consists of electrodes integrated with a typical balloon catheter, was suggested [7]. Since the microelectrodes contact the intima when the balloon is inflated, the impedance measurement of vessels can avoid the disturbance of the intravascular conditions (e.g. velocity or viscosity of blood components) and therefore


can be more sensitive and stable. So far, however, the use of the BIC for the intravascular impedance measurement of vessels has been limited by the difficulty of fabricating microelectrodes durable to the in- and deflation of the balloon catheter, and by the lack of knowledge for interpreting IS data measured on such thin and small vessel walls. The goal of this article is the conception, development, and evaluation of micro-system-based IS using a BIC for the intravascular diagnosis of atherosclerosis. For this, it is investigated (1) how changes of tissue parameters at the cellular level affect the measured impedance, (2) whether cellular alterations related to atherosclerosis can be characterized by IS, and (3) whether reproducible impedance measurements with a BIC can be performed in vessels.

2 Electrical Impedance of Cells and Tissue

The electrical impedance of a biological tissue is determined by various tissue parameters such as its structure and composition as well as the cellular distribution. To understand how changes of tissue parameters affect the impedance of tissue, the cellular alteration was first characterized electrically by IS using a single-cell model, keeping the level of complexity as low as possible. The impedance measurement of single cells has, however, been limited by the electrode impedance, which increases as the electrode size decreases [8]. To measure the impedance of single cells without the disturbance of the electrode impedance, a micro-hole-based structure was used, as shown in Fig. 1 [9]. The insulating layer with the micro hole was fabricated by semiconductor process technology. Onto one side of a (100) silicon wafer, a Si3N4 layer with a thickness of 800nm was deposited by plasma-enhanced chemical vapour deposition (PECVD). By photolithography and reactive ion etching, a micro hole with a radius of 3µm was patterned in the insulating Si3N4 layer. On the other side of the substrate, a SiO2 layer was deposited by PECVD. To make a well for holding the cell, and to connect the well with the hole, the SiO2 layer and the Si wafer were etched in sequence. For the insulation of the well, a SiO2 layer with a thickness of 200nm was deposited onto the substrate. The thickness and area of the insulating layer were 1µm and 220µm x 220µm, respectively.

Fig. 1. Schematic of impedance measurement of a single cell on a micro hole

With the experimental setup shown in Fig. 1, the impedance of single cells was measured by using the top and bottom electrode pairs connected to an impedance analyzer (Solartron 1260, Solartron Analytical, Farnborough, UK). By controlling a culture


medium (RPMI 1640, 10% FCS, 0.5% Penicillin/Streptavidin) with a micro-fluidic controller (Cell-Tram, Eppendorf, Wesseling-Berzdorf, Germany), a single L929 cell was positioned on the micro hole.

Fig. 2. Micrograph of a positioned single L929 cell on the hole (middle black) with a radius of 3µm (a) and a cell cultured overnight (b), and measured impedance magnitude without a cell (No Cell), with a positioned single cell (0), and with a cell cultured for 2 or 4 days (c); averages and standard errors of each group are shown as symbols and bars (n = 3 per group) [9]

Fig. 2 shows micrographs of a positioned single L929 cell (a) and a cell cultured overnight (b), together with the measured impedance magnitudes without a cell (No Cell), with a freshly positioned single cell (0), and with a cell cultured for 2 or 4 days (c). During aspiration, the spherical suspended cell was positioned on the hole, and a part of the cell could be drawn into the hole depending on the pressure, surface tension, and viscoelasticity of the cell [10]. A well-positioned cell was reflected in an increase of the impedance magnitude at low frequencies in comparison to the impedance of a free hole. Since the cell positioned on the micro hole adhered to and proliferated on the surface around the hole, the impedance magnitude at low frequencies increased with the cultivation period. Above tens of kHz, the difference in impedance magnitude between the groups could not be measured, due to the stray current over the insulating layer. From the results based on the single-cell model, it was found that the impedance of cells is determined by cell growth and distribution. However, real biological tissues are not a distribution of single cells: in biological tissues, the cells interact, and this interaction cannot be represented by the single-cell model. To include cell/cell interaction, models based on cell assemblies or tissues are required. In the next chapter, a cell layer model is used to investigate the suitability of IS for the characterization of cell assemblies and tissues related to atherosclerosis.
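A toy equivalent-circuit sketch can illustrate why a cell sealing the micro hole raises the low-frequency impedance: modelling the membrane as a resistance Rm in parallel with a capacitance Cm, in series with the hole resistance, the membrane dominates at low frequencies and is shorted by Cm at high frequencies. All component values below are assumed for illustration; this is not the circuit model used in the study.

import numpy as np

f = np.logspace(1, 6, 200)            # 10 Hz .. 1 MHz
w = 2 * np.pi * f
Rsol, Rhole = 1e3, 5e4                # solution and free-hole resistance (ohm), assumed
Rm, Cm = 1e8, 1e-10                   # membrane resistance/capacitance, assumed

Zfree = Rsol + Rhole                  # no cell on the hole
Zmem = Rm / (1 + 1j * w * Rm * Cm)    # membrane: R parallel to C
Zcell = Rsol + Rhole + Zmem           # cell blocking the hole

# at low f the membrane adds roughly Rm; at high f Cm shorts it and the curves merge
print(abs(Zcell[0]) / abs(Zfree))     # large ratio at 10 Hz
print(abs(Zcell[-1]) / abs(Zfree))    # close to 1 at 1 MHz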

3 Effect of Cellular Alteration on Impedance of Cells

A precondition for using IS in the diagnosis of atherosclerosis is that the alteration in vessels caused by atherosclerotic pathology is reflected in the measurable impedance. To characterize the cellular alteration involved in atherosclerosis by IS, a cell layer model with an electrode-based chip was used, as shown in Fig. 3. For the impedance measurement of the cell layer, a planar electrode-based structure was fabricated using semiconductor process technology [11]. After deposition of an insulating Si3N4 layer on a silicon wafer by PECVD, highly conductive platinum


electrodes and gold interconnection lines were patterned on the substrate. After insulation with a Si3N4 layer over the substrate, the electrodes were opened by reactive ion etching. The opened electrode area was circular with a radius of 500µm, and a pair of electrodes was used for the impedance measurement. A cylindrical glass dish was integrated with the electrode substrate to hold the cells on the electrodes.

Fig. 3. Schematic of impedance measurement of a cell layer on electrodes

For the impedance measurement of cells, the electrodes were electrically connected to the impedance analyzer. A herpes simplex virus (HSV) infection model was used, since HSV is related to the development of atherosclerosis [12]. Vero (African green monkey kidney) cells, which exhibit a wide spectrum of virus susceptibility, were prepared. For the experiment, 8 x 10^4 Vero cells with 3ml of culture medium (D-MEM, 10% FBS, 50 units penicillin, 50µg/ml streptomycin) were added to the electrode-based chip. During the impedance measurement of cells in the frequency range of 100Hz to 1MHz, cells were infected with HSV at different multiplicities of infection (MOI: 0.06, 0.006, or 0.0006). To interpret the measured impedance of cells, the mathematical model derived by Giaever and Keese [13] was used. By nonlinear curve-fitting analysis, the parameter Rb of the established model, reflecting the intercellular junctions, was adjusted to minimize the sum of squared deviations between the model and the measured impedance spectra.
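The fitting step can be sketched as follows: a model impedance spectrum with a junction-resistance-like parameter Rb is fitted to a (here synthetic) measured spectrum by least squares. The toy model function below is a simplified stand-in for the full Giaever and Keese model [13] actually used.

import numpy as np
from scipy.optimize import least_squares

def model_z(f, Rb, Rsol=200.0, Cl=2e-6):
    # toy cell-layer impedance: solution resistance plus Rb in parallel
    # with a layer capacitance (an assumed simplification, not ref. [13])
    w = 2 * np.pi * f
    return Rsol + Rb / (1 + 1j * w * Rb * Cl)

f = np.logspace(2, 6, 40)                          # 100 Hz .. 1 MHz
z_meas = model_z(f, Rb=1500.0)                     # synthetic "measurement"
z_meas += np.random.default_rng(0).normal(0, 5, f.size)

def residuals(p):
    # deviation between model and measured impedance magnitudes
    return np.abs(model_z(f, p[0])) - np.abs(z_meas)

fit = least_squares(residuals, x0=[500.0])
print(f"fitted Rb = {fit.x[0]:.0f} ohm")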

Fig. 4. Micrograph of Vero cells not infected (a) or infected with HSV for 102h at the MOI of 0.006 (b) on the platinum electrode with a radius of 500µm, arrow: exposed electrode area, scale bar: 100µm, and Rb of Giaever and Keese model [13] during the cultivation or infection with HSV at different MOI [11]


Without the infection, the cells adhered well and were confluent on the platinum electrode (see Fig. 4 (a)). However, the infected cells became round and detached from the electrode, depending on the time of infection and the virus concentration. At 102h after infection with HSV at an MOI of 0.006, the partially exposed electrode area caused by the detachment of cells was clearly observed (see arrows in Fig. 4 (b)). The impedance measurement of cells on planar electrodes was restricted by the electrode polarization at low frequencies and by the stray capacitance at high frequencies. The impedance of the cell layer, revealed at intermediate frequencies, is mostly determined by cellular adhesion and the extracellular matrix [14]. At the beginning of the measurement, the parameter Rb, reflecting the intercellular junctions, increased due to cellular adhesion and spread. As the cells lost their adhesion during the infection, however, Rb diminished in an MOI-dependent manner (see Fig. 4 (c)). The result demonstrates that the virus-induced alteration in cell assemblies involved in atherosclerotic pathology can be sensitively monitored by IS. Further, it is expected that alterations in vessels related to atherosclerosis can be electrically characterized by IS. For the impedance measurement of vessels, however, it is necessary to understand the influence of the complex structures and properties of vessels with plaques on the intravascular impedance measurement. The following chapter reports whether the BIC-based intravascular characterization of atherosclerotic plaques in vessels can be performed with such sensitivity and reproducibility that relevant medical parameters are determinable in animal models.

4 Intravascular Impedance Measurement of Vessel

For the intravascular impedance measurement of vessels, a BIC was developed by integrating a flexible microelectrode array with a balloon catheter, as shown in Fig. 5. The flexible electrode structure was fabricated based on an insulating polyimide (PYRALIN PI 2611, Du Pont) [15]. The polyimide resin was coated on a silicon wafer by spin coating and imidized in a curing process at 350°C under a nitrogen atmosphere. The platinum structures of four rectangular electrodes, transmission lines, and terminal pads were patterned by a lift-off technique. After the deposition of a polyimide layer of 5µm thickness over the patterned metals, the areas of the electrodes and terminal pads were exposed by reactive ion etching. The exposed area of each electrode was 100µm by 100µm, and the separation distance between electrode centers was 333µm. The electrodes were connected to an impedance measurement system consisting of the impedance analyzer in combination with a bioimpedance interface (Solartron 1294, Solartron Analytical, Farnborough, UK). Using the fabricated BIC, the intravascular impedance of vessels was measured in in situ animal models, which enabled the impedance analysis of vessels in parallel to histological investigation. Animal experiments were designed in accordance with the German Law for Animal Protection and were approved by the Review Board for the Care of Animal Subjects in Karlsruhe, Germany. The experiments conform to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996). Six female New Zealand White rabbits (Harlan-Winkelmann, Borchen, Germany; 1.5kg) were kept at standard conditions (temperature 21°C, humidity 55%) and fed a 5% cholesterol-enriched


diet (Ssniff Spezialdiäten GmbH, Soest, Germany) for 17 weeks to induce atherosclerotic plaques. After preparing the thoracic aorta, black points were marked on the superficial layer of the aorta to guarantee an exact matching of the histology of the marked aortic tissues and the impedance measurement (separation distance between marked points: 5-10mm). After introduction of the guide wire, the BIC was placed distally of the aortic arch. When the position of the electrodes was exactly matched to the marked points on the surface of the aorta under fluoroscopy control, the balloon was inflated with 0.5atm to ensure a close contact of the electrodes to the aortic wall, and then the impedance was measured.

Fig. 5. Fabricated BIC (upper left) and polyimide-based flexible electrode array (upper right), and schematic of intravascular impedance measurement of vessel by using BIC [16]

The four-electrode method used was able to reduce the effect of the electrode impedance on the total measured impedance and therefore to extend the effective frequency range of the impedance measurement. The polyimide-based electrode array was flexible and light enough that the properties of the BIC were not degraded during the expansion and contraction of the balloon. The intravascular impedance measured with the BIC was determined not only by the vessel thickness relative to the electrode configuration [17] but also by the position of the electrode array relative to the atheromatous plaque in the vessel [18]. Therefore, the histology of sections was analyzed at the marked (measured) points. The plaques induced in the intima of the vessels were of early type according to the definition of Stary et al. [2]. Vessels without plaque were classified as P0 (n=44), those with plaque thinner than the media as PI (n=12), and those with plaque thicker than the media as PII (n=36). To minimize the dependence of the impedance on the different thicknesses of the vessels, the impedance change versus frequency (ICF = impedance magnitude at 1kHz - impedance magnitude at 10kHz) was analyzed rather than the raw data. Fig. 6 shows a micrograph of an atherosclerotic aorta and the ICF value versus the plaque type of the vessel; symbols and lines are averages and standard errors.


Fig. 6. Micrograph of aorta segment with plaque indicated by arrow (a), and ICF versus plaque type with result of hypothesis t-test (b); P0: no plaque (n=44), PI: plaque thinner than media (n=12), PII: plaque thicker than media (n=36); symbols and lines are average and standard error [16]


The ICF of group PII (-22.2±43.29Ω) was significantly lower in comparison to PI (137.7±53.29Ω; p=0.05) and P0 (208.5±55.16Ω; p=0.002). However, there was no difference between groups P0 and PI (p=0.515). The in situ animal experiment showed that the BIC is feasible for the intravascular impedance measurement of vessels and that early-type atherosclerotic plaques can be electrically characterized by ICF analysis. Additionally, the fabricated BIC was able to characterize the advanced plaque types of human aorta in vitro [19]. Due to the dependence of the intravascular impedance measurement on the vessel thickness and on the plaque position, it is necessary to control the distribution of the electric fields in the vessels, especially for in vivo applications. For this, the use of a multi-electrode arrangement should be investigated in future work. By selecting the required electrodes from the multi-electrode array, it is possible to control the distribution of the electric fields and to image the intravascular impedance. Further, the use of the BIC will be studied in atherosclerotic human models in vivo.
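The ICF analysis itself is straightforward to express in code. The sketch below computes ICF = |Z(1kHz)| - |Z(10kHz)| per measurement and compares two groups with a two-sample t-test; the spectra are synthetic placeholders, not the measured data, and the statistical details of the original analysis may differ.

import numpy as np
from scipy import stats

def icf(freqs, zmag):
    """Impedance change versus frequency from a magnitude spectrum."""
    z1k = np.interp(1e3, freqs, zmag)
    z10k = np.interp(1e4, freqs, zmag)
    return z1k - z10k

rng = np.random.default_rng(1)
freqs = np.logspace(2, 5, 50)
# synthetic groups: normal vessels (P0) fall off more between 1 and 10 kHz
p0 = [icf(freqs, 1000 / np.log10(freqs) + rng.normal(0, 20, 50)) for _ in range(10)]
pii = [icf(freqs, 400 / np.log10(freqs) + rng.normal(0, 20, 50)) for _ in range(10)]

t, p = stats.ttest_ind(p0, pii)
print(f"mean ICF P0 = {np.mean(p0):.1f}, PII = {np.mean(pii):.1f}, p = {p:.3f}")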

5 Conclusions

This article described the conception, development, and evaluation of micro-system-based IS using a BIC for the intravascular diagnosis of atherosclerosis. From the approach spanning the cellular level to the animal model, it was concluded that (1) the impedance of tissue is determined by the growth and distribution of its constituent cells, (2) the virus-induced disintegration of cell assemblies involved in atherosclerotic pathology can be characterized by IS, and (3) the fabricated BIC is feasible for the intravascular impedance measurement of vessels, and the impedance of vessels with atheromatous plaque thicker than the media can be distinguished from that of normal vessels. For clinical trials in vivo, a multi-electrode arrangement can be used for the BIC, which can control the measurement area in vessels and increase the measurement resolution for the various positions of plaque in vessels.


Acknowledgments. The author wishes to acknowledge Prof. Fuhr, director of Fraunhofer IBMT, for his guidance; Dr. Thielecke, head of Biohybrid Systems at Fraunhofer IBMT, and PD Dr. Briesen, head of Cell Biology & Applied Virology at Fraunhofer IBMT, for their support; and PD Dr. Süselbeck and Dr. Streitner, Department of Medicine, University Hospital of Mannheim, University of Heidelberg, for the cooperation on the cardiology research.

References

[1] Yach D, Hawkes C, Gould C L et al (2004) The global burden of chronic diseases: overcoming impediments to prevention and control. The Journal of the American Medical Association 291:2616-2622
[2] Stary H C, Chandler A B, Dinsmore R E et al (1995) A definition of advanced types of atherosclerotic lesions and a histological classification of atherosclerosis. Arteriosclerosis, Thrombosis, and Vascular Biology 15:1512-1531
[3] Ambrose J A, Tannenbaum M A, Alexopoulos D et al (1988) Angiographic progression of coronary artery disease and the development of myocardial infarction. Journal of the American College of Cardiology 12:56-62
[4] Fuster V, Badimon L, Badimon J J et al (1992) The pathogenesis of coronary artery disease and the acute coronary syndromes. The New England Journal of Medicine 326:242-250
[5] Grimnes S and Martinsen Ø G (2000) Bioimpedance and bioelectricity basics. Academic Press, San Diego
[6] Konings M K, Mali W P Th M, Viergever M A (1997) Development of an intravascular impedance catheter for detection of fatty lesions in arteries. IEEE Transactions on Medical Imaging 16:439-446
[7] Stiles D K and Oakley B A (2003) Simulated characterization of atherosclerotic lesions in the coronary arteries by measurement of bioimpedance. IEEE Transactions on Biomedical Engineering 50:916-921
[8] Ayliffe H E, Frazier A B, Rabbitt R D et al (1999) Electric impedance spectroscopy using microchannels with integrated metal electrodes. IEEE Journal of Microelectromechanical Systems 8:50-57
[9] Cho S and Thielecke H (2007) Micro hole based cell chip with impedance spectroscopy. Biosensors and Bioelectronics 22:1764-1768
[10] Cho S, Castellarnau M, Samitier J et al (2008) Dependence of impedance of embedded single cells on cellular behaviour. Sensors 8:1198-1211
[11] Cho S, Becker S, Briesen H et al (2007) Impedance monitoring of herpes simplex virus-induced cytopathic effect in Vero cells. Sensors and Actuators B: Chemical 123:978-982
[12] Key N S, Vercellotti G M, Winkelmann J C et al (1990) Infection of vascular endothelial cells with herpes simplex virus enhances tissue factor activity and reduces thrombomodulin expression. The Proceedings of the National Academy of Sciences 87:7095-7099
[13] Giaever I and Keese C R (1991) Micromotion of mammalian cells measured electrically. The Proceedings of the National Academy of Sciences 88:7896-7900 (correction: (1993) The Proceedings of the National Academy of Sciences 90:1634)
[14] Cho S and Thielecke H (2008) Electrical characterization of human mesenchymal stem cell growth on microelectrode. Microelectronic Engineering 85:1272-1274


[15] Stieglitz T, Beutel H, Meyer J U (1997) A flexible, light-weight multichannel sieve electrode with integrated cables for interfacing regenerating peripheral nerves. Sensors and Actuators A 60:240-243
[16] Süselbeck T, Thielecke H, Koechlin J et al (2005) Intravascular electric impedance spectroscopy of atherosclerotic lesions using a new impedance catheter system. Basic Research in Cardiology 100:446-452
[17] Cho S and Thielecke H (2005) Design of electrode array for impedance measurement of lesions in arteries. Physiological Measurement 26:S19-S26
[18] Cho S and Thielecke H (2006) Influence of the electrode position on the characterisation of artery stenotic plaques by using impedance catheter. IEEE Transactions on Biomedical Engineering 53:2401-2404
[19] Streitner I I, Goldhofer M, Cho S et al (2007) Intravascular electric impedance spectroscopy of human atherosclerotic lesions using a new impedance catheter system. Atherosclerosis Supplements 8:139

Mathematical Modelling of Cervical Cancer Vaccination in the UK*
Yoon Hong Choi** and Mark Jit
Centre for Infections, Health Protection Agency, UK
[email protected]

Abstract. Human papillomaviruses (HPV) are responsible for causing cervical cancer and anogenital warts. The UK considered a national vaccine programme introducing one of two licensed vaccines, Gardasil™ and Cervarix™. The impact of vaccination is, however, difficult to predict due to uncertainty about the prevalence of HPV infection, the pattern of sexual partnerships, the progression of cervical neoplasias, the accuracy of screening, and the duration of infectiousness and immunity. Dynamic models of HPV transmission, based upon thousands of scenarios incorporating uncertainty in these processes, were developed to describe the spread of infection and the development of cervical neoplasia, cervical cancer (squamous cell and adenocarcinoma) and anogenital warts. Each scenario was then fitted to epidemiological data to estimate transmission probabilities, and the best-fitting scenarios were used to predict the impact of twelve different vaccination strategies. Our analysis provides relatively robust estimates of the impact of HPV vaccination, as multiple sources of uncertainty are explicitly included. The most influential remaining source of uncertainty is the duration of vaccine-induced protection.

1 Introduction

Human papillomavirus (HPV) infection is responsible for the development of cervical cancer in women as well as anogenital warts in both men and women. The most common forms of cervical cancer in the United Kingdom (UK) are squamous cell carcinomas and adenocarcinomas. Two HPV types (16 and 18) account for about 70% of squamous cell carcinomas [1] and 85% of adenocarcinomas [2], while another two types (6 and 11) cause over 90% of cases of anogenital warts [3]. Two prophylactic vaccines against HPV have been developed: a bivalent vaccine (Cervarix™) against types 16 and 18, and a quadrivalent vaccine (Gardasil™) that also includes types 6 and 11. In clinical trials, use of either vaccine in HPV-naive females resulted in at least a 90% reduction in persistent infection and associated disease during 30 months of follow-up [4, 5]. The quadrivalent vaccine has been shown to be highly effective at preventing anogenital warts [6]. The vaccines have the potential to reduce the substantial burden of HPV-related disease. However, they are priced at levels significantly higher than other vaccines in national vaccination schedules, so their epidemiological and economic impact needs to be carefully considered. Because of the complexity of HPV infection and pathogenesis, mathematical models are required to estimate the impact of vaccination, to

* This is a shorter version of the original paper (Choi et al. 2008. PLoS Medicine. Submitted).
** Corresponding author.




2 Method

We developed a set of compartmental Markov models to represent acquisition and heterosexual transmission of infection, with an embedded progression model to represent the subsequent development of HPV-related disease (different stages of pre-cancerous cervical neoplasias, squamous cell carcinomas, adenocarcinomas and anogenital warts). HPV types in the model are divided into five groups: type 16, type 18 and other oncogenic high-risk types for cervical cancers, plus type 6 and type 11 for anogenital warts. For oncogenic HPV types in females, there are type-specific model compartments for being susceptible to HPV infection, infected with HPV, immune to HPV infection, having cervical intraepithelial neoplasias (CINs) of different grades (CIN1, CIN2 or CIN3), having CIN3 carcinoma in situ (CIS) or undiagnosed squamous cell carcinoma, having diagnosed invasive squamous cell carcinoma, and having had a hysterectomy (see Fig. 1). Adenocarcinomas were modelled separately but the same model structure was adopted. Males can only occupy the susceptible, HPV-infected and immune states.

Fig. 1. Flow diagram for models of (a) high risk (oncogenic) HPV infection and disease in females, and (b) low risk (warts-related) HPV infection in females or any HPV infection in males.

Females move through the various HPV disease states at rates independent of their age and of the time already spent in the state. They may regress to less severe disease states or to the immune state, either as a result of natural regression or of cervical screening followed by treatment. They are also subject to an age-dependent background hysterectomy rate. Assumptions governing disease progression have been previously described in greater detail (Jit et al., submitted manuscript). The dynamics of infection by HPV 6 and 11 are modelled using three compartments in both females and males, representing being susceptible to infection, being infected and being immune to infection. A proportion of susceptibles who become newly infected were assumed to acquire symptomatic warts and present for treatment, thereby contributing towards anogenital warts incidence. Complete model equations for Figure 1 are available upon request. Vaccines were assumed to provide 100% efficacy against vaccine-type infection, but with vaccinated individuals losing vaccine protection at a constant rate. Three possible mean durations of protection (ten years, twenty years and lifetime) were modelled.
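As an illustration of this compartmental structure, the following is a minimal single-type sketch (our own, not the authors' code): a susceptible-infected-immune model with a vaccinated class and waning natural and vaccine-induced immunity. It omits the age, sex and sexual-activity stratification and all CIN/cancer progression states of the full model, and every parameter value is hypothetical.

    # Minimal sketch of the transmission core described above;
    # all rates are per year and purely illustrative.
    from scipy.integrate import solve_ivp

    beta  = 0.8       # transmission rate (hypothetical)
    gamma = 1.0       # recovery rate: mean duration of infection one year
    delta = 0.1       # waning of natural immunity (mean ten years)
    omega = 0.05      # waning of vaccine protection (mean twenty years)
    mu    = 1.0 / 50  # turnover of the sexually active population
    v     = 0.8       # fraction vaccinated on entry (80% coverage)

    def model(t, y):
        S, I, R, V = y  # susceptible, infected, immune, vaccinated
        N = S + I + R + V
        dS = mu * (1 - v) * N - beta * S * I / N - mu * S + delta * R + omega * V
        dI = beta * S * I / N - (gamma + mu) * I
        dR = gamma * I - (delta + mu) * R
        dV = mu * v * N - (omega + mu) * V
        return [dS, dI, dR, dV]

    sol = solve_ivp(model, (0.0, 100.0), [0.89, 0.01, 0.10, 0.0], rtol=1e-8)
    print("infection prevalence after 100 years:", sol.y[1, -1])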



A scenario was also considered where vaccination provided cross-protection with efficacy of 27% against oncogenic non-vaccine HPV types, as suggested by trials of both vaccines. Sexual transmission was modelled using a structure similar to that previously developed for HIV [8], with the model population stratified into three sexual behaviour groups (low risk, moderate risk and high risk). Women were assumed to be screened at age-dependent rates, with successful screening and treatment leading to women being moved to an HPV-free state. Progression and regression rates between different disease states were determined by fitting to HPV prevalence data. For HPV 6 and 11, the key parameters governing model predictions about annual anogenital warts diagnoses are the proportion of warts diagnoses linked to HPV 6 infection and the proportion of HPV 6 and 11 infections that lead to clinically diagnosed symptoms. The proportion of infections causing clinical symptoms was determined by matching the age-dependent pattern of annual anogenital warts cases to the seroprevalence of HPV 6 and 11 in a recent UK convenience sample, assuming a seroconversion rate of 60-80% [12, 13].

For each of the 18 oncogenic HPV (12 squamous and 6 adenocarcinoma) and 6 low-risk HPV scenarios, a total of 150 combinations of assumptions governing the transmission of HPV infection, natural immunity and vaccine-induced protection were considered. These comprised five possibilities for the duration of natural immunity (zero months, i.e. no natural immunity, three years, ten years, twenty years, and lifelong), five possibilities for the duration of infection (six months, nine months, twelve months, fifteen months and eighteen months), three possibilities for the assortativeness parameter governing mixing between risk groups (0, 0.5, 0.9), and two possibilities for the probability of HPV transmission per partnership (based on high risk group values, or based on low risk group values). Thus, 2,700 combinations of assumptions were investigated for each oncogenic type (16, 18 and other oncogenic types), and 900 parameter combinations were investigated for each warts type. Hence in total, 9,900 scenarios were fitted.

For each of the 9,900 combinations of assumptions, the equilibrium prevalence of the dynamical model was fitted to a data-derived HPV prevalence curve associated with that scenario (described in Jit et al., submitted manuscript), by altering the two parameters governing HPV transmission: the probability of transmission per sexual partnership for the low risk group, and the coefficient by which this probability is multiplied to get the corresponding probability for the medium and high risk groups. These parameters were estimated by minimising the sum of squared residuals weighted by the variance of the HPV prevalence curve over all age groups. Numerical fitting was conducted using the Brent method [14]. However, only cancer scenarios with a sum of squared residuals lying below a goodness-of-fit cut-off of 40 were used for subsequent analysis, representing 70% of all squamous cell carcinoma scenarios. This cut-off was chosen so that the remaining scenarios would match known data about cervical cancer cases. Fig. 2 shows the number of cervical cancers a year indicated by models with different values of the sum of squared residuals. For warts, a goodness-of-fit threshold of 4 for the sum of squared residuals was fixed, based on reports of new and recurrent cases of anogenital warts (KC60 codes C11A and C11B).
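The fitting step can be sketched as follows (a toy reconstruction under stated assumptions, not the authors' implementation): the model's equilibrium prevalence, here replaced by a hypothetical stand-in function, is compared with the data-derived prevalence curve through a variance-weighted sum of squared residuals, and the two transmission parameters are adjusted by alternating one-dimensional Brent-type minimisations.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Data-derived prevalence curve and its variance by age group
    # (placeholder numbers; the real curves are in Jit et al.).
    prev_data = np.array([0.02, 0.08, 0.12, 0.09, 0.05, 0.02])
    prev_var  = np.array([0.01, 0.02, 0.02, 0.02, 0.01, 0.01]) ** 2

    def equilibrium_prevalence(p_low, coeff):
        # Toy stand-in for running the dynamic model to equilibrium,
        # mixing a low risk (80%) and a higher risk (20%) group.
        exposure = np.array([0.2, 0.9, 1.3, 1.0, 0.5, 0.2])
        low  = 1.0 - np.exp(-5.0 * p_low * exposure)
        high = 1.0 - np.exp(-5.0 * p_low * coeff * exposure)
        return 0.8 * low + 0.2 * high

    def weighted_ssr(p_low, coeff):
        resid = equilibrium_prevalence(p_low, coeff) - prev_data
        return float(np.sum(resid ** 2 / prev_var))

    # Alternate one-dimensional minimisations over the per-partnership
    # transmission probability and the risk-group multiplier.
    p_low, coeff = 0.05, 2.0
    for _ in range(20):
        p_low = minimize_scalar(lambda p: weighted_ssr(p, coeff),
                                bounds=(0.0, 1.0), method='bounded').x
        coeff = minimize_scalar(lambda c: weighted_ssr(p_low, c),
                                bracket=(1.0, 5.0), method='brent').x
    print(p_low, coeff, weighted_ssr(p_low, coeff))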



Fig. 2. Relationship between (a) the estimated annual cancer incidence and the sum of squared residuals for each scenario of oncogenic HPV types, and (b) the estimated annual warts incidence and the sum of squared residuals for each scenario of non-oncogenic HPV types

There was a strong association between goodness of fit and duration of natural immunity for warts. Only 3% of scenarios corresponding to lifelong natural immunity were eliminated; conversely, 98% of scenarios with no natural immunity were eliminated.

In the base case scenario, routine quadrivalent vaccination was delivered to 12 year old girls with no catch-up campaign. Coverage of 80% for the full three doses was assumed, based on reported three-dose coverage from the trial of a school-based hepatitis B vaccination programme [17]. Alternative vaccination scenarios were considered: (i) vaccinating girls at the age of 13 or 14 years instead of 12 years, (ii) vaccinating boys at age 12 years in addition to girls at age 12 years, assuming the vaccine fully protects boys against infection by all four vaccine types, (iii) vaccinating girls with three-dose coverage of 70% or 90% instead of 80%, (iv) catch-up campaigns for girls up to age 14, 16, 18, 20 or 25 years old, and (v) vaccinating 12 year old girls with a vaccine that provides cross-protection against oncogenic non-vaccine types with efficacy of 27%, as suggested by clinical trials. The combined effect of vaccination on cervical cancer incidence was calculated by summing results for HPV 16, HPV 18 and other high-risk HPV scenarios with the same assumptions. Similarly, the combined effect on anogenital warts was calculated by summing results for HPV 6 and HPV 11 scenarios with identical assumptions.
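Scenario selection and combination then reduce to simple bookkeeping; the sketch below (with illustrative field names, not the authors' code) keeps only scenarios under the goodness-of-fit cut-off and sums type-specific predictions fitted under identical assumptions.

    # Keep scenarios below the goodness-of-fit cut-off (40 for cancer,
    # 4 for warts), then combine type-specific results that share the
    # same natural-history assumptions. Dictionary keys are illustrative.
    CANCER_CUTOFF, WARTS_CUTOFF = 40.0, 4.0

    def accepted(scenarios, cutoff):
        return [s for s in scenarios if s["ssr"] < cutoff]

    def combined_cancer_cases(hpv16, hpv18, other_hr, assumptions):
        pick = lambda group: next(s["cancer_cases"] for s in group
                                  if s["assumptions"] == assumptions)
        return pick(hpv16) + pick(hpv18) + pick(other_hr)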

3 Results

Fig. 3 shows the estimated impact of vaccinating 12 year old girls at 80% coverage on diagnosed HPV 6/11/16/18-related disease, for different assumptions about the duration of vaccine protection. Large reductions in the incidence of cervical dysplasia, cervical cancer and anogenital warts are expected provided vaccine-induced immunity lasts for ten years or more, although the reduction in cervical cancer takes much longer to become apparent. Increasing vaccine coverage reduces the post-vaccination incidence of disease. Indeed, even at 70% coverage in girls it is possible to eliminate vaccine-type HPV in some model scenarios when vaccine protection is assumed to be lifelong, particularly those with a short duration of natural immunity, because females who are not directly protected by vaccination are protected indirectly (by herd immunity).
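The possibility of elimination below 100% coverage is the standard herd-immunity effect; given the model's assumption of 100% vaccine efficacy, the usual critical-coverage condition reads as follows (the R0 value is purely illustrative, since the paper reports no single R0):

    % Critical vaccination coverage for elimination; \varepsilon is vaccine
    % efficacy (100% in the model) and the value of R_0 is illustrative only.
    p_c = \frac{1}{\varepsilon}\left(1 - \frac{1}{R_0}\right),
    \qquad \varepsilon = 1,\; R_0 = 3 \;\Rightarrow\; p_c \approx 67\% .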



Fig. 3. Percentage change in the annual number of diagnosed (a) cervical cancer cases and (b) genital warts cases following vaccination of 12 year old girls, for different assumptions about the duration of vaccine-induced immunity, assuming 80% vaccine coverage

Offering routine vaccination at a later age speeds the reduction in disease (data not shown), since vaccine is given closer to the age at which HPV is acquired. If vaccine-induced immunity is relatively short-lived (ten years), then offering vaccination later in childhood protects women for more of the highest-risk period of HPV acquisition (late teens and early twenties). Thus vaccination at 14 years of age provides slightly improved outcomes compared with 12 years of age if vaccine-induced immunity is not lifelong. Scenarios where vaccination provides cross-protection with 27% efficacy against oncogenic non-vaccine HPV types show on average about 5-10% extra reduction in cancer incidence following vaccination, with a broadly similar pattern across assumptions about the duration of vaccine protection.

Extending vaccination to boys provides additional benefit in terms of reduction of cervical cancer and anogenital warts compared to vaccinating girls alone (Fig. 4). The effect on anogenital warts cases is slightly greater, since males acquire warts but not cervical cancer. However, there is also an indirect effect on cervical cancer incidence, since vaccinating boys prevents males from infecting females with oncogenic HPV types. The effect on both disease endpoints is small, because vaccinating girls alone already reduces HPV prevalence to a very low level, especially if the duration of vaccine protection is long. Catch-up campaigns usually have a more dramatic short-term effect on cervical cancer and anogenital warts incidence than extending vaccination to boys (Fig. 4). However, the effect of a catch-up campaign is minimal beyond 20 years after the start of the campaign. There are decreasing marginal returns from extending the catch-up programme to older age groups, particularly if the average duration of vaccine protection is long. In particular, if vaccine protection is lifelong, the reduction in cancer incidence from a campaign up to the age of 25 years is not greatly different from that of the more limited campaigns.

This is the first model to comprehensively capture the effect of HPV vaccination on both cervical cancer and warts. All HPV types in the vaccines (6, 11, 16 and 18) are modelled separately, unlike some previous studies that only looked at a single HPV type in isolation [20-22], or grouped some of the vaccine types for the purposes of analysis [23]. Also, no previous (static or dynamic) model explicitly models adenocarcinomas. However, we have combined high-risk non-vaccine HPV types into a single group instead of modelling each type separately. This could cause some inaccuracy in estimates of the impact of HPV vaccination on non-vaccine types (if any), since actual vaccine protection against non-vaccine types is not uniformly 27% against every type.



Fig. 4. The estimated impact of extending vaccination to boys or including a catch-up campaign on annual cervical cancer and anogenital warts cases. Three-dose coverage is assumed to be 80%. Graphs show the additional percentage change in the number of diagnosed cancer or warts cases prevented compared to the base case programme of vaccinating 12 year old girls only. Results are shown for lifelong vaccine protection for (a) cancer and (b) warts.

Also, we have not modelled penile, vulval, vaginal, anal, head and throat cancers, as these are poorly described in the vaccine trials and have a less well-known natural history compared to cervical cancers. Models incorporating these cancers may become more important when exploring the effect of targeted HPV vaccination for specific risk groups such as homosexual and bisexual men. Heterosexual males are protected by vaccination of girls alone. However, vaccinating males as well as females produces additional benefits, both for the males themselves (by reducing their risk of warts) and, to a lesser extent, for females (by reducing the overall HPV prevalence). The direct effect of vaccinating males (on reducing their risk of warts) is particularly relevant for homosexual men, who are not included in this model and who are likely to benefit less than their heterosexual counterparts from vaccination of girls only. The marginal benefits derived from vaccinating boys depend on the extent to which HPV incidence is reduced by vaccination of girls only. In many scenarios (particularly those that assume short periods of natural immunity) vaccinating girls alone reduces the incidence of vaccine-preventable types to very low levels (elimination might even be achieved at 80% coverage). Under these circumstances vaccination of boys brings few additional benefits (although it does mean that the probability of elimination is greater). However, if natural immunity is longer-lasting, then the additional benefits from vaccinating boys are greater. Similarly, if the duration of vaccine protection is short, then vaccinating boys has a larger marginal benefit (figure not shown in this shortened version).

Catch-up campaigns have no effect on long-term incidence. However, in the short term they can bring about a more rapid reduction in disease, particularly for the acute conditions (anogenital warts and low-grade neoplasias) associated with HPV infection. As the effects of the campaign wear off, it is possible to get a resurgence of disease some time after vaccination (a post-honeymoon epidemic), before the system settles into its new equilibrium state. There are decreasing marginal returns associated with a catch-up programme as the upper age at which vaccination is offered is extended. This is because the probability of remaining susceptible to the vaccine-preventable types falls rapidly after the age of about 15 years.



We have developed and parameterised a family of transmission dynamic models of infection and disease with oncogenic and anogenital warts-associated HPV types. A unique feature of our approach is that instead of adopting a single model, we have generated thousands of combinations of assumptions, and then fitted them to prevalence data. By developing a large number of related models that make differing assumptions about the natural history of HPV infection and disease, we have assessed structural as well as parameter uncertainty and propagated this through our analyses. The results of these analyses provide a robust evidence base for economic analyses of the potential impact of HPV vaccination. Acknowledgments. We thank John Edmunds, Nigel Gay, Andrew Cox, Geoff Garnett, Kate Soldan and Liz Miller for their contributions to the model development and analysis.

References

[1] Munoz N, Bosch F X, de Sanjose S, et al. (2003) Epidemiologic classification of human papillomavirus types associated with cervical cancer. N Engl J Med 348(6):518-27
[2] Munoz N, Bosch F X, Castellsague X, et al. (2004) Against which human papillomavirus types shall we vaccinate and screen? The international perspective. Int J Cancer 111(2):278-85
[3] Krogh G, Lacey C J, Gross G, Barrasso R, Schneider A (2001) European guideline for the management of anogenital warts. Int J STD AIDS 12 Suppl 3:40-7
[4] FUTURE II Study Group (2007) Quadrivalent vaccine against human papillomavirus to prevent high-grade cervical lesions. N Engl J Med 356(19):1915-27
[5] Harper D M, Franco E L, Wheeler C, et al. (2004) Efficacy of a bivalent L1 virus-like particle vaccine in prevention of infection with human papillomavirus types 16 and 18 in young women: a randomised controlled trial. Lancet 364(9447):1757-65
[6] Garland S M, Hernandez-Avila M, Wheeler C M, et al. (2007) Quadrivalent vaccine against human papillomavirus to prevent anogenital diseases. N Engl J Med 356(19):1928-43
[7] Kahn J A, Burk R D (2007) Papillomavirus vaccines in perspective. Lancet 369(9580):2135-7
[8] Garnett G P, Anderson R M (1994) Balancing sexual partnerships in an age and activity stratified model of HIV transmission in heterosexual populations. IMA J Math Appl Med Biol 11(3):161-92
[9] Department of Health Statistical Bulletin (2006) Cervical Screening Programme, England: 2005-06. The Information Centre, 20-12-2006. http://www.ic.nhs.uk/pubs/csp0506
[10] Nanda K, McCrory D C, Myers E R, et al. (2000) Accuracy of the Papanicolaou test in screening for and follow-up of cervical cytologic abnormalities: a systematic review. Ann Intern Med 132(10):810-9
[11] Redburn J C, Murphy M F (2001) Hysterectomy prevalence and adjusted cervical and uterine cancer rates in England and Wales. BJOG 108(4):388-95
[12] Dillner J (1999) The serological response to papillomaviruses. Semin Cancer Biol 9(6):423-30
[13] Carter J J, Koutsky L A, Hughes J P, et al. (2002) Comparison of human papillomavirus types 16, 18, and 6 capsid antibody responses following incident infection. J Infect Dis 181(6):1911-9



[14] Press W H, Teukolsky S A, Vetterling W T, Flannery B P (2002) Numerical Recipes in C++: The Art of Scientific Computing. 2nd ed. Cambridge: Cambridge University Press
[15] Office for National Statistics (2006) Cancer Statistics 2004: Registrations. Series MB1 No 35. http://www.statistics.gov.uk/statbase/Product.asp?vlnk=8843&More=N
[16] Health Protection Agency (2006) Trends in anogenital warts and anogenital herpes simplex virus infection in the United Kingdom: 1996 to 2005. CDR Weekly 16(48)
[17] Wallace L A, Young D, Brown A, et al. (2005) Costs of running a universal adolescent hepatitis B vaccination programme. Vaccine 23(48-49):5624-31
[18] Garland S M, Hernandez-Avila M, Wheeler C M, et al. (2007) Quadrivalent vaccine against human papillomavirus to prevent anogenital diseases. N Engl J Med 356(19):1928-43
[19] Olsson S E, Villa L L, Costa R L, et al. (2007) Induction of immune memory following administration of a prophylactic quadrivalent human papillomavirus (HPV) types 6/11/16/18 L1 virus-like particle (VLP) vaccine. Vaccine 25(26):4931-9
[20] Hughes J P, Garnett G P, Koutsky L (2002) The theoretical population-level impact of a prophylactic human papilloma virus vaccine. Epidemiology 13(6):631-9
[21] French K M, Barnabas R V, Lehtinen M, et al. (2007) Strategies for the introduction of human papillomavirus vaccination: modelling the optimum age- and sex-specific pattern of vaccination in Finland. Br J Cancer 96(3):514-8
[22] Barnabas R V, Laukkanen P, Koskela P, Kontula O, Lehtinen M, Garnett G P (2006) Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses. PLoS Med 3(5):e138
[23] Elbasha E H, Dasbach E J, Insinga R P (2007) Model for assessing human papillomavirus vaccination strategies. Emerg Infect Dis 13(1):28-41

Particle Physics Experiment on the International Space Station

Chanhoon Chung

Physikalisches Institut B, RWTH-Aachen University, Aachen, Germany
[email protected]

Abstract. We know that the universe consists of 22% dark matter. The dark matter particle has to be stable, non-relativistic and only weakly interacting. However, we do not know what the dark matter is made of or how it is distributed within our galaxy. In general, cosmic antiparticles are expected as secondary products of interactions of the primary cosmic rays (CRs) with the interstellar medium during propagation. While the measurements of CR positrons, anti-protons and diffuse gamma rays have become more precise, the results still do not match pure secondary origins. A comparison between the background of these CRs and experimental data has been performed using CR propagation models. A phenomenological study based on supersymmetry (SUSY) has been carried out and gives a better interpretation of CR fluxes when neutralino annihilations in the galactic halo and centre are included. The AMS-02 will be the major particle physics experiment on the International Space Station (ISS) and will make a profound impact on our knowledge of high-energy CRs with unprecedented accuracy. It will extend our knowledge of the CR origin, acceleration and propagation mechanisms. In particular, the measurement of the positron flux may be the most promising for the detection of neutralino dark matter, since the predicted flux is less sensitive to the astrophysical parameters responsible for the propagation and to the dark matter halo profile. The full AMS-02 detector has been assembled at CERN (European Organization for Nuclear Research), located near Geneva in Switzerland. After space qualification tests it will be delivered to NASA-KSC to prepare for the launch with a space shuttle. The launch and installation of the AMS-02 detector on the ISS is scheduled for 2010.

1 Introduction

The universe is composed of 22% non-baryonic cold dark matter (CDM), and its nature is one of the outstanding questions in modern physics. The existence of dark matter from the Big Bang to the present day is confirmed by various astrophysical observations, including the recent Wilkinson Microwave Anisotropy Probe (WMAP) measurement of the cosmic microwave background [1]. Dark matter is considered to be a stable, neutral, weakly and gravitationally interacting massive particle (WIMP). In spite of the fact that the Standard Model (SM) of particle physics is well established by various experiments with extreme accuracy, it does not provide any viable dark matter candidate. SUSY theories predict the existence of relic particles from the Big Bang. The lightest SUSY particle (LSP) in R-parity-conserving models is a good candidate for non-baryonic cold dark matter [2]. Its signals have been actively explored both in collider and in astrophysics experiments. Direct detection relies on observing the elastic scattering of neutralinos in a detector. On the other hand, indirect detection depends on observing the annihilation products in cosmic rays, such as neutrinos, positrons, antiprotons or gamma rays.



Present experiments are just reaching the sensitivity required to discover or rule out some of the candidates, and major improvements are planned over the coming years. The Alpha Magnetic Spectrometer (AMS) is a high-energy particle physics experiment to be operated on the ISS from 2010 for at least three years [3]. One of its main physics motivations is the indirect search for dark matter in cosmic rays. SUSY dark matter particles could annihilate with each other and produce positrons, antiprotons and gamma rays as additional primary CR sources. The observation of a deviation from a simple power-law spectrum would be a clear signal for dark matter annihilation.

2 SUSY Dark Matter Search from CRs

The recent data from the WMAP experiment on the CMB anisotropies have confirmed the existence of CDM. However, the nature of the CDM still remains a mystery. Among the CDM candidates, WIMPs are promising since their thermal relic abundances fall naturally within the cosmologically favoured range if they weigh less than 1 TeV [4]. In most SUSY models, the lightest neutralino is a well-studied WIMP candidate. It is a superposition of the super-partners of the gauge and Higgs fields. Great effort has been devoted to detecting neutralino dark matter directly or indirectly. Indirect detection relies on pair annihilation of neutralinos into SM particles with significant cross sections through the decay of short-lived heavy leptons, quarks and gauge bosons. Various experiments are designed to observe anomalous CRs, such as high-energy neutrinos from the Sun or Earth, gamma rays from the galactic centre, and positrons and anti-protons from the galactic halo, produced by neutralino annihilations. Protons and helium, electrons, as well as carbon, oxygen, iron and other nuclei synthesized in stars, are primaries accelerated by astrophysical sources. Nuclei such as lithium, beryllium and boron are secondaries, which are not abundant end-products of stellar nucleosynthesis. Most anti-protons and positrons are also in large part secondaries, produced in interactions of the primaries with the interstellar medium (ISM) during propagation. Accurate measurements of cosmic anti-protons (or anti-deuterons) in the low-energy region, and of positrons and gamma rays at high energies, with efficient background rejection, are necessary to search for primary contributions from neutralino annihilations. The secondary anti-proton flux has a characteristic spectral peak around 2 GeV due to the production threshold and decreases sharply toward lower energies, as shown in Fig. 1. This provides an opportunity to test exotic contributions, since the neutralino-induced components do not drop as fast at low energies [5]. The recent BESS [6] balloon-borne experiment has measured the anti-proton flux well below a few GeV, where the neutralino-induced signal is expected to be detectable. But the experimental errors are still too large to reach any final conclusion. The major uncertainties affecting the neutralino-induced antiproton flux come from nuclear physics cross sections, propagation parameters and the thickness of the diffuse halo. However, the AMS-02 experiment can take full advantage of precise measurements of cosmic nuclei such as B/C and 10Be/9Be to constrain the astrophysical parameters, and will disentangle the signal from the background with much higher statistics and reduced systematic uncertainties.



A subtle excess in the cosmic positron fraction above 5 GeV and up to 50 GeV, observed by HEAT [7], has stimulated numerous calculations and interpretations involving dark matter annihilations in the galactic halo. However, it requires a significant flux enhancement factor, which could be explained by clumpy dark matter, since the flux is proportional to the square of the dark matter density. Confirmation of this excess requires further sensitive measurements with good proton background rejection power, higher statistics and coverage of the wide energy range between 1 GeV and several hundred GeV. The shape also depends on the annihilation cross section and the degree of local inhomogeneity in dark matter halo profiles. High-energy diffuse gamma-ray emission is produced primarily by CR protons interacting with the ISM via nucleon-nucleon processes, by high-energy electron bremsstrahlung through interactions with the interstellar gas, and by inverse Compton scattering with low-energy photons.


Fig. 1. Compilation of measured CR fluxes with background calculations using the GALPROP program. The upper-left panel (1) shows the χ2 fit of the GALPROP models to recent experimental data; the primary proton and electron injection indices are considered the main uncertainty. The upper-right panel (2) shows the anti-proton data with model expectations for different primary injection spectra; the solid line is modulated and the yellow band indicates the uncertainty of the primary proton and electron injection rate. The lower-left panel (3) shows a compilation of positron fraction measurements with the model prediction. The lower-right panel (4) shows the corresponding comparison for diffuse gamma rays from the galactic centre region.



However, galactic or extragalactic diffuse gamma rays could also be generated from the decay of neutral pions (π0) produced in jets from neutralino annihilations. The public GALPROP [8] code is used to investigate the galactic CR propagation model. It is based on the known galactic matter density and on the CR rate and spectrum measured at the Earth. The proton and helium injection spectra and the propagation parameters are chosen to reproduce the most recent measurements of primary and secondary nuclei, antiprotons, electrons and positrons, as well as gamma rays and synchrotron radiation. Fig. 1 shows the prediction of the conventional GALPROP model. The observed anti-protons, positron fraction and diffuse gamma rays show an excess over predictions below 2 GeV for anti-protons, above 8 GeV for positrons, and in the few-GeV region for diffuse gamma-ray emission.

Fig. 2. SUSY interpretation of the CR excess based on the mSUGRA model. A numerical χ2 minimization is displayed in the (m0, m1/2) plane. In order to reflect the gμ−2 constraint, tanβ = 40 is chosen. The other common parameters are set to A0 = 0, mt = 172.5 GeV and sign(μ) = +1.



Several authors have presented each spectrum within a specific exotic model [9]. However, a successful explanation is still under debate. In this paper, an optimized model is investigated to explain the anti-proton, positron and diffuse gamma-ray spectra simultaneously, based on the minimal supergravity (mSUGRA) model [10]. The number of free parameters is reduced to five in this scenario (four continuous and one discrete): the common gaugino mass (m1/2), the scalar mass (m0), the trilinear scalar coupling (A0), the ratio of the neutral Higgs vacuum expectation values (tanβ) and the sign of the Higgsino mass parameter (μ). As well as the SUSY-breaking input parameters, the bottom mass (mb), the strong coupling constant (αs) and the top mass (mt) can also have strong effects on mSUGRA predictions. In scanning the mSUGRA parameter space, the relic dark matter density is required to be consistent with recent WMAP data [1]. Other strong constraints on mSUGRA models include the lower bounds on Higgs boson and sparticle masses [11], the branching ratio of the b → sγ decay [12], the upper bound on the branching ratio of Bs → μ+μ− [13], and the measurements of the muon anomalous magnetic moment (gμ−2) [14]. As shown in Fig. 2, a simultaneous χ2 minimization has been performed using the sum of the anti-proton, positron fraction and gamma-ray contributions. The overall χ2 fit includes only three free parameters, which enhance the signals of positrons, antiprotons and gamma rays independently. The most preferred point resides in the focus point region, with a neutralino mass of 90 GeV and a χ2 of 26.8 for 33 degrees of freedom, corresponding to a probability of 76.8% and indicating good agreement. It also yields the positron fraction, anti-proton and gamma-ray spectra at a benchmark point in the focus point region. The positron flux shows a sharp bump around 40 GeV, corresponding to half the neutralino LSP mass, from annihilation into W gauge bosons (66%) and b-quark pairs (20%). The corresponding antiproton flux reproduces the mild excess of data in the low-energy range. The GeV excess of the diffuse gamma-ray spectrum is obtained from the same neutralino pair annihilation [16].
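The quoted fit probability can be reproduced directly from the χ2 distribution; a one-line check:

    # p-value of chi^2 = 26.8 with 33 degrees of freedom
    from scipy.stats import chi2
    print(f"P = {chi2.sf(26.8, 33):.3f}")  # ~0.768, i.e. the quoted 76.8%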

3 AMS-02 Detector

The particle identification of CRs relies on precise measurements of rigidity, velocity, energy and electric charge. The AMS detector shown in Fig. 3 uses a large superconducting magnet at its core and consists of a large-area silicon microstrip tracker, a transition radiation detector (TRD), a time-of-flight (ToF) system with anti-coincidence counters (ACC), a Ring Imaging Cherenkov counter (RICH) and an electromagnetic calorimeter (ECAL). The detector has overall dimensions of 3 × 3 × 3 m3 and weighs about 7 tons. The TRD, ToF and Tracker have an acceptance of 0.5 m2⋅sr, and the combination of Tracker, ToF and RICH provides cosmic nuclei separation up to Z = 26 (Fe) by dE/dx measurements [3]. The velocity is measured by the ToF and the RICH independently. The hadron-to-positron rejection with TRD, Tracker and ECAL is better than 105 up to 300 GeV. The key feature of the detector is the superconducting magnet, whose purpose is to extend the measurable energy range of particles and nuclei to the multi-TeV region with a high bending power BL2 of 0.862 T⋅m2.



Fig. 3. Layout of the AMS-02 detector from http://ams.cern.ch

Transition radiation is observed as X-rays when a charged particle traverses the boundary between different dielectric materials at Lorentz factors (γ = E/m) larger than 1000. It is therefore useful for separating positron (electron) events from the proton (anti-proton) background [15]. In the AMS-02 experiment, cosmic positrons (electrons) can be identified by using the TRD as a threshold device in the momentum range 1 GeV/c ≤ p ≤ 300 GeV/c, which considerably improves the search for a primary positron contribution from neutralino annihilations. The Tracker is composed of 8 layers of double-sided silicon micro-strip sensors inside the superconducting magnet. It measures the trajectory of incoming charged particles and determines the charge sign, with a magnetic rigidity resolution of 20% for 0.5 TV protons. The finer p-side strips are used to measure the bending coordinate and give a spatial resolution better than 10 μm. The ToF system acts as the primary fast trigger and consists of four layers of scintillator paddles, two between the TRD and the Tracker and two below the Tracker. The RICH measures the velocity of singly charged particles with a relative uncertainty of 0.1% and contributes to the charge separation of nuclei together with the Tracker and ToF. The ECAL measures the deposited energy and direction of electrons, positrons and gamma rays with an angular resolution of around 1°. The fine-grained sampling electromagnetic calorimeter consists of 9 superlayers along its depth. It is able to image shower developments in 3D and allows discrimination between hadronic and electromagnetic cascades.
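A back-of-envelope estimate (ours, not from the paper) shows that these tracker numbers are mutually consistent: for a track of momentum p (in GeV/c) in a field with bending power BL2, the sagitta is s = 0.3·BL2/(8p), and the fractional rigidity resolution is roughly the single-point resolution divided by the sagitta.

    # Consistency check of the quoted ~20% rigidity resolution at 0.5 TV.
    BL2     = 0.862    # bending power in T*m^2
    p       = 500.0    # proton momentum in GeV/c (rigidity 0.5 TV for Z = 1)
    sigma_x = 10e-6    # single-point spatial resolution in m

    sagitta = 0.3 * BL2 / (8.0 * p)   # ~6.5e-5 m, i.e. about 65 micrometres
    print(f"sagitta = {sagitta*1e6:.0f} um, dp/p ~ {sigma_x/sagitta:.0%}")
    # gives ~15%, the same order as the quoted 20%; the exact value depends
    # on the number of measurement points and the track fit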

4 Summary

The AMS is a particle physics detector designed to measure the CR spectra on the ISS during a three-year mission starting in 2010. The main physics motivations are the precise measurement of CRs for the indirect dark matter search and the direct detection of heavy anti-matter in space.



Fig. 4. AMS-02 expectation for the positron fraction after three years of operation on the ISS. The error bars account for positron identification and for proton, antiproton and electron background rejection, as a function of positron kinetic energy [16].

The AMS-02 will measure the cosmic positron spectrum with unprecedented accuracy up to 300 GeV and considerably improve the search for a primary positron contribution from neutralino annihilations, as shown in Fig. 4. The search for high-energy gamma-ray emission from the galactic centre is also planned for AMS-02, using a combination of tracker and calorimeter with good angular and energy resolution. The flux of anti-deuterons may be strongly suppressed compared with anti-protons, but its signal could be observable by AMS-02 below energies of 3 GeV/nucleon, where secondary anti-deuterons are absent. This indirect dark matter search will complement accelerator and direct detection experiments. We are now on the threshold of a new and exciting era of unexpected discoveries at the frontiers of particle physics on the ground and in space.

References

[1] WMAP Collaboration, D.N. Spergel et al., Astrophys. J. Suppl. Ser. 148, 175 (2003); astro-ph/0603449 (2006)
[2] H.E. Haber, G.L. Kane, Phys.Rep.117, 75 (1985); S.P. Martin, Phys.Rev.D62, 067505 (2000); H.P. Nilles, Phys.Rep.110, 1 (1984)



[3] AMS-02 Collaboration, 'AMS on ISS, Construction of a particle physics detector on the International Space Station', submitted to Nucl.Instrum.Meth.A (2005); C.H. Chung, Proceedings of Beyond Einstein: Physics for the 21st Century (EPS13), Bern, Switzerland (2005)
[4] G. Jungman, M. Kamionkowski, K. Griest, Phys.Rep.267, 195 (1996); L. Bergstrom, Rep.Prog.Phys.63, 793 (2000); G. Bertone, D. Hooper, J. Silk, Phys.Rep.405, 279 (2005)
[5] L. Bergstrom et al., Phys.Rev.D59, 043506 (1999); F. Donato et al., Phys.Rev.D69, 063501 (2004)
[6] BESS Collaboration, H. Matsunaga et al., Phys.Rev.Lett.81, 4052 (1998); BESS Collaboration, S. Orito et al., Phys.Rev.Lett.84, 1078 (2000)
[7] J.J. Beatty et al., Phys.Rev.Lett.93, 241102 (2004)
[8] A.W. Strong, I.V. Moskalenko, ApJ 509, 212 (1998); ApJ 664, L91 (2004)
[9] W. de Boer, C. Sander, V. Zhukov, A.V. Gladyshev, D.I. Kazakov, astro-ph/0508617
[10] G.L. Kane et al., Phys.Rev.D49, 6173 (1994) (hep-ph/9312272)
[11] LEP Higgs working group, hep-ex/0107030, hep-ex/0107031, LHWG-Note 2005-01; the latest results are available via http://lephiggs.web.cern.ch/LEPHIGGS/papers/; LEPSUSYWG, ALEPH, DELPHI, L3, OPAL Collaborations, the latest results are available via http://www.cern.ch/lepsusy/; accessed 3 July 2008
[12] H. Asatrian et al., hep-ph/0512097
[13] G. Buchalla, A.J. Buras, Nucl.Phys.B400, 225 (1993); Nucl.Phys.B412, 106 (1994); Nucl.Phys.B548, 309 (1999); M. Misiak, J. Urban, Phys.Lett.B451, 161 (1999); CDF and D0 Collaborations, R. Bernhard et al., hep-ex/0508058; A. Dedes, A. Pilaftsis, Phys.Rev.D67, 015012 (2003); S. Baek, Y.G. Kim, P. Ko, hep-ph/0406033, hep-ph/0506115
[14] The Muon g-2 Collaboration, G. Bennett et al., Phys.Rev.D62, 091101 (2000); Phys.Rev.Lett.86, 2227 (2001); Phys.Rev.Lett.89, 101804 (2002); Phys.Rev.Lett.92, 161802 (2004) (hep-ex/0401008); hep-ex/0602035
[15] C.H. Chung et al., Nucl.Phys.Proc.Suppl.113:154-158 (2002); Nucl.Instrum.Meth.A522, 69-72 (2004); IEEE Trans.Nucl.Sci.51:1365-1372 (2004)
[16] C.H. Chung et al., Proceedings of the 15th International Conference on Supersymmetry and the Unification of Fundamental Interactions (SUSY07), Karlsruhe, Germany (2007)

Effects of Methylation Inhibition on Cell Proliferation and Metastasis of Human Breast Cancer Cells

Seok Heo1 and Sungyoul Hong2

1 Department of Pediatrics, Medical University Vienna, Vienna, Austria
2 Department of Genetic Engineering, Faculty of Life Science and Technology, Sungkyunkwan University, Suwon, Republic of Korea
[email protected]

Abstract. Breast cancer is the second most fatal cancer in women. Adenosine has been shown to induce apoptosis through various mechanisms, including adenosine receptor activation, adenosine monophosphate (AMP) conversion, AMP-activated protein kinase activation, or conversion to S-adenosylhomocysteine (AdoHcy), an inhibitor of S-adenosylmethionine-dependent methyltransferases. Since the pathways involved in the anticancer activity of adenosine analogues are still not clearly understood, we examined the relationship between methyltransferase inhibition and the anticancer, antimetastatic effects of adenosine dialdehyde (AdOx), a known AdoHcy hydrolase inhibitor whose action results in methylation inhibition, using non-invasive and invasive human breast cancer cells (HBCs; MCF-7 and MDA-MB 231, respectively). Morphological changes and condensed chromatin were observed in HBCs treated with AdOx. Cytotoxicity was increased and DNA synthesis and cell counts were decreased by AdOx in HBCs, but the cytotoxicity was higher in MCF-7 than in MDA-MB 231. In MDA-MB 231, AdOx lowered the expression of G1/S regulators and the tumor suppressor p21WAF1/Cip1. In MCF-7, expression of apoptotic molecules and of the tumor suppressor p21WAF1/Cip1 was induced by AdOx. Colony dispersion and cell migration were inhibited by AdOx, and the activities of matrix metalloproteinase-2/-9, key enzymes for cancer invasion and migration, were decreased by AdOx. However, the mRNA levels of MMP-2 and MMP-9 did not change in accordance with the changes in enzymatic activity. Mammary-specific serine protease inhibitor was increased by AdOx in both cell lines. These results suggest that methyltransferase inhibition by AdOx may decrease cell viability and influence cell cycle distribution and migratory potential, providing evidence for methylation inhibition as a potential target for anticancer and antimetastatic effects in HBCs.

1 Introduction

1.1 Cancer Metastasis and Breast Cancer

A primary tumor is a mass of cells, growing rapidly compared to normal cells, present at the site of initial transformation. Cancer would be of little clinical importance if the cells at the primary site remained there. Abnormal growth and proliferation of cancer cells put pressure on neighboring tissue, which is ultimately noxious to the host. A tumor mass with a well-defined character can, however, be removed by surgery in a straightforward and permanent manner. But there are problems that make it hard to recognize and excise the primary tumor, because tumor cells do not always remain at the primary site, but move away from the primary to secondary sites by one of two processes.



There are six steps by which primary tumor cells metastasize to secondary sites: aggressive proliferation of growth-transformed cells, breakdown of the basal lamina, intravasation and circulation through the bloodstream or lymphatic system, attachment to the inner wall of a blood vessel, extravasation, and abnormal proliferation at the metastasized site. Two steps are crucial for cancer cells to move to other sites: (1) invasion, or the movement of cells into adjacent tissue composed of other cell types; and (2) metastasis, or the movement of cells to distant secondary sites, usually through the blood or lymphatic vessel system, or via body cavities. Substantial invasion usually occurs before any metastasis starts [1]. In part, invasion is simply the expansion of tumor cells into surrounding tissues as a consequence of uninterrupted cell proliferation. However, active movement of rapidly proliferating cells may also occur. Cancer cells tend to lose their adhesiveness to each other, to neighboring tissues and to the extracellular matrix (ECM); they break away from the mass of the tumor, separate from each other and finally start to move away from their original sites. Such movement does not occur in normal tissue. Normal cells move both in culture and in the course of embryological development, but when normal mature cells in culture come into contact with each other, they usually stop not only growing but also moving. Cancer cells, at least in culture, are not controlled by cell-cell contact; in the body, too, they continue to grow and move into neighboring tissues. To do so, most cancer cells release proteases that help digest extracellular components, thereby facilitating invasion [1]. Metastasis, the spreading and established growth of tumor cells at a site away from the original tumor, is one of the characteristics observed in the final stages of malignant cancer. Metastasis is a highly complicated process. Tumor cells are the prime movers, but metastasis requires the participation of several types of normal cells at the primary tumor site and at the metastatic site. Interaction of tumor cells with host extracellular matrices and host basement membranes takes place at several points during the process. Thus an important aspect of metastasis is the ability of the tumor cells to degrade the ECM and to bind to and degrade basement membranes. Various kinds of proteolytic enzymes are involved in ECM degradation, including: (1) matrix metalloproteinases (MMPs), such as transin, stromelysins and type IV collagenase; (2) plasminogen activator, which converts plasminogen into plasmin; (3) cathepsin B; (4) elastase; and (5) glycosidases. These enzymes are secreted by tumor cells or by normal cells, and they can be induced by extracellular stimulants. In order for cancer cells to move to a distal region, they must first migrate away from the primary tumor mass as part of the invasive process. Numerous migratory factors have been identified that appear to be associated with cancer cell migration [2]. Previous studies showed that cancer cells appear to secrete migratory factors in culture, and that they can also stimulate non-migratory cells in culture. Migration of cancer cells throughout the body is not by itself sufficient to produce metastasized tumors, because the environmental conditions at a secondary site differ from those at the original site.
The blood and lymphatic vessel systems, the major transport pathways for cancer cells, are harsh environments in which to survive, so most cancer cells either die or reach the lung, where they are either broken up or remain quiescent. It is thought that blood serum contains particular substances that are toxic to cancer cells.



The interactions of cells with the ECM are critical for the normal development and function of organisms. Modulation of cell-matrix interactions occurs through the action of specific proteolytic enzymes responsible for degrading a variety of ECM proteins. By regulating the integrity and composition of the ECM, these enzyme systems play a crucial role in the control of signals, and they regulate a variety of cellular phenomena such as cell proliferation, differentiation and cell death. The loss and alteration of the ECM have to be highly regulated, because uncontrolled proteolysis contributes to atypical development and to the generation of many pathological conditions characterized by excessive degradation of the ECM. Matrix metalloproteinases (MMPs) are a well-known group of enzymes that regulate cell-matrix composition. The MMPs are metal-ion-dependent (especially zinc-dependent) endopeptidases known for their ability to cleave one or several ECM components, as well as non-matrix proteins. They form a large family of proteases that share general structural and functional elements but are the products of different genes. All members contain a pro-peptide and a catalytic domain. The catalytic domain carries the catalytic machinery, including a conserved methionine and a metal-ion-binding domain. Binding of metal ions such as zinc and calcium is necessary for maintaining the three-dimensional structure, stability and enzymatic activity of MMPs. Substrate specificity differs among these enzymes. Most cells synthesize and immediately secrete MMPs into the ECM [3]. However, inflammatory cells such as neutrophils store these proteases. The tissue distribution of these proteases is divergent. Some MMPs are synthesized constitutively (e.g. the 72 kDa gelatinase), while others are synthesized mainly on stimulation (e.g. collagenase). Abundant evidence exists on the role of MMPs in both normal and pathological conditions, including embryogenesis, wound healing, arthritis, cardiovascular disease, inflammation and cancer. The expression patterns of MMPs have interesting implications for the use of metalloproteinase inhibitors as therapeutic agents. Inhibition of specific MMPs in disease states, and the regulation of individual MMP genes by regulating chromatin structure, can be useful for therapeutic purposes [4].

Breast cancer is the most common malignant tumor in women. It starts as a local disease. In most deaths of breast cancer patients, it is not the primary but the secondary tumors that are the main cause of death. Recent advances in diagnosis have improved women's survival rates: the survival rate of breast cancer patients younger than 50 years of age has increased by 10%; in older women the increase is 3% (Early Breast Cancer Trialists' Collaborative Group, 2005). Breast cancer is a clinically heterogeneous disease. Approximately 10-15% of breast cancer patients have an aggressive phenotype and develop distant metastases within 3 years after the initial recognition of disease, although metastases may also appear up to 10 years later. Therefore patients with breast cancer are at risk of experiencing metastasis for their entire lifetime. The heterogeneous character of breast cancer makes it difficult to assess risk factors as well as to characterize cures for this disease. Improving our insight into the molecular mechanisms of breast cancer might also improve clinical management of the disease.
1.2 Regulation of the Cell Cycle in Cancer Progression

Oncogenic transformation of normal cells results in cell cycle abnormalities and apoptosis dysregulation. Upon activation of a mitogenic signaling cascade, cells commit to entering the cell cycle. The S phase, in which DNA is synthesized, is followed by the M phase, in which the cell separates into two daughter cells.



Between the S phase and M phase there is a G2 phase, in which mismatched nucleotide base pairs synthesized during the S phase are repaired. In contrast, the G1 phase, located between the M and S phases, represents the period of cell growth, including the synthesis of subcellular organelles and proteins. Cells may stop their cycle in G1 before entering the S phase and enter a state of quiescence called the G0 phase. To resume growth and division, cells have to re-enter the cell cycle in the S phase. In order for cells to continue cycling into the following phase, the prior phase has to be completed; otherwise cell cycle checkpoint mechanisms are activated. Cell cycle checkpoints are precisely regulated by cyclin-dependent kinase complexes (cdks), which activate regulators via phosphorylation of serine or threonine residues. These complexes are composed of cdks and cofactors, including cyclins and endogenous cdk inhibitors (CKIs) such as p21cip/waf1. One crucial target of cdks is the retinoblastoma protein (Rb), a known tumor suppressor protein. Therefore, manipulating the cdk complexes can be a valuable strategy in cancer therapeutics. The evidence that most cancer cells are aneuploid, reflecting abnormal sister chromatid separation, has drawn scientists' interest to the mitotic checkpoints. Depletion of one of several mitotic checkpoint components by small molecules, intracellular antibodies, short-interfering RNA or dominant-negative alleles induces cell death in vitro. Another target relevant not only to cell cycle regulation but also to apoptosis is the tumor suppressor p53, which is inactivated in human cancer cells [5]. As the majority of tumor cells have lost the G1 checkpoint as a result of p53 dysfunction, but not the G2 checkpoint, upon DNA damage they arrest in G2. Therefore, therapeutic approaches combining a DNA-damaging agent with small molecules that selectively disrupt the G2 checkpoint represent a promising strategy, since mitosis is accelerated while DNA lesions remain unrepaired.

1.3 Regulation of Apoptosis

Apoptosis is a morphological event characterized by cell shrinkage, plasma membrane blebbing, chromatin condensation and nuclear fragmentation [6]. Cysteine aspartyl-specific proteases (caspases) are responsible for the apoptotic morphology, cleaving various kinds of substrates involved in cytoskeleton, chromatin structure and nuclear envelope maintenance. Apoptosis proceeds by two main pathways: the intrinsic pathway, characterized by depolarization of mitochondria leading to caspase 9 activation; and the extrinsic pathway, characterized by the activation of a death receptor by its corresponding ligand and the sequential activation of caspase 8. Caspases 9 and 8, known as initiator caspases, promote the activation of effector caspases, such as caspase 3, leading to apoptosis [6]. In addition to p53, other proteins relevant to tumorigenesis, such as bcl-2, AKT and ras, are also important in the inhibition of apoptosis. In the case of bcl-2, aberrant upregulation is observed in various kinds of human cancers. Also, ras, one of the well-known oncogenic proteins, promotes abnormal proliferation and a decrease in apoptosis. Although cell cycle regulation and apoptosis have distinct physiological characteristics, there is abundant circumstantial evidence that the cell cycle and apoptosis are closely related.
In vivo, apoptosis is detected primarily in proliferating tissue and is particularly evident after rapid cell growth. Therefore, apoptosis may serve a supplementary role: balancing the cell number between increments due to proliferation and reduction due to programmed cell death.



There are many kinds of positive and negative signals that induce or inhibit apoptosis, but ultimately these signals must impinge on the cell cycle if proliferation is to be halted and cells are to die. Several lines of evidence suggest that apoptosis induction is cell cycle-specific [7]. Entry into apoptosis is not possible throughout the whole G1 phase, but seems to occur predominantly at entry into the S phase. According to other experiments using apoptosis-inducing agents, it appears that cells have to progress to the late G1 phase for apoptosis to occur. In other words, arrest prior to this stage delays or inhibits apoptosis, while arrest after this stage promotes apoptosis. The key molecule in this regulation is p53, which can be considered the main switch and the molecular meeting point of the cell cycle and apoptosis. One of the best-established mechanisms of apoptosis induction by p53 is the induction of bax. Another protein that influences apoptosis is Bcl-2, and these two proteins are critical to the regulation of apoptosis: Bcl-2 enhances cell survival while Bax promotes cell death, and the ratio of these two proteins determines the cell's fate. Under certain conditions, cyclins and cdks also seem to be essential for apoptosis [8]. Activation of the cyclin-cdk complexes involved in the G1-S and G2-M checkpoints has been observed in a number of differentiated cells during apoptosis. Caspases, in turn, can serve as a meeting point of apoptosis and cell cycle regulation because of their catalytic activity against cdk family members.

1.4 S-Adenosylmethionine and Methylation

S-Adenosylmethionine (AdoMet)-dependent methylation can occur on a large number of substrates, including proteins, RNA, DNA and lipids. The synthesis of AdoMet is catalyzed by AdoMet synthetase, which transfers the adenosyl group of ATP to methionine. AdoMet exists as two stereoisomers: (S,S)-AdoMet is the biologically active form, while (R,S)-AdoMet is a potent inhibitor of methyltransferases [9]. AdoMet is one of the most widely used and most versatile enzyme substrates. The sulfonium group of AdoMet allows it to be employed both as a methyl donor (it is the major methyl donor in all living organisms) and as a monoalkyl donor in the synthesis of polyamines. As a methyl donor, AdoMet is used in many reactions to transfer the methyl group to nitrogen or oxygen atoms of many different substrates via substrate-specific methyltransferases. S-Adenosylhomocysteine (AdoHcy) is generated after the transfer of methyl groups from AdoMet to the various substrates, and is then hydrolyzed to adenosine (Ado) and homocysteine (Hcy) by AdoHcy hydrolase. Proteins can undergo post-translational methylation at one or more nucleophilic side chains in a variety of cell types, from prokaryotic to eukaryotic organisms, leading to alterations in steric orientation, charge and hydrophobicity, and to global effects on the protein molecule involved, with roles in the repair of damaged proteins, the response to environmental stress, cell growth and differentiation, and carcinogenesis. Various methylated amino acid residues are found in nature. The methyl group donated by AdoMet can be transferred by esterification to the γ-carboxyl group of one or more glutamic acid residues. In eukaryotic cells, proteins are methylated on carboxyl groups or on the side-chain nitrogens of lysine, arginine or histidine.
Protein methylation is classified into three subtypes: N-methylation, at the side chains of arginine, lysine and histidine residues of proteins; O-methylation, at the free carboxyl groups of glutamyl and aspartyl residues; and S-methylation, at the side chains of methionine and cysteine residues [10].
the side chains of methionine and cysteine residues [10]. Substrate-specific methyltransferases using AdoMet as the methyl donor are likewise categorized into three groups: protein methyltransferase I (protein arginine N-methyltransferases; EC 2.1.1.23); protein methyltransferase II (protein carboxyl O-methyltransferases; EC 2.1.1.24); and protein methyltransferase III (protein lysine N-methyltransferases; EC 2.1.1.43). By methylating various regions of proteins, these enzymes can affect hydrophobicity, protein-protein interactions and other cellular processes. Protein methylation is also involved in signal transduction, transcriptional regulation, heterogeneous nuclear ribonucleoprotein export, cellular stress responses and the aging and repair of proteins. However, the potential role of protein methylation as a posttranslational modification in signal transduction has not been explored with the breadth and intensity that protein phosphorylation and acetylation have enjoyed in recent years. Between the first publications on protein methylation in the 1960s and the early 1980s, a variety of target amino acids and substrates was discovered; in spite of these advances, however, the biological importance of protein methylation remained unclear. Recent studies have made considerable progress in elucidating the functions of protein methylation and have provided evidence for roles in the regulation of gene expression and in various kinds of signal transduction.

Besides proteins, DNA can also be methylated, by DNA-specific methyltransferases. Methylation is normally added only at the 5 position of cytosine, in a post-DNA-synthesis reaction catalyzed by one of several DNA methyltransferases. DNA methylation plays a key role in suppressing the activity of endogenous parasitic sequences, in chromatin remodeling, and in the suppression of gene expression (epigenetic silencing). A prevalent definition of epigenetics is the study of mitotically and/or meiotically heritable changes in gene function that cannot be accounted for by alterations in DNA sequence. Epigenetics has emerged as a crucial biological process in all multicellular organisms. Several epigenetic processes have been described, and all seem to be interconnected. Besides DNA methylation, posttranslational modification of histones seems to mediate epigenetic alterations in many organisms, and RNA interference is an important epigenetic mechanism in both plant and mammalian cells. DNA methylation in the human genome occurs extensively at cytosine residues within the symmetric dinucleotide motif CpG. Methylated cytosine accounts for 0.75~1% of total DNA and almost 70% of all CpG dinucleotides. Methylated CpG sites are distributed all over the genome, with especially high densities in promoters and in the transposons accumulated in the genome. While most CpG islands remain unmethylated and are associated with transcriptionally active genes, certain CpG islands are normally methylated. The DNA methylation pattern of a cell is accurately reproduced after DNA synthesis and stably inherited by the daughter cells. DNA methylation is mediated by enzymes known as DNA methyltransferases (DNMTs), and there is considerable evidence for their various roles.
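As an aside, the notion of a CpG island can be made concrete. The following Python fragment is a minimal sketch (an illustration of the standard Gardiner-Garden and Frommer criteria, not code from this study) that flags sequence windows with GC content above 50% and an observed/expected CpG ratio above 0.6 over stretches of at least 200 bp; the step size and the toy sequence are arbitrary choices.

```python
# Minimal sketch: flagging candidate CpG islands with the classical
# Gardiner-Garden & Frommer thresholds (GC > 50%, obs/exp CpG > 0.6,
# windows of at least 200 bp). Window and step sizes are illustrative.

def cpg_stats(seq: str):
    """Return (GC fraction, observed/expected CpG ratio) for a DNA window."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    gc_fraction = (c + g) / n
    # Expected CpG count if C and G were independently distributed.
    expected = (c * g) / n if c and g else 0
    obs_exp = cpg / expected if expected else 0.0
    return gc_fraction, obs_exp

def candidate_cpg_islands(seq: str, window: int = 200, step: int = 50):
    """Yield (start, end) windows satisfying the CpG-island criteria."""
    for start in range(0, len(seq) - window + 1, step):
        gc, oe = cpg_stats(seq[start:start + window])
        if gc > 0.50 and oe > 0.6:
            yield start, start + window

if __name__ == "__main__":
    promoter = "CGGCGC" * 40 + "ATATAT" * 40  # toy: CpG-rich then CpG-poor
    print(list(candidate_cpg_islands(promoter)))
```

The observed/expected correction matters because a window can be GC-rich yet CpG-poor, the typical signature of a region eroded by deamination of methylated cytosines.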
The epigenetic balance of normal cells undergoes a striking transformation in cancer cells. These epigenetic abnormalities can be summarized in five categories: (1) transcriptional silencing of tumor suppressor genes; (2) global genomic hypomethylation; (3) loss of imprinting events; (4) epigenetic loss of intragenomic parasite repression; and (5) genetic lesions in chromatin-related genes. One thing is certain: promoter CpG island hypermethylation of tumor suppressor genes is an abundant key marker of all human cancers. Data accumulated in the last few years show that cancer cells exhibit significant changes in DNA methylation patterns compared with their normal counterparts.
These changes can be summarized as global hypomethylation of the genome accompanied by focal hypermethylation events. The origin of these changes is largely unknown, but the most emphasized consequence of aberrant DNA methylation is transcriptional inactivation of tumor suppressor genes by promoter methylation. De novo methylation of these genes can occur at an early stage of cancer progression and perturb normal functions, including control of the cell cycle, apoptosis, and signal transduction. On the other hand, global hypomethylation has been implicated in chromosome instability, loss of imprinting, and reactivation of transposons and retroviruses, all of which may contribute to carcinogenesis. Aberrant DNA methylation patterns are, however, highly variable among individuals and among cell and cancer types; this variability suggests that methylation of specific genes may contribute to the development and progression of specific tumor types. For these reasons, the DNA methylation patterns of target genes may serve as key markers for the diagnosis or prognosis of cancer and other methylation-related diseases [11].

Protein methylation marks, especially on histones, can interact with one another, and DNA methylation in turn cooperates with protein and other kinds of methylation. Histones can be methylated by histone-specific methyltransferases, and these marks cross-talk with each other by acting as molecular switches, enabling or blocking the setting of other covalent marks [12]. This cross-talk also implies a chronology in the establishment of specific modification patterns, and it can take place between modifications on different histones. DNA methylation, especially of CpG islands within specific promoters, brings about a heritable chromatin state of transcriptional repression, although the order of events leading to heterochromatin formation may differ from cell to cell. In any case, the epigenetic control of gene expression requires the cooperation of histone modification and DNA methylation, and malfunction of either process results in the aberrant gene expression involved in almost all human diseases, including cancer. There is evidence for the silencing of tumor-suppressor and other cancer-related genes by aberrant methylation of CpG islands in their respective promoter regions; DNA methylation of CpG islands has been identified as an alternative mechanism to mutations and chromosomal deletions [13]. An interaction between gene methylation and alterations in chromatin structure silences gene expression by making it difficult for transcription factors to access their binding sites.

As a result of DNA/protein methylation, two products are generated: the methylated substrate and the by-product AdoHcy. Importantly, AdoHcy itself can function as a potent inhibitor of AdoMet-dependent methylation; cells therefore break AdoHcy down further into adenosine and homocysteine using S-adenosylhomocysteine hydrolase (AdoHcy hydrolase). Adenosine dialdehyde (AdOx) is a potent AdoHcy hydrolase inhibitor that increases AdoHcy levels and reduces the activity of methyltransferases in cultured cells [14]; AdOx can therefore inhibit AdoMet-dependent methylation. Previous studies showed that various DNA-specific methylase inhibitors have antineoplastic effects and promising preclinical and clinical activity, especially in leukemia [15, 16].
For example, the mechanism of action of 5-aza-CdR involves activation of tumor suppressor genes and induction of terminal differentiation or senescence of cancerous cells. AdOx, in contrast, can inhibit methylation of both DNA and protein at the same time. Although it has a potent inhibitory effect on DNA and protein methylation, there is no clear evidence for the effect of methylation inhibition by AdOx on metastatic characteristics, in particular on MMPs and other metastasis-related molecules such as maspin (mammary-specific serine protease inhibitor)
and the TIMPs (tissue inhibitors of matrix metalloproteinases). The regulatory mechanisms of MMP activity under inhibition of DNA/protein methylation are likewise unclear.

2 Conclusions

In this review, we have proposed a relationship between cancer progression and DNA/protein methylation. We have also analysed the effect of methylation inhibition by adenosine dialdehyde on different types of breast cancer cells. To find definitive evidence of the relationship between breast cancer progression and DNA/protein methylation, further experiments using biochemical and proteomic tools are in progress.

References

[1] SB Oppenheimer (2006) Cellular basis of cancer metastasis: A review of fundamentals and new advances. Acta Histochem 108:327-334
[2] W Jiang and IF Newsham (2006) The tumor suppressor DAL-1/4.1B and protein methylation cooperate in inducing apoptosis in MCF-7 breast cancer cells. Mol Cancer 5:4
[3] JF Woessner Jr. (1991) Matrix metalloproteinases and their inhibitors in connective tissue remodeling. FASEB J 5:2145-2154
[4] LM Matrisian (1994) Matrix metalloproteinase gene expression. Ann N Y Acad Sci 732:42-50
[5] KH Vousden, X Lu (2002) Live or let die: the cell's response to p53. Nat Rev Cancer 2:594-604
[6] JC Reed (2002) Apoptosis-based therapies. Nat Rev Drug Discov 1:111-121
[7] W Meikrantz and R Schlegel (1995) Apoptosis and the cell cycle. J Cell Biochem 58:160-174
[8] LL Rubin, CL Gatchalian, G Rimon, SF Brooks (1994) The molecular mechanisms of neuronal apoptosis. Curr Opin Neurobiol 4:696-702
[9] RT Borchardt, YS Wu (1976) Potential inhibitors of S-adenosylmethionine-dependent methyltransferases. 5. Role of the asymmetric sulfonium pole in the enzymatic binding of S-adenosyl-L-methionine. J Med Chem 19:1099-1103
[10] WK Paik and S Kim (1990) Protein methylation. In: J Lhoest, C Colson (Eds.), Ribosomal protein methylation. CRC Press, Boca Raton, FL, 155-178
[11] AS Wilson, BE Power, PL Molloy (2007) DNA hypomethylation and human diseases. Biochim Biophys Acta 1775:138-162
[12] W Fischle, Y Wang, CD Allis (2003) Histone and chromatin cross-talk. Curr Opin Cell Biol 15:172-183
[13] RL Momparler and V Bovenzi (2000) DNA methylation and cancer. J Cell Physiol 183:145-154
[14] RF O'Dea, BL Mirkin, HP Hogenkamp, DM Barten (1987) Effect of adenosine analogues on protein carboxylmethyltransferase, S-adenosylhomocysteine hydrolase, and ribonucleotide reductase activity in murine neuroblastoma cells. Cancer Res 47:3656-3661
[15] RL Momparler, J Bouchard, N Onetto, GE Rivard (1984) 5-aza-2'-deoxycytidine therapy in patients with acute leukemia inhibits DNA methylation. Leuk Res 8:181-185
[16] GE Rivard, RL Momparler, J Demers, P Benoit, R Raymond, K Lin, LF Momparler (1981) Phase I study on 5-aza-2'-deoxycytidine in children with acute leukemia. Leuk Res 5:453-462

Proteomic Study of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes

Sung Ung Kang1,2, Karoline Fuchs2, Werner Sieghart2, and Gert Lubec1,*

1 Department of Pediatrics, Medical University of Vienna
2 Division of Biochemistry and Molecular Biology, Center for Brain Research, Medical University of Vienna
* Corresponding author: Dept. of Pediatrics, Medical University of Vienna, Waehringer Guertel 18, 1090 Vienna, Austria, [email protected]

Abstract. Over the past decade, understanding of the structure and function of membrane proteins has advanced significantly, as has understanding of how their detailed characterization can be approached experimentally. Detergents have played significant roles in this effort: they serve as tools to isolate, solubilize, and manipulate membrane proteins for subsequent biochemical and physical characterization. By combining detergents and various separation methods with mass spectrometry technologies, e.g. MALDI-TOF/TOF and nano-HPLC-ESI-Q-TOF/MS/MS, it is now possible to examine the expression of membrane proteins. This study, establishing separation methods for membrane proteins based on two modified gel-electrophoresis systems (16-BAC and BN-PAGE, each followed by SDS-PAGE), should make the components of membranes increasingly amenable to identification and characterization. To study the structure (complexes) and function of membrane proteins, we must first pre-fractionate enriched membrane proteins, or isolate and purify membrane complexes. Such proteins can be solubilized by high-salt solutions or by detergents, which have affinity both for hydrophobic groups and for water. Because detergents bind preferentially over the hydrophobic regions, integral proteins exposed to aqueous solution are prevented from aggregating and maintain their native conformation. Subsequently, diverse kinds of electrophoretic analysis combined with mass spectrometry have been applied, with site-specific cleavage reagents (trypsin, chymotrypsin, CNBr and Asp-N). The final goal is to enable high-throughput analysis of ion-channel proteins and major neurotransmitter receptor complexes within the central nervous system by an electrophoretic method allowing quantification with subsequent unambiguous protein identification.

1 Introduction

1.1 Theoretical Study of Biological Membranes and Detergents for Study of Membrane-Transporters (Complexes) in Mammalian Brain

1.1.1 Biological Membrane Proteins and Membrane Protein Complexes in Mammalian Brain
The membranes of living organisms are involved in many aspects of the life, growth and development of all cells, and are also important targets in various diseases; e.g.
hyperkalemic periodic paralysis and myasthenia gravis involve ion channel defects resulting from genetic mutations or from the actions of specific antibodies that interfere with channel function [1, 2]. Despite this importance, analysis of the membrane proteome has lagged, because the predominant structural elements of these membranes, lipids and proteins, are major insoluble components enriched in hydrophobic amino acids (alanine, valine, isoleucine, leucine, methionine, phenylalanine, tyrosine, tryptophan). Biological membranes are composed of phospholipids and proteins, where the phospholipids can be viewed as biological detergents. The majority of the lipids that make up the membrane contain two hydrophobic groups connected to a polar head. This molecular architecture allows lipids to form structures called lipid bilayers, in which the hydrophobic chains face each other while the polar head groups face the aqueous milieu outside. Proteins, and lipids such as cholesterol, are embedded in this bilayer. This bilayer model for membranes was first proposed by Singer and Nicolson in 1972 and is known as the fluid mosaic model [3]. The embedded proteins are held in the membrane by hydrophobic interactions between the hydrocarbon chains of the lipids and the hydrophobic domains of the proteins. These membrane proteins, known as integral membrane proteins, are insoluble in water but are soluble in detergent solutions [4].

1.1.2 Solubilization of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes by Detergents
Detergents are amphipathic molecules that contain both polar and hydrophobic groups: a polar group (head) at the end of a long hydrophobic carbon chain (tail). In contrast to purely polar or non-polar molecules, amphipathic molecules exhibit unique properties in water. Their polar group forms hydrogen bonds with water molecules, while the hydrocarbon chains aggregate through hydrophobic interactions. These properties allow detergents to dissolve in water, where they form organized spherical structures called micelles, each containing several detergent molecules. Because of their amphipathic nature, detergents are able to solubilize hydrophobic compounds in water; incidentally, one of the methods used to determine the critical micelle concentration (CMC) relies on the ability of detergents to solubilize a hydrophobic dye. Detergents are also known as surfactants because they decrease the surface tension of water. Detergents solubilize membrane proteins by mimicking the lipid-bilayer environment: the micelles they form are analogous to the bilayers of biological membranes. Proteins incorporate into these micelles via hydrophobic interactions; hydrophobic regions of membrane proteins, normally embedded in the membrane lipid bilayer, become surrounded by a layer of detergent molecules while the hydrophilic portions are exposed to the aqueous medium. This keeps the membrane proteins in solution [4]. Complete removal of detergent could result in aggregation through the clustering of hydrophobic regions and hence may cause precipitation of membrane proteins [5]. Although phospholipids can be used as detergents to simulate the bilayer environment, they form large structures, called vesicles, which are not easily amenable to the isolation and characterization of membrane proteins. Hence, synthetic detergents are strongly preferred for the isolation of membrane proteins.
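The role of the CMC can be made concrete with a short sketch. The following Python fragment implements the simple pseudo-phase idealisation of detergent behaviour (an illustration, not a protocol from this study): below the CMC all detergent is monomeric, and detergent added beyond the CMC partitions into micelles. The CMC and aggregation number are rough, Triton X-100-like values chosen for illustration.

```python
# Minimal sketch: pseudo-phase model of detergent speciation.
# Below the CMC all detergent is monomeric; above it, the monomer
# concentration stays pinned near the CMC and the excess forms micelles.
# CMC (~0.25 mM) and aggregation number (~140) are illustrative,
# Triton X-100-like values.

def detergent_species(total_mM: float, cmc_mM: float = 0.25, n_agg: int = 140):
    """Return (monomer_mM, micellar_detergent_mM, micelle_particle_mM)."""
    monomer = min(total_mM, cmc_mM)
    micellar = max(0.0, total_mM - cmc_mM)
    micelles = micellar / n_agg  # concentration of micelle particles
    return monomer, micellar, micelles

if __name__ == "__main__":
    for total in (0.1, 0.5, 2.0, 10.0):
        mono, mic, particles = detergent_species(total)
        print(f"{total:5.1f} mM total -> {mono:.2f} mM monomer, "
              f"{mic:.2f} mM in micelles (~{particles:.4f} mM micelles)")
```

This idealisation is why working detergent concentrations for solubilization are usually quoted as multiples of the CMC: only detergent above the CMC is available to form the protein-carrying micelles.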
Dissolution of membranes by detergents can be divided into different stages. At low concentrations, detergents bind to the membrane by partitioning into the lipid bilayer. At higher concentrations, when the bilayers are saturated with detergents, the
membranes disintegrate to form mixed micelles with the detergent molecules [4]. In these detergent-protein mixed micelles, hydrophobic regions of the membrane proteins are surrounded by the hydrophobic chains of the micelles. In the final stage, solubilization of the membranes leads to mixed micelles consisting of lipids and detergents and to detergent micelles containing proteins. Other combinations, micelles containing lipids and detergents or lipid-protein-detergent assemblies, are possible at intermediate detergent concentrations. Micelles containing protein and detergent can be separated from the other micelles on the basis of their charge, size, or density. A large number of detergents with various combinations of hydrophobic and hydrophilic groups are now commercially available. Based on the nature of the hydrophilic head group, they can be broadly classified as ionic, non-ionic, or zwitterionic detergents.

1.1.3 Ion-Channel Proteins (Membrane Transporters), Neurotransmitter Receptors, and Their Complexes
In the last few years a new word has entered the medical and scientific vocabulary. This word, channelopathy, describes those human and animal diseases that result from defects in ion channel function [6]. Ion channels are membrane proteins that act as gated pathways for the movement of ions across cell membranes. They play essential roles in the physiology and pathophysiology of all cells, and it is therefore not surprising that an ever increasing number of human and animal diseases have been found to be caused by defective ion channel function. Ion channels regulate the flow of ions across the membrane in all cells. In nerve and muscle cells they are important for controlling the rapid changes in membrane potential associated with the action potential and with the postsynaptic potentials of target cells. For instance, the influx of Ca2+ ions controlled by these channels can alter many metabolic processes within cells, leading to the activation of various enzymes and other proteins, as well as to the release of neurotransmitter. Channels differ from one another in their ion selectivity and in the factors that control their opening and closing, or gating. Ion selectivity is achieved through physicochemical interaction between the ion and the various amino acid residues that line the walls of the channel pore. Gating involves a change of the channel's conformation in response to an external stimulus, such as voltage, a ligand, stretch or pressure. The molecular make-up of a membrane-bound receptor comprises three distinct structural and functional regions: the extracellular domain, the transmembrane-spanning domain and the intracellular domain. Receptors are characterized by their affinity for the ligand, their selectivity, their number, their saturability and the reversibility of their binding. So-called isoreceptors form families of structurally and functionally related receptors that interact with the same neuroactive substance; they can be distinguished by their responses to pharmacological agonists or antagonists. Receptor isoforms can occur in a tissue-restricted manner, but expression of different isoreceptors in the same tissue is also found. Binding of a ligand induces a modification in the topology of the receptor by changing its conformation: this either allows an ion current to flow (ionotropic receptors) or elicits a cascade of intracellular events (metabotropic receptors) [7]. The design of intramembranous receptors is quite variable.
Some receptors consist of single polypeptides exhibiting three domains: an intracellular and an extracellular domain linked by a transmembrane segment. Other receptors are also monomeric, but
folded in the cell membrane, and thus form variable intra- and extracellular as well as transmembrane segments. A large group of receptors consists of polymeric structures with complex tertiary topology.

1.2 Experimental Study of Biological Membranes; Study of Membrane-Transporters (Complexes) in Mammalian Brain

1.2.1 Analysis of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes by Modified Gel Electrophoresis
Recently, most protocols have focused on direct-injection LC-MS, coupled with multidimensional protein identification technology (MudPIT) for whole membrane-proteome screening of murine brain [8], or with protein-tagging protocols for targeted membrane protein studies [9]. Application of polyacrylamide gel electrophoresis (PAGE) to membrane proteins, although it has several advantages compared with LC-MS, has been held back by their high insolubility and hydrophobicity, which stem from a large proportion of hydrophobic amino acids. In this study, two modified gel electrophoresis systems, BN/SDS-PAGE and 16-BAC/SDS-PAGE, are applied, because new methods for the analysis of membrane-transport proteins should help to answer neurobiological questions at the protein level, independent of antibody availability and specificity.

1.2.1.1 Blue Native Poly-Acrylamide Gel Electrophoresis. BN-PAGE is a technique developed by Schaegger and von Jagow (1991) for the separation and analysis of membrane protein complexes, often in their enzymatically active form [10]. Its main advantage over conventional PAGE systems is the use of Coomassie Brilliant Blue G in both the sample-loading buffer and the cathode buffer. Binding of the dye to native proteins performs two important functions: (1) it imparts a slight negative charge on the proteins, enabling them to enter the native gel at neutral pH, where protein complexes are most stable; and (2) by binding to hydrophobic regions of proteins, the dye prevents protein aggregation during electrophoresis. Electrophoresis separates detergent from protein complexes, which often results in the aggregation of hydrophobic membrane proteins; the presence of Coomassie dye in BN-PAGE, however, maintains protein solubility, enabling multiprotein complexes to separate from one another largely according to their apparent molecular mass (Mr). A further advantage of this technique is that protein bands are visible during electrophoresis, so subsequent staining of the gel is not always necessary. Although the charge shift exerted on membrane proteins and their complexes by the Coomassie dye can lead to aberrant molecular masses, the Mr of proteins with a pI below 8.6 does not deviate significantly from that of most soluble protein markers [11]. BN-PAGE has been instrumental in the analysis of protein complexes of the mitochondrial membranes, in particular respiratory complexes [10]. It has also been an important tool in the analysis and assembly of mitochondrial protein translocation complexes. Additionally, the method has been used to study individual or multiple protein complexes from membranes including chloroplasts, the endoplasmic reticulum, and the plasma membrane. BN-PAGE can also be used for the analysis of soluble protein complexes, as has been observed for the heptameric mitochondrial matrix form of Hsp60. Not all membrane proteins and their complexes resolve on BN-PAGE. For
example, many mitochondrial proteins streak from the high to the low Mr range, which may be due to the proteins dissociating from their complexes during electrophoresis or to a change in their solubility. Other complexes resolve extremely well. This variability may depend on a number of factors, including the detergent employed and the stability of the protein complexes, as well as whether the Coomassie dye in fact binds to the proteins being analyzed. It is important to use a detergent that solubilizes membranes efficiently but does not disrupt the integrity of the membrane protein complex. Initial studies should determine which detergents are most suitable for keeping the particular protein complex intact prior to application and for the duration of the run. Most studies employ Triton X-100, n-dodecyl maltoside, or digitonin. In the case of mitochondria, digitonin gives more discernible protein complexes of higher Mr than dodecyl maltoside or Triton X-100; indeed, the stable complexes observed on BN-PAGE after Triton X-100 solubilization can instead be seen as supercomplexes when digitonin is used. BN-PAGE has also been used to analyze the subunit composition of membrane protein complexes. Total extracts or purified membrane protein complexes can be subjected to BN-PAGE, and individual subunits can then be separated by SDS-PAGE in the second dimension. Well-resolved protein spots originating from complexes can be observed and subjected to further downstream processing such as Coomassie staining, immunoblot analysis, or amino acid sequencing [12]. This is particularly useful because two-dimensional gel electrophoresis using IEF in the first dimension can be problematic for resolving membrane proteins. Purification of membrane protein complexes for crystallization trials in structural analysis is another application of BN-PAGE.

1.2.1.2 16-BAC Poly-Acrylamide Gel Electrophoresis. The introduction of pH-gradient/SDS-PAGE two-dimensional electrophoresis by O'Farrell revolutionized the analysis of the complex protein mixtures commonly found in biological samples. Numerous modifications of the original procedure have been described that attempt to overcome its limitations, such as the introduction of immobilized pH gradients. These systems have been used to resolve and isolate picomole quantities of soluble proteins, and owing to their high reproducibility they also form the basis of two-dimensional gel protein databases. However, the separation of integral membrane proteins, particularly those with large hydrophobic domains (multiple transmembrane domains), has remained less than satisfactory, since many of them resolve only poorly in the pH-gradient dimension. This is partially due to the inherent problem that membrane proteins do not solubilize well in nonionic detergents, particularly at low ionic strength (despite the presence of high amounts of urea). Even if solubilization can be achieved, the proteins often precipitate at pH values close to their isoelectric point. Furthermore, the charge heterogeneity commonly found in glycosylated membrane proteins contributes additional streaking in the first dimension. Numerous attempts have been made to overcome these inherent problems, most of them aimed at improving the initial solubilization of the proteins by using detergents stronger than Nonidet P-40, including SDS and zwitterionic detergents such as CHAPS and Zwittergent.
While many of these protocols result in improvements of protein patterns derived from membrane fractions, it is often not clear which of the spots represent truly integral membrane proteins. Other modifications have been successful for separating individual membrane proteins, but the techniques must be optimized for
each protein individually and usually do not tolerate the loading of large amounts of protein. Thus, to our knowledge there appears to be no satisfactory protocol involving pH-gradient electrophoresis that is universally suitable for the separation of biological membranes with a complex protein composition. Because of the problems encountered with pH-gradient electrophoresis, various alternative approaches have been used to increase the resolution of membrane proteins beyond that afforded by one-dimensional discontinuous SDS-PAGE. In search of a method that yields optimal resolution with minimal loss of material, we have adapted the procedure developed by Macfarlane [13]. In this procedure, separation in the first dimension is achieved by an "inverse" discontinuous electrophoresis system using the cationic detergent benzyldimethyl-n-hexadecylammonium chloride (16-BAC), a stacking gel at pH 4.1, and a separation gel at pH 2.1. As in SDS-PAGE, proteins are separated according to their molecular mass; however, the properties of the acidic 16-BAC system are sufficiently different from those of the basic SDS system to allow substantial resolution in the second dimension [14].

1.2.2 Identification of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes by Mass Spectrometry and Immunological Approaches
In mass spectrometry, the sample (e.g. a tryptic digest of a protein spot) is ionized and its mass per charge (m/z) is analyzed. The most commonly used ionization techniques for polypeptides and peptides are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). In ESI-based tandem mass spectrometry, peptides are fragmented along the backbone and can thereby be sequenced according to m/z; in MALDI, whole peptides are charged and their m/z is measured. To decrease the complexity of the analysis, a pre-separation technique such as HPLC is used when peptide mixtures are studied. The presence of peptide fragments matching known sequences in a database identifies the protein. Two problems in database matching are that some different amino acids have the same mass-per-charge ratio, making them indistinguishable from one another, and that reported sequences in the database may contain a substantial number of erroneously annotated entries, so that proteins with incorrect reference data evade identification; sequencing errors may also render database references incorrect. Mass spectrometry is gaining further in popularity owing to the lowered costs of analysis. It is not limited to protein identification, but it has had one of its greatest impacts in proteomics, where it offers new and accurate possibilities of identification [15]. The immunological approach to identification is based on the production of specific antibodies and detection by a secondary antibody linked to an enzyme. This approach relies on the specificity of the antibody used: if the antigen against which the antibody was raised was a long polypeptide sequence, there may be a heterogeneous population of antibodies, so that other proteins with sufficiently similar epitopes cause unspecific binding. If a sample includes several isoforms from a family of proteins with a high degree of similarity, the different isoforms may not be distinguishable immunologically.
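To illustrate the database-matching step, the sketch below (our own illustration, not the pipeline used in this study) performs an in-silico tryptic digest and computes monoisotopic [M+H]+ masses, the quantity compared against MALDI peptide peaks in peptide-mass fingerprinting. The sequence, the queried peak value and the 0.2 Da tolerance are arbitrary illustrative choices.

```python
# Minimal sketch: in-silico tryptic digestion and monoisotopic peptide mass
# calculation for peptide-mass-fingerprint style matching.
import re

# Monoisotopic residue masses (Da) for the 20 standard amino acids.
RESIDUE = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER, PROTON = 18.010565, 1.007276

def tryptic_peptides(protein: str):
    """Cleave after K or R, except when followed by P (trypsin's rule of thumb)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

def mh_mass(peptide: str) -> float:
    """Monoisotopic [M+H]+ mass of a peptide."""
    return sum(RESIDUE[aa] for aa in peptide) + WATER + PROTON

def match(observed_mz: float, protein: str, tol: float = 0.2):
    """Tryptic peptides whose [M+H]+ lies within tol Da of an observed peak."""
    return [p for p in tryptic_peptides(protein)
            if abs(mh_mass(p) - observed_mz) <= tol]

if __name__ == "__main__":
    seq = "MKWVTFISLLFLFSSAYSRGVFRR"  # toy sequence
    for pep in tryptic_peptides(seq):
        print(f"{pep:>20s}  [M+H]+ = {mh_mass(pep):10.4f}")
    print(match(478.28, seq))  # -> ['GVFR']
```

Note how the identical masses of leucine and isoleucine in the table make the two residues indistinguishable, which is exactly the database-matching ambiguity described above.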
1.3 Computational Study of Biological Membranes; Bioinformatics for Protein Structure, Functions, and Interactions
A typical proteomics experiment might begin with cells grown under some specified set of conditions. A subset of the cellular proteins is purified through subcellular
fractionation or the use of affinity tags or protein interactions. These proteins are then identified or quantified by some combination of one- or two-dimensional gel electrophoresis, high-performance liquid chromatography, amino acid sequencing, proteolysis, and/or mass spectrometry. A diverse set of bioinformatics analyses is required both to perform these experimental steps and to interpret the resulting data (Fig. 1). Beyond the initial step of identifying peptides from their sequences or mass spectra, the aims of bioinformatics analysis of these results are (1) to identify intact proteins from their constituent peptides, (2) to find related proteins in multiple species, (3) to find the genes corresponding to the proteins, (4) to find coexpressed genes or proteins, (5) to find or test candidate protein interaction partners, (6) to validate or compare proteomics measurements of posttranslational modifications or protein subcellular localization with computational predictions of the same properties, (7) to predict structures or domains of proteins, and (8) to find the functions of proteins discovered in proteomics experiments.
Fig. 1. Flowchart illustrating the bioinformatics steps used to predict a protein's structure, function, and interaction partners
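One of the steps in Fig. 1, identification of transmembrane segments and topology, can be approximated with a classical Kyte-Doolittle hydropathy scan. The sketch below is a minimal illustration, not the method used in this study: it averages residue hydropathies over a sliding window and flags windows whose mean exceeds 1.6, the value Kyte and Doolittle suggested as strongly indicative of a transmembrane stretch with a 19-residue window. The toy sequence is invented.

```python
# Minimal sketch: Kyte-Doolittle sliding-window hydropathy scan to flag
# putative transmembrane segments. Window 19 and threshold 1.6 follow
# Kyte & Doolittle's original suggestion.

KD = {
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
    "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
    "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5,
}

def hydropathy_profile(seq: str, window: int = 19):
    """Mean Kyte-Doolittle hydropathy for each window start position."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

def putative_tm_segments(seq: str, window: int = 19, threshold: float = 1.6):
    """Window start indices whose average exceeds the transmembrane threshold."""
    return [i for i, h in enumerate(hydropathy_profile(seq, window))
            if h > threshold]

if __name__ == "__main__":
    # Toy sequence: hydrophilic stretches flanking a hydrophobic (TM-like) core.
    seq = "DDKESNQ" + "LLVVFFAAILLIVVLAAIL" + "QNKSEDD"
    print("windows above threshold start at:", putative_tm_segments(seq))
```

Such a single-sequence heuristic is of course far weaker than modern topology predictors, which is why in practice it serves only as a first filter before database and domain searches.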

2 Conclusion

In this review, we introduced an analytical tool for the analysis of membrane proteins using a gel-based proteomics approach and mass spectrometric analysis. In addition, we reported for the first time the unambiguous sequence analysis and identification of two recombinant GABAA receptor subunits [12]. Further work is ongoing in our
laboratories, aiming to analyse receptor proteins from the brain, to identify receptor subtypes and isoforms, and to investigate posttranslational modifications.

References

[1] Ptacek LJ, George AL Jr., Griggs RC, Tawil R, Kallen RG, Barchi RL, Robertson M, Leppert MF (1991) Identification of a mutation in the gene causing hyperkalemic periodic paralysis. Cell 67(5):1021-7
[2] Donnelly D, Mihovilovic M, Gonzalez-Ros JM, Ferragut JA, Richman D, Martinez-Carrion M (1984) A noncholinergic site-directed monoclonal antibody can impair agonist-induced ion flux in Torpedo californica acetylcholine receptor. Proc Natl Acad Sci U S A 81(24):7999-8003
[3] Singer SJ and Nicolson GL (1972) The fluid mosaic model of the structure of cell membranes. Science 175:720
[4] Helenius A and Simons K (1975) Solubilization of membranes by detergents. Biochim Biophys Acta 415:29
[5] Horigome T and Sugano H (1983) A rapid method for removal of detergents from protein solutions. Anal Biochem 130:393
[6] Hoffman EP (1995) Voltage-gated ion channelopathies: inherited disorders caused by abnormal sodium, chloride, and calcium regulation in skeletal muscle. Annu Rev Med 46:431-41, Review
[7] DeLorey TM and Olsen RW (1992) Gamma-aminobutyric acidA receptor structure and function. J Biol Chem 267(24):16747-50, Review
[8] Wu CC and Yates JR 3rd (2003) The application of mass spectrometry to membrane proteomics. Nat Biotechnol 21(3):262-7
[9] Olsen JV, Andersen JR, Nielsen PA, Nielsen ML, Figeys D, Mann M, Wisniewski JR (2004) HysTag - a novel proteomic quantification tool applied to differential display analysis of membrane proteins from distinct areas of mouse brain. Mol Cell Proteomics 3(1):82-92
[10] Schaegger H and von Jagow G (1991) Blue native electrophoresis for isolation of membrane protein complexes in enzymatically active form. Anal Biochem 199(2):223-31
[11] Schaegger H, Cramer WA, von Jagow G (1994) Analysis of molecular masses and oligomeric states of protein complexes by blue native electrophoresis and isolation of membrane protein complexes by two-dimensional native electrophoresis. Anal Biochem 217(2):220-30
[12] Kang SU, Fuchs K, Sieghart W, Lubec G (2008) Gel-based mass spectrometric analysis of recombinant GABAA receptor subunits representing strongly hydrophobic transmembrane proteins. J Proteome Res (in press)
[13] Macfarlane DE (1983) Use of benzyldimethyl-n-hexadecylammonium chloride ("16-BAC"), a cationic detergent, in an acidic polyacrylamide gel electrophoresis system to detect base-labile protein methylation in intact cells. Anal Biochem 132:231-5
[14] Bierczynska-Krzysik A, Kang SU, Silberring J, Lubec G (2006) Mass spectrometrical identification of brain proteins including highly insoluble and transmembrane proteins. Neurochem Int 49(3):245-55
[15] Chen WQ, Kang SU, Lubec G (2006) Protein profiling by the combination of two independent mass spectrometry techniques. Nat Protoc 1(3):1446-52

Climate Change: Business Challenge or Opportunity?

Chung-Hee Kim
Department of Management, Strathclyde Business School, Glasgow, UK
[email protected]

Abstract. This paper aims to critically explore and examine why, how and to what extent climate change and energy management is used as a business opportunity in the global market. Drawing on the cases of the UK and Germany, the author argues that integrating climate change and energy issues with business management may prove sustainable and environmentally sound for the firm. The implication is that climate change management can be a powerful instrument for integrating current global issues into the economic success of business, when business views these issues through a corporate social responsibility (CSR) lens as an opportunity and a source of legitimacy in the international market. To encourage business involvement and turn this into a real market, it is argued that international-level cooperation and incentive policies can be the major driving forces.

1 Introduction: Why "Climate Change" Now?

Why is climate change emerging as an important issue, and to what extent is it used as a means of doing business? Climate change and energy management is regarded as an economic challenge as well as an opportunity. In earlier times, business could measure its success mainly by profit maximisation, which was even regarded as its only corporate social responsibility (CSR) [7]. However, this is no longer true in the age of globalisation. Beyond making a profit, business has to obey the law, be ethical and be a good corporate citizen [1]. To elaborate, businesses, especially multinational corporations (MNCs), have to understand recent challenges to traditional market economics [2] and seek legitimacy in the global market in which they operate [3, 9]. They have to build the competency to cope with the transformation of the market from a profit-driven to a value-driven one. Their ultimate goal now is not to gain immediate market share but to win the competition in a race to build competencies [8, 14]. To this point, climate change issues are examined through the lens of CSR as a method of value creation. In this regard, the paper elaborates the issues according to three questions and seeks the implications of climate change issues for business and other stakeholder groups.

2 Research Questions and Discussions

2.1 What Kinds of Climate Change and Energy Management Will Prove Sustainable and Environmentally Sound?

Business is the principal offender contributing to climate change, as well as a critical player in resolving the problem. Therefore, without corporations' active involvement,
the issues will not be settled. If this is so, then how can business be engaged on this issue? The answer is to integrate climate change with business management. In this era of globalisation, how should business fit climate change management to its market needs? The author elaborates an answer with the case of Sir Richard Branson's investment announcement on climate change (Clinton Global Initiative, September 2006). Why is Sir Richard Branson contributing £1.6bn to fight global warming and to develop renewable energy? Can we identify from this case how, and to what extent, energy management can be integrated with business management? It is supposed that this decision came from the entrepreneur's calculation that environmental issues will soon materialise as a source of competitive market advantage, and that is why he would like to be a first-mover. The first and foremost reason is to pursue new corporate value through environmental investment as corporate responsibility; this is also related to risk management and sustainable development management. Through this action, Branson's Virgin Group can establish legitimacy in the global market through corporate responsibility, and transform the energy challenge into a global business opportunity. CSR is no longer an option for MNCs: it is emerging as 'a must' and 'an operating license' in the global market, necessary to win the market game. It is also assumed that integrating climate change into the business is closely related to stakeholder management: the term stakeholder has become an 'idea of currency' [6] and is now used almost as everyday terminology in business [13]. To elaborate, business can use the CSR issues of energy and environment as an efficient way to communicate with its various stakeholders, especially in the relationship with one of the most important stakeholders, government. Business can also pursue its purposes through transparent lobbying of relevant governments (including international and host governments). As we are living in a time of decentralisation and diversity, business's relationship with other stakeholders is emerging as a critical issue at the heart of business management. Power on the global stage is no longer concentrated in one segment (such as government, business, the military or NGOs); there must be appropriate power sharing, dialogue and networking with other members of the community. Such relationships develop the foundations of trust, confidence and loyalty on which a corporate reputation must be built [4]. Therefore, according to Hillman and Keim [10], the firm must address "true" stakeholder issues, such as environmental protection, not just "social" and "philanthropic" issues.

2.2 Which Strategies, Instruments, and Programs Are Driving Business Climate Change Management? Are There Any Incentives to Innovate?

2.2.1 Driving Force by Business
Many business actions are unsustainable; sustainable development (SD), by contrast, can be regarded as a capacity for continuance in the long term. The concept of SD has increasingly come to represent a new kind of world, in which economic growth delivers a more just and inclusive society while preserving the natural environment and the world's non-renewable resources for future generations.
As sustainability concepts move into mainstream asset management and as demand for sustainability funds increases, investors are looking for SD indicators of a firm's value creation beyond economic parameters [11]. For example, the Dow Jones Sustainability Index (DJSI) and the Global Reporting Initiative (GRI) are extensively used by investors to measure companies' corporate sustainability performance. In this regard, the European Emission Trading Scheme is regarded as an efficient way of attracting business involvement in environmental activities while pursuing economic efficiency at the same time. If there is an opportunity to make money, business naturally gathers in the market, and this kind of business-driven market is encouraging for both environmental and economic aims. For instance, Morgan Stanley, with the largest financial commitment to emission trading to date, has recently announced an investment plan of about $3bn (£1.6bn) in emission trading (Financial Times, 26 October 2006) [12]. From such cases we can glimpse a blueprint for a future global environmental approach.

2.2.2 Driving Force by Government
How business deals with energy and climate change issues clearly impacts national competitiveness. Governments emphasise these issues because they play a very important role in promoting business efficiency in energy and resource management, as well as investment in renewable energy. Turning to a comparative analysis of government policy between the UK and Germany, based on each government's Fourth National Communication of policies and measures under the UN Framework Convention on Climate Change (see Table 1), this paper recognises that much of the driving force for the two governments comes, similarly, from EU-level directives and regulations, whereas the detailed approach to each sector differs somewhat according to national circumstances. Both governments emphasise the importance of energy efficiency and the development of renewable energy from both environmental and economic perspectives, whereas their approaches to specific sources of energy differ. For example, the German government has clearly announced its policy concerning nuclear energy, whereas the UK has not clearly stated its policy, because nuclear energy has been hotly debated both politically and economically. In terms of renewable energy, the UK puts the emphasis on wind, wave and tidal, and biomass energy, whereas Germany focuses on wind, hydro and biomass energy as future renewable energy sources. As can be seen from the comparison of the two countries below, business must become cleverer in following up government policies, both measures and incentives, and discover an appropriate strategy for investment in this challenging market. Even though government tries to drive business to become actively involved, as indicated in the above analysis, government has to recognise that many corporations still consider this kind of political approach and gesture insufficient and too vague. The business sector in the UK has been criticising the lack of consistency in government policy, which has to be transformed from a review process to actual regulatory action, and from targets to real action (BWEA, 2006). Only such a transformation will induce business to join the environmental market more actively.

Table 1. Government policy and regulation for the business sector (Germany and UK)

Framework
- Germany: Gives a long-term perspective to all actors and hence a dependable framework for investment decisions
- UK: Committed to a clear, flexible and stable policy framework for business

Regulation & Tax
- Germany: The National Climate Protection Programme of 13 July 2005; the Renewable Energies Act (EEG); market incentives programme (MAP), promoting solar collectors and biomass plants
- UK: Climate change levy (tax on the use of energy in industry, commerce and the public sector); climate change agreements (80% discount from the levy)

Organisation
- Germany: dena (German Energy Agency; limited liability company, GmbH); Working Group on Emissions Trading to Combat the Greenhouse Effect (AGE)
- UK: Carbon Trust (independent company funded by the Government)

Emission Trading Scheme
- Germany: Project Mechanism Act (ProMechG)
- UK: voluntary UK ETS (5-year pilot scheme), moving to the EU ETS as of 2007

Building
- Germany: Transposing Directive 2002/91/EC into national law; Energy Saving Ordinance (EnEV), based on the EU Building Directives; energy efficiency in the building sector (including energy certificates)
- UK: Building Regulations (energy standard of new and refurbished buildings); implementation of the EU Energy Performance of Buildings Directive (EPBD)

Nuclear Energy
- Germany: To be phased out gradually over the next 20 years
- UK: Not mentioned in NC4 (a politically and economically debated issue)

Major renewable energy & investment
- Germany: wind energy; hydro power; biomass utilization
- UK: wind energy; wave and tidal energy; biomass heat

* Analysed from the UK and Germany National Communication 4, UN Framework Convention on Climate Change.

2.3 On What Level Do Incentives Have the Greatest Effect (International, EU, National, Regional, Etc.)?

As climate change issues are not issues for one nation or one region alone, international and EU-level cooperation, together with incentives, is arguably the most
important factor in making the energy market develop actively. As can be seen from the national policies and measures of Germany and the UK, many national policies have been driven at the EU and multinational level. For example, for the successful continuation of an emission trading scheme, there must be a solid operating system at the EU or global level, backed up by international policy and control mechanisms. Still, the most severe emitter of greenhouse gases, the United States, has not joined this market; consequently, there is a question whether it will develop into a real market or fade away. Emission trading must be regarded as the most efficient way to attract the US and other countries that have not yet joined this global endeavour, by drawing in business investment [15]; international cooperation on this issue is therefore critical. Secondly, there must be an effort at the international level to turn climate change and energy management issues into global indices and standards. Even though each nation may approach this issue differently according to its environmental, institutional and economic situation, there must be a global approach to international trade, with environmental management as a global standard and rigorous policy: not just a bureaucratic approach built on regulations, but an encouragement policy with incentives for firms to compete in transparency. This also gives business the certainty to pursue concrete strategies. The international community should likewise recognise that more and more businesses will seek to exploit first-mover advantage in this new hybrid market.

3 Conclusion

To conclude, climate change and energy management can be a powerful instrument for integrating current global issues into the economic success of business, when business sees these issues through CSR as an opportunity in the global market. To encourage business involvement and make it a real market, international-level cooperation and incentive policies can be the major driving forces.

Acknowledgement. This paper develops an essay presented at the World Business Dialogue Essay Competition, University of Cologne, Germany, in March 2007.

References

[1] Carroll AB and Buchholtz AK (2003) Business and Society: Ethics and Stakeholder Management. Mason: Thomson Learning
[2] Clulow V (2005) Futures dilemmas for marketers: can stakeholder analysis add value? European Journal of Marketing 39(9/10):978-998
[3] Dunning J (2003) Making Globalization Good: The Moral Challenges of Global Capitalism. Oxford: Oxford University Press
[4] Edelman Survey (2006) Annual Edelman Trust Barometer 2006. New York
[5] European Union (2004) Directive 2004/101/EC, amending Directive 2003/87/EC establishing a scheme for greenhouse gas emission allowance trading within the Community, in respect of the Kyoto Protocol's project mechanisms. European Commission
[6] Freeman RE and Phillips R (2002) Stakeholder theory: A libertarian defense. Business Ethics Quarterly 12(3):331-350
[7] Friedman M (1970) The Social Responsibility of Business Is to Increase Its Profits. The New York Times Magazine (13 September 1970)
[8] Prahalad CK and Hamel G (1990) The Core Competence of the Corporation. Harvard Business Review 68(3):79-91
[9] Gooderham P and Nordhaug O (2003) International Management: Cross-Boundary Challenges. Oxford: Blackwell Publishing Ltd.
[10] Hillman AJ and Keim GD (2001) Shareholder value, stakeholder management, and social issues: What's the bottom line? Strategic Management Journal 22:125-139
[11] Holliday CO, Schmidheiny S and Watts P (2002) Walking the Talk: The Business Case for Sustainable Development. Sheffield, UK: Greenleaf Publishing
[12] Morrison (2006) Morgan Stanley plans to invest $3bn in Kyoto scheme to reduce emissions. Financial Times, 27 October 2006
[13] Pinnington A, Macklin R and Campbell T (2007) Human Resource Management: Ethics and Employment. Oxford: Oxford University Press
[14] Porter M and Kramer MR (2006) Strategy & Society: The Link Between Competitive Advantage and Corporate Social Responsibility. Harvard Business Review 84(12):78-92
[15] Stern Review (2006) The Economics of Climate Change: The Stern Review. Cabinet Office, HM Treasury, London
[16] UK DTI (2006) The Energy Challenge: Energy Review Report 2006. Department for Trade and Industry, London
[17] UNFCCC (2006) Germany National Communication 4. United Nations Framework Convention on Climate Change. http://unfccc.int/resource/docs/natc/gernc4.pdf. Accessed 30 June 2008
[18] UNFCCC (2006) UK National Communication 4. United Nations Framework Convention on Climate Change. http://unfccc.int/resource/docs/natc/uknc4.pdf. Accessed 30 June 2008

Comparison of Eco-Industrial Development between the UK and Korea

Dowon Kim and Jane C. Powell
University of East Anglia, School of Environmental Sciences, Norwich, UK
[email protected]

Abstract. The application of eco-industrial development (EID) has attracted increasing attention worldwide, leading to diverse implementation strategies including eco-industrial parks and industrial symbiosis. As eco-industrial developments have met with varying levels of success, it is necessary for new EID programmes to learn the lessons of both the successful and the less successful earlier cases. This study compares the implications of the National Industrial Symbiosis Programme (NISP) in the UK, regarded as an active EID case, with the Korean eco-industrial park scheme as a newly implemented one. The two EID models were analysed within a framework centred on three practical aspects: the underlying approach to EID, the organisational framework and the operational framework. Comparing the two models on these three aspects, this study suggests several implications for the implementation of EID in Korea, including a business-centred approach, expanding EIPs to regional EID, standardising the operational framework and developing innovative synergy generation tools.

1 Introduction

Eco-industrial development (EID) is an emerging industrial approach to sustainable development grounded in industrial ecology. It compares industry with natural ecology in order to transform conventional industrial systems into more sustainable ones. EID aims to optimise industrial resource use through collaboration among the firms in an industrial network, focusing on recycling industrial by-products and wastes within the network. As a result, EID can not only improve the economic growth and competitiveness of participating firms, but also minimise the environmental impacts of industrial resource use and consequently mitigate climate change. EID has developed along diverse strategies since the Kalundborg symbiosis in Denmark was identified in 1989 [1]. As the application of EID has attracted increasing attention worldwide, diverse EID strategies have been developed and have evolved in many industrial regions of the world. The Kalundborg case and the National Industrial Symbiosis Programme (NISP) in the UK [2] are typical European examples of industrial symbiosis (IS), while many other industrial regions, including those in the USA, China and Korea, have developed eco-industrial parks (EIPs) [3-8]. In addition, some European regions, such as Styria in Austria and regions of Finland, have developed recycling networks [9, 10]. Although it is hard to evaluate existing EID cases exactly, as their features often vary, some EID programmes are regarded as successes while others are regarded as failures [4, 5, 11]. It is therefore necessary for new EID


Therefore it is necessary for new EID programmes, such as the Korean EIPs, to understand, through the lessons of existing cases, which strategies were effective in implementing EID at the initial stage. This study explores the implications for implementing EID, based on the comparison of an active EID case with a newly implemented one. NISP, selected as the active EID case, has been compared with the recently introduced Korean EIP scheme. The two EID models are analysed with a framework centred on three practical aspects: the underlying approach to EID, the organisational framework and the operational framework. The data for this study were collected mainly from the literature, including presentation materials, reports and websites.

2 National Industrial Symbiosis Programme (NISP) in the UK

NISP is a national-scale industrial symbiosis network in the UK aimed at improving cross-industry resource efficiency through the commercial trading of materials, energy and water and the sharing of assets, logistics and expertise. Operating at the forefront of industrial symbiosis thinking and practice, the programme has helped industries and firms take a fresh look at their resources. Integrating the experience and knowledge of four independently developed precursor pilot projects, NISP expanded its network to a national scale [2, 12-17]. The programme is managed by an independent non-profit organisation funded by public organisations [18]. NISP is considered to have created significant economic, environmental and social benefits by creating sustainable business opportunities and improving resource efficiency. Most industrial sectors across the country are involved in NISP, including the oil, chemicals, pharmaceutical, steel, paper, water, cement, construction, engineering, financial and consultancy industries [19], with the number of participants increasing sharply since the programme was launched. By 2007, over six thousand businesses across all industrial sectors were working with NISP [17]. Recently, NISP was selected as an exemplar programme for eco-innovation by the European Commission [20]. NISP reported the following performance for the 14 months from April 2005 to June 2006 [12]:

• 1,483,646 tonnes diverted from landfill (of which 29% was hazardous waste)
• 1,827,756 tonnes of virgin material saved
• 1,272,069 tonnes of CO2 saved
• 386,775,000 litres of potable water saved
• £36,080,200 in additional sales for industry
• £46,542,129 in cost savings to industry
• 790 jobs created
• £32,128,889 in private capital investment in reprocessing

1 Underlying approach to EID: Business centred industrial symbiosis

One of the most prominent features of NISP is that the programme adopted the notion of industrial symbiosis rather than that of an eco-industrial park or eco-industrial network.


NISP's definition of industrial symbiosis emphasises five points: expansion of the geographical boundary of collaboration to the national level; expansion of collaboration types to broader resource sharing; expansion of the participants in symbiosis to non-industrial organisations; a demand-based approach; and the necessity of a management tool. In particular, the programme expands the concept of resources from by-products or waste to all tangible and intangible resources, including raw materials, expertise and services, which practically diversifies the opportunities for industrial symbiosis [12, 18].

NISP approaches inter-firm collaboration from the viewpoint of businesses, the main players in eco-industrial development. The programme seeks bottom-line benefits for the member firms through efficient resource use rather than environmental management. The vision of the programme is to change the way business thinks. The programme prefers business terms such as 'new products', 'opportunity' and 'cost reduction' to environmental terms such as 'regulation' and 'waste minimisation', in order to attract the voluntary participation of businesses by highlighting business benefits rather than emphasising environmental concerns that may trigger negative preconceptions among businesses [19]. All synergy projects are led by a project advisory group (PAG) composed of highly experienced and knowledgeable members from industry, business and related areas across the region. As the PAG gives practical advice to the project team from the viewpoint of business and plays an important role in linking industrial sectors using the same 'industrial language', the credibility of the programme as well as the success rate of each synergy project increases, and more businesses are attracted to NISP [21]. NISP keeps the viewpoint of business throughout the industrial symbiosis process. Involvement in NISP is voluntary, relying only on a participant's willingness to collaborate, and its process is not strictly formalised. The information and data of participating firms are strictly protected by the security system to address businesses' concerns about confidentiality. The quality of the data is not mandated but depends wholly on the decision of the participating organisation. Synergy projects are developed by the businesses concerned on the basis of their own interests, while the NISP practitioners support them by monitoring the process, coordinating with other organisations or experts to solve regulatory or technological issues, and publicising the synergy performance to stimulate industrial symbiosis in similar fields [2, 21].

2 Organisational framework of NISP

A distinctive feature of the NISP model is that inter-firm synergy is sought within a geographically wide spatial boundary. NISP is the first industrial symbiosis initiative in the world to be launched on a national scale [18]. Although the regional programmes were implemented one by one, NISP was planned on a national scale from the beginning. The national NISP network consists of twelve large interfacing regional NISPs, each with a critical mass comprising a wide variety of cross-sector industries so that inter-firm synergies can be created flexibly. The regional boundaries were designed to coincide with the administrative areas for regional economic development so that each regional programme can be supported by regional governance. Although NISP operates on a regional basis, led by each regional PAG and coordinated by each regional team, inter-regional collaboration and communication also operate, depending on the characteristics of each inter-firm collaboration, to cover most national industries [2, 21]. NISP is coordinated and managed by an independent third party.


Although the programme is funded by government and local authorities, and contributes to the benefit of businesses, it is independent of both governmental organisations and businesses. At the national level NISP is governed by an independent board consisting of leading academics and representatives of the regulators and government, while at the regional level it is governed by the business-led PAGs. The programme is currently managed by International Synergies Limited, a Birmingham-based independent non-profit organisation partially funded by public organisations [17]. Each regional programme is coordinated by regional NISP practitioners with proven track records in environmental and resource management in diverse fields. Not only does their wealth of experience make synergy opportunities more likely to succeed, but it also contributes to building credibility between the participants and NISP [2, 12]. NISP has sought support from diverse sectors and partnerships in order to expedite the growth of the programme and to reduce the risks of public funding. NISP is currently supported by academia, non-governmental organisations and government departments and agencies, including the DTI, DEFRA, the Environment Agency, local authorities and the regional development agencies (RDAs), as well as industrial organisations. NISP considers the partnership with the RDAs particularly important as the programme supports regional development. NISP is funded by diverse public funding bodies, including DEFRA (through Business Resource Efficiency and Waste, BREW) and each RDA. To address the increasing need for new technology and expertise in synergy projects, NISP built a partnership with the Resource Efficiency Knowledge Transfer Network (RE-KTN) to transfer the higher-level technology and knowledge necessary to create more sustainable outputs in synergy projects [18, 21].

3 Operational framework of NISP

While most industrial symbiosis activities are carried out independently on a regional basis, all regional NISPs share an overall operational framework, including operational tools and the IS database accumulated in each region. The national team develops the basic operational framework, such as the IS process and the internet-based database system, and provides them to each regional NISP. In addition, operational tools developed in the regional NISPs are disseminated to all regions by the national team. Each regional NISP identifies synergy partnerships and facilitates them independently in its regional context using the structured operational tools. The operational framework is therefore shared on a national scale, while practical IS tasks are carried out in the regional context, as a combination of top-down and bottom-up approaches. NISP has developed its own operational tools. The basic IS process of NISP consists of six steps, although the process is applied flexibly depending on each case. The first step is securing the commitment and ownership of the firms and organisations in the region. The second and third steps, which take place concurrently, are training participating organisations to develop industrial symbiosis thinking and collecting resource data from them. The fourth step, identifying synergy projects, is the core stage of the industrial symbiosis process, while the fifth step involves implementing commercially viable synergies and the final step is monitoring and maintaining the synergy projects [2]. In addition to the IS process, operational tools for the generation of synergy ideas have been developed by NISP for use during the IS process.


The training programme for member organisations and the web-based data collection system are the main tools for the initial stages of the IS process. As the database is embodied in a web-based framework, all data and information entered in one region can be shared nationally with flexible, remote access. The data held on the system are analysed by regional practitioners to identify synergistic links [21]. IS workshops are also main operational tools for the generation of synergy opportunities. 'Synergy Workshop' events are structured, facilitated and 'themed' forums designed by NISP to provide IS opportunities. By drawing diverse organisations, such as intra-sector industries, cross-sector industries or governmental bodies, together around specific perspectives or interests on a common theme, such as specific legislation or key resource issues, the event can identify collaboration ideas that add value to each participant [21, 22]. In addition, the 'Quick-Wins Workshop' developed by NISP Humber is widely used by most regional NISPs as one of the most popular tools for generating synergy ideas. The underlying notion of the workshop is that some synergy ideas can be created at low cost and in the short term from discussions between cross-sector parties. During the workshop, resource streams provided by participants are collected in a tabular matrix with two axes, 'Wants to source' and 'Wants to supply', to make the demand-supply matching process simple and quick. The participants from cross-sector organisations are encouraged to match and discuss their potential resource wants and needs with one another. After the matrix table is completed, a resource link diagram is drawn by the practitioners to show the matched demand-supply streams more clearly [2].
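Read as pseudocode, the Quick-Wins matching step is essentially a cross join of the two matrix axes on resource type. The minimal sketch below illustrates that idea only; the organisations, resource names, field names and tonnages are hypothetical, and the actual NISP system is not claimed to work this way:

```python
# Hypothetical resource streams collected during a Quick-Wins workshop.
wants_to_supply = [
    {"org": "Brewery A", "resource": "spent grain", "tonnes": 120},
    {"org": "Power plant B", "resource": "fly ash", "tonnes": 900},
]
wants_to_source = [
    {"org": "Farm C", "resource": "spent grain", "tonnes": 80},
    {"org": "Cement works D", "resource": "fly ash", "tonnes": 1000},
]

# Demand-supply matching across the two matrix axes: a potential synergy is
# any supply/demand pair naming the same resource.
links = [
    (s["org"], d["org"], s["resource"], min(s["tonnes"], d["tonnes"]))
    for s in wants_to_supply
    for d in wants_to_source
    if s["resource"] == d["resource"]
]

# The printed pairs correspond to the edges of the resource link diagram.
for supplier, consumer, resource, tonnes in links:
    print(f"{supplier} -> {consumer}: {tonnes} t of {resource}")
```

In the workshop itself this matching is of course done by the delegates around a table; the point of the sketch is only that the matrix format reduces synergy identification to a simple, fast pairing exercise.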

3 Eco-Industrial Parks (EIPs) in Korea

Eco-industrial development in Korea is in its initial stages, although the necessity of EID has been raised since the 1990s [23]. The Korean government has been concerned about environmental issues in industrial estates, including increasing environmental and resource costs as well as industrial waste problems, in a country where about 500 conventional industrial estates exist with high industrial density. To address these problems, the Ministry of Commerce, Industry and Energy (MOCIE), responsible for the development of industry in Korea, adopted the concept of an EIP to transform conventional industrial estates into sustainable ones. After a few years of feasibility studies, the Korea National Cleaner Production Centre launched pilot EIP projects supported by MOCIE in 2005 [6]. Five pilot EIP projects have been selected for implementation within existing industrial estates: three were selected in 2005 (Ulsan EIP, Pohang EIP and Yeosu EIP) and a further two (Banwol-Sihwa EIP and Cheongju EIP) in 2006 [24]. In addition, a few EIP projects have been prepared voluntarily in several industrial regions [25, 26]. According to the governmental plan for implementing EIPs, the Korean EIP model will be developed in three stages, as illustrated in Table 1. The Korean EID model is intended to emerge after 2015 from the results of the pilot projects [7, 24]. Therefore, this study focuses on the common tendencies of the pilot EIP projects in Korea rather than on a definitive Korean EID model.


1 Underlying approach to EID: Government driven eco-industrial parks

The Korean EID model is characterised by eco-industrial parks driven by the effective environmental management of industrial estates. The Korean government initiated EIPs as a tool of cleaner production to address the environmental problems of industrial estates. The governmental policy on the environmental management of industry has three main dimensions: improving environmental and economic efficiency by accelerating cleaner production at the individual firm level, by implementing EIPs at the industrial estate level and by disseminating EIPs at the national industry level [24]. According to the EIP plan, three necessities of an EIP are stressed: the implementation of a sustainable industrial system, the reformation of conventional industrial estates into more environmentally sound ones and the reinforcement of environmental policy on the location of industries [27]. Consequently, the Korean EIP scheme points strongly towards more effective environmental management of industrial estates through a top-down managerial approach, rather than towards business opportunities pursued in a business-centric way.

Table 1. Long-term EIP plan in Korea (source: modified from [24])

Stage | Goal | Main contents
1st stage (2005~2009) | Pilot EIP projects | Building an implementation framework for an EIP; networking material and energy flows within an industrial estate through the pilot EIP projects on the selected industrial estates
2nd stage (2010~2014) | Expanding circular resource networks | Expanding the circular resource networks to a national scale by leveraging the results of the pilot projects; solidifying cooperation with regional communities
3rd stage (2014~2019) | Establishing a Korea-specific EIP model | Designing resource-circular industrial estates based on the principles of industrial ecology; applying the Korea-specific EIP scheme to all new industrial estates from the planning stage

2 Organisational framework of the Korean EIPs

The five pilot EIP projects, each located at a different industrial estate, are spread across the country. Inter-firm collaboration tends to be limited to a specific industrial estate, with little cooperation between the EIPs. Each pilot EIP project is, however, directly connected with the government through governmental funding and the guidelines each pilot project must meet. Each project is therefore strongly dependent on the government rather than on the local authority or local community. Although local authorities also tend to part-fund the projects, the influence of the government remains greater than that of the local authorities, as local authority funding is given in the expectation of governmental funding. Each pilot EIP project is composed of several sub-projects or research projects based on a consortium with research institutes and local organisations.


As the pilot EIP projects were asked to provide the industrial estate with most of the expertise, including the technology necessary for an EIP, they tended to pursue a few grand technology projects. Therefore, although the situation may vary, each pilot EIP project generally forms a consortium with several research institutes and local organisations to perform independent tasks. In addition, the government guidelines, grounded in the environmental management policy of accelerating cleaner production at the individual firm level first, require the pilot projects to undertake common compulsory tasks such as process diagnosis or ISO 14000 certification [24]. The organisational framework therefore inclines towards the sum of specific top-down tasks rather than the collection of bottom-up needs.

3 Operational framework of the Korean EIPs

The independence of the pilot EIPs from one another has also made the operational framework of each pilot EIP project quite independent of the others. Even each sub-project or research project within a pilot EIP project has its own operational framework. The IT platform of the database and information network used to identify synergy opportunities in one pilot EIP project differs from those of the others. It is therefore difficult to generalise about the operational framework and tools, as every pilot EIP project has developed its own independently. In addition, the Korean pilot EIP projects appear to have concentrated on developing technological tools and breakthrough technologies. The government leads the pilot projects to develop resource management technologies that can be adopted by every industrial estate, such as water pinch analysis, integrated resource recovery systems and chemical life cycle management. The government intends to disseminate these resource management technologies to all EIPs after studying their feasibility in the pilot projects until 2009 [24].

4 Reflections from the Comparison of the Two EID Models

As described above, there are several differences between the two EID models, NISP and the Korean EIPs. Table 2 summarises the main differences between the two models in terms of the fundamental approach to EID and the organisational and operational frameworks.

1 Reflections from the underlying approach to EID

NISP is business-centred, business-led industrial symbiosis, while the Korean EIPs can be regarded as an environmental management programme. The NISP approach can lead to the voluntary involvement of businesses based on economic benefits, while the Korean EIP programme may improve the environmental quality of industrial estates. It is difficult to determine which approach is more desirable in terms of sustainable development, which should attain both economic and environmental improvements. As economic benefits may prevail over environmental ones in the NISP context when the two conflict, while the EID programme is usually supported by public funding, it is debatable whether business benefits should be prioritised. On the other hand, the voluntary participation of businesses can hardly be expected when environmental management is overemphasised, as in the Korean EIP context.

Table 2. Summary of the comparison of EID models between NISP and Korean EIPs

Aspect | NISP in the UK | EIPs in Korea
Approach to EID | Business-centred and business-led industrial symbiosis | EIPs for the effective environmental management of industry
Organisational framework | Large-region-based national network; managed by an independent third party combining top-down and bottom-up approaches; practitioner network for facilitating synergies; funded by public organisations including regional agencies; expertise, including technology, supported from an external network | Within an industrial estate or closely located estates; government-driven top-down approach; technology-development-oriented organisation; mainly funded by the government; self-development of technologies for synergy
Operational framework | Unified operational framework across the nation; development of diverse operational tools for creating synergy ideas | Independent operational frameworks between EIPs; development of technological solutions

It is considered that priority should be given to voluntary involvement in the initial stage of EID. If businesses do not participate in an EID programme voluntarily, synergy opportunities rarely arise and consequently industrial sustainability does not improve. NISP regards the voluntary involvement of businesses as a critical factor in the successful implementation of EID, because it considers that synergy ideas should be created by the organisations that can realise them. Businesses that are concerned about leaks of business information can hardly be expected to create synergy ideas proactively and make good progress when they are involved reluctantly. Therefore, the Korean EIPs should consider a business-centric approach to attract the voluntary involvement of businesses in the initial stage, reflecting the experience and approach of NISP.

2 Reflections from the organisational framework

One of the most distinct differences between the two models in terms of organisational framework is the geographical boundary of the collaboration network. While NISP adopted a large-region-based national network, each Korean EIP focuses its collaboration boundary on an individual industrial estate. Which scale of geographical boundary is better for EID? There seem to be a couple of reasons why an EIP model looks suitable for Korean industrial conditions. Firstly, when very large numbers of firms are located in one industrial estate, as occurs in Korea, inter-firm collaboration can be activated within the industrial estate. Secondly, as some large industrial estates in Korea are independently managed by central government, it may be more efficient to transform them directly into EIPs than to approach EID through regional development.


However, as most large industrial estates in Korea are composed of firms belonging to similar industrial sectors aimed at cluster effects, it is necessary to address this homogeneity in order to promote by-product exchange. In addition, EIPs centred on large-scale industrial estates may exclude small-scale industrial estates or scattered firms located outside industrial estates. It has been argued, on the basis of German and Austrian cases, that a large industrial region can generate more effective EID performance than a single industrial estate [1, 9]. Single industrial estates containing a limited number of firms and industrial sectors are not only vulnerable to the fragility of business links, which are difficult to replace within an industrial estate, but also restrict the opportunities for by-product exchange. A large industrial region, on the other hand, can increase the flexibility of business links through industrial diversity and can facilitate by-product markets, as it contains many potential collaboration candidates. NISP operates on a large regional scale to create flexible inter-firm synergies with a critical mass of cross-sector industries. It is therefore recommended that the geographical boundary of the Korean EIPs be expanded to a regional scale. In order to address the homogeneity that is one of the most critical problems for EID in Korea, it is necessary to expand the geographical boundary to a larger region that contains diverse industrial estates and provides greater opportunities for by-product exchange. Then, while the main EID activities at the industrial estate level remain inter-firm sharing among firms of similar industrial sectors, by-product exchange can be activated between industrial estates at the regional level. As the distances between industrial estates in Korea are relatively short, the associated economic and environmental burden may be small. However, how large is the optimum regional boundary in Korea? The regional scale of NISP cannot simply be replicated for Korean EID, as the geographical, economic, industrial and administrative conditions in the UK differ from those in Korea. The optimum scale of the geographical boundary in Korea therefore needs further study. The optimum geographical boundary for promoting by-product exchange will be a function of the scale of firms, the density of firms located in an industrial estate, the diversity of industrial sectors, logistics in the region and the governance structure of the regional administration.

3 Reflections from the operational framework

Every regional NISP shares an operational framework and tools, while the Korean pilot EIP projects do not. Each Korean pilot EIP project develops its own tools independently and flexibly, concentrating on the characteristics of its industrial estate and the participating firms. However, this independence also causes redundancy in resource use and difficulties in disseminating operational tools and good practice between EIPs. As every pilot EIP has spent resources developing its own IT platform, not only is resource use duplicated, but it is also hard to share the systems between EIPs. In addition, the incompatibility of operational frameworks may impede collaborative synergy projects between EIPs. To overcome these problems and to facilitate inter-regional cooperation, it is necessary to standardise the operational framework. NISP has developed a diverse range of operational tools, while the Korean EIPs have concentrated on technology development.


In practice, priority in an EID programme should be given to practical tools for synergy idea creation and development rather than to technology, as synergy ideas can be generated without high-level technology. Although technological capability is also critical to solving the problems of inter-firm collaboration, it can be developed in partnership with external experts. In addition, as industrial circumstances differ between regions and synergies can be created in diverse directions under rapidly varying industrial conditions, the diverse viewpoints of industrial practitioners can identify more productive synergy ideas than a few experts can. Therefore, the Korean EIPs need to develop operational tools such as NISP's training programme, its diverse workshops and its PAG arrangement. In particular, the Korean EIPs should consider introducing the NISP workshop programmes, as their format appears highly interactive as well as very productive: by engaging forty to fifty delegates from diverse cross-sector organisations in one morning workshop, over 100 potential synergy ideas can be identified [21].

In addition, sustainability criteria and economic instruments need to be developed in an EID programme. As industrial symbiosis is realised on the basis of business benefits, environmental or social benefits may be sacrificed in some instances. As many EID programmes are funded by public organisations, including government and local authorities, EID programmes should contribute to improving public benefits, and any decision should therefore be made on the basis of balanced criteria. To address this issue it may be necessary to develop drivers that 'push' the environmental and social benefits, such as economic instruments or a sustainability index. Although NISP already demonstrates the environmental benefits of its performance, such as reductions in carbon dioxide and waste and the creation of jobs, it may be necessary to expand and standardise the criteria so as to quantify improvements in sustainability and to decide which is the better option when several options are under consideration.

4 Limitations of the study

This study excludes the policy framework, as the relevant documents on NISP are insufficient for analysis. The policies include domestic and EU regulations, economic instruments and strategies impacting on NISP. As the policy framework can be one of the critical background conditions of the programme, the evaluation of an EID model will yield more reliable results when the policy framework is studied further. Peter Laybourn, the Programme Director of NISP, also emphasised the necessity of further study on the policy framework by raising the question, "What within that policy framework could be changed to make the conditions for IS more favourable?" [12, p. 17].

5 Conclusions

It may still be too early to decide whether NISP is a successful case, as the programme has less than ten years of experience, including the pilot projects. Although it is hard to say that the programme is successful in all respects, it can be regarded as one of the most active cases in the world at present, as it has made distinct achievements and the number of participating firms has increased rapidly. The Korean pilot EIP projects started just a few years ago and appear to have experienced much trial and error in their implementation. Therefore, reflections drawn from the experience of the NISP model are discussed in this paper to provide useful implications for the Korean EID programme.


Comparing the two models on the basis of the three aspects, this study has suggested several implications, including adopting a business-centred approach, expanding EIPs to regional EID, standardising the operational framework and developing innovative synergy generation tools.

References

[1] Sterr T, Ott T (2004) The industrial region as a promising unit for eco-industrial development - reflections, practical experience and establishment of innovative instruments to support industrial ecology. Journal of Cleaner Production 12:947-965
[2] Laybourn P, Clark W (2004) National Industrial Symbiosis Programme: A year of achievement. NISP (National Industrial Symbiosis Programme)
[3] Potts Carr AJ (1998) Choctaw Eco-Industrial Park: An ecological approach to industrial land-use planning and design. Landscape and Urban Planning 42:239
[4] Chertow MR (2007) "Uncovering" industrial symbiosis. Journal of Industrial Ecology 11:11-30
[5] Gibbs D, Deutz P (2005) Implementing industrial ecology? Planning for eco-industrial parks in the USA. Geoforum 36:452
[6] Park H-S, Won J-Y (2007) Ulsan Eco-industrial Park: Challenges and opportunities. Journal of Industrial Ecology 11:11-13
[7] Lee K, Yoon C, Lee MY, Kim JY, Kim SD, Byun KJ, Kwon MJ, Lee SI, Ma HR, Jin HJ, An DK, Kim JW, Kim HS, Moon SW, Lee T, Choi J (2003) Master plan for implementing an eco-industrial park to develop the foundation of cleaner production. Korea Ministry of Commerce, Industry and Energy (MOCIE)
[8] Chiu ASF, Yong G (2004) On the industrial ecology potential in Asian developing countries. Journal of Cleaner Production 12:1037
[9] Schwarz EJ, Steininger KW (1997) Implementing nature's lesson: The industrial recycling network enhancing regional development. Journal of Cleaner Production 5:47-56
[10] Korhonen J, Niemeläinen H, Pulliainen K (2002) Regional industrial recycling network in energy supply - the case of Joensuu city, Finland. Corporate Social Responsibility and Environmental Management 9:170-185
[11] Gibbs D (2003) Trust and networking in inter-firm relations: The case of eco-industrial development. Local Economy 18:222
[12] Laybourn P (2007) NISP: Origins and overview. In: Lombardi R, Laybourn P (eds) Industrial Symbiosis in Action: Report on the 3rd International Industrial Symbiosis Research Symposium, Birmingham, England, August 5-6, 2006. Yale School of Forestry & Environmental Studies
[13] BCSD-NSR (2002) National Industrial Symbiosis Programme (NISP): Business delivery of political strategy on resource productivity - "Making it happen". Business Council for Sustainable Development North Sea Region
[14] Curry RW (2003) Mersey Banks Industrial Symbiosis Project: A study into opportunities for sustainable development through inter-company collaboration. North West Chemical Initiative
[15] Parry C (2005) TVISP Final Report: Jan 2003 - Dec 2004. Clean Environment Management Centre (CLEMANCE), University of Teesside
[16] Kane G, Parry C, Street G (2005) Tees Valley Industrial Symbiosis Project: A case study on the implementation of industrial symbiosis. CLEMANCE, University of Teesside


[17] NISP (2007) Synergie: Newsletter for the National Industrial Symbiosis Programme, Issue No 3. National Industrial Symbiosis Programme
[18] NISP. NISP homepage. Available from: http://www.nisp.org.uk/default.aspx
[19] Laybourn P (2003) National Industrial Symbiosis Programme. In: Business Strategy and the Environment Conference, Leicester, UK
[20] NISP. NISP News. Available from: http://www.nisp.org.uk/article_index.aspx
[21] Clark W, Laybourn PT (2005) A case for publicly-funded macro industrial symbiosis networks: A model from the UK's National Industrial Symbiosis Programme (NISP). In: 11th Annual International Sustainable Development Research Conference, Helsinki, Finland. ERP Environment
[22] NISP West Midlands (2005) NISP: Quarterly Newsletter No 5. NISP West Midlands, Birmingham
[23] Choi J (1995) Development of an ecology-oriented industrial estate: Application of industrial ecology's viewpoint. pp 71-91
[24] Chung D (2006) Eco-industrial park in Korea: Current status and future plan. In: The 4th International Conference on Eco-Industrial Parks, Seoul, Korea. Korea National Cleaner Production Center
[25] Oh DS, Kim KB, Jeong SY (2005) Eco-industrial park design: A Daedeok Technovalley case study. Habitat International 29:269
[26] Kim H (2007) Building an eco-industrial park as a public project in South Korea: The stakeholders' understanding of and involvement in the project. Sustainable Development 15:357-369
[27] KNCPC (2004) Understanding of an eco-industrial park. Korea Ministry of Commerce, Industry and Energy; Korea National Cleaner Production Centre


On Applications of Semiparametric Multiple Index Regression

Eun Jung Kim

Laboratoire de Statistique Théorique et Appliquée, Université Paris VI, Paris, France
Laboratoire de Statistique, CREST-INSEE, Malakoff, France
[email protected]

Abstract. Generalized linear models (GLM) have been used as a classical parametric method for estimating a conditional regression function. We examine the practical performance of multiple-index modelling (MIM) as an alternative semiparametric approach, focusing especially on models with a binary response Y and multivariate covariates X. We use two estimation methods, the refined Outer Product of Gradients (rOPG) and the refined Minimum Average Variance Estimation (rMAVE), defined in Xia et al. (2002). We show by simulation that multiple-index modelling can be much more efficient than the GLM methodology and its usual single-index modelling (SIM) generalisation for modelling the regression function.

1 Introduction

Let us consider the classical problem of estimating a regression function $m(x) = E(Y \mid X = x)$ from independent observations $(X_i, Y_i) \in \mathbb{R}^{p+1}$, $i = 1, \ldots, n$, of a random vector $(X, Y) \in \mathbb{R}^{p+1}$. When $Y$ is a binary variable, a commonly used parametric estimation method is the so-called "logit model", which belongs to the class of generalized linear models (GLM). One assumes that

$$m(x) = F(\beta_0 + \beta_1^T x), \quad \text{where } F(s) = \frac{1}{1 + \exp(-s)} \text{ and } (\beta_0, \beta_1^T)^T \in \mathbb{R}^{p+1}.$$

More recently, a number of authors have proposed estimating the parameters $\beta_0$ and $\beta_1$ and the function $F(\cdot)$ simultaneously, using the single-index modelling (SIM) methodology. In this paper, as an extension of logit models and semiparametric SIM, we examine the practical performance and the utility of semiparametric multiple-index modelling (MIM). The basic assumption of MIM is that all the relevant information provided by $X$ is contained in $d$ factors, that is, $d$ linear combinations of the components of $X$, which can be interpreted as projection directions for $X$. This assumption can be more realistic in practice than the assumption used in GLM and SIM, in which a single factor is supposed to carry all the relevant information in $X$.
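As a point of reference, the parametric logit baseline above can be fitted with standard software. The following is a minimal sketch on simulated data; the sample size, coefficient values and the use of the statsmodels library are illustrative assumptions, not part of the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 400, 3
X = rng.standard_normal((n, p))
beta0, beta1 = -0.5, np.array([1.0, -2.0, 0.5])

# m(x) = F(beta0 + beta1' x) with the logistic link F(s) = 1/(1 + exp(-s))
prob = 1 / (1 + np.exp(-(beta0 + X @ beta1)))
Y = rng.binomial(1, prob)

# Maximum likelihood fit of the logit model
fit = sm.Logit(Y, sm.add_constant(X)).fit(disp=0)
print(fit.params)   # should be close to (-0.5, 1.0, -2.0, 0.5)
```

The semiparametric alternatives discussed next drop the assumption that the link $F$ is known and that a single linear combination of $X$ suffices.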


In the statistical literature, several papers are concerned with multiple-index inference, known as dimension reduction techniques (Li (1991), Cook (1994), Hristache et al. (2001), Xia et al. (2002) and Delecroix et al. (2006)). MIM requires somewhat troublesome computational implementation. From the literature, we chose two methods for their simplicity of implementation, namely the refined Outer Product of Gradients (rOPG) and the refined Minimum Average Variance Estimation (rMAVE) (Xia et al., 2002), which are derived from the Minimum Average Variance Estimation (MAVE) method. These procedures estimate the regression function and the multiple indices simultaneously by local polynomial fitting; see Xia et al. (2002) for more on their advantages. rOPG and rMAVE applied to SIM were studied in detail, in theory as well as in practice, by Xia (2006); they can also be applied to the multiple-index framework. The paper is organized as follows. Section 2 explains the MIM methodology, including a comparison with SIM and GLM. Section 3 provides a brief description of the two estimation methods used in the paper. Section 4 illustrates the practical performance of MIM using simulated and real data. Section 5 concludes with a discussion.

2 MIM Methodologies

A semiparametric multiple-index model is defined as

$$m(X) = g(B^T X), \qquad (2.1)$$

where $g : \mathbb{R}^d \to \mathbb{R}$ is an unknown link function and $B$ is a $p \times d$ matrix of indices, so that $x \mapsto B^T x$ maps $\mathbb{R}^p$ into $\mathbb{R}^d$ ($d < p$).
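To make the index-recovery idea concrete, the sketch below simulates from a two-index binary model and estimates the span of $B$ with a plain (unrefined) OPG step: the gradient of $E(Y \mid X)$ is estimated at each observation by locally weighted linear regression, and the leading eigenvectors of the averaged outer product of gradients estimate the index space. The link function, bandwidth rule and sample sizes are illustrative assumptions, and the paper's rOPG and rMAVE add iterative refinement not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from a two-index binary model P(Y=1|X) = g(B^T X).
n, p, d = 500, 5, 2
B = np.zeros((p, d))
B[0, 0] = B[1, 0] = 1 / np.sqrt(2)      # first index: (X1 + X2)/sqrt(2)
B[2, 1] = 1.0                           # second index: X3
X = rng.standard_normal((n, p))
index = X @ B
prob = 1 / (1 + np.exp(-(index[:, 0] + np.sin(index[:, 1]))))  # nonlinear link
Y = rng.binomial(1, prob).astype(float)

# Plain OPG: a local linear fit around each X_i yields a gradient estimate of
# m(x) = E(Y|X=x); average the gradient outer products over the sample.
h = 1.5 * n ** (-1.0 / (p + 4))         # crude rule-of-thumb bandwidth
M = np.zeros((p, p))
for i in range(n):
    Z = X - X[i]                        # design centred at X_i
    w = np.exp(-0.5 * np.sum((Z / h) ** 2, axis=1))    # Gaussian kernel weights
    D = np.hstack([np.ones((n, 1)), Z])
    A = D.T @ (D * w[:, None]) + 1e-8 * np.eye(p + 1)  # small ridge for stability
    beta = np.linalg.solve(A, D.T @ (w * Y))
    M += np.outer(beta[1:], beta[1:]) / n              # beta[1:] = local gradient

# The top-d eigenvectors of M span (approximately) the index space span(B).
eigvals, eigvecs = np.linalg.eigh(M)
B_hat = eigvecs[:, -d:]
print(np.round(B_hat, 2))
```

Note that only the column span of $B$ is identifiable, not $B$ itself, so the estimate should be compared with the true $B$ up to an invertible $d \times d$ transformation.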