Handbook of Optoelectronics
Handbook of Optoelectronics Volume I
edited by
John P Dakin University of Southampton, UK
Robert G W Brown University of Nottingham, UK
New York London
Taylor & Francis is an imprint of the Taylor & Francis Group, an informa business
Published in 2006 by
CRC Press, Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2006 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-10: 0-7503-0646-7 (Hardcover)
International Standard Book Number-13: 978-0-7503-0646-1 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com. Taylor & Francis Group is the Academic Division of Informa plc.
Editorial Board

Editors-in-Chief
John P Dakin and Robert G W Brown

Section Editors
Roel Baets, Ghent University – IMEC, Ghent, Belgium
Jean-Luc Beylat, Alcatel, Villarceaux, France
Edward Browell, NASA Langley Research Center, Hampton, Virginia
Robert G W Brown, Department of Electronic Engineering, University of Nottingham, Nottingham, United Kingdom
John P Dakin, Optoelectronics Research Centre, University of Southampton, Highfield, United Kingdom
Michel Digonnet, Edward L. Ginzton Laboratory, Stanford University, Stanford, California
Galina Khitrova, College of Optical Sciences, University of Arizona, Tucson, Arizona
Peter Raynes, Department of Engineering, University of Oxford, Oxford, United Kingdom
Alan Rogers, Department of Electronic Engineering, University of Surrey, Guildford, United Kingdom
Tatsuo Uchida, Department of Electronics Engineering, Tohoku University, Sendai, Japan
List of contributors

Takao Ando Shizuoka University Shizuoka, Japan
Nadir Dagli University of California Santa Barbara, California
Nicholas Baynes CDT Limited Cambridge, United Kingdom
John Dakin ORC, Southampton University Southampton, United Kingdom
Zbigniew Bielecki Military University of Technology Warsaw, Poland
Xavier Daxhelet Ecole Polytechnique de Montreal Montreal, Quebec, Canada
Anders Bjarklev Bjarklev Consult APS Roskilde, Denmark
Michel Digonnet Stanford University Stanford, California
Nikolaus Boos EADS Eurocopter SAS Marignane, France
Uzi Efron Ben-Gurion University Beer-Sheva, Israel
Jens Buus Gayton Photonics Gayton, United Kingdom
Günter Gauglitz Institut für Physikalische und Theoretische Chemie Tübingen, Germany
Chien-Jen Chen Onetta Inc. San Jose, California
Ron Gibbs Gibbs Associates Dunstable, United Kingdom
Dominique Chiaroni Alcatel CIT Marcoussis, France
Martin Grell University of Sheffield Sheffield, United Kingdom
Krzysztof Chrzanowski Military University of Technology Warsaw, Poland
Nick Holliman University of Durham, Durham, United Kingdom
David Coates CRL Hayes, United Kingdom
Kazuo Hotate University of Tokyo Tokyo, Japan
Michel Joindot France Telecom Lannion, France
Tom Markvart University of Southampton Southampton, United Kingdom
George K Knopf The University of Western Ontario London, Ontario, Canada
Tanya M Monro University of Southampton Southampton, United Kingdom
Ton Koonen Eindhoven University of Technology Eindhoven, The Netherlands
Johan Nilsson University of Southampton Southampton, United Kingdom
Hidehiro Kume Hamamatsu Photonics KK Shizuoka, Japan
Yoshi Ohno National Institute of Standards and Technology Gaithersburg, Maryland
Suzanne Lacroix Ecole Polytechnique de Montreal Montreal, Quebec, Canada

Jesper Lægsgaard University of Southampton Southampton, United Kingdom

John N Lee Naval Research Laboratory Washington, District of Columbia

Christian Lerminiaux Corning SA – CERF Avon, France

Robert A Lieberman Intelligent Optical Systems Inc. Torrance, California

John Love Australian National University Canberra, Australia
Susanna Orlic Technical University Berlin Berlin, Germany

Antoni Rogalski Military University of Technology Warsaw, Poland

Alan Rogers University of Surrey Guildford, United Kingdom

Neil Ross University of Southampton Southampton, United Kingdom

Tsutae Shinoda Fujitsu Laboratories Ltd Akashi, Japan

Hilary G Sillitto Edinburgh, United Kingdom
Makoto Maeda Home Network Company, SONY Kanagawa, Japan
Anthony E Smart Scattering Solutions, LLC Costa Mesa, California
Michael A Marcus Eastman Kodak Company Rochester, New York
Brian Smith Pips Technology Hampshire, United Kingdom
Euan Smith CDT Limited Cambridge, United Kingdom

Peter G R Smith University of Southampton Southampton, United Kingdom

Günter Steinmeyer Max-Born-Institute for Nonlinear Optics and Short Pulse Spectroscopy Berlin, Germany

Klaus Streubel OSRAM Opto Semiconductors Regensburg, Germany
Kenkichi Tanioka NHK Science and Technical Research Laboratories Tokyo, Japan

Heiju Uchiike Saga University Saga, Japan

J Michael Vaughan Research Consultant, Optoelectronics Buckinghamshire, United Kingdom

Tuan Vo-Dinh Oak Ridge National Laboratory Oak Ridge, Tennessee
Masayuki Sugawara NHK Science and Technical Research Laboratories Tokyo, Japan
David O Wharmby Technology Consultant Ilkley, United Kingdom
Yan Sun Onetta Inc. San Jose, California
William S Wong Onetta Inc. San Jose, California
Acknowledgments

Firstly we must thank all the many leading scientists and technologists who have contributed so generously to the chapters of this book. It is no small task to produce a comprehensive and dispassionately accurate summary, even of your own research field, and we are most grateful to all of our authors.

John Dakin would like to acknowledge all his family, close friends and colleagues at Southampton University who have been most understanding during the production of this Handbook. Robert Brown would like to acknowledge the constant support of his wife and close family throughout the preparation of this book.

We both wish to give special thanks to the (UK) Institute of Physics Publishing (IoPP) staff who did so much during the book's development and production period. Gillian Lindsay worked hard with the two of us at the outset some years ago, and Karen Donnison took over in the middle period and patiently cajoled the editors and authors to deliver on their promises. Lastly we thank Dr. John Navas, who took over the reins in the final stages. He carried the book forward from IoPP to Taylor and Francis in the last few months and enabled the final product to be delivered.

Robert G W Brown
John P Dakin
Introduction

Optoelectronics is a remarkably broad scientific and technological field that supports a multi-billion US-dollar per annum global industry, employing tens of thousands of scientists and engineers. The optoelectronics industry is one of the great global businesses of our time.

In this Handbook, we have aimed to produce a book that is not just a text containing theoretically sound physics and electronics coverage, nor just a practical engineering handbook, but a text designed to be strong in both these areas. We believe that, with the combined assistance of many world experts, we have succeeded in achieving this very difficult aim. The structure and contents of this Handbook have proved fascinating to assemble, using this input from so many leading practitioners of the science, technology and art of optoelectronics.

Today's optical telecommunications, display and illumination technologies rely heavily on optoelectronic components: laser diodes, light emitting diodes, liquid crystal and plasma screen displays etc. In today's world it is virtually impossible to find a piece of electrical equipment that does not employ optoelectronic devices as a basic necessity – from CD and DVD players to televisions, from automobiles and aircraft to medical diagnostic facilities in hospitals and telephones, from satellites and space-borne missions to underwater exploration systems – the list is almost endless. Optoelectronics is in virtually every home and business office in the developed modern world, in telephones, fax machines, photocopiers, computers and lighting.

'Optoelectronics' is not precisely defined in the literature. In this Handbook we have covered not only optoelectronics as a subject concerning devices and systems that are essentially electronic in nature, yet involve light (such as the laser diode), but we have also covered closely related areas of electro-optics, involving devices that are essentially optical in nature but involve electronics (such as crystal light modulators).

To provide firm foundations, this Handbook opens with a section covering 'Basic Concepts'. The 'Introduction' is followed immediately by a chapter concerning 'Materials', for it is through the development and application of new materials and their special properties that the whole business of optoelectronic science and technology now advances. Many optoelectronic systems still rely on conventional light sources rather than semiconductor sources, so we cover these in the third chapter, leaving semiconductor matters to a later section. The detection of light is fundamental to many optoelectronic systems, as are optical waveguides, amplifiers and lasers, so we cover these in the remaining chapters of the Basic Concepts section.

The 'Advanced Concepts' section focuses on three areas that will be useful to some of our intended audience: advanced optics and photometry, which are important now, and non-linear and short-pulse effects, which are important now and will become increasingly so in the future.

'Optoelectronic Devices and Techniques' is a core foundation section for this Handbook, as today's optoelectronics business relies heavily on such knowledge. We have attempted to cover all the main areas of semiconductor optoelectronic devices and materials in the eleven chapters in this section, from light emitting diodes and lasers of great variety to fibers, modulators and amplifiers. Ultra-fast and integrated devices are increasingly important, as are organic electroluminescent devices and photonic bandgap and crystal fibers. Artificially engineered materials provide a rich source of possibility for next generation optoelectronic devices.
At this point the Handbook 'changes gear', and we move from the wealth of devices now available to us to how they are used in some of the most important optoelectronic systems available today.

We start with a section covering 'Communication', for this is how the developed world talks and communicates by internet and email today; we are all now heavily dependent on optoelectronics. Central to such optoelectronic systems are transmission, network architecture, switching and multiplex architectures – the focus of our chapters here. In Communication we already have a multi-tens-of-billions-of-dollars-per-annum industry today.

'Imaging and displays' is the other industry measured in the tens of billions of dollars per annum range at the present time. We deal here with most if not all of the range of optoelectronic techniques used today: from cameras, vacuum and plasma displays to liquid crystal displays and light modulators, and from electroluminescent displays and exciting new 3-dimensional display technologies just entering the market place in mobile telephone and laptop computer displays, to the very different application area of scanning and printing.

'Sensing and Data Processing' is a growing area of optoelectronics that is becoming increasingly important – from non-invasive patient measurements in hospitals to remote sensing in nuclear power stations and aircraft. At the heart of many of today's sensing capabilities is the business of optical fiber sensing, so we begin this section of the Handbook there, before delving into remote optical sensing and military systems (at an unclassified level, for herein lies a problem for this Handbook: much of the current development and capability in military optoelectronics is classified and unpublishable because of its strategic and operational importance). Optical information storage and recovery is already a huge global industry supporting the computer and media industries in particular; optical information processing shows promise but has yet to break into major global utilization. We cover all of these aspects in our chapters here.

'Industrial, Medical and Commercial Applications' of optoelectronics abound, and we cannot possibly do justice to all the myriad inventive schemes and capabilities that have been developed to date. However, we have tried hard to give a broad overview within major classification areas, to give you a flavor of the sheer potential of optoelectronics for application to almost everything that can be measured. We start with the foundation areas of spectroscopy and the increasingly important surveillance, safety and security possibilities. Actuation and control, the link from optoelectronics to mechanical systems, is now pervading nearly all modern machines: cars, aircraft, ships, industrial production and many more. Solar power is, and will continue to be, of increasing importance, with potential for urgently needed breakthroughs in photon to electron conversion efficiency. Medical applications of optoelectronics are increasing all the time, with new learned journals and magazines regularly being started in this field.

Finally we come to the art of practical optoelectronic systems: how do you put optoelectronic devices together into reliable and useful systems, and what are the 'black art' experiences learned through painful experience and failure? This is what other optoelectronic books never tell you, and we are fortunate to have a chapter that addresses many of the questions we should be thinking about as we design and build systems, but often forget or neglect at our peril.

In years to come, optoelectronics will develop in many new directions. Some of the more likely directions to emerge by 2010 will include optical packet switching, quantum cryptographic communications, three-dimensional and large-area thin-film displays, high-efficiency solar-power generation, widespread bio-medical and bio-photonic disease analyses and treatments, and optoelectronic purification processes. Many new devices will be based on quantum dots, photonic
crystals and nano-optoelectronic components. A future edition of this Handbook is likely to report on these rapidly changing fields currently pursued in basic research laboratories.

We are confident you will enjoy using this Handbook of Optoelectronics, derive fascination and pleasure in this richly rewarding scientific and technological field, and apply your knowledge in either your research or your business.

Robert G W Brown
John P Dakin
Table of Contents

BASIC CONCEPTS
Section Editor: Alan Rogers

A1.1 An introduction to optoelectronics  Alan Rogers
A1.2 Optical materials  Neil Ross
A1.3 Incandescent, discharge and arc lamp sources  David O Wharmby
A1.4 Detection of optical radiation  Antoni Rogalski and Zbigniew Bielecki
A1.5 Propagation along optical fibres and waveguides  John Love
A1.6 Introduction to lasers and optical amplifiers  William S Wong, Chien-Jen Chen and Yan Sun

ADVANCED CONCEPTS
Section Editors: Alan Rogers and Galina Khitrova

A2.1 Advanced optics  Alan Rogers
A2.2 Basic concepts in photometry, radiometry and colorimetry  Yoshi Ohno
A2.3 Nonlinear and short pulse effects  Günter Steinmeyer

OPTOELECTRONIC DEVICES AND TECHNIQUES
Section Editors: John P Dakin, Roel Baets and Michel Digonnet

B1.1 Visible light-emitting diodes  Klaus Streubel
B1.2 Semiconductor lasers  Jens Buus
B2 Optical detectors and receivers  Hidehiro Kume
B3 Optical fibre devices  Suzanne Lacroix and Xavier Daxhelet
B4 Optical modulators  Nadir Dagli
B5 Optical amplifiers  Johan Nilsson, Jesper Lægsgaard and Anders Bjarklev
B6 Ultrafast optoelectronics  Günter Steinmeyer
B7 Integrated optics  Nikolaus Boos and Christian Lerminiaux
B8 Infrared devices and techniques  Antoni Rogalski and Krzysztof Chrzanowski
B9 Organic light emitting devices  Martin Grell
B10 Microstructured optical fibres  Tanya M Monro, Anders Bjarklev and Jesper Lægsgaard
B11 Engineered optical materials  Peter G R Smith
Handbook of Optoelectronics Volume II
edited by
John P Dakin University of Southampton, UK
Robert G W Brown University of Nottingham, UK
New York London
Taylor & Francis is an imprint of the Taylor & Francis Group, an informa business
Table of Contents

COMMUNICATION
Section Editor: Jean-Luc Beylat

C1.1 Optical transmission  Michel Joindot and Michel Digonnet
C1.2 Optical network architectures  Ton Koonen
C1.3 Optical switching and multiplexed architectures  Dominique Chiaroni

IMAGING AND DISPLAYS
Section Editors: Peter Raynes and Tatsuo Uchida

C2.1 Camera technology  Kenkichi Tanioka, Takao Ando and Masayuki Sugawara
C2.2 Vacuum tube and plasma displays  Makoto Maeda, Tsutae Shinoda and Heiju Uchiike
C2.3 Liquid crystal displays  David Coates
C2.4 Technology and applications of spatial light modulators  Uzi Efron
C2.5 Organic electroluminescent displays  Nicholas Baynes and Euan Smith
C2.6 Three-dimensional display systems  Nick Holliman
C2.7 Optical scanning and printing  Ron Gibbs

SENSING AND DATA PROCESSING
Section Editors: John P Dakin, Roel Baets and Edward Browell

C3.1 Optical fibre sensors  John P Dakin, Kazuo Hotate, Robert A Lieberman and Michael A Marcus
C3.2 Remote optical sensing by laser  J Michael Vaughan
C3.3 Military optoelectronics  Hilary G Sillitto
C3.4 Optical information storage and recovery  Susanna Orlic
C3.5 Optical information processing  John N Lee

INDUSTRIAL, MEDICAL & COMMERCIAL APPLICATIONS
Section Editor: Roel Baets

C4.1 Spectroscopic analysis  Günter Gauglitz and John P Dakin
C4.2 Intelligent surveillance  Brian Smith
C4.3 Optical actuation and control  George K Knopf
C4.4 Optical to electrical energy conversion: solar cells  Tom Markvart
C4.5 Medical applications of photonics  Tuan Vo-Dinh

THE ART OF PRACTICAL OPTOELECTRONICS
Section Editors: John P Dakin and Roel Baets

C5 The art of practical optoelectronic systems  Anthony E Smart
A1.1 An introduction to optoelectronics

Alan Rogers

A1.1.1 Objective
In this chapter, we shall take a quite general look at the nature of photons and electrons (and of their interactions) in order to gain a familiarity with their overall properties, insofar as they bear upon our subject. Clearly it is useful to acquire this 'feel' in general terms before getting immersed in some of the finer detail which, whilst very necessary, does not allow the inter-relationships between the various aspects to remain sharply visible. The intention is that the familiarity acquired by reading this chapter will facilitate an understanding of the other chapters in the book. Our privileged vantage point for the modern views of light has resulted from a laborious effort by many scientists over many centuries, and a valuable appreciation of some of the subtleties of the subject can be obtained from a study of that effort. A brief summary of the historical development is our starting point.

A1.1.2 Historical sketch
The ancient Greeks speculated on the nature of light from about 500 BC. The practical interest at that time centred, inevitably, on using the sun's light for military purposes; and the speculations, which were of an abstruse philosophical nature, were too far removed from the practicalities for either to have much effect on the other. The modern scientific method effectively began with Galileo (1564–1642), who raised experimentation to a properly valued position. Prior to his time experimentation was regarded as a distinctly inferior, rather messy activity, definitely not for true gentlemen. (Some reverberations from this period persist, even today!)

Newton was born in the year in which Galileo died, and these two men laid the basis for the scientific method which was to serve us well for the following three centuries. Newton believed that light was corpuscular in nature. He reasoned that only a stream of projectiles, of some kind, could explain satisfactorily the fact that light appeared to travel in straight lines. However, Newton recognized the difficulties in reconciling some experimental data with this view, and attempted to resolve them by ascribing some rather unlikely properties to his corpuscles; he retained this basic corpuscular tenet, however. Such was Newton's authority, resting as it did on an impressive range of discoveries in other branches of physics and mathematics, that it was not until his death (in 1727) that the views of other men such as Euler, Young and Fresnel began to gain their due prominence. These men believed that light was a wave motion in a 'luminiferous aether', and between them they developed an impressive theory which well explained all the known phenomena of optical interference and diffraction. The wave theory rapidly gained ground during the late 18th and early 19th centuries.
The final blow in favour of the wave theory is usually considered to have been struck by Foucault (1819–1868) who, in 1850, performed an experiment which proved that light travels more slowly in water than in air. This result agreed with the wave theory and contradicted the corpuscular theory. For the next 50 years the wave theory held sway until, in 1900, Planck (1858–1947) found it mathematically convenient to invoke the idea that light was emitted from a radiating body in discrete packets, or 'quanta', rather than continuously as a wave. Although Planck was at first of the opinion that this was no more than a mathematical trick to explain the experimental relation between emitted intensity and wavelength, Einstein (1879–1955) immediately grasped the fundamental importance of the discovery and used it to explain the photoelectric effect, in which light acts to emit electrons from matter: the explanation was beautifully simple and convincing. It appeared, then, that light really did have some corpuscular properties.

In parallel with these developments, there were other worrying concerns for the wave theory. From early in the 19th century its protagonists had recognized that 'polarization' phenomena, such as those observed in crystals of Iceland spar, could be explained if the light vibrations were transverse to the direction of propagation. Maxwell (1831–1879) had demonstrated brilliantly (in 1864), by means of his famous field equations, that the oscillating quantities were electric and magnetic fields. However, there arose persistently the problem of the nature of the 'aether' in which these oscillations occurred and, in particular, how astronomical bodies could move through it, apparently without resistance. A famous experiment in 1887, by Michelson and Morley, attempted to measure the velocity of the earth with respect to this aether, and consistently obtained the result that the velocity was zero. This was very puzzling in view of the earth's known revolution around the sun. It thus appeared that the medium in which light waves propagate did not actually exist!

The null result of the aether experiment was incorporated by Einstein into an entirely new view of space and time, in his two theories of relativity: the special theory (1905) and the general theory (1915). Light, which propagates in space and oscillates in time, plays a crucial role in these theories. Thus physics arrived (ca. 1920) at the position where light appeared to exhibit both particle (quantum) and wave aspects, depending on the physical situation. To compound this duality, it was found (by Davisson and Germer in 1927, after a suggestion by de Broglie in 1924) that electrons, previously thought quite unambiguously to be particles, sometimes exhibited a wave character, producing interference and diffraction patterns in a wave-like way.

The apparent contradiction between the pervasive wave-particle dualities in nature is now recognized to be the result of trying to picture all physical phenomena as occurring within the context of the human scale of things. Photons and electrons appear to behave either as particles or as waves to us only because of the limitations of our modes of thought. We have been conditioned to think in terms of the behaviour of objects such as sticks, stones and waves on water, the understanding of which has been necessary for us to survive, as a species, at our particular level of things. In fact, the fundamental atomic processes of nature are not describable in these same terms and it is only when we try to force them into our more familiar framework that apparent contradictions such as the wave-particle duality of electrons and photons arise. Electrons and photons are neither waves nor particles but are entities whose true nature is somewhat beyond our conceptual powers. We are very limited by our preference (necessity, almost) for having a mental picture of what is going on. Present-day physics with its gauge symmetries and field quantizations rarely draws any pictures at all, but that is another story...

A1.1.3 The wave nature of light
In 1864, Clerk Maxwell was able to express the laws of electromagnetism known at that time in a way which demonstrated the symmetrical interdependence of electric and magnetic fields. In order to
complete the symmetry he had to add a new idea: that a changing electric field (even in free space) gives rise to a magnetic field. The fact that a changing magnetic field gives rise to an electric field was already well known, as Faraday's law of induction. Since each of the fields could now give rise to the other, it was clearly conceptually possible for the two fields mutually to sustain each other, and thus to propagate as a wave. Maxwell's equations formalized these ideas and allowed the derivation of a wave equation. This wave equation permitted free-space solutions which corresponded to electromagnetic waves with a defined velocity; the velocity depended on the known electric and magnetic properties of free space, and thus could be calculated. The result of the calculation was a value so close to the known velocity of light as to make it clear that light could be identified with these waves, and was thus established as an electromagnetic phenomenon. All the important features of light's behaviour as a wave motion can be deduced from a detailed study of Maxwell's equations. We shall limit ourselves here to a few of the basic properties.

If we take Cartesian axes Ox, Oy, Oz (figure A1.1.1) we can write a simple sinusoidal solution of the free-space equations in the form:

$E_x = E_0 \exp[i(\omega t - kz)]$
$H_y = H_0 \exp[i(\omega t - kz)]$   (A1.1.1)

These two equations describe a wave propagating in the Oz direction with electric field ($E_x$) oscillating sinusoidally (with time t and distance z) in the xz plane and the magnetic field ($H_y$) oscillating in the yz plane. The two fields are orthogonal in direction and have the same phase, as required by the form of Maxwell's equations: only if these conditions obtain can the two fields mutually sustain each other. Note also that the two fields must oscillate at right angles to the direction of propagation, Oz. Electromagnetic waves are transverse waves. The frequency of the wave described by equation (A1.1.1) is given by:

$f = \omega / 2\pi$
Figure A1.1.1. Sinusoidal electromagnetic wave.
and its wavelength by:

$\lambda = 2\pi / k$

where $\omega$ and $k$ are known as the angular frequency and propagation constant, respectively. Since f intervals of the wave distance $\lambda$ pass each point on the Oz axis per second, it is clear that the velocity of the wave is given by:

$c = f\lambda = \omega / k$
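As a quick numerical check of these relations, the following sketch evaluates $k$, $\omega$ and $f$ for an assumed 633 nm free-space wavelength (a familiar helium-neon laser line, chosen for illustration only, not a value from the text) and confirms that $f\lambda$ recovers the free-space velocity:

```python
import math

# Check f = omega/(2*pi), lambda = 2*pi/k and c = f*lambda = omega/k
# for a free-space wave at an assumed wavelength of 633 nm.
c0 = 2.997925e8        # free-space velocity of light, m/s (value quoted in this chapter)
wavelength = 633e-9    # m, assumed illustrative value

k = 2 * math.pi / wavelength  # propagation constant, rad/m
omega = c0 * k                # angular frequency, rad/s
f = omega / (2 * math.pi)     # frequency, Hz

print(f"k = {k:.4e} rad/m, omega = {omega:.4e} rad/s, f = {f:.4e} Hz")
print(f"f * lambda = {f * wavelength:.6e} m/s")  # recovers c0
```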
The free-space wave equation shows that this velocity should be identified as follows:

$c_0 = 1 / (\varepsilon_0 \mu_0)^{1/2}$   (A1.1.2)

where $\varepsilon_0$ is a parameter known as the electric permittivity, and $\mu_0$ the magnetic permeability, of free space. These two quantities are coupled, independently of equation (A1.1.2), by the fact that both electric and magnetic fields exert mechanical forces, a fact which allows them to be related to a common force parameter, and thus to each other. This 'force-coupling' permits a calculation of the product $\varepsilon_0 \mu_0$ which, in turn, provides a value for $c_0$, using equation (A1.1.2). (Thus Maxwell was able to establish that light in free space consisted of electromagnetic waves.)

We can go further, however. The free-space symmetry of Maxwell's equations is retained for media which are electrically neutral and which do not conduct electric current. These conditions obtain for a general class of materials known as dielectrics; this class contains the vast majority of optical media. In these media the velocity of the waves is given by:

$c = (\varepsilon \varepsilon_0 \mu \mu_0)^{-1/2}$   (A1.1.3)

where $\varepsilon$ is known as the relative permittivity (or dielectric constant) and $\mu$ the relative permeability of the medium. $\varepsilon$ and $\mu$ are measures of the enhancement of electric and magnetic effects, respectively, which are generated by the presence of the medium. It is, indeed, convenient to deal with new parameters for the force fields, defined by:

$D = \varepsilon \varepsilon_0 E$
$B = \mu \mu_0 H$

where D is known as the electric displacement and B the magnetic induction of the medium. More recently they have come to be called the electric and magnetic flux densities, respectively. The velocity of light in the medium can (from equation (A1.1.3)) also be written as

$c = c_0 / (\varepsilon \mu)^{1/2}$   (A1.1.4)

where $c_0$ is the velocity of light in free space, with an experimentally determined value of 2.997925 × 10^8 m s^-1. For most optical media of any importance we find that $\mu \approx 1$, $\varepsilon > 1$ (hence the name 'dielectrics'). We have already noted that they are also electrical insulators. For these, then, we may write equation (A1.1.4) in the form:

$c \approx c_0 / \varepsilon^{1/2}$
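The following sketch evaluates equations (A1.1.2) and (A1.1.4) numerically; the relative permittivity used for the dielectric is an assumed illustrative value (roughly that of fused silica at optical frequencies), not one quoted in this chapter:

```python
import math

# c0 from the free-space constants (equation (A1.1.2)), then the reduced
# velocity in a dielectric (equation (A1.1.4)) with mu = 1.
eps0 = 8.854187817e-12    # electric permittivity of free space, F/m
mu0 = 4 * math.pi * 1e-7  # magnetic permeability of free space, H/m

c0 = 1 / math.sqrt(eps0 * mu0)  # ~2.9979e8 m/s

eps_r = 2.1  # assumed relative permittivity of the medium (illustrative only)
mu_r = 1.0   # mu ~ 1 for common optical dielectrics

c = c0 / math.sqrt(eps_r * mu_r)

print(f"c0 = {c0:.6e} m/s")
print(f"c in the medium = {c:.4e} m/s")      # ~2.07e8 m/s
print(f"slowing factor c0/c = {c0 / c:.3f}") # equals sqrt(eps_r) when mu = 1
```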
Figure C4.1.5. Behaviour of light incident on a dielectric interface, demonstrating refraction and total internal reflection. (Labels: incident intensity I_λ(0), reflected intensity I_λR, transmitted intensity I_λ(d), refractive indices n_1 and n_2, angle β and critical angle β_crit.)
We shall now move on to consider inelastic processes, where either light energy is lost totally, or where some light is re-emitted with a change in photon energy.
Absorption

Absorption is a loss of light in a material, involving an internal energy exchange, resulting in the destruction of a photon. It usually takes place only at specific wavelengths, corresponding to a defined transition between two energy states. Such a transition occurs when the natural frequency of charge displacement in the material is in resonance with the frequency of the incident radiation. The absorption involves a process that leads to a new excited (either stable or meta-stable) energy state in the material. In the case of electronic transitions in an atom, this involves creation of a new electron density distribution. In the case of molecules, it is possible instead to excite a new resonant, vibrational or rotational, mode of the molecule.

An electronic transition, from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO), as described mathematically by equation C4.1.3, is demonstrated in figure C4.1.6. Note that in figure C4.1.6 the phases of the orbitals are also given, in order to explain the symmetry of the orbitals, which causes specific transitions, shown by single, double or triple arrows. These demonstrate the differences in intensity of the transition. According to transition rules, only transitions between even and odd states are permitted. These HOMO and LUMO states correspond to bonding or antibonding orbitals [10, 11]. The energy required for the transition is provided by the radiation and the process is known as (induced) absorption. Within this absorption band, anomalous dispersion is often observed (i.e. the refractive index increases with wavelength). Depending on the molecular environment and possible pathways of deactivation (see table C4.1.2), the new excited state can exist for a time varying over the wide range of 10^-13 to 10^-3 s.

From the Schrödinger equation, the corresponding energy states can be calculated as a set of eigenvalues, by using the electronic, vibrational, and rotational eigenfunctions and inserting the boundary conditions that are appropriate to the molecular structure. The relevant energy levels or states in the UV/VIS spectral region usually correspond to electronic levels, whereas in the infrared area they correspond to the energies of molecular vibrational modes.
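The resonance condition can be made concrete with a short worked example: a photon is absorbed when its energy $hc/\lambda$ matches the energy gap of the transition. A minimal sketch, with the 500 nm absorption wavelength assumed purely for illustration:

```python
# Photon energy corresponding to an absorption at an assumed wavelength.
# The 500 nm (green, visible) value is illustrative only.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light in vacuum, m/s
eV = 1.602176634e-19  # J per electronvolt

wavelength = 500e-9   # m, assumed absorption wavelength

delta_E = h * c / wavelength  # energy gap matched by the photon, J
print(f"Transition energy: {delta_E:.3e} J = {delta_E / eV:.2f} eV")  # ~2.48 eV
```

Transitions in the UV/VIS region thus correspond to a few electronvolts, while infrared vibrational transitions are a factor of ten or more smaller.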
Figure C4.1.6. Bonding and anti-bonding orbitals (HOMO, LUMO) and their transition possibilities. The diagram shows the bonding orbitals σ and π, the non-bonding orbital n and the anti-bonding orbitals π* and σ*, together with the relative orbital phases (+/−) and the σσ*, ππ* and nπ* transitions.
Energy level diagrams

In figure C4.1.7, a Jablonski energy level diagram for a typical organic molecule is depicted. It shows the energy levels and allowable transitions. It shows both electronic and vibrational states, but rotational states have been omitted to keep the diagram simple. At room temperature, most of the molecules are resting in the lowest electronic and vibrational state, so resonance absorption usually takes place to various excited higher-level vibrational levels or electronic states. To illustrate these, to the right of this energy level diagram, three types of resulting intensity spectra (absorbance (A), fluorescence (F), phosphorescence (P)) are shown as a function of wavenumber. The relative strengths of absorptions depend on the momentum of the transitions and are determined by a set of spectroscopic selection rules. Radiative transitions requiring a change of electron spin are not allowed and, in organic molecules, a normal ground state has a singlet property. Thus, a normal absorption spectrum is for a transition from S0 to S1. Such a transition occurs within a very short time, typically about 10^-15 s. Usually only the first excited electronic state is important in spectroscopy, since all higher states have very short lifetimes (relaxation times). For organic molecules, usually paired spins are present (singlet states), whereas in inorganic transition metal complexes the ground state can be a triplet and other states with a large number of unpaired electrons may occur according to Hund's rule. An important example of a triplet ground state molecule is oxygen.
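The dominance of the lowest vibrational level at room temperature follows from the Boltzmann factor $\exp(-\Delta E / k_B T)$. A minimal sketch, assuming a typical vibrational spacing of 1000 cm^-1 (an illustrative value, not one from the text):

```python
import math

# Fractional population of the first excited vibrational level relative to
# the ground level, N(v=1)/N(v=0) = exp(-h*c*nu/(kB*T)), for an assumed
# vibrational spacing quoted in wavenumbers.
h = 6.62607015e-34    # Planck constant, J s
c_cm = 2.99792458e10  # speed of light, cm/s (so wavenumbers in cm^-1 work directly)
kB = 1.380649e-23     # Boltzmann constant, J/K

nu_tilde = 1000.0     # assumed vibrational spacing, cm^-1
T = 300.0             # temperature, K

ratio = math.exp(-h * c_cm * nu_tilde / (kB * T))
print(f"N(v=1)/N(v=0) = {ratio:.4f}")  # ~0.008: under 1% of molecules are excited
```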
Table C4.1.2. Possible deactivation processes from the vibrational ground state of the first excited singlet state.

Process type | Abbreviation | Name of process | Process description
Radiationless transition | Te | Thermal equilibration | Relaxation from a high vibrational level within the present electronic state. Lifetime depends on the matrix and on inner deactivation (transfer of energy into torsional vibrations).
Radiationless transition | Ic | Internal conversion | Isoelectronic transition within the same energy level system, from the vibrational ground state of a higher electronic state into a very high energy vibrational state of a lower electronic state.
Radiationless transition | Isc | Intersystem crossing (intercombination) | Isoelectronic transition into another energy level system (S ↔ T), usually from the vibrational ground state of the electronically excited state; the corresponding radiative transition is forbidden because of the spin inversion prohibition (except for heavy nuclei). Therefore phosphorescence is a 'forbidden' process.
Spontaneous emission as a radiative process | F | Fluorescence | Emission without spin inversion, e.g. from S1 to S0, within the singlet system (provided that the lifetime of the electronically excited state is about 10^-8 s).
Spontaneous emission as a radiative process | P | Phosphorescence | Emission out of the triplet into the singlet system (provided that the lifetime is about 10^-3 s); very low probability, only possible at low temperatures or in a matrix.
Photochemical reactions | | | Photoinduced reaction starting from the S1 term leading to ionization, cleavage or, in a bimolecular step, to a new compound or an isomer (trans-cis), provided that the lifetime in the excited state is relatively long.
shift in the relative position of the electronic levels or to a change in the degree of polarization. In general, UV/VIS spectra are less easily used to characterize chemical components. However, in some specific cases, for example steroids, incremental rules can be determined that allow both determination of, and discrimination between, some of these molecules. Various transitions between electronic and vibrational levels are shown in the energy level diagram in figure C4.1.7. Depending on the electronic states involved, these transitions are called σ→σ*, π→π* or n→π* transitions. They have been marked in figure C4.1.6. In aromatic compounds, electron-attracting or electron-repelling components, such as functional groups in the ortho-, para- or meta-positions of an aromatic ring, can affect these energy levels, and they may also be changed by solvent effects or with pH changes.

Linewidth and line broadening effects

Although the basic absorption process occurs at discrete wavelengths, there are many mechanisms that can broaden the effective absorption line.
Figure C4.1.7. Jablonski energy level diagram, showing electronic and vibrational energy levels, and the resulting spectra for different transitions and pathways of deactivation.
The linewidth of absorptions is a very important aspect of spectroscopy [4], but we will present only a very brief discussion of the more important mechanisms. The main ways in which the effective linewidth of an absorption can be affected are:
- Velocity or movement effects, causing translational Doppler shifts
- Rotations of the molecule
- Interactions with other atoms in a molecule
- Interaction with other atoms or molecules by collisions
- Interaction with nearby atoms or molecules via electric field effects (ligand fields)
Doppler shifts are usually only important for gas absorption lines. Single-direction wavelength shifts of this nature will always occur in any fluids that are moving rapidly (e.g. fast-moving gas jets), but the most usual effect is due to the rapid random movement of gas molecules, which is described by the well-known gas laws and, of course, depends strongly on the gas temperature. Molecular rotations can occur according to the number of degrees of freedom of the molecule and these will cause a set of discrete energy (frequency) shifts, depending on the quantized rotational levels. Interaction with other atoms in the molecule increases, for example, the number of degrees of freedom of vibrational and rotational transitions (e.g. the number of possible modes of vibration), so molecules containing more than about six atoms usually tend to lack a fine-line absorption band structure. A simple gas with atoms or molecules having a fine-line absorption structure at low pressure will exhibit increasingly broad lines as the pressure is increased. This effect is called collision broadening or pressure broadening, and is due to the additional energy levels that arise during collisions.
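Before considering the pressure effects further, the temperature dependence of the Doppler contribution can be made concrete with the standard Gaussian linewidth formula, as in the hedged sketch below; the function name and example line are our own illustrative assumptions rather than part of the original text.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e8      # speed of light, m/s
AMU = 1.66053907e-27  # atomic mass unit, kg

def doppler_fwhm_ghz(wavelength_nm: float, mass_amu: float, temp_k: float) -> float:
    """Doppler-broadened FWHM of a gas absorption line,
    delta_nu = nu0 * sqrt(8 kT ln2 / (m c^2)), in GHz."""
    nu0 = C / (wavelength_nm * 1e-9)
    m = mass_amu * AMU
    return nu0 * math.sqrt(8 * K_B * temp_k * math.log(2) / (m * C**2)) / 1e9

# A methane line near 1651 nm is roughly half a GHz wide at room
# temperature, and the width grows only as sqrt(T):
print(doppler_fwhm_ghz(1651.0, 16.0, 293.0))  # ~0.56 GHz
```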
If the pressure is increased sufficiently, any closely spaced sets (manifolds) of narrow absorption lines will eventually merge into a continuous band. Eventually, in the condensed state, the absorption is almost invariably of a broadband nature. This is because the large number of possible energy levels in absorbing bands can produce a near continuum. Electronic energy levels, for example, are affected by the electric fields of nearby molecules, a phenomenon known as ligand field interaction.

Bandshifts of electronic levels

Apart from line broadening, line-shifts can occur. As shown in figure C4.1.8, the π→π* and n→π* transitions are influenced differently by polar solvents. If a cyclohexane solvent is replaced by methanol, the polarity increases considerably and, accordingly, the π→π* band is red-shifted by several nanometres, a so-called bathochromic effect, and the n→π* band is shifted to the blue (hypsochromic). Such bandshifts can be used to obtain information regarding the properties of the energy levels and transitions. A change in intensity of the absorption band is described as either hyperchromic or hypochromic, depending on whether there is an increase or a decrease in intensity.

Quantifying absorption levels in media

The absorption of samples is measured in units of absorbance, which is the logarithm of the ratio of the light energy (I₀) entering a test specimen to the energy leaving it (I_out). The transmission, T, is given by I_out/I₀ and the absorbance, A, is given by log₁₀(1/T), so:

A = log₁₀(I₀/I_out)

An absorbance of 1 therefore implies only 10% transmission. One should take care in the use of the word 'absorbance', as defined above and as measured by instruments, as this word seems to suggest that the optical loss is only due to absorption, whereas, in the case of turbid samples, it might actually arise from a combination of absorption and scattering.
Figure C4.1.8. Illustration of band shifts caused by polar solvents.
The power, P(λ), transmitted through a sample in a small wavelength interval at a centre wavelength λ, is given by Lambert's law:

P(λ) = P₀(λ) exp[−α(λ)l]

where P₀(λ) is the power entering the sample in this wavelength interval, α(λ) is the attenuation coefficient of the material at wavelength λ, and l is the optical path-length through the sample to the point at which P(λ) is measured. This is only true for homogeneous, non-turbid, non-fluorescent samples. The sample can be said to have a transmission T(λ), at the wavelength λ, where:

T(λ) = exp[−α(λ)l]

Alternatively, the sample can be said to have an absorbance A(λ), where:

A(λ) = log₁₀[1/T(λ)] = log₁₀[P₀(λ)/P(λ)] = 0.43 α(λ)l

The factor 0.43, which is the numerical value of log₁₀(e), has to be included to account for the use of log₁₀ for decadic absorbance calculations, whereas natural (base-e) exponents are normally used for attenuation coefficients. According to the Beer–Lambert law, the value of A for an absorbing analyte is given by:

A = log₁₀(I₀/I) = MCL

where M is the (decadic) molar extinction coefficient, C is the molar concentration and L is the optical path-length. When broadband radiation is used and merely the intensity is measured, there is a deviation from the above law whenever the absorption varies with wavelength: each spectral component will obey the law, but the overall intensity will not. In all cases, when only measuring total intensity, care must be taken to avoid errors in calibration and/or absorption coefficient measurement. Without care, it is easy to obtain wrong values of absorption coefficients or to appear to be measuring the wrong compound. Fortunately, most transmission or absorption measurements are taken using spectrometers that show the full transmission or absorbance spectrum, so the behaviour of each component can be viewed separately. Even at single wavelengths, however, the law has the wrong dependence on concentration if there is clustering or association of molecules, or if concentration-dependent or pH-dependent chemical change occurs.
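The relations above are easy to check numerically; the hedged sketch below (the function names are our own) verifies that A = 0.43 α l connects the natural-base and decadic conventions, and evaluates a Beer–Lambert absorbance for illustrative values.

```python
import math

def transmission(alpha_per_cm: float, path_cm: float) -> float:
    """T = exp(-alpha * l), with a natural-base attenuation coefficient."""
    return math.exp(-alpha_per_cm * path_cm)

def absorbance_from_transmission(t: float) -> float:
    """Decadic absorbance A = log10(1/T)."""
    return math.log10(1.0 / t)

def beer_lambert(m_extinction: float, conc_mol_l: float, path_cm: float) -> float:
    """A = M * C * L for an absorbing analyte."""
    return m_extinction * conc_mol_l * path_cm

alpha, l = 2.0, 1.0
A = absorbance_from_transmission(transmission(alpha, l))
print(A, 0.434 * alpha * l)             # both ~0.87, confirming A = 0.43*alpha*l
print(beer_lambert(5500.0, 1e-4, 1.0))  # A = 0.55 for an illustrative dye
```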
Frustrated total internal reflection, evanescent field absorption

In contrast to what is usually taught in classical optics, total internal reflection is not really 'total' [10] at the interface between the media. A fuller examination, using either classical geometric optics or quantum optics, predicts that part of the radiation will penetrate the interface and then, for a certain distance, be guided outside the optically denser medium, leading to a lateral shift in the apparent point of reflection from the interface. This field that extends into the less dense medium is called the evanescent field. In the medium having the lower refractive index, it decreases exponentially away from the interface. Any absorbing medium within an evanescent-field penetration depth, d, of this field can absorb the radiation and result in an attenuation of the otherwise total reflectance. The value of d (measured from
the interface) is given by equation (C4.1.7):

d = λ / (2π √(n₂² sin²Θ₂ − n₁²))    (C4.1.7)
This penetration depth is typically of the order of half the wavelength of the guided radiation. Thus, absorption of the evanescent wave will reduce the intensity of the otherwise 'totally internally reflected' light, via this electric field vector coupling. This effect is called frustrated total internal reflection. It can be put to practical use for the measurement of highly absorbing samples, which may absorb (or scatter) far too much light to be measured by traditional methods. For example, complex samples such as paints, foodstuffs or biological materials, which might absorb or scatter light very strongly, can be analysed for their strongly absorbing components by placing them in contact with a high-index glass plate and measuring the internally reflected light.

Fluorescence

Fluorescence is an inelastic energy relaxation process, which can follow the absorption of light in a material. In most materials, absorption of light (i.e. energy loss from a photon) merely causes heating of the absorbing material, with all the absorbed energy being converted to internal kinetic energy (for example, to excite molecular vibrations or phonons). However, in some materials, only part of the energy of an optically excited state is dissipated as heat, and one or more photons, of lower energy than the incident one, are radiated. This most commonly involves an internal process, which generates a photon and one or more phonons, with a total energy equal to that of the absorbed incident photon. The fluorescence is therefore usually at a longer wavelength (lower photon energy) than that of the incident light (this is called a Stokes process). The emission of light by fluorescence has no preferred direction and is said to be omni-directional. It is also randomly polarized. Of course, a single-photon fluorescent event must, by its nature, have the direction and polarization of the single photon emitted, but over a period of time the average scattered energy from many such events is omni-directional, and so fluorescent light is randomly polarized. These aspects of fluorescence can be used, together with wavelength and temporal-persistence (decay lifetime, after pulsed illumination) differences, to distinguish it from Rayleigh and/or Raman scattered light. Fluorescence detection is a valuable technique in chemical sensing, where it can be used to directly monitor certain fluorescent chemicals (e.g. poly-aromatic hydrocarbons, mineral oil, fluorescein tracer dye, etc). However, by deliberately introducing reactive fluorophores, which will act as a chemical indicator, it can also be used to monitor reactions or reagents having no fluorescence. Many optical techniques have been used [13] and many optical sensing applications of these are covered in detail in the proceedings of the series of 'Europt(r)ode' congresses. There is also an excellent textbook, 'Optical Sensors', edited by R. Narayanaswamy and O.S. Wolfbeis [14].

Chemiluminescence and bioluminescence

Fluorescent light can arise as the result of chemical reactions, an effect known as chemiluminescence. The reaction must be of a type to leave electrons in excited states that can then decay radiatively, usually with the emission of visible photons. Such reactions are now very commonly seen in the plastic-tube-enclosed chemical 'light sticks' that children are often given, but which also have more serious uses for emergency lighting or as a distress signal at sea or in mountains.
These lights operate by breaking chemical-filled ampoules, enabling the reactants to mix and produce the light.
A compound commonly used to produce green light is called luminol, clearly deriving its name from its light-emitting properties. Luminol (C8H7N3O2) is known by several other chemical names, including 5-amino-2,3-dihydro-1,4-phthalazine-dione, o-aminophthalyl hydrazide, 3-aminophthalic hydrazide and o-aminophthaloyl hydrazide. When luminol is added to a basic solution of oxidizing compounds, such as perborate, permanganate, hypochlorite, iodine or hydrogen peroxide, in the presence of a metallic-ion catalyst, such as iron, manganese, copper, nickel or cobalt, it undergoes an oxidation reaction to produce excited electronic states, which decay to give green light. The strongest response is usually seen with hydrogen peroxide. Because photo-multipliers can detect single-photon events in the darkened state, very low concentrations of oxidizing agents can be measured, including oxidizing gases such as ozone, chlorine and nitrogen dioxide. Numerous biochemicals can also cause a light-emitting reaction and hence be detected. Nature has, however, used chemiluminescence long before man, in a phenomenon known as bioluminescence, where the chemicals are generated within biological organisms. Well-known examples of this include the light from glowworms and fireflies, and the dull glow arising from the bioluminescence of myriads of tiny organisms seen in agitated seawater, for example in the wake of boats. Many deep-sea creatures use bioluminescence, either to make themselves visible or to confuse prey. The reactions are essentially of the same type as that with luminol, except that the chemicals originate from biological reactions.

Raman spectroscopy

Raman spectroscopy [15] relies on a form of inelastic scattering, where the periodic electric field of the incident radiation again induces a dipole moment in a material but, in this case, there is an interaction with material vibrations, or phonons, which results in a change of photon energy. The effect depends on the polarizability, α, of the bond at the equilibrium inter-nuclear distance, and on the variation of the polarizability with the displacement, x, about this distance, according to equation (C4.1.8) below. Substituting this into the expression for the induced dipole moment gives equation (C4.1.9), which has three terms—the first represents the elastic Rayleigh scattering (scattering at the same frequency as the incident radiation: 0–0 transition).
α = α₀ + (∂α/∂x)₀ x    (C4.1.8)

In equation (C4.1.9) below, there are polarizability terms that have the effect of reducing or increasing the re-radiated frequency, compared with that of the incident light, by an amount equal to the molecular vibrational and/or rotational frequency. This causes so-called Stokes (downward) and anti-Stokes (upward) shifts in the re-radiated optical frequency. This light is referred to as Raman scattered light.

p = α₀E₀ cos 2πν₀t + ½ (∂α/∂x)₀ x₀E₀ [cos 2π(ν₀ − ν₁)t + cos 2π(ν₀ + ν₁)t]    (C4.1.9)

where the first term represents the Rayleigh scattering.
The coupling of the vibrational eigenfrequencies (ν₁) of the scattering molecule to the exciting frequency (ν₀) is sometimes called a photon–phonon interaction process. Raman spectroscopy obeys selection rules that are different to those for normal vibrational-absorption infrared spectroscopy. Because of this, many users regard analysis by Raman scattering and by conventional absorption-based infrared spectroscopy as complementary techniques. Unfortunately, the Raman scattering intensity is very weak, even when compared with Rayleigh scattering. Despite this, however, it is in many cases now a preferred technique, since it allows the use of low-cost visible or near-infrared semiconductor lasers, with compact instrumentation.
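The Stokes and anti-Stokes terms of equation (C4.1.9) translate directly into absolute line positions once the pump wavelength is fixed; the hedged sketch below (our own function name and example values) performs that conversion in the wavenumber units chemists favour.

```python
def raman_line_nm(pump_nm: float, shift_cm1: float, stokes: bool = True) -> float:
    """Absolute wavelength of a Raman line for a given pump wavelength and
    Raman shift in wavenumbers (cm^-1). Stokes lines sit at lower optical
    frequency (nu0 - nu1), anti-Stokes lines at higher (nu0 + nu1)."""
    nu0_cm1 = 1e7 / pump_nm  # pump frequency expressed in cm^-1
    nu_cm1 = nu0_cm1 - shift_cm1 if stokes else nu0_cm1 + shift_cm1
    return 1e7 / nu_cm1

# 785 nm pump and the 1332 cm^-1 diamond phonon line:
print(raman_line_nm(785.0, 1332.0))         # Stokes line, ~877 nm
print(raman_line_nm(785.0, 1332.0, False))  # anti-Stokes line, ~711 nm
```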
Multi-photon spectroscopic processes

Apart from the simple processes described above, there are many light absorbing and/or re-emission processes that involve more than one photon. It is beyond the scope of the present text to discuss multi-photon spectroscopy, but it is worth making a few short comments. Examples include two-photon absorption, where the simultaneous involvement of two photons is required to excite an absorbing level (a very unlikely occurrence, therefore a weak effect), or where the first photon excites the system to a meta-stable level and the second photon then excites it from this intermediate level to a higher level. Of course, many photons can be involved if an appropriate set of meta-stable levels is present.
C4.1.3
Spectroscopic methods and instrumentation
We shall now outline some of the methods and instrument types commonly used in spectroscopy. This has always been a very fast developing field, so only the basic methods will be discussed. The interested reader should consult specialist texts for more details of instrument design, and manufacturers' data for the latest performance figures. The fastest developing component areas are those of (a) low-noise detector arrays, both 1-D and 2-D types, and (b) compact high-power semiconductor laser sources for Raman and fluorescence excitation, where performances are now achieved that were barely conceivable only 10 years ago. Just as an example, to illustrate the type of components needed, a compact fibre-compatible spectrometer source and detection unit with a broadband source and concave diffraction grating is shown in figure C4.1.9. In most cases, Xenon or halogen lamps are a convenient source of broadband white light, which (often with the help of a rear reflector, to aid launch efficiency) is coupled into a fibre-optic guide and then directed to the measurement probe. Here, transmitted or reflected light is collected and, via the coupler, guided back to a miniaturized spectrometer, which contains the diffraction grating and a photodiode array, such as a CCD detector chip. Below, we shall discuss the components of typical spectrophotometer instruments, then go on to discuss instrumentation and measurement methods.
Figure C4.1.9. Spectrometer detection unit for reflectometric measurements, containing a white light source and a CCD camera. (A mirror is used in the light source to increase launch efficiency by imaging the filament on itself to ‘fill in’ gaps in the filament.)
C4.1.3.1 Spectrometer components
We shall now briefly review the components that are used in spectroscopic instruments, starting with light sources.

Light sources

Whenever an intense monochromatic light source, or an intense light source that can be scanned over a small spectral range, is required, the source of choice is nearly always a laser. This is clearly the optimum type of source for photon correlation spectroscopy or for Raman spectroscopy, as a monochromatic source is required. However, broadband sources, such as incandescent lamps, are desired for full-spectrum spectrophotometry, as intense narrowband sources, such as lasers or low-pressure mercury lamps, either cannot be used for observing spectral variations over such a wide range or would be prohibitively expensive. Since most types of incandescent light sources were discussed in detail in Chapter A1.3, we shall cover only the most interesting spectral features of these broader-band radiation sources, in particular types covering the important UV/VIS/NIR spectral regions. Tungsten lamps are low-cost and stable sources, which give a useful output over the visible/near-infrared region (0.4–3.0 μm). Electrically heated thermal sources, such as Nernst emitters, or modern variants, are usually used to cover the infrared region beyond 3 μm. High-pressure arc and discharge lamps are attractive sources in the visible and ultraviolet, as they can provide the desired broad spectral emission, but with significantly higher spectral intensity than tungsten lamps. In the visible and near-IR region, the low cost and stable output of tungsten sources often means they are the preferred source. If operated at slightly reduced voltage, their lifetime is excellent too. However, discharge lamps are far more effective sources than tungsten in the blue and violet regions and are far better in the UV region. Thus, the type of discharge lamp most often used to cover these regions in commercial spectrophotometers is the Deuterium lamp, as this is the source that can provide the desired broadband performance at shorter optical wavelengths. Figure C4.1.10 provides a simple comparison of discharge lamps. It is worth mentioning one more broadband source that is very useful for spectroscopy: the Xenon flashlamp. Its output spectrum is, of course, very similar to that of the Xenon arc lamp shown above. However, because it is only operated in short pulses and at a very low duty cycle (maybe a 100 μs flash every 0.01 s), it can be driven at very high current density without destroying the electrodes. This provides an even brighter source than the continuously operated arc-lamp version, albeit for very short 'on' periods. These lamp sources also cover the UV, visible and part of the near-infrared spectrum. They are not only very compact and have low average power consumption, but they can also provide a high peak energy output. This can give an optical signal at the detector that may be orders of magnitude higher than the thermal noise threshold of many optical receivers. They can therefore be used with optical systems, such as fluorimeters, where large energy losses often occur, yet still provide a good signal/noise ratio at the detector. Because of their pulsed output, they are also well suited for fluorescence lifetime measurements, provided the measured lifetimes are longer than the decay time of light output from the pulsed lamp.
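As a quick illustration of the duty-cycle arithmetic above (the figures here are illustrative assumptions, not manufacturer data):

```python
# With a 100 us flash every 0.01 s the duty cycle is 1%, so for the same
# average electrical load the flashlamp's peak output can be ~100x that
# of a continuously run lamp.
flash_duration_s = 100e-6
repetition_period_s = 0.01
duty_cycle = flash_duration_s / repetition_period_s

average_optical_power_w = 10.0  # illustrative average output
peak_optical_power_w = average_optical_power_w / duty_cycle
print(duty_cycle, peak_optical_power_w)  # 0.01 and 1000.0 W peak
```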
Components for optical dispersion (wavelength separation and filtering)

Prisms

Many older spectrometers used prisms, constructed of optical glass or vitreous silica. These exhibit a change of refractive index with wavelength, thereby giving wavelength-dependent refraction, as shown in figure C4.1.11.
[Figure: relative intensity versus wavelength (200–800 nm) for a deuterium lamp, a medium-pressure mercury lamp (lines at 254, 313, 365, 404, 435, 546 and 578 nm) and a high-pressure xenon arc lamp.]
Figure C4.1.10. Comparison of emission spectra of a few discharge lamps and arc lamp sources.
By collimating the input light and refocusing it onto a slit behind the prism, it is possible to separate out 'monochromatic' radiation. In the ultraviolet, vitreous silica prisms are still occasionally used as wavelength dispersion elements, since they transmit well and their otherwise rather low material dispersion is a little higher in this region than in the visible; despite this, prisms are rarely used in spectrometers now. Because
Figure C4.1.11. Dispersion using a prism: radiation incident on the prism is refracted and partially dispersed as it is deflected towards the base of the prism. An exit slit selects the wavelength from the 'rainbow' of radiation. The red wavelength is usually less refracted than the blue one.
the prism material must be transparent and have a significant optical dispersion, greater problems with material selection arise in the mid-infrared, so this was the historical reason why diffraction gratings were first used in this wavelength region.

Diffraction gratings

Diffraction gratings are based on coherent interference of light after reflection from, or refraction through, an optical plate with multiple parallel grooves. The dispersion of a grating depends on the spacing of the grooves, according to the well-known Bragg condition that describes the diffraction angle, i.e. the angle at which there is constructive interference of the diffracted light. In the more common reflective diffraction gratings, the surface has parallel grooves, of a 'saw-tooth' cross-section, and these grooves have reflective surfaces. Apart from a high surface reflectivity, the angle of the 'saw-tooth' cross-section of the grooves to the plane of the surface is important to achieve good diffraction efficiency. The angle of the groove profile is called the blaze angle, and this defines a blaze wavelength at which the diffracted power is maximized at the desired diffraction angle. The blaze angle can be chosen to maximize diffraction for the first order, or for a higher order if desired (see figure C4.1.12) [7]. Diffraction gratings were initially machined directly in a material that acted as the mirror reflector; the greater inter-groove spacing possible in the IR relaxed the machining tolerances, which was another historical reason why gratings were first made for that region. Most diffraction gratings are now made by one of two different processes. The first process, to make 'replica' gratings, involves pressing (moulding) an epoxy-resin-coated glass substrate against a ruled (or 'master') grating that has been previously coated with an aluminium or gold film, with pre-treatment to ensure this film will 'release' easily from the surface of the master when the epoxy has set and the two substrates are separated. This process duplicates the contours when the replica grating substrate, with the reflective metal film now attached to the outer surface of the resin, is pulled away from the master. The other production process, which is becoming ever more sophisticated, is to use photo-lithography to etch a grating in the material. The regular spacing is achieved by exposing photoresist to two intersecting collimated light beams, to give a regular set of parallel and equally spaced interference fringes. These gratings can now be made by exposing the resist and etching the substrate in small steps, in a way that allows control of the blaze angle. They are called holographic gratings, because of the interference, or holography, process that is used to expose the photo-resist.
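The constructive-interference condition can be written as mλ = d(sin θi + sin θm); the hedged sketch below (the function name and geometry are our own illustrative choices) shows how the first-order angle walks with wavelength, which is what spreads the spectrum across a detector.

```python
import math

def diffraction_angle_deg(wavelength_nm: float, lines_per_mm: float,
                          incidence_deg: float, order: int = 1) -> float:
    """Diffraction angle from the grating equation
    m * lambda = d * (sin(theta_i) + sin(theta_m))."""
    d_nm = 1e6 / lines_per_mm  # groove spacing in nm
    s = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    if abs(s) > 1.0:
        raise ValueError("this order does not propagate for this geometry")
    return math.degrees(math.asin(s))

# 600 lines/mm grating, 10 degree incidence: the first order moves from
# ~3.8 deg at 400 nm to ~14.3 deg at 700 nm.
for wl in (400.0, 500.0, 600.0, 700.0):
    print(wl, round(diffraction_angle_deg(wl, 600.0, 10.0), 2))
```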
Optical detectors

Optical detectors and their associated amplifiers have been discussed in considerable detail in earlier sections, so there is no need to discuss them at length here. However, it is useful to comment that the most common single-element detectors used in wavelength-scanned spectrophotometers include photomultipliers (mainly for the UV range, below 400 nm), silicon photodiodes (visible and near-infrared range from 400 to 1000 nm) and photoconductive detectors (usually to cover the range above 1000 nm). Infrared detectors are often cooled, using thermo-electric Peltier devices or mechanical thermal-cycle engines, to improve their inherently worse noise performance.

Optical detector arrays

Detector arrays are most commonly used in the visible and near-IR region, primarily based on silicon (400–1050 nm) or GaInAs (500–1800 nm). One of the simplest approaches is to use a linear array of discrete photodiodes, each with its own separate pre-amplifier. It is more common in the silicon (400–1050 nm) wavelength range to use self-scanned arrays, however.
[Figure C4.1.12: schematic of grating diffraction, showing the undispersed zeroth order and the wavelength-dispersed −1st and +1st orders.]

[Figure C4.1.13 panels: specular reflectance; attenuated total reflectance; total reflection above the critical angle; surface plasmon resonance (thin Au layer); total internal reflection fluorescence.]
Figure C4.1.13. Illustration of the evanescent field caused by ‘total internal reflection’. It can be partially absorbed (attenuated total reflection), or can excite fluorescence and surface plasmon resonance.
Measurement of indirect effects of absorption: photothermal and photoacoustic spectroscopy

Photothermal spectroscopy is a means of detecting or monitoring the absorption of materials, usually liquids or gases, by observing the resulting heat. The idea is that an intensity-modulated intense light source, most commonly an intense laser beam, is used to illuminate the sample, raising its temperature as light energy is lost from the beam. The beam first heats the fluid and is then deflected by the transverse variation of refractive index that occurs when the resulting heated region induces convection currents; such currents give rise to thermal gradients. In addition, thermal-lensing effects can occur, which cause a variation of beam diameter after passing through the fluid, because there is a higher temperature rise along the central axis of the beam. Whichever method is used, the resulting deflections or beam-diameter changes can be observed with optical detectors, often with some means of enhancing the sensitivity to changes in the beam position or diameter by using lenses and spatial filters. Photoacoustic spectroscopy is a special case of photothermal spectroscopy, where the light from an intense source, again usually a laser, is intensity modulated, for example with a pulse, step or sinusoidal waveform, and the resulting changes in thermal expansion due to the heating are observed, usually with a traditional acoustic microphone. Gases have low specific heats and large expansion coefficients, so these are relatively easy to detect by this method. The light source can also be swept in optical frequency (or a broadband source can be passed through a swept optical filter) to allow spectral variations over an absorption line, or over a wider spectral region, to be observed. A major advantage of the method for the measurement of liquids is that it can be used with turbid samples, for example ones containing small bubbles or a suspension of non-absorbing scattering particles. In a conventional spectrometer, it is clearly not normally possible to distinguish between the light lost from the collimated beam due to elastic scattering (turbidity) and the light lost due to absorption.

Monitoring of chemiluminescence

As discussed earlier, when luminol is added to a basic solution of oxidizing compounds, such as perborate, permanganate, hypochlorite, iodine or hydrogen peroxide, in the presence of a metallic-ion catalyst, such as iron, manganese, copper, nickel or cobalt, it undergoes an oxidation reaction to produce the excited states that decay to give green light. The strongest response is usually seen with hydrogen peroxide. In order to detect, and measure, this weak green light, it is best to use a photomultiplier, with some form of light-reflecting cavity (light-integrating chamber) to ensure most of the light hits the sensitive photocathode. Because photo-multipliers have good green-light sensitivity and can detect single-photon events in the darkened state, very low concentrations of oxidizing agents can be measured, including hazardous oxidizing gases such as ozone, chlorine and nitrogen dioxide. Numerous biochemicals can also cause a light-emitting reaction and hence be detected. A particularly useful reaction for law enforcement is the one that luminol has with blood, enabling crime scenes to be sprayed with this compound and then be viewed in the dark, when telltale glows appear wherever traces of blood are present.
Chemiluminescence is the basis of a number of commercial chemical sensors for important biochemicals.

Instrumentation for fluorescence spectroscopy

Fluorescence spectroscopy involves illumination of a sample with a monochromatic or filtered light source and observation of the re-radiated signal, which is almost invariably at a longer wavelength than the incident light. It is common to observe either the fluorescence spectrum or the fluorescent decay curve,
following pulsed excitation from a pulsed source. The latter is usually a pulsed laser or filtered light from a Xenon flashlamp. For fluorescence spectra, the most common method is to use a modified spectrophotometer in which part of the omni-directional fluorescent light is observed instead of the transmitted light. This can be done either with a dedicated instrument, or using a standard commercial spectrophotometer fitted with a fluorescence attachment, which has appropriate optics to gather the fluorescent light. There is no need to modulate the light source for this measurement, unless this is needed to allow lock-in (phase-sensitive) detection methods to remove signals from unmodulated background light. When performing these measurements, considerable care has to be taken with the following filter-related aspects:
- Removal of any longer-wavelength 'side-band' light from the light source, which could be elastically scattered in the sample or the instrument and mistaken for fluorescence.
- Very careful filtering of the 'fluorescent' light signal to remove any elastically scattered source light.
The first aspect, source light filtering, is less of a problem when using laser sources, although semiconductor lasers can have some residual spontaneous-emission component at longer wavelengths, and gas or ion lasers can have residual light from the plasma discharge. Particular care is needed with incandescent, arc lamp or Xenon flashlamp sources, with their broadband emission. In a commercial spectrophotometer instrument, or a dedicated instrument (spectrofluorimeter), the built-in grating spectrometer acts as a very good filter, provided 'stray light' levels are low, and also has the advantage that the excitation wavelength can be tuned if desired. Additional rejection of long-wavelength light is usually done with 'short-pass' dichroic multi-layer edge-filters. The problem of removing elastically-scattered source light from the 'fluorescent' light signal can be solved in several ways. Narrow-band dielectric laser mirrors make excellent rejection filters in transmission mode, as these can be designed with a reflectivity of 99.9% or higher. Dichroic long-pass edge filters are also now available with excellent performance. In addition, there is a wide range of long-pass optical glass filters, which have semiconductor-band-edge type transmission behaviour, commonly having short-wavelength (i.e. shorter than the band-edge) absorbance as high as 10⁶ in a 2-mm-thick filter. Care must be taken with these, however, as many of them fluoresce at longer wavelengths, so the first filtering stage is preferably done with a dielectric filter. Note that these filtering problems are even more acute for Raman scattering, so will be discussed further in the section below. When it is desired to examine fluorescent lifetimes, the severity of the filtering problem is reduced by several orders of magnitude, as the pulsed source will usually no longer be emitting while the intensity-decay curve of the fluorescence it initiated is being observed. However, strong light pulses can upset many sensitive detection systems or associated electronics, so even here some degree of filtering is still desirable to remove light at the incident wavelength, and some source filtering may be necessary too if there is a spontaneous light decay in the laser source. When measuring fluorescent lifetime, a fast detector may be needed. Fluorescent decay is commonly in the form of an exponentially-decaying curve, but lifetimes can typically vary from days, in the case of phosphorescence, to less than nanoseconds. (Important examples of samples having decay times in nanoseconds are the organic dyes often used in dye laser systems and some semiconductor samples with short excited-state lifetimes.) When measuring very fast fluorescent decays, it is common to use photon-counting systems with photomultiplier (PMT) detectors. These have the advantage of high internal gain of the initial photo-electrons, so the input noise level of even a fast electronic pre-amplifier is easily exceeded. Also, as the desired detection time is reduced, by using a fast PMT and amplifier, the effective peak current represented by a given-size 'bunch' of electrons, which will arrive at the anode
from a single-photon event, actually increases as the time interval over which it is measured reduces (current = charge/time). Thus, using fast photon-counting technology, where each photon count is fed into sets of digital registers according to its time of arrival, very fast fluorescent decay curves can be measured. It is now becoming more common to design photon-counting systems with avalanche photodiode detectors, which are operated in a so-called 'Geiger' mode, where the incoming photon causes a full (but reversible, after the bias voltage is re-applied) conductive 'breakdown' of the reverse-biased detector diode. If there is more than one fluorophore, or more than one fluorescent decay process occurring, the decay may take the form of a bi- or multi-exponential decay curve, equivalent to a linear addition of two or more exponentially-decaying functions. In simple cases, these can be separated with software, and in some cases an appropriate choice of excitation wavelength may help to isolate individual curves in mixtures of fluorophores. Another common way of measuring fluorescence lifetime is to intensity-modulate the source with a periodic waveform, usually a sinusoidal or square-wave signal, and observe the phase delay in the fluorescent signal intensity waveform, relative to that of the incident signal (see the sketch below). This is commonly done using an electronic system, which can multiply the detected fluorescence signal by an electronic signal analogue of the incident-intensity waveform, and then average the product over a set time interval. If desired, more than one multiplication can be performed, using reference signals with different phases. Such an electronic detector system has several names, including vector voltmeter, lock-in amplifier and phase-sensitive detector. Essentially, it enables the phase difference between the two signals to be measured. The advantage is that cheaper, simpler detectors, such as silicon photodiodes, can now be used, as the illumination duty cycle is improved (to 50%, in the case of square-wave modulation), which helps to improve the signal-to-noise ratio; the disadvantage is that the shape of the decay curve cannot be seen. Another significant disadvantage is that the system now requires much better optical filtering, as any residual optical crosstalk, where elastically scattered light from the source might be detected, will alter the effective phase of the detected 'fluorescent' signal. An important feature is that fluorescence detection can be performed with highly scattering samples, such as roughly cut or even powdered materials, and can be used to analyse the surface of opaque materials, as a clear transparent sample is not required. Also, as the transmitted light level is not measured, very little sample preparation is needed. Another minor advantage is that the slightly shorter excitation wavelength means the light can be focussed to a slightly smaller diffraction-limited spot, enabling its use in fluorescence microscopes, which excite the specimen via reasonably conventional optics. These advantages apply even more to Raman spectroscopy, which will be dealt with below, so they will be repeated there for greater emphasis.
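As a minimal numerical sketch of the two lifetime read-outs described above (direct decay-curve fitting and phase-delay measurement), assuming a single-exponential decay for the phase relation; the function names and example values are our own illustrations, not from the original text:

```python
import math

def lifetime_from_phase(phase_deg: float, mod_freq_hz: float) -> float:
    """Lifetime from the phase delay of fluorescence behind a sinusoidally
    modulated source, using tan(phi) = 2*pi*f*tau (single-exponential decay)."""
    return math.tan(math.radians(phase_deg)) / (2.0 * math.pi * mod_freq_hz)

def biexponential(t_s: float, a1: float, tau1_s: float, a2: float, tau2_s: float) -> float:
    """Bi-exponential decay, the linear sum seen with two fluorophores."""
    return a1 * math.exp(-t_s / tau1_s) + a2 * math.exp(-t_s / tau2_s)

# A 45 degree phase lag at 10 MHz modulation implies tau ~ 16 ns:
print(lifetime_from_phase(45.0, 10e6))  # ~1.6e-8 s
# Intensity of a 5 ns / 20 ns fluorophore mixture, 10 ns after excitation:
print(biexponential(10e-9, 1.0, 5e-9, 0.5, 20e-9))
```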
A particular problem with fluorescence detection is that many materials will fluoresce, particularly if illuminated with UV light. These include, starting with particularly troublesome examples, organic dyes, compounds of rare-earth metals, chlorophyll, ruby, long-pass optical absorption filters, mineral oils, human sweat, many adhesives and even some optical glasses.

Instrumentation for Raman spectroscopy

Raman spectroscopy can use conventional optical materials with low-cost semiconductor lasers and high-performance cooled CCD detector arrays. A major advantage is that it can be used with highly-scattering samples, such as roughly-cut or even powdered materials, and can even analyse the surface of opaque materials. Obviously, a clear transparent sample is not required and, as the transmitted light level is not measured, very little sample preparation is needed. Another major advantage is the short excitation wavelength, which means the light can be focussed to a smaller diffraction-limited spot, enabling its use in Raman microscopes, which excite the specimen via reasonably conventional optics, and allow spatial
resolution that would be impossible with direct IR excitation. Confocal microscopy is also possible using Raman methods. Raman spectroscopy requires a laser source to excite the very weak Raman scattering and a highly sensitive spectrometer to detect the very weak scattered light. The sample is illuminated with focussed laser light and the Raman scattered light is collected. Its power spectrum is analysed as a function of optical wavelength, and is often presented in terms of wavenumbers, an optical unit of frequency more commonly used by chemists. Even in clear samples, such as optical glasses, which are only very weak Rayleigh scatterers, the Raman light intensity is usually several orders of magnitude below that of even this elastic scattering. In the early days of Raman work, it was necessary to use laser sources having a power of 1 W or more, with very sensitive photomultiplier detection systems, having substantial post-detector averaging, to measure the low levels of signal. A major part of the problem was that highly-scattering samples, such as cut materials or powders, can have elastic scattering levels 5 or more orders of magnitude higher than clear samples. Thus, it is necessary to separate out the very much stronger elastic scattering that can occur at the same wavelength as that of the incident laser light, and which might be between 5 and 9 orders of magnitude higher. It was the extensive optical filtering and the scanned (one-wavelength-at-a-time) nature of the spectrometers used that caused these systems to have very poor optical efficiency. The wavelength filtering to achieve separation usually requires at least two or maybe three stages of optical filtering to recover the desired Raman light, free from the undesired elastically-scattered light. In the early days of Raman instrumentation, it was common practice to use rather large double or triple monochromators (i.e. 2 or 3 cascaded grating spectrometers), to reduce crosstalk from so-called stray light in the spectrometers themselves, which might otherwise have caused undesirable elastically scattered light to strike the detector array sites on which the desired Raman light would normally be diffracted. Even then, simply using multiple monochromators was not always sufficient to achieve full separation of elastically scattered light, particularly as any scattered incident light could cause fluorescence on any contaminated optical surfaces it struck. If this occurred, no degree of spectral selection could eliminate it, as the fluorescence band could then overlap the desired Raman bands. Second-generation Raman systems reduced this problem by directing the light through a holographic Raman notch filter, which is a compact optical filter designed to reject just a very narrow band of light, centred at the incident laser wavelength. It was easier to construct this compact component using low-fluorescence materials, thereby easing the problem of further separation in the dispersive spectrometers. With such a filter present in the detection system, a single spectrometer could be used, saving both space and cost. However, at the same time, another modern technology emerged in the form of greatly-improved self-scanned silicon detector arrays, having very low noise and high quantum efficiency, and being capable of providing even better low-light performance than photomultipliers on each tiny pixel in the array.
This has allowed a compact state-of-the-art detector array to be placed in the output focal plane of a single monochromator, eliminating the need for a narrow output slit that rejected most of the light, and now allowing all the wavelength components of the Raman spectrum to be measured at the same time. A more recent alternative to the Raman notch filter is the use of dichroic band-edge filters. These interference filters, made in the usual way with a stack of dielectric layers on glass, have a long-wavelength-pass characteristic that removes not only the undesirable elastically-scattered light, but also the anti-Stokes Raman light. These new filters are highly effective and of relatively low cost, and are far more stable than the holographic notch filters, which were based on various polymer materials. A minor drawback is the removal of the anti-Stokes signals, but as these are not used in 90% of applications, this is not a major disadvantage. The most troublesome residual problem with Raman systems is, as it always has been, that of undesirable background signals due to fluorescence. It has been described how the effects of this could
be reduced in the spectrometer, but sample fluorescence is more difficult to remove. Fortunately, using near-infrared illumination, typically at 800 nm wavelength or above, goes a long way towards reducing the background fluorescence of most common materials and contaminants. There are also various signal-subtraction methods that can be used. These can, for example, take advantage of the polarization dependence of Raman scattering or of the temporal decay behaviour of fluorescence, but these are beyond the scope of this introductory text. All the above developments have given many orders of magnitude improvement in the spectrometer and detection system, to a situation where, now, Raman scattering is often a preferred technique to absorption spectroscopy. To repeat the Raman-specific advantages mentioned above, a major feature is that, as good optical transmission is not required, the method can be used with highly-scattering samples, such as roughly-cut or powdered materials, and even with opaque ones. It can also be used for high-resolution microscopy, because of the smaller diffraction-limited spot at the shorter excitation wavelengths. It is beyond the scope of this text to go into detail on other techniques for enhancing Raman signals, but it is useful to mention two briefly. The first is resonance Raman scattering, where the excitation wavelength is close to an absorption line of the material to be monitored. The second is surface-enhanced Raman spectroscopy (SERS), where the use of a metal surface, or a surface covered with metal particles, can greatly enhance the signal from Raman scattering. A few materials, such as silver or silver particles, are highly effective for this. Both of the above methods can enhance the Raman signal by between 4 and 8 orders of magnitude, depending on conditions. Being a surface effect, however, the SERS method is clearly very sensitive to surface preparation conditions and to subsequent treatment or contamination.

Instrumentation for photon correlation spectroscopy

The use of photon correlation spectroscopy for particle detection was discussed earlier. It is particularly useful for the determination of particle size, over a range of a few nm to a few μm, simply by measurements of scattered signals. Very small particles will undergo fast random movements, as molecular collisions move them about (Brownian motion). In a conventional instrument, a polarized monochromatic TEM00 laser beam is usually used to illuminate the sample by focussing to a narrow beam waist, and a fixed detector is used to observe the scattered light from the beam-waist area. The laser source must be stable in intensity to a small fraction of 1% and usually has a power of a few mW. Originally gas lasers were used, but compact semiconductor lasers are now replacing them. A single spatial mode (single 'speckle') in the far field will exhibit more rapid intensity changes with small scattering particles present than under the same conditions with larger particles. In order to detect the changes of phase, the scattered light is traditionally imaged through a very small hole acting as a spatial filter, in order to provide the greatest intensity modulation index (greatest fractional change in optical intensity as the optical phase changes). Clearly, a large detector without any pinhole aperture is unsuitable, as it would average the light from many interference points, so would see a smaller modulation index.
For successful operation, all the optics must be very clean, and additional opaque baffles are often used to reduce stray light. Samples must be prepared very carefully to avoid highly-scattering large dust particles or bubbles, and clearly particle aggregation (clustering) must be avoided. Following optical detection of the intensity changes, electrical spectral analysis (frequency analysis) of the signal scattered, for example at 90°, could potentially yield valuable information on particle size, with smaller particles giving higher intensity modulation frequencies. Unfortunately, very small particles also scatter very weakly and, to act as an effective spatial filter, the receiving aperture has to be very small. As a result, the received photon flux is usually very low, sometimes only a few photons per second,
making frequency analysis of a very noisy (random photon event) signal much more difficult. A preferred alternative, which is more suitable for use at low photon flux levels, is a method called photon correlation spectroscopy (PCS). Here, instead of analysing the frequency of detected signals, the times of arrival of many individual photon pulses are correlated. The decay time of what, for monodisperse (single-size) particles, is usually an exponentially-decaying correlation function can be derived using digital correlator systems. As stated earlier, this can, with knowledge of the temperature and viscosity of the fluid in which the particles are suspended, be related to the hydrodynamic radius of the particles using the Stokes–Einstein equation. The method works best with single-size (monodisperse) particles, but more complex correlation functions from suspensions of two or even three particle types/sizes can be inverted using the Laplace transformation. The photon correlation method works well at these very low light flux levels, but requires the use of single-photon-counting detectors, such as photomultipliers or silicon avalanche photodiodes (APDs). Significant developments have been made to actively quench APD photon counters that are operated in avalanche mode, at very high reverse bias, to allow fast recovery from their photon-induced breakdown, so that they are ready to detect a subsequent photon. This allows instrumentation to take advantage of their superior quantum efficiency in the near infrared [16]. Other developments have been in the design of fast correlators to process the photon count signals and recover particle size information. PCS can typically achieve an accuracy of order 0.5% in particle size on monodisperse samples. With more sophisticated signal processing, it is possible, provided conditions are suitable, to derive estimates of particle size distribution, polydispersity, molecular weight (using Svedberg's equation), rotational diffusion behaviour, particle shape and many other parameters. The greatest practical problem usually arises when large particles are present, either as an undesirable contaminant or as an inevitable feature of the sample to be monitored, as scattering from just a few large particles can often be intense enough to totally dominate the weak signals from very many much smaller ones.
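The inversion from correlation decay time to hydrodynamic radius can be sketched as below; this is a hedged, assumption-laden illustration (our own function name, 90° geometry, water at room temperature), not instrument code from the text.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_nm(decay_time_s: float, angle_deg: float,
                           wavelength_nm: float, n_medium: float = 1.33,
                           temp_k: float = 293.0,
                           viscosity_pa_s: float = 1.0e-3) -> float:
    """Radius from a PCS correlation decay time. For monodisperse particles
    the intensity correlation decays as exp(-2*D*q^2*tau); taking
    decay_time_s = 1/(2*D*q^2) gives the diffusion coefficient D, and the
    Stokes-Einstein equation then gives R = kT / (6*pi*eta*D)."""
    q = (4.0 * math.pi * n_medium / (wavelength_nm * 1e-9)) \
        * math.sin(math.radians(angle_deg) / 2.0)
    diffusion = 1.0 / (2.0 * q**2 * decay_time_s)
    return K_B * temp_k / (6.0 * math.pi * viscosity_pa_s * diffusion) * 1e9

# 633 nm light scattered at 90 degrees in water: a ~0.2 ms correlation
# decay corresponds to particles of roughly 30 nm hydrodynamic radius.
print(hydrodynamic_radius_nm(2e-4, 90.0, 633.0))
```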
Measurement with optical fibre leads

We shall now discuss how optical fibres can be used with various forms of spectroscopic instrumentation, and consider the advantages and penalties of using them. Generally, if they can be used efficiently, optical fibre leads offer tremendous advantages. First, expensive instrumentation can stay in a safe laboratory environment, free from risk of damage from chemicals, weather or careless handling. Second, the remote measurement probe can be very small, robust and immune to chemical attack. Third, there is no need to transport chemicals or other samples to the instrument, so real-time online measurements are possible, with no risk to personnel.

In-fibre light delivery and collection for transmission measurements

Transmission (and hence absorption or turbidity) measurements can be most easily achieved over optical fibre paths by using a commercial spectrophotometer with specially-designed extension leads. Many manufacturers now offer such systems as standard attachments. Typically, they have a unit that fits into the cell compartment of a standard instrument, with a first (focusing) lens that takes the collimated light that would normally pass through the sample chamber and focusses it instead into a large-core-diameter (usually >200 μm) optical fibre down-lead, and with a second lens that re-collimates light that has returned from the sample area (via the return fibre lead), to reform a low-divergence beam suitable for passage back into the detection section of the instrument. There is also a remote measurement cell in the sample-probe, connected to the remote end of both these fibre leads. In this remote sample area, a first lens collimates light (coming from the spectrometer,
via the down-lead) into a local interrogation beam. This beam passes through the remote measurement cell, after which a second lens collects the light and refocuses it into the fibre return lead going back to the spectrometer instrument. Such optical transformations lead to inevitable losses of optical power (due to reflections, aberrations and misalignments) of typically 10–20 dB (equivalent to losing 1–2 units of absorbance in the measurement range). However, most modern spectrophotometer instruments have a typical dynamic range of >50 dB, so this optical loss is a price that many users are prepared to pay in order to achieve a useful remote measurement capability. It should be noted that the optical power losses usually arise mainly from misalignments and the imperfections of the focusing and re-collimation optics, plus Fresnel reflection losses at interfaces, rather than from fibre transmission losses. If suitably collimated beams were available in the instrument, if large-core-diameter fibres could be used to connect to and from the probe, and if all optics, including fibre ends, could be anti-reflection coated, there should really be very little loss penalty. Such losses therefore arise primarily because of the need for the fibre leads to be as flexible as possible (hence the choice of small-diameter fibres) and the usual need to compromise the design on grounds of cost (although, like all such attachments, they are very expensive to buy). There are many other possible probe head designs. The simplest design, for use with measurement samples showing very strong absorption, is simply to have a probe that holds the ends of the down-lead and return fibre in axial alignment, facing each other across a small measurement gap, where the sample is then allowed to enter. Losses are low for a fibre end spacing of the same order as the fibre diameter or less, but rapidly increase with larger gaps. The probe is far easier to miniaturize and to handle if the fibre down-lead and return lead are sheathed, in parallel alignment, in one cable. Use of such a cable can be accommodated using a right-angled prism or other retro-reflecting device to deflect the beam in the probe tip through the desired 180° that allows it to first leave the outgoing fibre, pass through a sample and then enter the return fibre. Use of a directional fibre coupler at the instrument end allows use of a single fibre, but then any residual retro-reflection from the fibre end will be present as a crosstalk signal, adding light signal components that have not passed through the medium. Clearly there are many variants of such optical probes, some involving more complex optics (e.g. multi-pass probes), some constructed from more exotic materials to withstand corrosive chemicals. A very simple option that has often been used with such single-fibre probes, for monitoring the transmission of chemical indicators, is to dissolve the indicator in a polymer that is permeable to the chemical to be detected and also to incorporate strongly scattering particles in the polymer. When a small piece of such a polymer is formed on the fibre end, the particles give rise to strong backscattered light, and the return fibre guides a portion of this to the detection system. This backscattered light has, of course, had to pass through the indicator polymer on its path to and from each scattering particle, so the returning light is subject to spectral filtering by the indicator.
Although this is a very lossy arrangement, it is extremely cheap and simple, and it has formed the basis of many chemical sensors, for example ones using pH indicators.

In-fibre light delivery and collection for Raman scattering and fluorescence

Raman scattering and fluorescence can also be measured via fibre leads, but the use of fibres causes far more loss of light, because of the wide-angle re-radiation patterns characteristic of both processes. However, the potential value of these methods, particularly of Raman scattering, for chemical sensing has meant that workers continue to persevere to get useful performance, despite the low return light levels encountered with fibre-coupled systems. Both these mechanisms involve excitation of a sample with light, usually at a wavelength shorter than the scattered light to be observed, after which the re-emitted light is collected and narrow-band filtered.
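A crude estimate makes the collection problem concrete: for wide-angle (approximately isotropic) re-emission, only the light falling within the fibre's acceptance cone is captured. The sketch below evaluates that fraction; the NA value is an illustrative assumption.

```python
import numpy as np

def collection_fraction(NA=0.22, n_sample=1.0):
    """Fraction of light from an isotropic point emitter at the fibre
    face that falls within the one-sided acceptance cone."""
    theta = np.arcsin(NA / n_sample)       # acceptance half-angle in the sample
    return (1.0 - np.cos(theta)) / 2.0     # cone solid angle / full sphere

f = collection_fraction()
print(f"collected fraction ~ {f:.3%} ({-10*np.log10(f):.1f} dB below total)")
```

For an NA of 0.22 in air this is only about 1%, illustrating why fibre-coupled Raman and fluorescence systems start some two orders of magnitude down on the already weak signal.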
The loss due to launching the excitation laser light into a fibre is usually negligible with Raman, as narrow-line laser sources are used, but ultimately the launch power limit may be set by non-linear processes or, in the case of large-core multimode fibres, by optical damage thresholds. Similar excitation can be used for low-level fluorescence monitoring, provided no photo-bleaching or other photodegradation of the monitored substance can occur at high illumination intensity. The main potential loss is therefore that of light collection. The minimum additional loss due to collection of Raman light via optical fibres is at least a factor of 90 worse than using conventional optics, thus making the already low light levels about two orders of magnitude worse. Despite this, a number of fibre-remoted Raman chemical sensor probes are appearing as commercial items.

Evanescent field methods of in-fibre light delivery and collection: frustrated total internal reflection

Frustrated internal reflection measurements are clearly attractive for use with multimode optical fibres, as the guidance in the fibre depends on such internal reflections. As stated already, total internal reflectance is often not total. One marked departure from traditional school teaching occurs during reflection at curved interfaces, where light can, under certain circumstances, be lost by radiation into the less dense medium, even when the angle of incidence is above the well-known critical angle. This occurs in multimode optical fibres if light is launched so as to excite skew rays, at an angle to the fibre axis that would be too large to allow guidance by total internal reflection of rays passing through the fibre axis. Thus, even though the actual angle of incidence on the fibre core–cladding interface for these skew rays is greater than the calculated critical angle, light will be lost into the cladding to form what are called leaky rays. However, even with light incident on flat surfaces, where no leaky rays are possible, there is always a small penetration of light into the less dense medium.

Another aspect that is not normally taught in elementary optical texts is that the reflected light beam, although reflected at the same angle to the normal as the incident light, undergoes a lateral shift in the direction of the interface plane between the two media: it suffers the so-called Goos–Hänchen shift (this is like a displacement of the 'point' of reflection at the interface). A fuller examination, using either classical geometric optics or quantum optics, predicts that part of the radiation, the evanescent field, penetrates the interface and is then, for a certain distance, guided outside the optically denser medium, leading, when it returns, to a lateral shift in the apparent point of reflection from the interface. The concept of the evanescent field, which decreases exponentially away from the interface in the medium having the lower refractive index, was introduced earlier. This evanescent field can be usefully exploited for various fibre-based experiments, as shown in figure C4.1.13. As stated earlier, material or molecules within a penetration depth, d, of this field can absorb the radiation and result in an attenuated total reflectance. The absorption can be enhanced further using thin metal layers on the fibre, to cause evanescent field enhancement due to plasmon resonance. This effect will be discussed in more detail later.
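The penetration depth d follows from the standard evanescent-field expression d = λ/(2π·sqrt(n1² sin²θ − n2²)); the short sketch below evaluates it, with the silica/water indices, the angle and the wavelength chosen purely as illustrative assumptions.

```python
import numpy as np

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e penetration depth of the evanescent field into the rarer
    medium (index n2), for total internal reflection in the denser
    medium (index n1) at an angle of incidence above critical."""
    theta = np.radians(theta_deg)
    root = np.sqrt(n1**2 * np.sin(theta)**2 - n2**2)
    return wavelength_nm / (2.0 * np.pi * root)

# e.g. a silica core against water, interrogated at the He-Ne wavelength
print(f"d = {penetration_depth(633, 1.46, 1.33, 80.0):.0f} nm")
```

The result (a few hundred nanometres at most) shows why only material very close to the interface is sensed.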
Another application of this evanescent field is that fluorescent materials or fluorophores close to the interface can absorb the evanescent field radiation, inducing fluorescence. The evanescent field can thus be used to monitor effects very close to the interface: since absorption cannot take place beyond the penetration depth, no fluorophores in the bulk are monitored.

In-fibre light delivery and collection for photon correlation spectroscopy (PCS)

In photon correlation spectroscopy systems, monomode fibres are very well suited for both delivery and collection of light. A single-mode optical fibre not only makes an excellent delivery medium for a beam launched efficiently from a gas or semiconductor laser, but it can also form a near-ideal
single-spatial-mode optical filter to replace the conventional pinhole used in traditional systems. The PCS technique is therefore easily adaptable to perform remote measurement with optical fibres. There is very little penalty in using optical fibres, as the lasers can be launched with high efficiency, and a fibre-based single-spatial-mode receiving filter will not lose any more light energy than the alternative of a tiny hole in a metal plate. The fibre, in fact, makes a near-ideal spatial mode filter.

Specialized optoelectronics and signal processing methods for spectroscopy

This section will look at specialized spectroscopic methods, such as scanned filters, modern fixed detector arrays, and the use of Hadamard and Fourier transform signal processing. In simple dispersive spectrometer instruments, the desired selection of optical wavelength or frequency is achieved by monochromators using a prism or diffraction grating, with the necessary collimation and refocusing optics. The spectrum is then recorded sequentially in time, as the frequency or wavelength of the light transmitted by the filter is scanned. An alternative possibility, discussed in this section, is to use various parallel multiplexing techniques, where all wavelengths are monitored at the same time.

There are two generic lines of development that have gained success in recent years. The first of these, which has already been mentioned above, is simply to use multiple detectors, either a discrete multi-element photodiode array with separate amplifiers, or a self-scanned CCD detector array. Both of these enable the parallel detection of all wavelengths, allowing more efficient use of the available light. These components were discussed above. The second generic approach is to pass the light through a more complex multi-wavelength optical filter, which is capable of passing many wavelengths at once, but whose spectral transmission varies as it is scanned with time. Light then finally impinges on a single optical detector. The transmitted spectrum is then decoded by applying mathematical algorithms to the observed temporal variations in the detected signal as the complex filter is scanned.

Two common variations of this so-called transform approach are used: first, the Hadamard and, second, the Fourier method. Both methods have the advantage of parallel multiplexing, thereby achieving a so-called Fellgett advantage [16]. Both methods have the additional advantage of high throughput [17], since a single narrow slit is no longer used, but a large area (normally a circular hole) allows far more light to enter. In both of these transform methods, mathematical analysis is used to decode optical signals that have arisen from the superposition of radiation at many different wavelengths. In the Hadamard spectrometer, an encoded mask, with a transmissive (or reflective) pattern of markings, is positioned in the focal exit plane of a normal dispersive spectrometer, and then a mathematical transformation called the Hadamard matrix is applied. The coded mask usually has a fine-detail binary orthogonal code on it, forming a pattern of elements like 010010100110111001010, such that the detected transmitted (or reflected) signal varies with the position of this mask [18]. Using the Hadamard matrix to perform the desired mathematical transform, the transmitted spectrum can be reconstructed.
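A minimal numerical sketch of the Hadamard idea follows, using a ±1 Hadamard matrix as the mask code for simplicity (practical masks use open/closed 0/1 element patterns derived from such codes); the two-line spectrum is purely illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

N = 64
H = hadamard(N)                  # orthogonal +/-1 code: H @ H.T = N * I

# illustrative "true" spectrum: two emission lines on a weak background
s = 0.05 * np.ones(N)
s[20], s[41] = 1.0, 0.6

y = H @ s                        # one detector reading per mask position
s_rec = (H.T @ y) / N            # inverse Hadamard transform

print(np.allclose(s, s_rec))     # True: the spectrum is fully recovered
```

Each detector reading mixes all wavelengths at once (the multiplexing that gives the Fellgett advantage), yet the orthogonality of the code lets the full spectrum be unscrambled exactly.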
Fourier transform spectrometers require a filter having a sinusoidal variation of optical transmission, whose spectral period (the wavelength or frequency spacing between successive transmission peaks) can be modulated with time in a defined manner. All two-path optical interferometers, as their path difference is varied with time, conveniently exhibit the desired sinusoidal transmission changes, which are a natural consequence of the interference process. The most convenient form to use in instruments is the Michelson interferometer configuration [19, 20]. The simplest way to appreciate how a Fourier transform spectrometer operates is to first consider what will happen with monochromatic light (e.g. a single-frequency laser). Here, if the path difference of
the interferometer is increased linearly with time, the intensity transmitted by the interferometer will vary in a purely sinusoidal manner with time, i.e. the output from the optical detector will be at a single electronic frequency. Any more complex optical spectrum can be considered to be composed of a superposition of many such pure optical frequencies, and each of these single-frequency components will generate a pure electronic frequency or tone, giving its own unique electronic signature. The temporal signal from the detector will be composed of a linear superposition of all such signals. Decoding such an electronic signal, to recover all the single-frequency sinusoidal components, is a standard problem in electronic spectrum analysers, and the well-known solution is that of Fourier analysis; hence the use of the words Fourier transform spectrometer to describe its optical equivalent. In all of these transform methods, the instantaneous condition of the complex optical filter is known from the position of the interferometer (Fourier method) or the coded mask (Hadamard method), thus allowing decoding of detected signals to produce the desired spectral output.

Before finishing this section, it is instructive to discuss the relative strengths and weaknesses of each approach. In fundamental signal-to-noise ratio terms, it is preferable to use dispersive optics with a fully parallel array detector, as used to great advantage, for example, in modern CCD array spectrometers. Unfortunately, such detectors are only available in the visible and near-infrared regions, as detector technology for other regions is poor. Because of this, there is far greater use of Fourier transform methods in the mid- and far-IR regions, where it is only necessary to have one (often cooled) high-performance single detector. Here the Hadamard and Fourier methods gain an advantage over using a single detector with a scanned narrowband filter, as more wavelengths are transmitted through the filter at any one time, and a greater optical throughput can be used in the optical system. The Fourier transform method has a further advantage, however, when high spectral resolution is desired, as it is difficult, particularly in the IR region, to make detector arrays with a small enough spacing to resolve very closely spaced wavelengths, which would otherwise need very large dispersive spectrometers to achieve high resolution.
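The interferogram-to-spectrum correspondence is easily demonstrated numerically. In the hedged sketch below, the two wavenumbers and the scan length are arbitrary illustrative values: the simulated detector signal for a two-line source is decoded with a fast Fourier transform.

```python
import numpy as np

x = np.linspace(0.0, 0.2, 2048)            # path-difference axis (cm)

# two-line source at 1500 and 1560 cm^-1 (illustrative wavenumbers)
interferogram = (np.cos(2*np.pi*1500.0*x)
                 + 0.5*np.cos(2*np.pi*1560.0*x))

spectrum = np.abs(np.fft.rfft(interferogram))     # Fourier decode
sigma = np.fft.rfftfreq(x.size, d=x[1] - x[0])    # wavenumber axis (cm^-1)

peaks = np.sort(sigma[np.argsort(spectrum)[-2:]])
print("recovered lines near", np.round(peaks, 1), "cm^-1")
```

The 0.2 cm maximum path difference sets the resolution (about 5 cm⁻¹ here), reflecting the general rule that longer interferometer scans give finer spectral resolution.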
C4.1.4 Case studies of spectroscopic methods
In the following sections, we shall discuss several case studies that give examples of recent research on optical sensing; for convenience, several of these are from the work of the team of Prof. Gauglitz. Some of the descriptions will require the introduction of some further basic ideas, so that the reader can best appreciate the methods used.

C4.1.4.1 Guided radiation and its use to detect changes in the real or imaginary (absorption) part of refractive index

Waveguide theory was discussed in detail in Chapter A2.1, and evanescent fields were discussed earlier in this one, but here we briefly review how waveguides can detect samples in the environment close to the waveguide, by observing their influence on the evanescent field. It is often desired to monitor changes in absorption or refractive index in a medium, and quite a few methods are known for interrogating changes in the real or imaginary part of the refractive index in the environment near a waveguide. With evanescent field methods, the field is usually accessed at some selected point along the fibre, and then it is only necessary to monitor the fibre transmission. An early method for refractive index monitoring, by Lukosz [21] (see figure C4.1.14), was to embed a grating on the surface of a slab (rectangular cross-section) waveguide, taking advantage of the modification of the Bragg grating condition when the index of the external medium changes. The original concept has since been modified by many scientists. The effective refractive index that is seen by light in the waveguide depends not only on the index of the medium close to the interface, but also on the
angle of the incident (interrogating) radiation, α, the grating constant, g, the wavelength, λ, and the grating order, m:

neff = nenv sin α + mλ/g        (C4.1.10)

Figure C4.1.14. Grating coupler in its originally devised form [21]. Coupling is at a maximum for an angle of incidence at a Bragg angle, which depends on the refractive index of the medium next to the waveguide.
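Rearranging equation C4.1.10 gives the coupling angle for a given environmental index, sin α = (neff − mλ/g)/nenv. The sketch below evaluates this; the mode index, wavelength and grating period are illustrative assumed values, not parameters from the text.

```python
import numpy as np

def coupling_angle_deg(n_eff, n_env, wavelength_nm, period_nm, m=1):
    """Angle of incidence satisfying n_eff = n_env*sin(a) + m*lambda/g."""
    s = (n_eff - m * wavelength_nm / period_nm) / n_env
    return np.degrees(np.arcsin(s))

# illustrative values: high-index waveguide mode, 633 nm light, 420 nm grating
for n_env in (1.333, 1.343):     # e.g. water before/after an analyte binds
    a = coupling_angle_deg(n_eff=1.80, n_env=n_env,
                           wavelength_nm=633.0, period_nm=420.0)
    print(f"n_env = {n_env:.3f} -> coupling angle = {a:.2f} deg")
```

Even a 0.01 change in the environmental index shifts the optimum coupling angle measurably, which is the basis of the sensing schemes described next.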
To avoid complex alignment, a monochromatic light source (such as the laser shown in figure C4.1.15) can be combined with a CCD receiving array [22]. This allows measurements without any need for sophisticated adjustable mechanical alignment systems, such as goniometers. Another very interesting approach has been introduced by R. Kunz [23, 24], using embossed polycarbonate gratings, which can be produced in a very simple way, as shown in figure C4.1.16. Cheap disposable chips can be produced, and their properties can easily be graded as a function of distance across the device, by modifying either the groove spacing of the grating perpendicular to the waveguide structure or the depth of the waveguide below the grating. By varying either of these along the out-coupling grating (a tapered separation), variations of refractive index can be detected. By means of this simple trick, there is no need for either a goniometric approach or angle-selective measurements. A simple linear detector array will determine the position along the grating at which optimum out-coupling has occurred; this, of course, depends on the refractive index in the environment of the waveguide.
Figure C4.1.15. Grating coupler arrangement, which avoids the need for complex goniometric positioning by allowing light diffracted at different angles to be incident on different pixels on a diode array.
Figure C4.1.16. Grating couplers using spatial gradients of grating line spacing or of depth of grating below surface. The diagram shows the use of either variation in thickness of the waveguide or non-parallel grooves.
Figure C4.1.17 shows another approach, using a prism coupler, which is a common means of coupling light into slab waveguides. This technique is also sometimes called a resonant mirror method. A layer of lower refractive index is placed as a buffer layer between the prism and the waveguide. An incident beam (polarized at 45° to the normal in the plane of incidence) will be reflected at the base of the prism in a manner dependent on the wavelength, the angle of incidence and the optical properties of the prism and/or the waveguide. The incident beam can excite TE (transverse electric) and/or TM (transverse magnetic) modes of the waveguide, and the modes of the waveguide can re-couple into the prism, resulting in a phase shift relative to the directly reflected beam. Because the two propagating modes (TE and TM) travel with different velocities within the waveguide and are differently influenced by the refractive index of the medium in the evanescent field region [25], the polarization of the output light is in general elliptical, with a polarization state depending on the relative phase delay. The process is similar to the polarization changes that occur in a birefringent crystal.
Figure C4.1.17. Principle of the prism coupler (note: sometimes called a resonant mirror).
Direct interferometers, i.e. ones not making use of polarization changes, are also of interest and make useful arrangements for the interrogation of refractive index. In a commonly used configuration, radiation is guided via the two arms of a Mach–Zehnder interferometer (see figure C4.1.18 [26]) and one of the arms is covered by a sensing layer. This is typically a polymer sensing film, but it may also be a more complex bio-molecular recognition layer. Guided radiation propagates within these two arms with different phase velocities, resulting in a phase shift that, after interferometric superposition at the coupling junction, can be measured as an intensity change, which is now dependent on the refractive index of the medium in contact with the recognition layer. The phase of the beams, and hence the signal intensity, depends, of course, on the optical wavelength, the interaction length, and the waveguide properties. The phase-shift-dependent intensity is given by equation C4.1.11, where L is the interaction length, Δneff is the change in the effective refractive index within the waveguide arm (caused by changes in the evanescent field region), and k is the wave vector:

I(ΔΦ)/I0 = (1/2)[1 + cos(LkΔneff)]        (C4.1.11)

Figure C4.1.18. Depiction of the two-arm Mach–Zehnder sensing interferometer. The variations of intensity response (lower right) due to changes of refractive index within the refractive-index-sensitive layer (superstrate) are shown. The superposition of partial beams is shown on the left, with different relative phases.
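The cosine transfer function of equation C4.1.11 is easily explored numerically; in the sketch below, the interaction length and wavelength are illustrative assumptions.

```python
import numpy as np

def mz_response(delta_n_eff, L_um=5000.0, wavelength_nm=633.0):
    """Normalized Mach-Zehnder output: I/I0 = 0.5*[1 + cos(L*k*dn_eff)]."""
    k = 2.0 * np.pi / (wavelength_nm * 1e-3)   # wave vector in rad/um
    return 0.5 * (1.0 + np.cos(L_um * k * delta_n_eff))

# one full fringe corresponds to dn_eff = lambda / L
for dn in (0.0, 2e-5, 4e-5, 6.33e-5):
    print(f"dn_eff = {dn:.2e} -> I/I0 = {mz_response(dn):.3f}")
```

With a 5 mm interaction length, an effective-index change of less than 10⁻⁴ sweeps the output through a full fringe, which illustrates the high sensitivity of this geometry.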
In figure C4.1.18, the interference pattern is shown as a superposition of the two propagating waves in the two different waveguide arms. Another possible configuration is the well-known Young interferometer [27] arrangement. Instead of recombining the two waveguides in the planar layer, the two beams are now focused in one plane only, using a cylindrical lens, and directed onto a CCD array, where they form an interference pattern, as shown in figure C4.1.19.
Figure C4.1.19. Young’s interferometer: this shows a waveguide version of the interferometer and spectral detection of the superimposed beams and interference pattern in the far field. (A cylindrical lens improves the signal intensity by reducing spread due to diffraction in the vertical direction.)
Surface plasmon resonance is a means of enhancing the evanescent field intensity using thin metal films. Surface plasmons (collective electron oscillations) at the surface of a metal film can be excited by a guided wave in the waveguide. The excitation depends on the refractive index of the medium on top of the metal film, particularly at the resonance condition, where the intensity of the radiation propagating in the waveguide is substantially reduced by the strong resonance coupling [28]. It can be used as a detection principle either directly, using a waveguide, or indirectly, using a waveguide structure in combination with a prism. One arrangement, where a slab waveguide is coated with a buffer layer and a metal film, is shown in figure C4.1.20. Originally, surface plasmon resonance was introduced to the scientific community by Kretschmann [29] and Raether [30].

Two approaches are commonly used. The first uses monochromatic light with a fixed angle of incidence, and the resonance condition, which determines where light falls on a position-sensitive linear photodiode array (see figure C4.1.21a), is monitored [31]. At present, this is one of the most commonly used optical methods for interrogating bio-molecular systems; it has been commercialized by Biacore for the examination of affinity reactions (www.biacore.com). Another approach is shown in figure C4.1.21b, where the optical system is arranged so that it receives only a narrow angular range of light. With this system, the need to meet the resonance condition causes a narrow range of wavelengths from a broadband white light source to be selectively attenuated [32]. Using a diode-array spectrometer, it is then possible to record the wavelength of the 'dip' in the spectrum of the internally reflected signal.
Figure C4.1.20. Waveguide-based surface plasmon resonance.
Figure C4.1.21. Surface plasmon resonance. (a) Monitoring of angular dependence, with a narrowband light source. (b) Monitoring of wavelength dependence, at fixed angle, with white light input and spectral readout, using a diode-array spectrometer.
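As a numerical illustration of the angle-interrogation approach, the sketch below estimates the resonance angle from the standard surface-plasmon dispersion relation for the prism-coupled (Kretschmann) geometry; the prism index and the gold permittivity near 633 nm are typical textbook-style values assumed here for illustration only.

```python
import numpy as np

def spr_angle_deg(eps_metal, n_sample, n_prism=1.515):
    """Resonance angle where the in-plane wave vector on the prism side,
    n_prism*sin(theta), matches the surface-plasmon effective index
    Re[sqrt(eps_m*eps_d / (eps_m + eps_d))]."""
    eps_d = n_sample**2
    n_spp = np.sqrt(eps_metal * eps_d / (eps_metal + eps_d))
    return np.degrees(np.arcsin(n_spp.real / n_prism))

eps_au = -11.6 + 1.2j                  # assumed gold permittivity near 633 nm
for n in (1.333, 1.335):               # small index change in the sample
    print(f"n = {n:.3f} -> resonance angle = {spr_angle_deg(eps_au, n):.3f} deg")
```

The resonance angle shifts by a readily measurable amount for an index change in the fourth decimal place, which is why angle-scanned SPR instruments can follow bio-molecular binding in real time.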
In addition to the above examples of interrogation principles, many other approaches have been published [33]. Recent publications review new developments and applications of these methods in more detail [34–38].

C4.1.4.2 Multiple reflections, interference
Apart from refractometry, reflectometry is frequently used in sensing. For this reason, a brief introduction to white-light interferometry is given below, with an example of an approach similar to the Fabry–Perot interferometer. For many decades, ellipsometry has been used to examine the properties of thin-film layers. As mentioned above, reflection and refraction take place at each interface between media of different refractive index. The partially reflected beams from the interfaces at each side of a layer will superimpose and exhibit interference. The intensity of reflection depends on the wavelength, the angle of incidence of the radiation, the refractive index of the layer and the physical thickness of the layer. In ellipsometry [34, 35], polarized light is used, and thus the refractive index and the physical thickness of the layer can be determined separately. In normal reflectometry, no polarization state is selected, resulting in a very simple, robust and easy-to-use method for monitoring effects in these layers. As can be seen in figure C4.1.22, the two partial beams can show constructive or destructive interference, or any state in between, depending on the above-mentioned parameters.
Figure C4.1.22. Reflectometric interference spectroscopy (RIfS) [36, 37], where the superposition of partial beams reflected at the interfaces of a thin layer is measured. Any superposition causes constructive and destructive interference, resulting in a wavelength dependence. When white-light incident radiation is used, this results in the interference spectrum given on the right side of the diagram. If just one reflected wavelength (monochromatic) from the layer is measured, the intensity of the reflected radiation varies in a periodic manner as the optical thickness of the layer (i.e. change in thickness or refractive index) is changed. This is shown by the dotted vertical line in the diagram.
In this figure, the path of light is shown at an angle, in order to visualize the different partial beams, but the beam will normally be at an angle close to normal to the surface. For the simple case of a non-absorbing layer, with multiple reflections at only two interfaces, equation C4.1.12 describes the intensity modulation with wavelength:

Iλ = I1 + I2 + 2√(I1I2) cos(4πnd/λ)        (C4.1.12)

Changes in the properties of the layer, particularly of a polymer layer, can be induced by adsorption of analytes such as gases and liquids. These can change the effective optical thickness of the layer, by varying either its refractive index or its physical thickness (swelling). Adsorption into the layer is most likely in the case of polymer films. In order to monitor biomolecular interactions, the effects of various affinity reactions can also be observed.
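A short simulation of equation C4.1.12 shows how the interference spectrum shifts as the optical thickness nd changes; the layer index, thickness and partial-beam intensities below are illustrative assumptions.

```python
import numpy as np

def rifs_spectrum(wl_nm, n=1.5, d_nm=1000.0, I1=0.04, I2=0.04):
    """Two-beam interference of equation C4.1.12:
    I = I1 + I2 + 2*sqrt(I1*I2)*cos(4*pi*n*d/lambda)."""
    return I1 + I2 + 2*np.sqrt(I1*I2)*np.cos(4*np.pi*n*d_nm/wl_nm)

wl = np.linspace(400.0, 700.0, 1201)
before = rifs_spectrum(wl)                # unexposed sensing layer
after = rifs_spectrum(wl, d_nm=1005.0)    # 5 nm swelling on analyte uptake

win = (wl > 560) & (wl < 660)             # isolate one interference maximum
shift = wl[win][np.argmax(after[win])] - wl[win][np.argmax(before[win])]
print(f"extremum near 600 nm shifts by about {shift:.2f} nm")
```

A few nanometres of swelling thus translate into a wavelength shift of the interference extrema that a diode-array spectrometer can resolve easily.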
C4.1.4.3 Experimental apparatus for the RIfS method

Figure C4.1.23 shows the two different approaches for measuring gases or liquids. These have the combined advantages of small cell volume and the possibility of remotely monitoring interference effects in the cell via fibre optic leads. Sample preparation is an important aspect of spectroscopy, and examples of arrangements for both gas and liquid samples are shown in figure C4.1.24. In the case of gases, a flow controller is usually used, which allows multi-analyte detection. In the case of liquid samples, a flow injection analysis system containing pumps (preferably syringe pumps) is usually used, along with selection valves. Monitoring the reflection from Fabry–Perot polymer layers involves cheap and simple layers that can be made in many different thicknesses, enabling sensing arrays to be constructed. By monitoring the response of many different layers/materials to the same chemical influence, a different response
pattern to each individual constituent can be measured and, when combined with a pattern recognition system, the arrangement can make a simple, but effective, form of opto-electronic nose.

Figure C4.1.23. Cell compartment for measuring gases and liquids, using RIfS.

Figure C4.1.24. Sample preparation is achieved either by flow controllers (for gas) or by a flow injection analysis setup (liquids).
C4.1.5 Conclusions
We have now completed our simple introduction to the subject of spectroscopy, intended to give an overview to non-specialist readers. This area is sufficiently complex to fill large specialist textbooks, so it is hoped the reader will forgive us for the inevitable omissions. We have tried to give a short introduction to the basic theory, a brief practical overview of components and instruments and, finally, to introduce a few examples of recent research areas, mainly using fibre optics. The interested reader can gain more insight into this fascinating subject from the usually more voluminous specialist textbooks, many of which have been mentioned in the references and further reading.
References

[1] Naumer H, Heller W and Gauglitz G (eds) 2003 Untersuchungsmethoden in der Chemie chapter 8 (Weinheim: Wiley-VCH)
[2] Gauglitz G 1994 Ultraviolet and visible spectroscopy Ullmann's Encyclopedia of Industrial Chemistry Vol B5 (Weinheim: Wiley-VCH)
[3] Ingle J D Jr and Crouch S R 1988 Analytical Spectroscopy (Englewood Cliffs, NJ: Prentice-Hall)
[4] Svehla G (ed) 1986 Analytical visible and ultraviolet spectrometry Comprehensive Analytical Chemistry Vol XIX (Amsterdam: Elsevier)
[5] Skoog D A and Leary J J 1992 Principles of Instrumental Analysis (Fort Worth: Saunders College Publishing)
[6] Born M and Wolf E 1980 Principles of Optics (New York: Pergamon)
[7] Hecht E and Zajac A 1974 Optics (Reading: Addison-Wesley)
[8] van de Hulst H C 1957 Light Scattering by Small Particles (New York: Wiley)
[9] Stacey K A 1956 Light-Scattering in Physical Chemistry (London: Butterworths)
[10] Harrick N J 1979 Internal Reflection Spectroscopy (New York: Harrick Scientific Corporation)
[11] Naumer H, Heller W and Gauglitz G (eds) 2003 Untersuchungsmethoden in der Chemie chapter 15 (Weinheim: Wiley-VCH)
[12] Klessinger M (ed) 1995 Excited States and Photochemistry of Organic Molecules (New York: John Wiley & Sons Inc.)
[13] Wolfbeis O S (ed) 1991 Fibre Optic Chemical Sensors and Biosensors (Boca Raton: CRC Press)
[14] Narayanaswamy R and Wolfbeis O S 2004 Optical Sensors (Berlin, Heidelberg: Springer Verlag)
[15] Colthup N B, Daly L H and Wiberley S E 1990 Introduction to Infrared and Raman Spectroscopy (New York: Academic Press)
[16] Brown R G W and Smart A E 1997 Appl. Opt. 36 7480–7492
[17] Fellgett P B 1958 J. Phys. Radium 19 187–191
[18] Jacquinot E 1954 J. Opt. Soc. Am. 44 761–765
[19] Griffiths P R 1975 Chemical Infrared Fourier Transform Spectroscopy (New York: Wiley)
[20] Griffiths P R and De Haseth J A 1986 Fourier Transform Infrared Spectrometry (New York: Wiley)
[21] Nellen Ph M and Lukosz W 1993 Biosens. Bioelectron. 8 129–147
[22] Brandenburg A and Gombert A 1993 Sens. Actuators B 17 35–40
[23] Kunz R E, Edlinger J, Curtis B J, Gale M T, Kempen L U, Rudigier H and Schuetz H 1994 Proc. SPIE-Int. Soc. Opt. Eng. 2068 313–325
[24] Kunz R E 1991 Proc. SPIE-Int. Soc. Opt. Eng. 1587 98
[25] Cush R, Cronin J M, Stewart W J, Maule C H, Molloy J and Goddard N J 1993 Biosens. Bioelectron. 8 347–354
[26] Ingenhoff J, Drapp B and Gauglitz G 1993 Fresenius J. Anal. Chem. 346 580–583
[27] Brandenburg A and Henninger R 1994 Appl. Opt. 33 5941–5947
[28] Piraud C, Mwarania E, Wylangowski G, Wilkinson J, O'Dwyer K and Schiffrin D J 1992 Anal. Chem. 64 651–655
[29] Kretschmann E 1971 Z. Phys. 241 313–324
[30] Raether H 1977 Phys. Thin Films 9 145–261
[31] Liedberg B, Nylander C and Lundström I 1983 Sens. Actuators B 4 299–304
[32] Brecht A, Gauglitz G and Striebel C 1994 Biosens. Bioelectron. 9 139–146
[33] Gauglitz G 1996 Opto-chemical and opto-immuno sensors Sensors Update Vol I (Weinheim: VCH Verlagsgesellschaft mbH)
[34] Liebermann T and Knoll W 2000 Colloids and Surfaces A 171 115
[35] Anton van der Merwe P 2001 Surface plasmon resonance Protein–Ligand Interactions: Hydrodynamics and Calorimetry (Oxford: Oxford University Press)
[36] Kinning T and Edwards P 2002 Optical Biosensors (eds) F S Ligler, T Rowe and A Chris (Amsterdam: Elsevier)
[37] Kuhlmeier D, Rodda E, Kolarik L O, Furlong D N and Bilitewski U 2003 Biosens. Bioelectron. 18 925
[38] Klotz A, Brecht A, Barzen C, Gauglitz G, Harris R D, Quigley Q R and Wilkinson J S 1998 Sens. Actuators B 51 181
Further reading

Dyer S A et al 1992 Hadamard methods in signal recovery Computer-Enhanced Analytical Spectroscopy Vol III (ed) P C Jurs (New York: Plenum Press)
Azzam R M A and Bashara N M 1989 Ellipsometry and Polarized Light (Amsterdam: North Holland)
Arwin H 2003 Ellipsometry in life sciences Handbook of Ellipsometry (eds) G E Jellison and H G Tompkins (Park Ridge, NJ: Noyes Publications)
Gauglitz G and Nahm W 1991 Fresenius Z. Anal. Chem. 341 279–283
Brecht A, Gauglitz G, Kraus G and Nahm W 1993 Sens. Actuators B 11 21–27
Brecht A, Gauglitz G and Göpel W 1998 Sensor applications Sensors Update Vol III (eds) H Baltes, W Göpel and J Hesse (Weinheim: Wiley-VCH)
Gauglitz G Optical sensors: principles and selected applications Anal. Bioanal. Chem. 381 141–155
Smith B C 1995 Fundamentals of Fourier Transform Infrared Spectroscopy (Boca Raton: CRC Press)
Mark H 1991 Principles and Practice of Spectroscopic Calibration (New York: John Wiley)
Robinson J W (ed) 1974 CRC Handbook of Spectroscopy Vols I–III (Ohio: CRC Press)
C4.2 Intelligent surveillance

Brian Smith

C4.2.1 Introduction
Remote surveillance in the form of closed circuit television (CCTV) has been available almost since the invention of television. It has the great advantage of providing a remote 'eye' to capture the scene, which may often be in an environment hostile to a viewer. However, a viewer is still required to analyse the data, and one may be looking for sparse events, such as an intruder breaking into a building. Images, however, can now easily be captured in a digital format, which in turn can be subjected to algorithmic analysis and search techniques in a computer. This leads to automation of the analysis and viewing burden, and is often described as 'intelligent surveillance'. The real benefit, however, often derives as much from the automation of the decision-making task as from the analysis applied to the images. Applications of image processing for intelligent surveillance include ocular iris or fingerprint matching, facial recognition and intruder detection. This chapter is in the form of a case study of automatic license plate reading (ALPR), which illustrates virtually every aspect of the general problem, with a general discussion of applications of digital CCTV following.

C4.2.2 Fundamentals
Each intelligent surveillance system typically breaks down into a number of sub-systems. These typically include:

- Image generation. The greatest returns on investment come from engineering the scene, the illumination and/or the sensor technology so that the object(s) of interest are readily distinguishable in the image, and irrelevant detail is suppressed as far as possible. It may well be possible to engineer an overall control loop whereby the scene is adjusted by analysing a metric on the detected object, e.g. the illumination is changed dynamically to improve the contrast between the object of interest and its background.

- Image digitization. High-speed digital circuits have made this a readily obtainable off-the-shelf solution.

- Feature detection. Typically there are time constraints on the amount of computing that can be applied to a recognition problem. The effect of the computing can be greatly enhanced if it is concentrated on the features of interest rather than the scene in general. The art of pattern recognition is generally defined through the choice of the best feature set that can be extracted to adequately and accurately describe the object of interest in the time available. Some features, like the average contrast or average brightness of the scene, are trivial to compute.

- Feature-set classification. This is the classification of the feature vector into objects of interest; e.g. the objective may be to recognize printed characters. The pixels in the image are reduced to features. The features should be well chosen, so that the vector of features associated with one object clearly separates it from the vector associated with another. For example, there might be a feature registering the presence of the stroke in a 'Q', which seeks to separate it from an 'O'. Classification in this case would be making a decision on the presence or absence of the stroke feature.

- Decision outcome. Real systems will have an action which is dependent on the outcome of the classification, e.g. to raise an alarm on the detection of smoke, or to raise a barrier upon reading a permitted vehicle license plate.
C4.2.3 Example systems

C4.2.3.1 Automatic license plate reading
The task of license plate reading is generally made easier through the adoption of state-wide, national and even international standards for vehicle license plates. The format of the license plate is thus generally well defined. However, the environmental conditions under which one is seeking to read it are very harsh. Salt and road dirt can mask the letters; one has to cope with vehicles travelling at a wide variety of speeds, and possibly close together; and the weather and lighting conditions can vary enormously. Typically, 24/365 operation is required, and the system should not distract the driver in any way.

Image generation

Freezing vehicle motion

The standard exposure time for CCTV cameras is 20 ms in Europe (16.7 ms, USA). During this time, a vehicle travelling at 100 kph (60 mph) will move approximately 0.5 m (18 in) and the license plate will be unreadable. To reduce the blurring caused by the movement of the vehicle, a much shorter exposure time must be used, typically 1 ms or less. CCD-based CCTV cameras achieve this by means of 'electronic shuttering'. The picture information falling on the sensor results in an electrical charge that is accumulated for the first 19 ms (15.7 ms, USA) of the field and is then discarded by dumping it into the substrate of the sensor. Accumulation of picture information restarts at this point, but can now only be for the final 1 ms of the field.
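The arithmetic behind the exposure choice is easy to check. The sketch below reproduces the blur figures quoted above and then inverts the calculation to find the exposure needed to keep blur below one character-stroke width; the 10 mm stroke width is an illustrative assumption, not a figure from the text.

```python
def blur_mm(speed_kph, exposure_ms):
    """Distance travelled by the vehicle during the exposure, in mm.

    speed_kph / 3.6 gives m/s, which is numerically equal to mm/ms.
    """
    return speed_kph / 3.6 * exposure_ms

print(f"20 ms at 100 kph -> {blur_mm(100, 20):.0f} mm of blur")
print(f" 1 ms at 100 kph -> {blur_mm(100, 1):.1f} mm of blur")

# exposure keeping blur below an assumed 10 mm character stroke width
speed = 160.0   # kph, fast motorway traffic
max_exposure_ms = 10.0 / (speed / 3.6)
print(f"max exposure at {speed:.0f} kph ~ {max_exposure_ms:.2f} ms")
```

At the highest traffic speeds the tolerable exposure falls well below 1 ms, which is why sub-millisecond shuttering, and hence the illumination scheme described next, is needed.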
Illuminating the license plate

A 1 ms exposure time reduces the sensitivity of the camera to about 6% of its unshuttered sensitivity. Interfering light sources that operate continuously, such as vehicle head lamps, are therefore much reduced, because the camera only responds to light from these sources for a short time. Unfortunately, the reflected light from continuously operating IR illuminators, such as filtered incandescent lamps, is reduced by the same proportion, and a more powerful IR illuminator would be required to ensure adequate picture quality: with this type of illuminator, 94% of the light is wasted. A novel solution to this problem has been devised [1]. Many types of infrared light-emitting diode (IR-LED) can be pulsed to high current levels for short periods of time. With suitable drive circuitry, the IR energy that would be emitted by the LEDs during the normal 20 or 16.7 ms camera exposure time can be compressed into the 1 ms exposure time of the shuttered camera. Consequently, the camera will still receive substantially the same IR energy from the pulsed IR-LED illuminator as it would have received if unshuttered, but with the interfering effects of the headlights reduced to about 1/20. The IR-LED approach has also enabled the camera-illuminator to be classified as eye-safe, with a nominal ocular hazard distance (NOHD) of 0 m without magnification [2–4]. This is in sharp contrast to other forms of illumination, such as filtered halogen, where the NOHD is much greater.

Overcoming headlights

The effect of headlights can be reduced still further. IR-LEDs emit most of their power over a relatively narrow band of wavelengths, typically 60 nm wide, whilst incandescent head lamps emit over a band of wavelengths from the UV through the visible into the middle IR part of the spectrum (a waveband from 350 to 2700 nm, filtered by the head lamp glass). Figure C4.2.1 compares the measured spectra of a quartz-halogen lamp and a 950 nm IR-LED. By incorporating a suitable optical bandpass filter between the lens and the CCD, the majority of the light from head lamps can be filtered out. This gives a further factor-of-ten reduction of light from the head lamp relative to the reflected light from the IR-LED illuminator.
Figure C4.2.1. Head lamp and IR-LED spectra. (Courtesy of PIPS Technology Ltd.)
By combining pulsed IR illumination with optical filtering, the effect of head lamps can be reduced by about 200 times relative to the light reflected from the license plate. Furthermore, if the illuminator and camera are co-aligned, the high reflectivity of retro-reflective license plates, which are mandatory in many countries, may be exploited. The IR power required, and hence the overall illuminator power, is minimized. The illuminator can be fitted around the camera lens within a single housing: the P362 traffic camera shown below consumes a maximum electrical power of 15 W. The low power enables mobile systems to be deployed, e.g. mounted on a police vehicle. Furthermore, the electricity costs for a continuously operated high-power illuminator are considerable, and lamp changes are required every few months. The P362 camera also includes a second, wide-angle 'overview' colour camera within the same housing, to capture a contextual image of the vehicle (figure C4.2.2).

Daytime operation

The high-current pulsing of the IR-LED illuminator and the spectral filtering of the camera are also effective against sunlight. The sun can be approximated as a black-body radiator at a temperature of about 5800 K, but the spectrum is modified by absorption by atmospheric gases, predominantly water vapour. Figure C4.2.3 shows the spectrum of sunlight measured in central England in September 1996.
Figure C4.2.2. P362 camera with integral illuminator and overview camera. (Courtesy of PIPS Technology Ltd.)
Figure C4.2.3. Spectrum of sunlight showing H2O absorption in vicinity of 950 nm. (Courtesy of PIPS Technology Ltd.)
As can be seen, there is an atmospheric 'hole' with a centre wavelength of 950 nm and with a width similar to the emission spectrum of a 950 nm IR-LED. Thus atmospheric water vapour adds significantly to the suppression of sunlight during daytime operation. The depth of the 'hole' at 950 nm depends upon the amount of precipitable water vapour in the atmosphere, but the dependence is nonlinear, in that the initial few millimetres of precipitable water deplete as much energy as the last few centimetres [5].

Overcoming variations in license plate reflectivity

A film of dirt on the surface of the plate degrades the reflection factor of retro-reflective license plates. The variability between clean and dirty plates can be countered by changing the energy of the illumination pulse on a field-by-field basis. For example, a repeating sequence of three illumination pulses, of high, medium and low energy, can be used, so that a clean plate that might overload the camera
Figure C4.2.4. Three energy flash sequence. (Courtesy of PIPS Technology Ltd.)
on medium or high energy can be read during the low-energy field, whilst a dirty plate, which gives an unusable image on the low-energy field, will give a much better image on the medium- or high-energy fields. The energy can be varied either by changing the current through the LEDs or by changing the duration of the pulse, as shown in figure C4.2.4: the shorter the pulse, the less the energy. Depending on the deployment constraints of the camera (e.g. mounting on a motorway bridge), high-speed traffic may only be present within the field of view of the camera for a few fields, so it is vital that different illuminations are tried on an a priori basis, rather than seeking to respond to the image content on a field-by-field basis. Closed-loop control may be used on the settings as a whole, as a 'slow loop' enabling the settings to respond to weather and lighting changes.

Digitization and feature detection

A modern high-resolution video CCD has 720 pixels per line, with 288 (240, USA) video lines in the vertical direction per video field. When digitized at full resolution, there will therefore be 202 kB of data per video field (assuming 1 byte per pixel). At 50 fields per second (60, USA), the volume of data that has to be processed rapidly is over 10 MB per second. This is beyond the capabilities of all but the very fastest processors. The problem can, however, be greatly simplified if the license plate is detected within the image by special-purpose hardware, and then only the plate image or 'patch' is subjected to optical character recognition (OCR), rather than the whole video field. A hardware 'plate-finder' can be used to form a thresholded measure of the spectral features of the video in real time in two dimensions [1]. A license plate can therefore be detected 'on the fly' as the video signal is generated. This frees the general-purpose processor to deal only with plate patches. License plates can therefore be detected without any trigger to say when the vehicle is present. An alternative is to have a secondary vehicle detection system, such as a high-speed inductive loop buried in the road surface, detect the vehicle and act as a vehicle presence trigger to the system. Thus, the plate detection problem is removed through the use of dedicated hardware.

Classification

The patch is then segmented into characters, and conventional OCR techniques are applied, using either template matching upon a set of features extracted from the image or a 'neural network' with a learning capability. Whilst a template matching approach can require more modification for a new font set than a neural network, the absence of the need for a large plate training database is a positive advantage. Syntax checking can then be applied to the character string to resolve ambiguities, such as '0' (zero) versus 'O' (oh).
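Syntax checking of this kind is easy to sketch in code. The example below uses the two-letters, two-digits, three-letters layout of current UK plates purely as an illustrative assumption, coercing easily confused glyphs to whichever character class the position demands.

```python
import re

# confusable glyph pairs, keyed by the class each plate position demands
TO_DIGIT = {"O": "0", "I": "1", "S": "5", "B": "8"}
TO_LETTER = {v: k for k, v in TO_DIGIT.items()}

def fix_plate(raw, pattern="LLNNLLL"):
    """Coerce each OCR character to the letter (L) or digit (N) class
    required by the plate syntax, then validate the result."""
    if len(raw) != len(pattern):
        return None
    out = []
    for ch, cls in zip(raw.upper(), pattern):
        ch = TO_DIGIT.get(ch, ch) if cls == "N" else TO_LETTER.get(ch, ch)
        out.append(ch)
    plate = "".join(out)
    return plate if re.fullmatch(r"[A-Z]{2}\d{2}[A-Z]{3}", plate) else None

print(fix_plate("8D5IO8C"))   # misread glyphs corrected -> BD51OBC
print(fix_plate("BD51AB"))    # wrong length -> None (rejected)
```

A production reader would also weigh per-character OCR confidences, but even this simple positional rule removes most zero/oh and one/I ambiguities.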
The characters on the license plate must be of a sufficient size in the image to be read reliably. With the size of characters used on UK plates (80 mm high), the scene viewed by the camera at the vehicle distance will usually be 2–2.5 m in width, so a license plate capture camera will only view a single lane. The general processing can be carried out on a PC hardware platform. However, for roadside deployment, a single board running an embedded operating system, without a hard disk or fans, is preferable for robust operation; e.g. the P367 video recognizer shown below can handle up to four high-speed streams of traffic [1] (figure C4.2.5).
Figure C4.2.5. P367 video recognizer for roadside deployment. (Courtesy of PIPS Technology Ltd.)
Decision outcomes

Automatic reading of a vehicle license plate has many applications. These include journey time measurement systems (JTMS, for automatic congestion detection), policing (detecting vehicles on a 'black-list'), toll and bus-lane enforcement (allowing permitted vehicles on a 'white-list', and capturing violators), speed enforcement (average speed over a known distance) and access control (detecting fraud such as ticket swaps). The largest system in the world for monitoring traffic congestion by measuring journey time is deployed by Trafficmaster Ltd in the UK [6]. Over 3500 ALPR cameras and associated video recognizers are installed at regular intervals along major roads and highways. License plates are read at each site, shortened to protect vehicle identity and transmitted to a central location in an encrypted form, as shown in figure C4.2.6.

As has been seen, great care is taken in the design of the system to extract the license plate from the surrounding scene as early as possible in the processing chain. This enables a very cost-effective solution to be produced, and results in a system which is robust to real-world conditions and can be deployed at the roadside to minimize communications costs.

Figure C4.2.6. Example showing two journey time measurement sites.

C4.2.3.2 Digital CCTV surveillance
The advent of relatively inexpensive digital video technology has revolutionized CCTV surveillance. Critical to the design of such systems is the manipulation of large amounts of digital data. A good-quality uncompressed colour image generates about 400 kB of digital data every 20 ms (16.7 ms for 60 Hz operation). Thus one is faced with a digital stream of over 20 MB of data per second from just a single
video source! This is within the capacity of a modern computer disk, provided there is sufficient computing bandwidth to route the stream to disk. However, communication costs become substantial at these data rates, even over short distances. There is, however, typically a great deal of redundant data within an image, which may be compressed to reduce the data transmission bandwidth. Spatially, there will often be large areas of similar texture or colour. Since the eye is not as sensitive to high spatial frequencies as it is to low frequencies, the image may be transformed into the spatial frequency domain and represented with fewer bits, by quantizing more coarsely the higher frequencies to which the eye is less sensitive. JPEG compression (using linearly spaced discrete cosine transforms) and wavelet compression (using octave-spaced windowed 'wavelets') are readily available on modern computers [7]. Both of these approaches allow the data volume to be traded off against image quality. Typically, compression artefacts are difficult to spot at compression ratios below about 3:1, though the image quality remains practically useful for surveillance purposes at much higher compression ratios, as high as 20:1. So-called motion JPEG (M-JPEG) is simply a stream of JPEG-encoded video fields, with each video field being treated as a still image.

There is also a great deal of redundancy between successive images. Typically, there will not be very much movement within the scene from one image to the next, given that there is only 20 ms between successive fields. Simple compression schemes might only transmit the differences between successive images, but they require a reference video field to be generated periodically, against which the differences are computed. Video image quality may be sustained with high compression in scenes containing movement if areas of similar content are identified on a field-to-field basis, with the amount of motion between the identical areas being transmitted rather than the areas themselves. These concepts have been embodied in the MPEG set of standards, and MPEG-2 is widely available on computers.

For digital surveillance, however, a simple field-by-field encoding such as that used in M-JPEG or wavelet compression offers many advantages. These include the encoding technology being much simpler,
and the ease with which one can move around the encoded images in a real-time sense. A single JPEG or wavelet encoded video field includes all the data needed to reconstruct it. MPEG-2 compression requires the nearest reference field to be found, and all changes between that field and the point in question to be computed and applied. M-JPEG fields can simply be interleaved from multiple sources into a single digital stream and easily reconstructed into separate video outputs; this is much more difficult with MPEG-2.

Compression at source enables standard computing and communications infrastructure to be used for intelligent surveillance. The benefits of this include:

- Direct recording to computer disk, with random access replay.

- Transmission over the Internet.

- Automatic back-up of data, with the original quality preserved indefinitely.

- Remote access to data from a standard PC.

- Encryption at source for high-security applications.

- Intelligent alarms, automatically routed via the Internet or mobile phone.
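The storage arithmetic behind these trade-offs is simple; the sketch below works through the uncompressed rate quoted earlier (400 kB per field at 50 fields per second) with the compression ratio left as an adjustable assumption.

```python
def cctv_budget(ratio=20.0, field_kB=400.0, fields_per_s=50.0, hours=24.0):
    """Data rate and daily storage for one video source at a given
    compression ratio (400 kB/field and 50 fields/s are the figures
    quoted in the text for uncompressed colour video)."""
    raw_MBps = field_kB * fields_per_s / 1000.0
    comp_MBps = raw_MBps / ratio
    per_day_GB = comp_MBps * 3600.0 * hours / 1000.0
    return raw_MBps, comp_MBps, per_day_GB

raw, comp, day = cctv_budget()
print(f"raw {raw:.0f} MB/s; at 20:1 -> {comp:.1f} MB/s, {day:.0f} GB per day")
```

At a 20:1 ratio, a single camera drops from 20 MB/s to about 1 MB/s, i.e. under 90 GB per day, which is what makes continuous disk recording and network transmission practical.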
The area of 'intelligent alarms' perhaps offers the greatest scope for innovation. Automatic surveillance software will permit areas of the scene to be interactively 'zoned' on the displayed image, and any rapid movement within the prescribed area detected. This is straightforward under controlled lighting conditions, where the natural variability within the area of interest is limited. Out of doors it is more difficult, with lighting changes or pan/tilt camera movements being relatively rapid, and simple field-by-field differencing is likely to throw up many false alarms. However, the nature of such changes is predictable: camera movement will by definition be transitory and cause all pixels to change, and changes in lighting will not change the local spatial frequency spectrum. Signal processing techniques can now economically provide solutions for many of these situations, and products are available to support 'record-on-event' working [8, 9]. Distinguishing between types of moving object, say between a dog and a person, may generally involve a training exercise, so that thresholds within the system are set to trigger on the person only.

The early detection of smoke or fire is now possible by video analysis [8]. The many well-known models of early combustion, including the wispy nature of smoke in the very early stages of a fire, have been characterized as images. Software can now be deployed to analyse video in real time and detect such features as wispy smoke from an electrical fault, or the flicker characteristic of flames when there is little smoke present. It is also possible to distinguish between the steam naturally present in a manufacturing process and abnormal smoke from an electrical fault [10].
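A minimal field-by-field differencing detector of the kind described earlier in this section (and, as noted, easily fooled outdoors) can be sketched as follows; the threshold and trigger fraction are arbitrary illustrative assumptions.

```python
import numpy as np

def motion_alarm(prev, curr, zone, diff_thresh=25, frac_thresh=0.02):
    """Flag motion when enough pixels inside the zoned region change.

    prev, curr: greyscale fields as uint8 arrays; zone: boolean mask of
    the interactively 'zoned' area of the scene.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = (diff > diff_thresh) & zone
    return changed.sum() > frac_thresh * zone.sum()

# toy demo on a 720 x 288 field: a bright object appears inside the zone
prev = np.zeros((288, 720), dtype=np.uint8)
curr = prev.copy()
curr[100:140, 300:340] = 200
zone = np.zeros_like(prev, dtype=bool)
zone[50:250, 200:500] = True
print(motion_alarm(prev, curr, zone))   # True
```

The more robust commercial schemes mentioned above add temporal filtering and spatial-frequency tests on top of this basic differencing so that global lighting changes and camera pans do not trip the alarm.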
C4.2.4 Conclusion
Digital video processing is making the intelligent processing of video images at source feasible for many real-world situations. Previously this required central processing, with the resultant high-bandwidth transmission of the images. Distributing the processing to the point of capture enables a few bytes of information describing the event to be transmitted, where previously tens of megabytes of data per second were required to achieve the same result.

References

[1] www.pipstechnology.com
[2] British Standard EN 60825-1:1994 Amendment 2 and IEC 825-1:1993 Safety of Laser Products
[3] American Conference of Government Industrial Hygienists 1996 Threshold Limit Values and Biological Exposure Indices
[4] Sliney D and Wolbarsht M 1980 Safety with Lasers and Other Optical Sources (New York: Plenum)
[5] Iqbal M 1983 An Introduction to Solar Radiation (New York: Academic)
[6] www.trafficmaster.co.uk
[7] Watkinson J 2000 The Art of Digital Video (Woburn, MA: Focal)
[8] www.issecurity.com
[9] www.pi-vision.com
[10] www.intelsec.com
C4.3 Optical actuation and control

George K Knopf

C4.3.1 Introduction
Actuators are devices that perform mechanical work in response to a command or control signal. The device can be separated into two parts: the actuator shell and the method of actuation. The shell is the basic structure of the actuator and often contains deformable or moving parts. Pneumatic cylinders and mechanical linkages are large-scale actuator shells. Examples of deformable microactuator shells include cantilever beams, microbridges, diaphragms, and torsional mirrors [1]. The main function of any shell design is to provide a mechanism for the desired actuation method to produce useful work. The actuation method is the means by which a control signal is converted to a force that is applied to the actuator shell and creates physical movement. The output of the overall system is the desired response, given as a displacement, force, or pressure value. The different methods of actuation take advantage of mechanical, electrostatic, piezoelectric, magnetic, thermal, fluidic, acoustic, chemical, biological, or optical principles.

Although optically activated actuators are probably the least developed of all force-generating structures, they offer several interesting design features. All-optical circuits and devices have advantages over conventional electronic components, because they are activated by photons instead of currents and voltages. In many of these designs, the photons provide both the energy and the control signal into the system to initiate the desired response. Furthermore, optical systems are free from the current losses, resistive heat dissipation, and friction forces that greatly diminish the performance and efficiency of conventional electronic or electro-mechanical systems. The negative effects of current leakage and power loss are greatly amplified as design engineers strive for product miniaturization through the exploitation of nanotechnology. Optical actuators can be interfaced with fibre optics and other types of optical waveguide used in developing integrated optical circuits.

The opportunity to interface optical actuators directly with fibre-optic sensors has enabled a variety of smart devices and engineered structures to be developed. Optical fibre sensors have been designed to measure linear displacement, rotation, force, pressure, sound, temperature, flow, and chemical quantities [2]. Optical sensors and actuators are ideal components for smart structures because they are immune from electromagnetic interference, safe in hazardous or explosive environments, secure, and exhibit low signal attenuation. Systems that directly combine fibre-optic sensors or free-space optical sensors with optical actuators have been termed 'control-by-light' systems (figure C4.3.1). Since the level of light attenuation along the optical fibre is low, the optical sensors and actuators can be located at great distances from the measurement environment, in order to ensure electrical isolation. An optoelectronic control system can be constructed from a variety of lightwave technologies and control strategies. The complexity of the controller design and the control loop is very much application dependent.
Figure C4.3.1. Schematic view of the basic components of a ‘control-by-light’ system as described by Jones and McKenzie [9].
dependent. It is possible to replace many of the electrical components with optical analogues in order to increase the processing speed or to enhance performance. One possible design for a single-input single-output (SISO) optical controller exploits the functionality of an electro-optic spatial light modulator. A spatial light modulator (SLM) is an optical device that controls the spatial distribution of the intensity, phase, and polarization of transmitted light as a function of electrical signals or a secondary light source [3]. SLMs are the basic information processing elements of nearly all optical systems and provide real-time input–output interfaces with peripheral electronic circuitry. Figure C4.3.2 is a block diagram that shows the basic control strategy for regulating the displacement, d(t), of a light activated actuator. The electro-optic SLM enables the intensity of the light source to be modified with respect to the error between the reference and feedback values, that is

u(t) = K(i_2(t) - i_1(t))                                        (C4.3.1)
where K is the fixed system gain and u(t) is the control signal given by light intensity. Most commercially available SLMs struggle to deliver high resolution, large bandwidth, long-term stability, high speed, and low cost simultaneously. Many of these shortcomings are directly related to the physical limitations of the materials used in the device.
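To make the control strategy of equation (C4.3.1) concrete, the short Python sketch below simulates the closed loop, assuming a hypothetical first-order lag model for the actuator; the gain, sensitivity, and time-constant values are illustrative assumptions rather than figures taken from this chapter.

# Minimal sketch of the SISO optical control loop of equation (C4.3.1),
# assuming (as a model, not from the text) a first-order actuator:
# tau * d'(t) + d(t) = G * u(t). All numerical values are illustrative.

K = 0.8        # fixed system gain of the SLM stage (assumed)
G = 2.0e-6     # actuator sensitivity, metres per unit light intensity (assumed)
tau = 0.05     # actuator time constant, s (assumed)
dt = 0.001     # simulation time step, s

d_ref = 10e-6  # desired displacement, m
d = 0.0        # actual displacement, m

for _ in range(1000):                # simulate 1 s of closed-loop operation
    i2 = d_ref / G                   # reference signal expressed as an intensity
    i1 = d / G                       # optical feedback signal
    u = K * (i2 - i1)                # control light intensity, equation (C4.3.1)
    d += dt * (G * u - d) / tau      # first-order actuator response

# Pure proportional control settles with an offset: d -> K * d_ref / (1 + K).
print(f"displacement after 1 s: {d * 1e6:.2f} um")

As the final comment notes, a fixed gain K alone cannot remove the steady-state error; a practical controller would add integral action or a higher gain, at the cost of the stability margins of the optical loop.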
The fundamental and unique characteristics of light activated optical actuators and control systems are explored in this chapter. The more commonly studied light-driven and control-by-light systems that exclusively use off-the-shelf optoelectronic devices, such as photocells and photodiodes, to generate a current to drive an electromagnetic motor directly [4], are not discussed. Here, the primary means of actuation is to project light onto an actuator shell in an effort to generate mechanical deformation that, in turn, produces the desired displacement or force. Light is used both to initiate movement and to control the actuation mechanism to perform work. The effect of optical actuation and control is illustrated by several innovative system designs.

Figure C4.3.2. Block diagram of an optical controller used to regulate the behaviour of a flexible diaphragm optical actuator.

C4.3.2 Optical actuators
Optical actuators will indirectly or directly transform light energy into the desired structural displacement. Indirect optical methods exploit the ability of light to generate heat and directly influence the thermal properties of gases, fluids, and solids. On the other hand, direct optical methods use photons to interact with the photosensitive properties of the material used to construct the actuator shell and, thereby, cause mechanical deformation. An example of direct optical microactuation occurs when a light source is used to generate electrostatic forces that move a silicon microcantilever beam [1]. Indirect optical methods often have simpler designs and generate more actuation power than direct optical methods. However, these methods of actuation utilize thermal and phase-transformation properties of fluids and solids, and can be comparatively slow. In contrast, direct optical microactuators are fast but produce small forces. Direct optical microactuators can be designed for a specific application using the proven fabrication techniques of micromachining or semiconductor doping and etching. Direct optical actuators are, therefore, crucial in the development of sophisticated micromachines that are constrained by spatial, low noise, and power requirements. A large number of indirect and direct optical actuators have been reported in the literature. Several of the most promising concepts are now briefly described (table C4.3.1).

C4.3.2.1 Indirect optical actuators

Opto-thermal expansion of fluid

Many indirect optical methods used for mechanical actuation take advantage of the heat generated by the light source to create the desired force or pressure. When a simple gas is heated, it expands
Table C4.3.1. Examples of different optical actuators.

Actuator principle                            References

Indirect actuation
  Phase transformation of fluids              [5, 1]
  Optopneumatic converter                     [6, 7, 33, 9, 28, 34, 35, 8]
  Phase transformation of solids              [36, 12]
  Light propulsion system                     [13]
Direct actuation
  Radiation pressure
    Optical micromanipulation                 [14, 15, 18, 17]
    Solar sails                               [19, 20]
  Microactuators
    Electrostatic pressure                    [1, 21]
    Photostrictive effect                     [22, 23]
    Photothermal vibration                    [27]
    Photo-induced phase transition material   [25]
according to the ideal gas law

P_r V_l = n r T                                                  (C4.3.2)
where P_r is the gas pressure, V_l is the gas volume, T is the absolute temperature, r is the gas constant (0.0821 litre atm mol⁻¹ K⁻¹), and n is the number of moles. The optically actuated silicon diaphragm device shown in figure C4.3.3 exploits this simple principle to deflect a flexible diaphragm in order to perform mechanical work. The cavity is filled with a gas or oil that expands when heated by the light source. As the diaphragm deflects under pressure it produces the desired deflection, d. The displacement d produced by a diaphragm actuator, measured at the centre from its equilibrium position, is given in [1] by

P_r = \frac{4 a_1 b}{L^2} \sigma_0 d + \frac{16 a_2 f(\nu) b}{L^4} \frac{Y}{1 - \nu} d^3          (C4.3.3)

Figure C4.3.3. Schematic view of an optically actuated silicon diaphragm.
where P_r is the applied pressure, L is the length, σ₀ is the residual stress, Y/(1 − ν) is the bi-axial modulus, and b is the thickness of the diaphragm. The dimensionless parameters a₁, a₂ and f(ν) depend on the geometry of the diaphragm. For a square diaphragm f(ν) = 1.075 − 0.292ν, a₁ = 3.04, and a₂ = 1.37. Microfabricated flow controllers [1] have response speeds of 21 ms in air-flow and 67 ms in oil-flow, and show sensitivities of 304 and 75 Pa mW⁻¹, respectively.

Mizoguchi et al [5] used this simple concept to create a micropump that included an array of five closed diaphragm actuated devices called microcells (see figure C4.3.4). Each microcell consisted of a pre-deflected 800 μm × 800 μm square membrane that was micromachined in 0.25 mm³ of silicon and filled with Freon 113, a liquid with a boiling point of approximately 47.5°C. A carbon-wool absorber was placed inside the cell to convert the incident light from the optic fibre into heat. The microcell exhibited a relatively large deflection, approximately 35 μm, when the cell’s contents were heated and the Freon 113 underwent a phase change from liquid to gas. The fluid being transported by the pump is fed into a flow channel between the glass plate and the deflecting membrane using very small harmonic movements of the membrane; the harmonic order of the membrane’s deflection determines the fluid flow rate and direction. The small quantity of Freon in each cell allowed relatively low optical powers to be used to change the phase of the liquid to gas, giving the large membrane deflections needed to operate the pump. The microcell was fabricated and operated with a laser source of not more than 10 mW. The micropump achieved a head pressure of approximately 30 mmAq and a flow rate of 30 nl per cycle.
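As a rough numerical check on equation (C4.3.3), the sketch below inverts the pressure–deflection relation for a square diaphragm by bisection. Only a₁, a₂ and f(ν) follow the values quoted above; the geometry and material values are invented for illustration, so the computed deflection is indicative only.

# Estimate the centre deflection d of a square diaphragm for a given
# optically induced pressure by inverting equation (C4.3.3) numerically.
# Geometry and material values are illustrative assumptions.

L = 800e-6      # side length of the square diaphragm, m (assumed)
b = 2e-6        # diaphragm thickness, m (assumed)
sigma0 = 50e6   # residual stress, Pa (assumed)
Y = 160e9       # Young's modulus, Pa (assumed, roughly silicon)
nu = 0.22       # Poisson's ratio (assumed)

a1, a2 = 3.04, 1.37          # square-diaphragm constants quoted in the text
f_nu = 1.075 - 0.292 * nu    # f(nu) for a square diaphragm

def pressure(d):
    """Right-hand side of equation (C4.3.3) for a centre deflection d."""
    return (4.0 * a1 * b * sigma0 / L**2) * d \
         + (16.0 * a2 * f_nu * b / L**4) * (Y / (1.0 - nu)) * d**3

def deflection(P, hi=1e-3):
    """Bisection solve of pressure(d) = P; pressure(d) is monotone in d."""
    lo = 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pressure(mid) < P else (lo, mid)
    return 0.5 * (lo + hi)

P = 300.0       # Pa, roughly 1 mW at the quoted 304 Pa/mW air sensitivity
print(f"deflection: {deflection(P) * 1e6:.3f} um at {P:.0f} Pa")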
Figure C4.3.4. Cross section of a single microcell in the optically driven micropump. (Adapted from [5].)
Figure C4.3.5. Schematic diagram of the Brunel converter with an opto-fluidic cell and flapper nozzle valve. (Diagram modified from [6].)
In the late 1980s, a large-scale optopneumatic converter using a closed photoacoustic cell and a flexible membrane nozzle arrangement, figure C4.3.5, was developed at Brunel University [6–8]. The converter was used to operate a double-acting pneumatic cylinder and a standard 21–100 kPa valve actuator in an incremental manner. The basic design was a miniature cylindrical cell (5 mm³ in volume), which had a thin elastic membrane stretched across one end. An optical fibre rested in a hole on the opposite face of the membrane. A diode laser produced an average of 3.5 mW of optical power that was delivered to the cell. Inside the cell there was a small amount of air and a carbon absorber. The energy transmitted from the light source caused the carbon absorber to heat up, and the transfer of the generated heat to the surrounding air in the cell resulted in an increase in pressure that forced the flexible membrane to expand outward. The membrane movement was detected by a change in the back-pressure of the air jet from the flapper nozzle valve. A differential configuration of two cells was used by Hale et al [6] to produce a system that would respond only to changes in the back-pressure from the two converters. A differential pressure of 500 Pa was produced 0.5 s after the laser was switched on, and a maximum of 1400 Pa was produced after the laser remained on for a longer period. The conversion factor is quoted as 400 Pa mW⁻¹ [6].

An alternative optical-to-pressure converter was developed by Jones and McKenzie [9], which utilized more recent micromachining techniques to miniaturize the previous design concept and achieve higher response speeds. Light from a 10 mW continuous wave diode laser, delivered by an optic fibre, enters the cell of the converter through the glass cell back. Again, the basic design of the cell contained a fibrous absorber and air. The square cells with integral diaphragms were micromachined with cell volumes of approximately 0.9 mm³. The converter produced a response time of 21 ms and a conversion factor (pressure/optical power) of 304 Pa mW⁻¹ in air, and 67 ms at 81 Pa mW⁻¹ in oil.

Phase transformation of solids

Other approaches to indirect optical microactuation exploit the volume expansion and contraction characteristics exhibited by a number of unique materials. These solids experience a discontinuous change in their volume, near the structural phase transformation temperature, that is significantly larger than the linear volume change that occurs due to normal thermal expansion in most materials. Shape memory alloys (SMAs) and some gels, such as polyacrylamide, exhibit reproducible phase transformation effects, and the corresponding phase transformation temperature can be adjusted over a wide range of temperatures, making them ideal materials for constructing optically driven microactuators. SMAs, such as 50/50 Ni–Ti, are a group of metal alloys that can directly transform thermal energy into mechanical work. The shape memory hysteresis effect, figure C4.3.6, is the result of a martensite-to-austenite phase transformation that occurs as the Ni–Ti material is heated, and its subsequent reversal during cooling. The hysteresis effect implies that the temperature at which the material undergoes a phase change during heating is different from the temperature that causes the same material to return to
Figure C4.3.6. A schematic of the Ni–Ti wire during contraction and a typical plot of the changes in SMA material properties as the temperature is increased and decreased. In this illustration, As is the start of the austenite phase, Af is the finish of the austenite phase, Ms is the martensite start temperature, and Mf is the martensite finish temperature.
the martensite state during cooling. The hysteresis effect is typically of the order of 20°C for SMA material [10, 11]. The alloy can be formed into a wire or strip at high temperatures when it resides in an austenitic condition. Increasing the temperature of a pre-loaded Ni–Ti wire, originally at ambient temperature, will cause the material to undergo a phase transformation and move the position of the attached load through a distance of approximately 4% of the overall wire length. In essence, the small force created during the contraction period can be used to perform mechanical work [10]. The reduction in the wire length can be recovered by cooling the material back to ambient temperature. The number of times the Ni–Ti material can exhibit the shape memory effect depends upon the amount of strain and, consequently, the total distance through which the wire is displaced. The amount of wire deflection is also a function of the initial force applied. The mechanical properties of a typical 200 μm wire are shown in table C4.3.2. The amount of pull-force generated for the applied current is significant for the size of the wire. Thicker wires will generate greater forces but require larger currents and longer cooling times. For example, a 200 μm Ni–Ti wire produces four times more force (~5.8 N) than a 100 μm wire, but takes 5.5 times as long (~2.2 s) to cool down once heating has ceased [11]. Although SMA materials exhibit unique and useful design characteristics, such as a large power-to-weight ratio, small size, and clean, silent actuation, the successful application of the material has been limited to a small number of actuation devices that require small linear displacements.
Table C4.3.2. Mechanical properties of the Ni–Ti wire.

Parameter                     Characteristic    Phase
Transformation temperature    90°C
Density                       6.45 g cc⁻¹
Young's modulus               83 GPa            Austenite
Young's modulus               28–41 GPa         Martensite
Yield strength                195–690 MPa       Austenite
Yield strength                70–140 MPa        Martensite
Transformation strain         Max 8%            For a single cycle
                              6%                For 100 cycles
                              4%                For 100 000 cycles

Source: [11].
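The fourfold force ratio between the 200 μm and 100 μm wires follows directly from the cross-sectional area scaling with the square of the diameter. A minimal sketch, assuming a recovery stress of about 190 MPa (an illustrative value chosen to be consistent with the ~5.8 N quoted above, not a figure from table C4.3.2):

import math

recovery_stress = 190e6      # Pa, assumed recoverable stress of the Ni-Ti wire

def pull_force(diameter_m):
    """Force ~ stress x cross-sectional area, so it scales with diameter squared."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return recovery_stress * area

for d_um in (100, 200):
    print(f"{d_um} um wire: ~{pull_force(d_um * 1e-6):.1f} N")
# Doubling the diameter quadruples the pull force, matching the text.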
An example of an optically driven walking machine that employs SMA is shown in figure C4.3.7 [12]. The miniaturized machine consists of two parts: a body made up of SMA and springs, and feet made up of magnets and temperature-sensitive ferrites. The feet stick to the carbon steel floor due to a magnetic force balance controlled by the incident light beam, and the body repeats stretching and shrinking using the deformation of the SMAs caused by the switching ON and OFF of the projected light beam.

Certain gels that undergo a phase transformation between a solid and liquid state can also be used as an optically activated microactuator. These gels are mostly fluidic and composed of a tangled network of polymer strands. A balance of forces within the material maintains this state until it is disturbed by very small perturbations introduced by optical, thermal, electrical, or chemical influences. These perturbations cause the material to undergo a phase transformation that can drastically alter the volume of the gel by forcing it to shrink or swell by a factor of several hundred times. For example, the polymer network may lose its elasticity and become compressible as the temperature is lowered. The gel will collapse below a critical temperature because the elasticity becomes zero and the compressibility becomes infinite. These changes are discontinuous around the critical temperature and result in large volume changes for an infinitesimal change in temperature.
Figure C4.3.7. Basic structure of a light activated walking machine described by Yoshizawa et al [12].
Figure C4.3.8. The diameter of N-isopropylacrylamide/chlorophyllin copolymer gel as a function of optical power. (Graph adapted from [1].)
Figure C4.3.8 shows the effect of visible light on N-isopropylacrylamide/chlorophyllin copolymer gel [1]. The ambient temperature of the gel is 31.5°C and the illumination provided by the visible light, with a focused spot size of 20 μm, results in nearly a 50% change in the gel’s diameter. The actuation speed is several seconds for a gel sample with a diameter of approximately 200 μm. Alternatively, the same actuating material can be illuminated by UV light that initiates an ionization reaction in the gel, which creates, in turn, an osmotic pressure that induces swelling. This process is, however, slow when compared to heating the gel directly with white light illumination.

Light propulsion system

Indirect optical actuation has also been proposed as the mechanism for a new and innovative vehicle propulsion system. Recent experiments sponsored by NASA and the US Air Force have demonstrated that a highly polished, reflective object can be propelled into the air by a pulsed IR laser beam originating from the earth’s surface [13]. An early functional prototype was constructed from ordinary aircraft-grade aluminium with a forward covering (or aeroshell), an annular (ring-shaped) cowl, and an aft part consisting of an optic and expansion nozzle. The spin-stabilized object measured 10 cm in diameter and weighed 50 g. By using a 10 kW carbon dioxide laser pulsing 28 times per second, the object has been propelled to altitudes of up to 30 m in approximately 3 s. The reflective surfaces on the object were used to focus the light beam into a ring, where it heated the surrounding air to a temperature nearly five times hotter than the surface of the sun [13], causing the air to expand rapidly and produce the required thrust action. During atmospheric flight,
the forward section compresses the air and directs it to the engine inlet. The annular cowl takes the brunt of the thrust. The aft section serves as a parabolic collection mirror that concentrates the IR laser light into an annular focus, while providing another surface against which the hot-air exhaust can press. Myrabo [13] claims that the design offers automatic steering such that if the craft starts to move outside the beam, the thrust inclines and pushes the vehicle back. Although promising, this concept of light propulsion needs further research and development on the optical and control systems in order to realize its potential.

C4.3.2.2 Direct optical actuators
In contrast, direct methods of optical actuation attempt to use the photons in a light stream to induce mechanical deformations within the actuator shell without heating the surrounding gases, liquids, or solids. Although direct optical actuators are fast and energy efficient, they have achieved limited use because they produce relatively small forces and displacements. However, these characteristics and the reduction in resistive heat dissipation and electrical current losses make direct optical actuators ideal for developing micro-robots, micro-electro-mechanical systems (MEMS), and nanotechnology. Furthermore, many of these optically driven microactuators can be designed for specific applications using proven microfabrication techniques such as laser-material removal, semiconductor doping, and etching.

Optically induced movement by radiation forces

The micromanipulation of particles is, perhaps, the most direct use of light irradiation to both power and control the movement of microscopic machines. Ashkin [14] was the first to suspend and levitate minute particles using the forces created by light radiation pressure. A later experiment performed by Ashkin and Dziedzic [15] trapped a 20 μm glass sphere in a vertical laser beam using a 0.25 W TEM₀₀ 0.515 μm laser beam. The glass micro-sphere was able to sit in an on-axis position for a period of hours because the transverse forces, caused by the difference in refractive indices of glass and air, pushed the particle towards the region of maximum light intensity, thereby creating a potential well. However, the manoeuvrability of suspended spheres could only be observed by scattering the light of the supporting beam or an ancillary laser with a very low power. Ashkin and Dziedzic [16] further demonstrated that with a small refinement of this simple experiment it was possible to assemble aggregates of two, three and four micro-spheres in mid-air with two laser beams. The driving force in this micromanipulation technique is the radiation pressure generated by streams of photons striking a surface. An individual photon exhibits a momentum [17] described by

M = \frac{h\nu}{c}                                               (C4.3.4)
where h is Planck’s constant, ν is the optical frequency, and c is the speed of light. The force generated by a single photon is the change in momentum of the photon as it is absorbed or reflected by the surface. If the photon strikes a surface with 100% absorption, then the corresponding force on the structure is

F = \frac{\Delta M}{\Delta t} = \frac{h\nu}{c \Delta t} = \frac{h}{\lambda \Delta t}              (C4.3.5)
where the optical frequency is given by ν = c/λ and λ is the wavelength of the light source. In contrast, if the photon strikes a mirror surface with 100% reflectivity, then the force is doubled because the surface
is recoiled with enough momentum to stop the photon and send it back, thereby increasing the momentum transfer by a factor of 2. The total force due to numerous photons striking a mirror surface is estimated as the force–time product generated by a single photon times the number of photons per second as a function of light beam power,

F_t = \frac{2h}{\lambda} \frac{P\lambda}{hc} = \frac{2P}{c}                                       (C4.3.6)
where the light beam at power P provides Pλ/(hc) photons per second. Again, if the surface absorbs 100% of the light, then only half the force is generated per photon and the total force is decreased proportionately.
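A minimal numerical sketch of equations (C4.3.4)–(C4.3.6), using the 0.25 W, 0.515 μm beam from the trapping experiment above as the example (the choice of beam is the only assumption):

h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s

wavelength = 0.515e-6            # m, as in the Ashkin and Dziedzic experiment
P = 0.25                         # beam power, W

photon_momentum = h / wavelength           # M = h*nu/c = h/lambda, eq. (C4.3.4)
photons_per_second = P * wavelength / (h * c)
F_absorbed = P / c                         # 100% absorbing surface
F_mirror = 2.0 * P / c                     # 100% reflecting surface, eq. (C4.3.6)

print(f"photons per second: {photons_per_second:.3e}")
print(f"force on absorber:  {F_absorbed:.3e} N")
print(f"force on mirror:    {F_mirror:.3e} N")

For this beam the mirror force is only about 1.7 nN, which illustrates why radiation pressure is well suited to micromanipulation but demands very large, very light structures when used for propulsion.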
The force generated by light radiation is applicable to both individual particles and ensembles of particles, and has been proposed as the principal technique for optically manipulating small shaped objects for the assembly of micromachines. Higurashi et al [18] demonstrated how carefully shaped fluorinated polyimide micro-objects with a cross-sectional radius of 6–7.5 μm can be rotated. In this series of experiments, radiation forces near the focal point are used to position and rotate the micro-object about the laser beam axis, as shown in figure C4.3.9. These micro-objects with a low relative refractive index are surrounded by a medium with a higher index, and are optically trapped by exerting radiation pressure through their centre openings using a strongly focused trapping laser beam. The pressure is exerted on the inner walls of the object. The micro-objects are both trapped and rotated by radiation pressure when the horizontal cross sections of these objects show rotational symmetry. In addition, the rotation speed versus optical power and the axial position of the laser focal point were investigated for high relative refractive index micro-objects [18]. The rotational speed, with respect to optical power, was found to be in the range of 0.4–0.7 rpm mW⁻¹.

The forces generated by light radiation are also being explored as a means of creating propulsion for space travel [19, 20]. A largely conceptual spacecraft is being developed by the Jet Propulsion Laboratory (JPL) and NASA that uses a solar sail with a very large, low-mass, highly reflective surface that receives a high-powered energy beam from an earth-bound laser. The large size of the sail relative to its mass enables the simple craft to receive sufficient light radiation to propel it forward. The projected photons from the concentrated coherent light source can cause two effects on the sail surface that impact acceleration. First, the photons from the incident light beam collide elastically with the electromagnetic field surrounding the atoms in the sail material and are reflected from the surface. Second, the photons are absorbed by the sail material and generate heat. The amount of usable power transmitted to the sail is constrained by these thermal effects because heating the material reduces its reflective properties. This is a critical design issue because a highly reflective surface will produce a significantly larger force than a light-absorbing surface and, thereby, greater acceleration. However, the temperature on the surface of the sail can be lowered by coating the reverse side with a material that will efficiently radiate most of the generated heat.

During operation, the solar sail must sustain its acceleration in order to reach the high velocities required for travelling great distances in space. In theory, the maximum attainable velocity is limited by the duration over which the laser ‘efficiently’ illuminates the moving target. For long-distance travel, the light sail must take full advantage of the coherence of the laser beam. Coherence implies that the energy in the light beam will be reasonably undiminished up to a distance known as the diffraction distance. Beyond this point the power from the light source is quickly reduced. Furthermore, the diffraction distance of any laser source is governed by the size of the laser’s
Figure C4.3.9. Schematic illustrating the origin of optical trapping and optically induced rotation of a low relative refractive index micro-object (n₁ < n₂). This experimental set-up was used to observe optically induced rotation of fluorinated polyimide micro-objects. (Illustration modified from [29].)
aperture. A laser system powerful enough to propel a craft, either within or outside the Earth’s atmosphere, would probably be constructed from hundreds of smaller lasers arranged in an array. In this case, the effective aperture size of the light source would roughly be the diameter of the entire array. Maximum power will be transferred to the distant sail when an array is packed as densely as possible.
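The sail discussion above can be quantified with the same F = 2P/c result, weighting the reflected and absorbed fractions separately. A minimal sketch; the power, mass, and reflectivity are invented round numbers, not mission parameters:

c = 2.998e8          # speed of light, m/s

P = 1.0e6            # beam power actually reaching the sail, W (assumed)
m = 10.0             # sail plus payload mass, kg (assumed)
reflectivity = 0.9   # reflected fraction (assumed); absorbed photons push half as hard

# reflected fraction contributes 2P/c, absorbed fraction contributes P/c
F = (2.0 * reflectivity + (1.0 - reflectivity)) * P / c
a = F / m
print(f"thrust: {F * 1e3:.2f} mN, acceleration: {a * 1e6:.0f} um/s^2")

Even a megawatt-class beam on a 10 kg craft yields an acceleration below a millimetre per second squared, which is why sustained illumination over long periods, and hence the diffraction distance discussed above, dominates the design.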
Optically controlled silicon microactuators

Silicon microactuators can be excited directly by an optical light signal using a number of different techniques. One optically controlled silicon microactuator, based on the mechanics of a parallel-plate capacitor as shown in figure C4.3.10, has been developed by Tabib-Azar [1]. The method uses photo-generated electrons to change the electrostatic pressure on a thin silicon (Si) cantilever beam mounted on an insulating post overhanging an Au ground plane to form a capacitor given by C