THIRD EDITION

TRANSFORMS AND APPLICATIONS HANDBOOK
The Electrical Engineering Handbook Series
Series Editor
Richard C. Dorf, University of California, Davis

Titles Included in the Series

The Avionics Handbook, Second Edition, Cary R. Spitzer
The Biomedical Engineering Handbook, Third Edition, Joseph D. Bronzino
The Circuits and Filters Handbook, Third Edition, Wai-Kai Chen
The Communications Handbook, Second Edition, Jerry Gibson
The Computer Engineering Handbook, Vojin G. Oklobdzija
The Control Handbook, William S. Levine
CRC Handbook of Engineering Tables, Richard C. Dorf
Digital Avionics Handbook, Second Edition, Cary R. Spitzer
The Digital Signal Processing Handbook, Vijay K. Madisetti and Douglas Williams
The Electrical Engineering Handbook, Third Edition, Richard C. Dorf
The Electric Power Engineering Handbook, Second Edition, Leonard L. Grigsby
The Electronics Handbook, Second Edition, Jerry C. Whitaker
The Engineering Handbook, Third Edition, Richard C. Dorf
The Handbook of Ad Hoc Wireless Networks, Mohammad Ilyas
The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas
Handbook of Nanoscience, Engineering, and Technology, Second Edition, William A. Goddard, III, Donald W. Brenner, Sergey E. Lyshevski, and Gerald J. Iafrate
The Handbook of Optical Communication Networks, Mohammad Ilyas and Hussein T. Mouftah
The Industrial Electronics Handbook, J. David Irwin
The Measurement, Instrumentation, and Sensors Handbook, John G. Webster
The Mechanical Systems Design Handbook, Osita D.I. Nwokah and Yildirim Hurmuzlu
The Mechatronics Handbook, Second Edition, Robert H. Bishop
The Mobile Communications Handbook, Second Edition, Jerry D. Gibson
The Ocean Engineering Handbook, Ferial El-Hawary
The RF and Microwave Handbook, Second Edition, Mike Golio
The Technology Management Handbook, Richard C. Dorf
Transforms and Applications Handbook, Third Edition, Alexander D. Poularikas
The VLSI Handbook, Second Edition, Wai-Kai Chen
THIRD EDITION

TRANSFORMS AND APPLICATIONS HANDBOOK

Editor-in-Chief
ALEXANDER D. POULARIKAS
Boca Raton London New York
CRC Press is an imprint of the Taylor & Francis Group, an informa business
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.
CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 © 2010 by Taylor and Francis Group, LLC CRC Press is an imprint of Taylor & Francis Group, an Informa business No claim to original U.S. Government works Printed in the United States of America on acid-free paper 10 9 8 7 6 5 4 3 2 1 International Standard Book Number: 978-1-4200-6652-4 (Hardback) This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. 
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Transforms and applications handbook / editor, Alexander D. Poularikas. -- 3rd ed. p. cm. -- (Electrical engineering handbook ; 43) Includes bibliographical references and index. ISBN-13: 978-1-4200-6652-4 ISBN-10: 1-4200-6652-8 1. Transformations (Mathematics)--Handbooks, manuals, etc. I. Poularikas, Alexander D., 1933- II. Title. III. Series. QA601.T73 2011 515'.723--dc22 2009018410

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
Contents

Preface to the Third Edition ........................................................................ vii
Editor ..................................................................................................... ix
Contributors .............................................................................................. xi

1 Signals and Systems ................................................................................ 1-1
   Alexander D. Poularikas
2 Fourier Transforms ................................................................................. 2-1
   Kenneth B. Howell
3 Sine and Cosine Transforms ....................................................................... 3-1
   Pat Yip
4 Hartley Transform .................................................................................. 4-1
   Kraig J. Olejniczak
5 Laplace Transforms ................................................................................. 5-1
   Alexander D. Poularikas and Samuel Seely
6 Z-Transform ......................................................................................... 6-1
   Alexander D. Poularikas
7 Hilbert Transforms ................................................................................. 7-1
   Stefan L. Hahn
8 Radon and Abel Transforms ........................................................................ 8-1
   Stanley R. Deans
9 Hankel Transform ................................................................................... 9-1
   Robert Piessens
10 Wavelet Transform ................................................................................ 10-1
   Yulong Sheng
11 Finite Hankel Transforms, Legendre Transforms, Jacobi and Gegenbauer Transforms,
   and Laguerre and Hermite Transforms .......................................................... 11-1
   Lokenath Debnath
12 Mellin Transform ................................................................................. 12-1
   Jacqueline Bertrand, Pierre Bertrand, and Jean-Philippe Ovarlez
13 Mixed Time–Frequency Signal Transformations ................................................. 13-1
   G. Fay Boudreaux-Bartels
14 Fractional Fourier Transform ................................................................... 14-1
   Haldun M. Ozaktas, M. Alper Kutay, and Çagatay Candan
15 Lapped Transforms ................................................................................ 15-1
   Ricardo L. de Queiroz
16 Zak Transform ..................................................................................... 16-1
   Mark E. Oxley and Bruce W. Suter
17 Discrete Time and Discrete Fourier Transforms ............................................... 17-1
   Alexander D. Poularikas
18 Discrete Chirp-Fourier Transform .............................................................. 18-1
   Xiang-Gen Xia
19 Multidimensional Discrete Unitary Transforms ................................................ 19-1
   Artyom M. Grigoryan
20 Empirical Mode Decomposition and the Hilbert–Huang Transform ............................. 20-1
   Albert Ayenu-Prah, Nii Attoh-Okine, and Norden E. Huang
Appendix A: Functions of a Complex Variable ................................................................................................................ A-1 Appendix B: Series and Summations .................................................................................................................................... B-1 Appendix C: Definite Integrals ............................................................................................................................................... C-1 Appendix D: Matrices and Determinants .......................................................................................................................... D-1 Appendix E: Vector Analysis ................................................................................................................................................... E-1 Appendix F: Algebra Formulas and Coordinate Systems ............................................................................................... F-1 Index ............................................................................................................................................................................................. IN-1
Preface to the Third Edition

The third edition of Transforms and Applications Handbook follows a similar approach to that of the second edition. The new edition builds upon the previous one by presenting additional important transforms valuable to engineers and scientists. Numerous examples and different types of applications are included in each chapter so that readers from different backgrounds will have the opportunity to become familiar with a wide spectrum of applications of these transforms. In this edition, we have added the following important transforms:

1. Finite Hankel transforms, Legendre transforms, Jacobi and Gegenbauer transforms, and Laguerre and Hermite transforms
2. Fractional Fourier transforms
3. Zak transforms
4. Continuous and discrete Chirp–Fourier transforms
5. Multidimensional discrete unitary transforms
6. Hilbert–Huang transforms
I would like to thank Richard Dorf, the series editor, for his help. Special thanks also go to Nora Konopka, the acquisitions editor for engineering books, for her relentless drive to finish the project.

Alexander D. Poularikas

MATLAB® is a registered trademark of The MathWorks, Inc. For product information, please contact: The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA. Tel: 508-647-7000; Fax: 508-647-7001; E-mail: [email protected]; Web: www.mathworks.com
Editor

Alexander D. Poularikas received his PhD from the University of Arkansas, Fayetteville, Arkansas, and became a professor at the University of Rhode Island, Kingston, Rhode Island. He became the chairman of the engineering department at the University of Denver, Denver, Colorado, and then the chairman of the electrical and computer engineering department at the University of Alabama in Huntsville, Huntsville, Alabama. Dr. Poularikas has published seven books and has edited two. He served as the editor in chief of the Signal Processing series (1993–1997) with Artech House and is now the editor in chief of the Electrical Engineering and Applied Signal Processing series as well as the Engineering and Science Primer series (1998 to present) with Taylor & Francis. He is a Fulbright scholar, a lifelong senior member of the IEEE, and a member of Tau Beta Pi, Sigma Nu, and Sigma Pi. In 1990 and in 1996, he received the Outstanding Educators Award of the IEEE, Huntsville Section. He is now a professor emeritus at the University of Alabama in Huntsville.

Dr. Poularikas has authored, coauthored, and edited the following books:

Electromagnetics, Marcel Dekker, New York, 1979.
Electrical Engineering: Introduction and Concepts, Matrix Publishers, Beaverton, OR, 1982.
Workbook, Matrix Publishers, Beaverton, OR, 1982.
Signals and Systems, Brooks/Cole, Boston, MA, 1985.
Elements of Signals and Systems, PWS-Kent, Boston, MA, 1988.
Signals and Systems, 2nd edn., PWS-Kent, Boston, MA, 1992.
The Transforms and Applications Handbook, CRC Press, Boca Raton, FL, 1995.
The Handbook of Formulas and Tables for Signal Processing, CRC Press, Boca Raton, FL, 1998; 2nd edn. (2000); 3rd edn. (2009).
Adaptive Filtering Primer with MATLAB, Taylor & Francis, Boca Raton, FL, 2006.
Signals and Systems Primer with MATLAB, Taylor & Francis, Boca Raton, FL, 2007.
Discrete Random Signal Processing and Filtering Primer with MATLAB, Taylor & Francis, Boca Raton, FL, 2009.
Contributors

Nii Attoh-Okine, Civil Engineering Department, University of Delaware, Newark, Delaware
Albert Ayenu-Prah, Civil Engineering Department, University of Delaware, Newark, Delaware
Jacqueline Bertrand, National Center for Scientific Research, University of Paris, Paris, France
Pierre Bertrand, Department of Electromagnetism and Radar, French National Aerospace Research Establishment (ONERA), Palaiseau, France
G. Fay Boudreaux-Bartels, University of Rhode Island, Kingston, Rhode Island
Çagatay Candan, Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
Stanley R. Deans, University of South Florida, Tampa, Florida
Lokenath Debnath, Department of Mathematics, University of Texas-Pan American, Edinburg, Texas
Artyom M. Grigoryan, Department of Electrical and Computer Engineering, The University of Texas, San Antonio, Texas
Stefan L. Hahn, Warsaw University of Technology, Warsaw, Poland
Kenneth B. Howell, University of Alabama in Huntsville, Huntsville, Alabama
Norden E. Huang, Research Center for Adaptive Data Analysis, National Central University, Chungli, Taiwan
M. Alper Kutay, National Research Institute of Electronics and Cryptology, The Scientific and Technological Research Council of Turkey, Ankara, Turkey
Kraig J. Olejniczak, University of Arkansas, Fayetteville, Arkansas
Jean-Philippe Ovarlez, Department of Electromagnetism and Radar, French National Aerospace Research Establishment (ONERA), Palaiseau, France
Mark E. Oxley, Department of Mathematics and Statistics, Graduate School of Engineering and Management, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio
Haldun M. Ozaktas, Department of Electrical Engineering, Bilkent University, Ankara, Turkey
Robert Piessens, Catholic University of Leuven, Leuven, Belgium
Alexander D. Poularikas, University of Alabama in Huntsville, Huntsville, Alabama
Ricardo L. de Queiroz, Xerox Corporation, Webster, New York
Samuel Seely (deceased), Westbrook, Connecticut
Yulong Sheng, Department of Physics, Physical Engineering and Optics, Laval University, Quebec, Canada
Bruce W. Suter, Air Force Research Laboratory, Information Directorate, Rome, New York
Xiang-Gen Xia, Department of Electrical and Computer Engineering, University of Delaware, Newark, Delaware
Pat Yip, McMaster University, Hamilton, Ontario, Canada
1 Signals and Systems

Alexander D. Poularikas
University of Alabama in Huntsville

1.1 Introduction to Signals ..................................................................... 1-1
    Functions (Signals), Variables, and Point Sets • Limits and Continuous Functions • Energy and Power Signals
1.2 Distributions, Delta Function .............................................................. 1-4
    Introduction • Testing Functions • Definition of Distributions • The Delta Function • The Gamma and Beta Functions
1.3 Convolution and Correlation ................................................................ 1-13
    Convolution • Convolution Properties
1.4 Correlation ................................................................................ 1-19
1.5 Orthogonality of Signals ................................................................... 1-19
    Introduction • Legendre Polynomials • Hermite Polynomials • Laguerre Polynomials • Chebyshev Polynomials • Bessel Functions • Zernike Polynomials
1.6 Sampling of Signals ........................................................................ 1-47
    The Sampling Theorem • Extensions of the Sampling Theorem
1.7 Asymptotic Series .......................................................................... 1-52
    Asymptotic Sequence • Poincaré Sense Asymptotic Sequence • Asymptotic Approximation • Asymptotic Power Series • Operation of Asymptotic Power Series
References ..................................................................................... 1-55
1.1 Introduction to Signals

A knowledge of a broad range of signals is of practical importance in describing human experience. In engineering systems, signals may carry information or energy. The signals with which we are concerned may be the cause of an event or the consequence of an action. The characteristics of a signal may span a broad range of shapes, amplitudes, time durations, and perhaps other physical properties. In many cases, the signal will be expressed in analytic form; in other cases, the signal may be given only in graphical form. It is the purpose of this chapter to introduce the mathematical representation of signals, their properties, and some of their applications. These representations take different forms depending on whether the signals are periodic or truncated, or whether they are deduced from graphical representations. Signals may be classified as follows:

1. Phenomenological classification is based on the evolution type of the signal: a perfectly predictable evolution defines a deterministic signal, and a signal with unpredictable behavior is called a random signal.
2. Energy classification separates signals into energy signals, those having finite energy, and power signals, those with a finite average power and infinite energy.
3. Morphological classification is based on whether signals are continuous, quantized, sampled, or digital.
4. Dimensional classification is based on the number of independent variables. 5. Spectral classification is based on the shape of the frequency distribution of the signal spectrum.
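The energy classification in item 2 can be sketched numerically. This is an illustrative sketch, not from the chapter: the sample signals, grid, and tolerances below are our own choices. The energy of x(t) is ∫|x(t)|² dt, and the average power is (1/2T)∫ from −T to T of |x(t)|² dt for large T. A decaying exponential e^(−t)u(t) has finite energy (1/2 analytically), while cos t has infinite energy but finite average power (1/2).

```python
import numpy as np

def energy(x, dt):
    """Riemann approximation of E = integral of |x(t)|^2 dt over the grid."""
    return float(np.sum(np.abs(x) ** 2) * dt)

def average_power(x, dt, T):
    """Riemann approximation of P = (1/(2T)) * integral_{-T}^{T} |x(t)|^2 dt."""
    return energy(x, dt) / (2.0 * T)

T = 200.0
t = np.linspace(-T, T, 2_000_001)
dt = t[1] - t[0]

decaying = np.where(t >= 0.0, np.exp(-t), 0.0)  # e^(-t) u(t): energy signal
sinusoid = np.cos(t)                            # cos t: power signal

print(energy(decaying, dt))            # close to 1/2
print(average_power(sinusoid, dt, T))  # close to 1/2
```

The decaying exponential's average power tends to zero as T grows, while the sinusoid's energy grows without bound; each signal therefore falls in exactly one of the two classes.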
1.1.1 Functions (Signals), Variables, and Point Sets

The rule of correspondence from a set Sx of real or complex numbers x to a real or complex number

y = f(x)    (1.1)

is called a function of the argument x. Equation 1.1 specifies a value (or values) of the variable y (a set of values Sy) corresponding to each suitable value of x in Sx. In Equation 1.1, x is the independent variable and y is the dependent variable. A function of n variables x1, x2, ..., xn associates values

y = f(x1, x2, ..., xn)    (1.2)

of a dependent variable y with ordered sets of values of the independent variables x1, x2, ..., xn. The set Sx of the values of x (or sets of values of x1, x2, ..., xn) for which the relationships (1.1) and (1.2) are defined constitutes the domain of the function. The corresponding set Sy of values of y is the range of the function.
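As a brief worked illustration of domain and range (the particular function is our example, not the text's):

```latex
y = f(x_1, x_2) = \sqrt{\,1 - x_1^2 - x_2^2\,},
\qquad
S_x = \{(x_1, x_2) : x_1^2 + x_2^2 \le 1\},
\qquad
S_y = [0, 1].
```

The domain is the closed unit disk, since the square root requires 1 − x1² − x2² ≥ 0, and every value in [0, 1] is attained.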
A single-valued function produces a single value of the dependent variable for each value of the argument. A multiple-valued function attains two or more values for each value of the argument. The function y(x) has an inverse function x(y) if y = y(x) implies x = x(y). A function y = f(x) is an algebraic function of x if and only if x and y satisfy a relation of the form F(x, y) = 0, where F(x, y) is a polynomial in x and y. The function y = f(x) is rational if f(x) is a polynomial or a quotient of two polynomials.

A real or complex function y = f(x) is bounded on a set Sx if and only if the corresponding set Sy of values y is bounded. Furthermore, a real function y = f(x) has an upper bound, least upper bound (l.u.b.), lower bound, greatest lower bound (g.l.b.), maximum, or minimum on Sx if this is also true for the corresponding set Sy.

1.1.1.1 Neighborhood

Given any finite real number a, an open neighborhood of the point a is the set of all points {x} such that |x − a| < δ for some positive real number δ. An open neighborhood of the point (a1, a2, ..., an), where all ai are finite, is the set of all points (x1, x2, ..., xn) such that |x1 − a1| < δ, |x2 − a2| < δ, ..., and |xn − an| < δ for some positive real number δ.

1.1.1.2 Open and Closed Sets

A point P is a limit point (accumulation point) of the point set S if and only if every neighborhood of P contains points of S other than P itself. A limit point P is an interior point of S if and only if P has a neighborhood contained entirely in S. Otherwise P is a boundary point. A point P is an isolated point of S if and only if P has a neighborhood in which P is the only point belonging to S. A point set is open if and only if it contains only interior points. A point set is closed if and only if it contains all its limit points; a finite set is closed.
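A short worked instance of these definitions (the particular relation is our illustrative choice):

```latex
F(x, y) = y^2 - x = 0
\;\Longrightarrow\;
y = \pm\sqrt{x}, \quad x \ge 0,
```

so y is a two-valued algebraic function of x. The branch y = +√x, x ≥ 0, is single-valued and has the inverse function x = y², y ≥ 0.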
1.1.2 Limits and Continuous Functions

1. A single-valued function f(x) has a limit lim_(x→a) f(x) = L, L finite, as x → a {f(x) → L as x → a} if and only if for each positive real number ε there exists a real number δ such that 0 < |x − a| < δ implies that f(x) is defined and |f(x) − L| < ε.
2. A single-valued function f(x) has a limit lim_(x→∞) f(x) = L, L finite, as x → ∞ if and only if for each positive real number ε there exists a real number N such that x > N implies that f(x) is defined and |f(x) − L| < ε.

1.1.2.1 Operations with Limits

If the limits exist, Table 1.1 gives the limit operations.

TABLE 1.1 Operations with Limits

lim_(x→a) [f(x) + g(x)] = lim_(x→a) f(x) + lim_(x→a) g(x)
lim_(x→a) [b f(x)] = b lim_(x→a) f(x)
lim_(x→a) [f(x) g(x)] = lim_(x→a) f(x) · lim_(x→a) g(x)
lim_(x→a) [f(x)/g(x)] = lim_(x→a) f(x) / lim_(x→a) g(x)    (lim_(x→a) g(x) ≠ 0)

Note: a may be finite or infinite.

1.1.2.2 Asymptotic Relations between Two Functions

Given two real or complex functions f(x), g(x) of a real or complex variable x, we write

1. f(x) = O[g(x)]; f(x) is of the order of g(x) as x → a if and only if there is a neighborhood of x = a in which |f(x)/g(x)| is bounded.
2. f(x) ∼ g(x); f(x) is asymptotically proportional to g(x) as x → a if and only if lim_(x→a) [f(x)/g(x)] exists and is not zero.
3. f(x) ≅ g(x); f(x) is asymptotically equal to g(x) as x → a if and only if lim_(x→a) [f(x)/g(x)] = 1.
4. f(x) = o[g(x)]; f(x) becomes negligible compared with g(x) if and only if lim_(x→a) [f(x)/g(x)] = 0.
5. f(x) = w(x) + O[g(x)] if f(x) − w(x) = O[g(x)], and f(x) = w(x) + o[g(x)] if f(x) − w(x) = o[g(x)].

1.1.2.3 Uniform Convergence

1. A single-valued function f(x1, x2) converges uniformly on a set S of values of x2, lim_(x1→a) f(x1, x2) = L(x2), if and only if for each positive real number ε there exists a real number δ such that 0 < |x1 − a| < δ implies that f(x1, x2) is defined and |f(x1, x2) − L(x2)| < ε for all x2 in S (δ is independent of x2).
2. A single-valued function f(x1, x2) converges uniformly on a set S of values of x2, lim_(x1→∞) f(x1, x2) = L(x2), if and only if for each positive real number ε there exists a real number N such that x1 > N implies that f(x1, x2) is defined and |f(x1, x2) − L(x2)| < ε for all x2 in S.
3. A sequence of functions f1(x), f2(x), ... converges uniformly on a set S of values of x to a finite and unique limit function lim_(n→∞) fn(x) = f(x) if and only if for each positive real number ε there exists an integer N such that n > N implies |fn(x) − f(x)| < ε for all x in S.

1.1.2.4 Continuous Functions

1. A single-valued function f(x) defined in a neighborhood of x = a is continuous at x = a if and only if for every positive real number ε there exists a real number δ such that |x − a| < δ implies |f(x) − f(a)| < ε.
2. A function is continuous on a set of points (interval or region) if and only if it is continuous at each point of the set.
3. A real function continuous on a bounded closed interval [a, b] is bounded on [a, b] and assumes every value between and including its g.l.b. and its l.u.b. at least once on [a, b].
4. A function f(x) is uniformly continuous on a set S if and only if for each positive real number ε there exists a real number δ such that |x − X| < δ implies |f(x) − f(X)| < ε for all X in S. If a function is continuous on a bounded closed interval [a, b], it is uniformly continuous on [a, b]. If f(x) and g(x) are continuous at a point, so are the functions f(x) + g(x) and f(x)g(x).

1.1.2.5 Limits

1. A function f(x) of a real variable x has the right-hand limit lim_(x→a+) f(x) = f(a+) = L+ at x = a if and only if for each positive real number ε there exists a real number δ such that 0 < x − a < δ implies that f(x) is defined and |f(x) − L+| < ε.
2. A function f(x) of a real variable x has the left-hand limit lim_(x→a−) f(x) = f(a−) = L− at x = a if and only if for each positive real number ε there exists a real number δ such that 0 < a − x < δ implies that f(x) is defined and |f(x) − L−| < ε.
3. If lim_(x→a) f(x) exists, then lim_(x→a+) f(x) = lim_(x→a−) f(x) = lim_(x→a) f(x). Conversely, lim_(x→a−) f(x) = lim_(x→a+) f(x) implies the existence of lim_(x→a) f(x).
4. The function f(x) is right continuous at x = a if f(a+) = f(a).
5. The function f(x) is left continuous at x = a if f(a−) = f(a).
6. A real function f(x) has a discontinuity of the first kind at the point x = a if f(a+) and f(a−) exist. The greatest difference between two of the numbers f(a), f(a+), f(a−) is the saltus of f(x) at the discontinuity. The discontinuities of the first kind of f(x) constitute a discrete and countable set.
7. A real function f(x) is piecewise continuous in an interval I if and only if f(x) is continuous throughout I except for a finite number of discontinuities of the first kind.

1.1.2.6 Monotonicity

1. A real function f(x) of a real variable x is strongly monotonic in the open interval (a, b) if f(x) increases as x increases in (a, b) or if f(x) decreases as x increases in (a, b).
2. A function f(x) is weakly monotonic in (a, b) if f(x) does not decrease, or if f(x) does not increase, in (a, b). Analogous definitions apply to monotonic sequences.
3. A real function f(x) of a real variable x is of bounded variation in the interval (a, b) if and only if there exists a real number M such that

Σ_(i=1)^m |f(x_i) − f(x_(i−1))| < M

for all partitions a = x_0 < x_1 < x_2 < ··· < x_m = b of the interval (a, b).

If f(x) and g(x) are of bounded variation in (a, b), then f(x) + g(x) and f(x)g(x) are of bounded variation also. The function f(x) is of bounded variation in every finite open interval where f(x) is bounded and has a finite number of relative maxima and minima and discontinuities (Dirichlet conditions). A function of bounded variation in (a, b) is bounded in (a, b), and its discontinuities are only of the first kind. Table 1.2 presents some useful mathematical functions.

TABLE 1.2 Some Useful Mathematical Functions

1. Signum function: sgn(t) = 1 for t > 0; 0 for t = 0; −1 for t < 0
2. Step function: u(t) = 1/2 + (1/2) sgn(t) = 1 for t > 0; 0 for t < 0
3. Ramp function: r(t) = ∫_(−∞)^t u(τ) dτ = t for t > 0; 0 for t < 0
4. Pulse function: p_a(t) = u(t + a) − u(t − a) = 1 for |t| < a; 0 for |t| > a
7. Gaussian function: g_a(t) = e^(−at²), −∞ < t < ∞
8. Error function: erf(t) = (2/√π) ∫_0^t e^(−τ²) dτ = (2/√π) Σ_(n=0)^∞ (−1)^n t^(2n+1)/[n!(2n + 1)]
   Properties: erf(∞) = 1, erf(0) = 0, erf(−t) = −erf(t)
   erfc(t) = complementary error function = 1 − erf(t) = (2/√π) ∫_t^∞ e^(−τ²) dτ
9. Exponential function: f(t) = e^(−at) u(t), t ≥ 0
10. Double exponential: f(t) = e^(−a|t|), −∞ < t < ∞
11. Lognormal function: f(t) = (1/t) e^(−ln²t/2), 0 < t < ∞
12. Rayleigh function: f(t) = t e^(−t²/2), t ≥ 0

Delta Functional Properties

5. δ(−t) = δ(t); δ(t) is an even function
6. ∫_(−∞)^∞ δ(t) f(t) dt = f(0)
7. ∫_(−∞)^∞ δ(t − t_0) f(t) dt = f(t_0)
15. δ(t) = du(t)/dt
30. δ(t − t_0) = du(t − t_0)/dt
31. d sgn(t)/dt = 2δ(t)
32. d[r(t)]/dt = u(t)

t^m [d^n δ(t)/dt^n] = (−1)^n n! δ(t) for m = n; (−1)^m [n!/(n − m)!] d^(n−m) δ(t)/dt^(n−m) for n > m; 0 for n < m

Example

The first derivatives of the functions are

d/dt [2u(t + 1) + u(1 − t)] = d/dt {2u(t + 1) + u[−(t − 1)]} = 2δ(t + 1) − δ(t − 1)

d/dt {[2 − u(t)] cos t} = d/dt [2 cos t − u(t) cos t] = −2 sin t − δ(t) cos t + u(t) sin t = [u(t) − 2] sin t − δ(t)

d/dt {[u(t − π/2) − u(t − π)] sin t} = [δ(t − π/2) − δ(t − π)] sin t + [u(t − π/2) − u(t − π)] cos t
  = δ(t − π/2) + [u(t − π/2) − u(t − π)] cos t
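The error-function series listed among the useful functions can be checked numerically. This is an illustrative sketch (the term count and test points are our choices): it sums erf(t) = (2/√π) Σ (−1)^n t^(2n+1)/[n!(2n + 1)] and compares the result with the library value.

```python
import math

def erf_series(t, terms=40):
    """Partial sum of erf(t) = (2/sqrt(pi)) * sum_n (-1)^n t^(2n+1) / (n! (2n+1))."""
    s = 0.0
    for n in range(terms):
        s += (-1) ** n * t ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * s

# Compare the truncated series with math.erf at a few points.
for t in (0.0, 0.5, 1.0, 2.0):
    print(t, erf_series(t), math.erf(t))

# The listed properties erf(0) = 0 and erf(-t) = -erf(t) follow from the
# series containing only odd powers of t.
```

Forty terms are far more than needed for |t| ≤ 2; the alternating series converges rapidly once n! dominates t^(2n+1).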
1-10
Transforms and Applications Handbook ðp
Example
"
# d u þ p2 d u p2 cosh ud( cos u)du ¼ cosh u
p
þ
p
du sin 2 sin 2 p p p p ¼ cosh þ cosh 2 2 p ¼ 2cosh 2
The values of the following integrals are 1 ð
e2t sin 4 t
1 1 ð
1
2 d2 d(t) 2 d dt ¼ (1) [e2t sin 4 t]jt¼0 ¼ 2 2 4 ¼ 16 dt 2 dt 2
dd(t 1) d2 d(t 2) dt (t þ 2t þ 3) þ2 dt dt 2 3
¼
1 ð
1
(t 3 þ 2t þ 3)
dd(t 1) dt þ 2 dt
1 ð
1
ðp
1.2.5 The Gamma and Beta Functions The gamma function is defined by the formula
(t 3 þ 2t þ 3)
d2 d(t 2) dt dt 2
G(z) ¼
1 ð
et t z1 dt,
Re{z} > 0
(1:52)
0
¼ (1)(3t 2 þ 2)jt¼1 þ (1)2 2(6t)jt¼2
We shall mainly concentrate on the positive values of z and we shall take the following relationship as the basic definition of the gamma function:
¼ 5 þ 24 ¼ 19
Example G(x) ¼
The values of the following integrals are
1 ð
et t x1 dt, x > 0
(1:53)
0
ð4 0
ð4 ð4 3 1 4t 3 e d t dt e4t d(2t 3)dt ¼ e4t d 2 t dt ¼ 2 2 2 0
0
The gamma function converges for all positive values of x are shown in Figure 1.2. The incomplete gamma function is given by
1 3 1 ¼ e42 ¼ e6 2 2 ð4
ð4
ð4
ðt g(x, t) ¼ t x1 et dt, x > 0, t > 0
1 e4t d(3 2t)dt ¼ e4t d[ (2t 3)]dt ¼ e4t d(2t 3)dt ¼ e6 2
0
1 ð
0
1
eat d(sin t)dt ¼
¼ ¼
1 ð
0
eat
1 X
1 ð
1 X
1 (1)n
1 X
1 anp e (1)n
n¼1
n¼1
0
The beta function is a function of two arguments and is given by
d(t np) dt (1)n
n¼1
1
1
(1:54)
ð1
B(x, y) ¼ t x1 (1 t)yt dt, x > 0, y > 0
(1:55)
0
eat d(t np)dt
6
Г(x)
4
Example 2
The values of the following integrals are –4 2p ð
2p
eat d(t 2 p2 )dt ¼
2p ð
2p
eat
1 [d(t p) þ d(t þ p)]dt 2p
2 –2 –4
1 ap [e þ eap ] ¼ 2p cosh ap ¼ p
–2
–6
FIGURE 1.2
4
x
1-11
Signals and Systems
The beta function is related to the gamma function as follows:

B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} \qquad (1.56)

1.2.5.1 Integral Expressions of \Gamma(x)

If we set u = e^{-t} in Equation 1.53, then 1/u = e^{t}, \log_e(1/u) = t, -(1/u)\,du = dt, and [\log_e(1/u)]^{x-1} = t^{x-1}; the limits t = 0 and t = \infty map to u = 1 and u = 0. Hence

\Gamma(x) = \int_0^{\infty} e^{-t} t^{x-1}\,dt = -\int_1^0 \Big[\log_e\frac{1}{u}\Big]^{x-1}\,du = \int_0^1 \Big[\log_e\frac{1}{u}\Big]^{x-1}\,du \qquad (1.57)

Starting from the definition and setting t = m^2 (dt = 2m\,dm; the limits are the same), we obtain

\Gamma(x) = \int_0^{\infty} m^{2(x-1)} e^{-m^2}\,2m\,dm = 2\int_0^{\infty} m^{2x-1} e^{-m^2}\,dm \qquad (1.58)

1.2.5.2 Properties and Specific Evaluations of \Gamma(x)

Setting x+1 in place of x and integrating by parts, we obtain

\Gamma(x+1) = \int_0^{\infty} t^{x} e^{-t}\,dt = -\int_0^{\infty} t^{x}\,d(e^{-t}) = -t^{x}e^{-t}\big|_0^{\infty} + \int_0^{\infty} x t^{x-1} e^{-t}\,dt = x\Gamma(x) \qquad (1.59)

From the above relation we also obtain

\Gamma(x+1) = x\Gamma(x) \qquad (1.60)
\Gamma(x) = (x-1)\Gamma(x-1) \qquad (1.61)
\Gamma(x) = \frac{\Gamma(x+1)}{x}, \qquad x \ne 0, -1, -2, \ldots \qquad (1.62)

From Equation 1.53 with x = 1, we find that \Gamma(1) = 1. Using Equation 1.59 we obtain

\Gamma(2) = \Gamma(1+1) = 1\cdot\Gamma(1) = 1, \qquad \Gamma(3) = \Gamma(2+1) = 2\Gamma(2) = 2\cdot 1, \qquad \Gamma(4) = \Gamma(3+1) = 3\Gamma(3) = 3\cdot 2\cdot 1

and in general

\Gamma(n+1) = n\Gamma(n) = n(n-1)! = n!, \qquad n = 0, 1, 2, \ldots
\Gamma(n) = (n-1)!, \qquad n = 1, 2, \ldots

To find \Gamma(1/2) we first set t = u^2:

\Gamma\Big(\frac{1}{2}\Big) = \int_0^{\infty} t^{-1/2} e^{-t}\,dt = 2\int_0^{\infty} e^{-u^2}\,du \qquad (1.63)

Hence its square value is

\Gamma^2\Big(\frac{1}{2}\Big) = \Big[2\int_0^{\infty} e^{-x^2}dx\Big]\Big[2\int_0^{\infty} e^{-y^2}dy\Big] = 4\int_0^{\pi/2}\!\!\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta = 4\cdot\frac{\pi}{2}\cdot\frac{1}{2} = \pi \qquad (1.64)

and thus

\Gamma\Big(\frac{1}{2}\Big) = \sqrt{\pi} \qquad (1.65)

Next let us find the expression for \Gamma(n+\tfrac{1}{2}) for positive integer values of n. From Equation 1.61 we obtain

\Gamma\Big(n+\frac{1}{2}\Big) = \Gamma\Big(\frac{2n+1}{2}\Big) = \frac{2n-1}{2}\,\Gamma\Big(\frac{2n-1}{2}\Big) = \frac{2n-1}{2}\cdot\frac{2n-3}{2}\,\Gamma\Big(\frac{2n-3}{2}\Big) = \cdots

If we proceed to apply Equation 1.61, we finally obtain

\Gamma\Big(n+\frac{1}{2}\Big) = \frac{(2n-1)(2n-3)(2n-5)\cdots(3)(1)\,\sqrt{\pi}}{2^n} \qquad (1.66)

Similarly we obtain

\Gamma\Big(n+\frac{3}{2}\Big) = \frac{(2n+1)(2n-1)(2n-3)\cdots(3)(1)\,\sqrt{\pi}}{2^{n+1}} \qquad (1.67)

\Gamma\Big(n-\frac{1}{2}\Big) = \frac{(2n-3)(2n-5)\cdots(3)(1)\,\sqrt{\pi}}{2^{n-1}} \qquad (1.68)
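The factorial identity and the half-integer values are easy to verify with the standard-library gamma function; the following check (illustrative, not part of the handbook) confirms \Gamma(n+1) = n!, \Gamma(1/2) = \sqrt{\pi}, and Equation 1.66.

```python
import math

# Verify Γ(1/2) = √π, Γ(n+1) = n!, and
# Γ(n + 1/2) = (2n-1)(2n-3)...(3)(1) √π / 2^n   (Equation 1.66)
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))

for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))
    odd_prod = 1
    for k in range(1, 2*n, 2):   # product of odd numbers 1·3·5···(2n-1)
        odd_prod *= k
    assert math.isclose(math.gamma(n + 0.5), odd_prod * math.sqrt(math.pi) / 2**n)
```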
1.2.5.3 Remarks on the Gamma Function

1. The gamma function is continuous at every x except 0 and the negative integers.
2. The second derivative is positive for every x > 0, which indicates that the curve y = \Gamma(x) is concave upward for all x > 0.
3. \Gamma(x) \to +\infty as x \to 0^+ through positive values and as x \to +\infty.
4. \Gamma(x) becomes alternately negatively infinite and positively infinite at the negative integers.
5. \Gamma(x) attains a single minimum for x > 0, located between x = 1 and x = 2.

Example

To find the ratio \Gamma(x+n)/\Gamma(x-n), where n is a positive integer and x - n \ne 0, -1, -2, \ldots, we proceed as follows [see Equation 1.61]:

\frac{\Gamma(x+n)}{\Gamma(x-n)} = \frac{(x+n-1)\Gamma(x+n-1)}{\Gamma(x-n)} = \frac{(x+n-1)(x+n-2)\Gamma(x+n-2)}{\Gamma(x-n)} = \cdots
= \frac{(x+n-1)(x+n-2)(x+n-3)\cdots(x+n-2n)\Gamma(x+n-2n)}{\Gamma(x-n)} = (x+n-1)(x+n-2)\cdots(x-n) \qquad (1.69)

Example

Applying Equation 1.61 we find

2^n\Gamma(n+1) = 2^n n\Gamma(n) = 2^n n(n-1)\Gamma(n-1) = \cdots = 2^n n(n-1)(n-2)\cdots 2\cdot 1 = 2^n n! = (2\cdot 1)(2\cdot 2)(2\cdot 3)\cdots(2\cdot n) = 2\cdot 4\cdot 6\cdots 2n \qquad (1.70)

If n - 1 is substituted in place of n, we obtain

2\cdot 4\cdot 6\cdots(2n-2) = 2^{n-1}\Gamma(n) \qquad (1.71)

Example

Based on the Legendre duplication formula

\Gamma\Big(n+\frac{1}{2}\Big) = \frac{\Gamma(2n)\sqrt{\pi}\,2^{1-2n}}{\Gamma(n)} \qquad (1.72)

we can find the ratio \Gamma(n+\tfrac{1}{2})/\big(\sqrt{\pi}\,\Gamma(n+1)\big) as follows:

\frac{\Gamma\big(n+\frac{1}{2}\big)}{\sqrt{\pi}\,\Gamma(n+1)} = \frac{\Gamma(2n)2^{1-2n}}{\Gamma(n)\Gamma(n+1)} = \frac{\Gamma(2n)2^{1-2n}\,2^n}{\Gamma(n)\,2^n\Gamma(n+1)} = \frac{\Gamma(2n)2^{1-n}}{\Gamma(n)\,2\cdot 4\cdot 6\cdots 2n}

(see previous example). But

1\cdot 3\cdot 5\cdots(2n-1) = \frac{1\cdot 2\cdot 3\cdot 4\cdot 5\cdots(2n-2)(2n-1)}{2\cdot 4\cdots(2n-2)} = \frac{\Gamma(2n)}{2^{n-1}\Gamma(n)} \qquad (1.73)

and hence

\frac{\Gamma\big(n+\frac{1}{2}\big)}{\sqrt{\pi}\,\Gamma(n+1)} = \frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots 2n} \qquad (1.74)

The beta function is defined by

B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt, \qquad x > 0,\ y > 0 \qquad (1.75)

From the above definition we write (setting 1 - t = s)

B(y,x) = \int_0^1 t^{y-1}(1-t)^{x-1}\,dt = -\int_1^0 (1-s)^{y-1}s^{x-1}\,ds = \int_0^1 s^{x-1}(1-s)^{y-1}\,ds = B(x,y) \qquad (1.76)

If we set t = \sin^2\theta, then dt = 2\sin\theta\cos\theta\,d\theta and the limits of \theta are 0 and \pi/2, so that

B(x,y) = \int_0^{\pi/2} 2\sin^{2x-1}\theta\,\cos^{2y-1}\theta\,d\theta \qquad (1.77)

The integral representation of the beta function is given by

B(x,y) = \int_0^{\infty} \frac{u^{x-1}}{(u+1)^{x+y}}\,du, \qquad x > 0,\ y > 0 \qquad (1.78)

Replace t by pt in Equation 1.52 to find the relation

\int_0^{\infty} e^{-pt} t^{z-1}\,dt = \frac{\Gamma(z)}{p^z}, \qquad \mathrm{Re}\{p\} > 0 \qquad (1.79)

Next set p = 1 + u and z = x + y in the above equation to find that

\frac{1}{(1+u)^{x+y}} = \frac{1}{\Gamma(x+y)}\int_0^{\infty} e^{-(1+u)t} t^{x+y-1}\,dt \qquad (1.80)
Substituting Equation 1.80 in Equation 1.78 and interchanging the order of integration, we obtain

B(x,y) = \frac{1}{\Gamma(x+y)}\int_0^{\infty} e^{-t} t^{x+y-1}\Big[\int_0^{\infty} e^{-ut} u^{x-1}\,du\Big]dt = \frac{\Gamma(x)}{\Gamma(x+y)}\int_0^{\infty} e^{-t} t^{y-1}\,dt = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} \qquad (1.81)

where the inner integral equals \Gamma(x)/t^x by Equation 1.79. It can be shown that

B(p, 1-p) = \frac{\pi}{\sin p\pi}, \qquad 0 < p < 1

Example

To evaluate the integral \int_0^{\infty} t^{n-1} e^{-(a+1)t}\,dt we set t = (a+1)^{-1}y. Hence

\int_0^{\infty} t^{n-1} e^{-(a+1)t}\,dt = \int_0^{\infty}\Big(\frac{y}{a+1}\Big)^{n-1} e^{-y}\,\frac{dy}{a+1} = (a+1)^{-n}\int_0^{\infty} y^{n-1} e^{-y}\,dy = \frac{\Gamma(n)}{(a+1)^n}

Example

To evaluate \int_0^{\infty} e^{-x^2}dx, we write it in the form \int_0^{\infty} x^0 e^{-x^2}dx, which, compared with the integral \int_0^{\infty} t^a e^{-bt^c}dt in Table 1.4, gives the correspondence a = 0, b = 1, c = 2. Hence we obtain

\int_0^{\infty} e^{-x^2}\,dx = \frac{\Gamma\big(\frac{a+1}{c}\big)}{c\,b^{(a+1)/c}} = \frac{\Gamma\big(\frac{0+1}{2}\big)}{2\cdot 1^{1/2}} = \frac{\sqrt{\pi}}{2}

TABLE 1.4 Gamma and Beta Function Relations

\Gamma(x) = \int_0^{\infty} e^{-t} t^{x-1}\,dt, \qquad x > 0
\Gamma(x) = \int_0^{\infty} 2u^{2x-1} e^{-u^2}\,du, \qquad x > 0
\Gamma(x) = \int_0^1 \big[\log_e(1/r)\big]^{x-1}\,dr, \qquad x > 0
\Gamma(x) = \Gamma(x+1)/x, \qquad x \ne 0, -1, -2, \ldots
\Gamma(x) = (x-1)\Gamma(x-1), \qquad x \ne 0, -1, -2, \ldots
\Gamma(n) = (n-1)!, \qquad n = 1, 2, 3, \ldots, \quad 0! = 1
\Gamma(1/2) = \sqrt{\pi}
\Gamma(n+\tfrac{1}{2}) = \dfrac{1\cdot 3\cdot 5\cdots(2n-1)\sqrt{\pi}}{2^n}, \qquad n = 1, 2, \ldots
\Gamma(n+\tfrac{3}{2}) = \dfrac{(2n+1)(2n-1)(2n-3)\cdots(3)(1)\sqrt{\pi}}{2^{n+1}}, \qquad n = 1, 2, \ldots
\Gamma(n-\tfrac{1}{2}) = \dfrac{(2n-3)(2n-5)\cdots(3)(1)\sqrt{\pi}}{2^{n-1}}, \qquad n = 1, 2, \ldots
2^n\Gamma(n+1) = 2\cdot 4\cdot 6\cdots 2n, \qquad n = 1, 2, \ldots
\Gamma(2n) = 1\cdot 3\cdot 5\cdots(2n-1)\,\Gamma(n)\,2^{n-1}, \qquad n = 1, 2, \ldots
\Gamma(n+\tfrac{1}{2}) = \dfrac{\Gamma(2n)\sqrt{\pi}\,2^{1-2n}}{\Gamma(n)}, \qquad n = 1, 2, \ldots
\Gamma(x)\Gamma(1-x) = \dfrac{\pi}{\sin x\pi}, \qquad x \ne 0, \pm 1, \pm 2, \ldots
n! = \sqrt{2\pi n}\;n^n e^{-n+h}, \qquad 0 < h < \dfrac{1}{12n}
\int_0^{\infty} t^a e^{-bt^c}\,dt = \dfrac{\Gamma\big(\frac{a+1}{c}\big)}{c\,b^{(a+1)/c}}, \qquad a > -1,\ b > 0,\ c > 0
B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt, \qquad x > 0,\ y > 0
B(x,y) = \int_0^{\pi/2} 2\sin^{2x-1}\theta\,\cos^{2y-1}\theta\,d\theta, \qquad x > 0,\ y > 0
B(x,y) = \int_0^{\infty} \dfrac{u^{x-1}}{(u+1)^{x+y}}\,du, \qquad x > 0,\ y > 0
B(x,y) = \dfrac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}
B(x, 1-x) = \dfrac{\pi}{\sin x\pi}, \qquad 0 < x < 1
B(x,y) = B(y,x)
B(x,y) = B(x+1, y) + B(x, y+1)
B(x, n+1) = \dfrac{1\cdot 2\cdots n}{x(x+1)\cdots(x+n)}, \qquad n = 1, 2, \ldots
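The relation B(x, y) = \Gamma(x)\Gamma(y)/\Gamma(x+y) can be checked numerically by integrating the beta definition directly; the sketch below (Python, illustrative only) compares a midpoint-rule evaluation of Equation 1.75 against the gamma-function expression.

```python
import math

def beta_integral(x, y, n=50_000):
    # B(x, y) = ∫₀¹ t^(x-1) (1-t)^(y-1) dt, midpoint rule with n cells
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += t**(x - 1) * (1 - t)**(y - 1)
    return s * h

def beta_gamma(x, y):
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

print(beta_integral(2.5, 3.5), beta_gamma(2.5, 3.5))  # nearly equal
```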
1.3 Convolution and Correlation

1.3.1 Convolution

Convolution of functions, although a mathematical relation, is extremely important to engineers. If the impulse response of a system is known, that is, the response of the system to a delta function input, the output of the system is the convolution of the input and its impulse response. The convolution of two functions is given by

g(t) \stackrel{\Delta}{=} f(t)*h(t) = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau \qquad (1.84)

TABLE 1.5 \Gamma(x), 1.00 \le x \le 1.99 (columns give the second decimal of x)

x      0       1       2       3       4       5       6       7       8       9
1.0  1.0000  .9943   .9888   .9835   .9784   .9735   .9687   .9642   .9597   .9555
1.1   .9514  .9474   .9436   .9399   .9364   .9330   .9298   .9267   .9237   .9209
1.2   .9182  .9156   .9131   .9108   .9085   .9064   .9044   .9025   .9007   .8990
1.3   .8975  .8960   .8946   .8934   .8922   .8912   .8902   .8893   .8885   .8879
1.4   .8873  .8868   .8864   .8860   .8858   .8857   .8856   .8856   .8857   .8859
1.5   .8862  .8866   .8870   .8876   .8882   .8889   .8896   .8905   .8914   .8924
1.6   .8935  .8947   .8959   .8972   .8986   .9001   .9017   .9033   .9050   .9068
1.7   .9086  .9106   .9126   .9147   .9168   .9191   .9214   .9238   .9262   .9288
1.8   .9314  .9341   .9368   .9397   .9426   .9456   .9487   .9518   .9551   .9584
1.9   .9618  .9652   .9688   .9724   .9761   .9799   .9837   .9877   .9917   .9958
Proof: Let f(t) be written as a sum of elementary functions f_i(t). The output g(t) is then given by the sum of the outputs g_i(t) due to each elementary function f_i(t). Hence

f(t) = \sum_i f_i(t), \qquad g(t) = \sum_i g_i(t) \qquad (1.85)

If \Delta\tau is sufficiently small, the area of f_i(t) equals f(\tau_i)\Delta\tau (see Figure 1.3). Hence the output is approximately f(\tau_i)\,\Delta\tau\,h(t-\tau_i), because f_i(t) is concentrated near the point \tau_i. As \Delta\tau \to 0 we thus conclude that

g(t) \cong \sum_i f(\tau_i)h(t-\tau_i)\,\Delta\tau \to \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau \qquad (1.86)

and, therefore, the output of the system becomes

g(t) = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(t-\tau)h(\tau)\,d\tau \qquad (1.87)

FIGURE 1.3 Representation of f(t) by elementary pulses f_i(t) of width \Delta\tau.

For causal systems, the impulse response is h(t) = 0 for t < 0, and then

g(t) = \int_{-\infty}^{t} f(\tau)h(t-\tau)\,d\tau

If, also, f(t) = 0 for t < 0, then g(t) = 0 for t < 0; for t > 0 we obtain

g(t) = \int_0^t f(\tau)h(t-\tau)\,d\tau = \int_0^t f(t-\tau)h(\tau)\,d\tau \qquad (1.88)

The convolution does not exist for all functions. The sufficient conditions are:

1. Both f(t) and h(t) are absolutely integrable in the interval (-\infty, 0].
2. Both f(t) and h(t) are absolutely integrable in the interval [0, \infty).
3. Either f(t) or h(t) (or both) is absolutely integrable in the interval (-\infty, \infty).

For example, the convolution \cos\omega_0 t * \cos\omega_0 t does not exist.

Example

If the functions to be convolved are

f(t) = 1, \quad 0 < t < 1, \qquad h(t) = e^{-t}u(t)

then the output is given by

g(t) = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau

The ranges are:

1. -\infty < t < 0: No overlap of f(\tau) and h(t-\tau) takes place. Hence g(t) = 0.
2. 0 < t < 1: Overlap occurs from 0 to t. Hence

g(t) = \int_0^t 1\cdot e^{-(t-\tau)}\,d\tau = e^{-t}\int_0^t e^{\tau}\,d\tau = 1 - e^{-t}

3. 1 < t < \infty: Overlap occurs from 0 to 1. Hence

g(t) = \int_0^1 e^{-(t-\tau)}\,d\tau = e^{-t}(e-1)
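This example can be confirmed with a crude discrete approximation of the convolution integral. The sketch below (Python, illustrative only) samples f and h on a grid and compares the Riemann-sum convolution with the two closed forms found above.

```python
import math

# f(t) = 1 on (0, 1), h(t) = e^(-t) u(t); g = f*h should give
# 1 - e^(-t) for 0 < t < 1 and e^(-t)(e - 1) for t > 1.
dt = 1e-3
n = 5000                                   # grid covers 0..5
f = [1.0 if 0 < k*dt < 1 else 0.0 for k in range(n)]
h = [math.exp(-k*dt) for k in range(n)]

def conv_at(t):
    k = int(t / dt)
    return sum(f[m] * h[k - m] for m in range(k + 1)) * dt

print(conv_at(0.5), 1 - math.exp(-0.5))
print(conv_at(2.0), math.exp(-2.0) * (math.e - 1))
```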
1.3.1.1 Definition: Convolution Systems

The output of a continuous and of a discrete system is given, respectively, by

y(t) = \int_{-\infty}^{\infty} h(t,\tau)x(\tau)\,d\tau \qquad (1.89)

y(n) = \sum_{m=-\infty}^{\infty} h(n,m)x(m) \qquad (1.90)

If the systems are time invariant, the kernels h(\cdot) are functions of the difference of their arguments. Hence

h(t,\tau) = h(t-\tau), \qquad h(n,m) = h(n-m)

and therefore

y(t) = \int_{-\infty}^{\infty} x(\tau)h(t-\tau)\,d\tau \qquad (1.91)

y(n) = \sum_{m=-\infty}^{\infty} x(m)h(n-m) \qquad (1.92)

Example

The voltage y_c(t) across the capacitor of an RC circuit in series with an input voltage source y(t) is given by

\frac{dy_c(t)}{dt} + \frac{1}{RC}y_c(t) = \frac{1}{RC}y(t)

For a given initial condition y_c(t_0) at time t = t_0 the solution is

y_c(t) = e^{-(t-t_0)/RC}y_c(t_0) + \frac{1}{RC}\int_{t_0}^{t} e^{-(t-\tau)/RC}y(\tau)\,d\tau, \qquad t \ge t_0

For a finite initial condition and t_0 \to -\infty, the above equation is written in the form

y_c(t) = \frac{1}{RC}\int_{-\infty}^{\infty} e^{-(t-\tau)/RC}u(t-\tau)y(\tau)\,d\tau = \Big[\frac{1}{RC}e^{-t/RC}u(t)\Big]*y(t)

Therefore, the impulse response of this system is

h(t) = \frac{1}{RC}e^{-t/RC}u(t)

Example

A discrete system that smooths the input signal x(n) is described by the difference equation

y(n) = a\,y(n-1) + (1-a)x(n), \qquad n = 0, 1, 2, \ldots

By repeated substitution and assuming zero initial condition y(-1) = 0, the output of the system is given by

y(n) = (1-a)\sum_{m=0}^{n} a^{n-m}x(m), \qquad n = 0, 1, 2, \ldots \qquad (1.93)

If we define the impulse response of the system by

h(n) = (1-a)a^n, \qquad n = 0, 1, 2, \ldots

the system has the input–output relation

y(n) = \sum_{m=-\infty}^{\infty} h(n-m)x(m)

which indicates that the system is a convolution one.

1.3.1.2 Definition: Impulse Response

The impulse response h(t) of a system is the result of a delta function input to the system. Its value at t is the response to a delta function at t = 0.

Example

A pure delay system is defined by

y(t) = \int_{-\infty}^{\infty}\delta(t-t_0-\tau)x(\tau)\,d\tau = x(t-t_0) \qquad (1.94)

which shows that its impulse response is h(t) = \delta(t-t_0).

1.3.1.3 Definition: Nonanticipative Convolution System

A system, discrete or continuous, is nonanticipative if and only if its impulse response satisfies

h(t) = 0, \qquad t < 0

with t ranging over the range in which the system is defined. If the delay t_0 of a pure delay system is positive, then the system is nonanticipative; and if it is negative, the system is anticipative.
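The equivalence between the smoothing recursion and the convolution form of Equation 1.93 can be checked directly; the short sketch below (Python, illustrative only) runs both on the same input and compares the outputs.

```python
# y(n) = a*y(n-1) + (1-a)*x(n) versus convolution with h(n) = (1-a)*a^n,
# both with zero initial condition y(-1) = 0.
a = 0.8
x = [1.0, 0.5, -0.25, 2.0, 0.0, 1.5]

y_rec = []
prev = 0.0
for v in x:
    prev = a * prev + (1 - a) * v     # the difference equation
    y_rec.append(prev)

h = [(1 - a) * a**n for n in range(len(x))]
y_conv = [sum(h[n - m] * x[m] for m in range(n + 1)) for n in range(len(x))]

print(y_rec)
print(y_conv)  # identical up to rounding
```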
1.3.2 Convolution Properties

Commutative

g(t) = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(t-\tau)h(\tau)\,d\tau

Proof: Set t - \tau = \tau' in the first integral, and then rename the dummy variable \tau' to \tau.

Distributive

g(t) = f(t)*[h_1(t) + h_2(t)] = f(t)*h_1(t) + f(t)*h_2(t)

This property follows directly from the linearity of integration.

Associative

[f(t)*h_1(t)]*h_2(t) = f(t)*[h_1(t)*h_2(t)]

Shift invariance

If g(t) = f(t)*h(t), then

g(t-t_0) = f(t-t_0)*h(t) = \int_{-\infty}^{\infty} f(\tau-t_0)h(t-\tau)\,d\tau

Proof: Write g(t) in its integral form, substitute t - t_0 for t, set \tau + t_0 = \tau', and then rename the dummy variable.

Area property

Define

A_f = \int_{-\infty}^{\infty} f(t)\,dt = \text{area}, \qquad m_f = \int_{-\infty}^{\infty} t f(t)\,dt = \text{first moment}, \qquad K_f = \frac{m_f}{A_f} = \text{center of gravity}

The convolution g(t) = f(t)*h(t) leads to

A_g = A_f A_h, \qquad K_g = K_f + K_h

Proof:

m_g = \int_{-\infty}^{\infty} t\,g(t)\,dt = \int_{-\infty}^{\infty} f(\tau)\Big[\int_{-\infty}^{\infty} t\,h(t-\tau)\,dt\Big]d\tau = \int_{-\infty}^{\infty} f(\tau)\Big[\int_{-\infty}^{\infty}(\lambda+\tau)h(\lambda)\,d\lambda\Big]d\tau \qquad (t-\tau = \lambda)
= \int_{-\infty}^{\infty} f(\tau)\,d\tau\int_{-\infty}^{\infty}\lambda h(\lambda)\,d\lambda + \int_{-\infty}^{\infty}\tau f(\tau)\,d\tau\int_{-\infty}^{\infty} h(\lambda)\,d\lambda = A_f m_h + m_f A_h

and hence

K_g = \frac{m_g}{A_g} = \frac{A_f m_h + m_f A_h}{A_f A_h} = K_h + K_f

Scaling property

If g(t) = f(t)*h(t), then

f\Big(\frac{t}{a}\Big)*h\Big(\frac{t}{a}\Big) = |a|\,g\Big(\frac{t}{a}\Big)

Proof:

\int_{-\infty}^{\infty} f\Big(\frac{\tau}{a}\Big)h\Big(\frac{t-\tau}{a}\Big)d\tau = |a|\int_{-\infty}^{\infty} f(r)h\Big(\frac{t}{a}-r\Big)dr = |a|\,g\Big(\frac{t}{a}\Big) \qquad (\tau/a = r)

Complex-valued functions

g(t) = f(t)*h(t) = [f_r(t) + jf_i(t)]*[h_r(t) + jh_i(t)] = [f_r(t)*h_r(t) - f_i(t)*h_i(t)] + j[f_r(t)*h_i(t) + f_i(t)*h_r(t)]

Derivative of delta function

g(t) = f(t)*\frac{d\delta(t)}{dt} = \int_{-\infty}^{\infty} f(\tau)\frac{d}{dt}\delta(t-\tau)\,d\tau = \frac{d}{dt}\int_{-\infty}^{\infty} f(\tau)\delta(t-\tau)\,d\tau = \frac{df(t)}{dt}

Moment expansion

Expand f(t-\tau) in a Taylor series about the point \tau = 0:

f(t-\tau) = f(t) - \tau f^{(1)}(t) + \frac{\tau^2}{2!}f^{(2)}(t) - \cdots + \frac{(-\tau)^{n-1}}{(n-1)!}f^{(n-1)}(t) + \varepsilon_n

where bracketed numbers in exponents indicate order of differentiation.
Insert this expansion into the convolution integral:

g(t) = f(t)\int_{-\infty}^{\infty} h(\tau)\,d\tau - f^{(1)}(t)\int_{-\infty}^{\infty}\tau h(\tau)\,d\tau + \frac{f^{(2)}(t)}{2!}\int_{-\infty}^{\infty}\tau^2 h(\tau)\,d\tau - \cdots + \frac{(-1)^{n-1}f^{(n-1)}(t)}{(n-1)!}\int_{-\infty}^{\infty}\tau^{n-1}h(\tau)\,d\tau + E_n
= m_{h0}f(t) - m_{h1}f^{(1)}(t) + \frac{m_{h2}}{2!}f^{(2)}(t) - \cdots + \frac{(-1)^{n-1}m_{h(n-1)}}{(n-1)!}f^{(n-1)}(t) + E_n

where m_{hk} = \int_{-\infty}^{\infty}\tau^k h(\tau)\,d\tau.

Truncation error: Because

\varepsilon_n = \frac{(-\tau)^n}{n!}f^{(n)}(t-\tau_1), \qquad 0 \le \tau_1 \le \tau

we have

E_n = \frac{1}{n!}\int_{-\infty}^{\infty}(-\tau)^n f^{(n)}(t-\tau_1)h(\tau)\,d\tau

Because \tau_1 depends on \tau, the function f^{(n)}(t-\tau_1) cannot be taken outside the integral. However, if f^{(n)}(t) is continuous and \tau^n h(\tau) \ge 0, then

E_n = \frac{f^{(n)}(t-t_0)}{n!}\int_{-\infty}^{\infty}(-\tau)^n h(\tau)\,d\tau = \frac{(-1)^n m_{hn}}{n!}f^{(n)}(t-t_0)

where t_0 is some constant in the interval of integration.

Fourier transform

\mathscr{F}\{f(t)*h(t)\} = F(\omega)H(\omega)

Proof:

\int_{-\infty}^{\infty}\Big[\int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau\Big]e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} f(\tau)\Big[\int_{-\infty}^{\infty} h(t-\tau)e^{-j\omega t}\,dt\Big]d\tau
= \int_{-\infty}^{\infty} f(\tau)e^{-j\omega\tau}\,d\tau\int_{-\infty}^{\infty} h(r)e^{-j\omega r}\,dr = F(\omega)H(\omega) \qquad (t-\tau = r)

Inverse Fourier transform

\frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)H(\omega)e^{j\omega t}\,d\omega = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau

Band-limited function

If f(t) is \sigma-band-limited, then the output of a system is

g(t) = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau = \sum_{n=-\infty}^{\infty} T f(nT)\,h_\sigma(t-nT)

where

h_\sigma(t) = \frac{1}{2\pi}\int_{-\sigma}^{\sigma} H(\omega)e^{j\omega t}\,d\omega, \qquad H_\sigma(\omega) = p_\sigma(\omega)H(\omega)

Proof: Because F(\omega) = 0 for |\omega| > \sigma,

G(\omega) = F(\omega)H(\omega) = F(\omega)p_\sigma(\omega)H(\omega) = F(\omega)H_\sigma(\omega)

and hence

g(t) = f(t)*h_\sigma(t) = \Big[\sum_{n=-\infty}^{\infty} T f(nT)\delta(t-nT)\Big]*h_\sigma(t) = \sum_{n=-\infty}^{\infty} T f(nT)h_\sigma(t-nT)

The convolution properties are given in Table 1.6.

1.3.2.1 Stability of Convolution Systems

1.3.2.1.1 Definition: Bounded-Input Bounded-Output (BIBO) Stability

A discrete or continuous convolution system with impulse response h is BIBO stable if and only if the impulse response satisfies the inequality \sum_n |h(n)| < \infty or \int_{\mathbb{R}} |h(t)|\,dt < \infty. If the system is BIBO stable, then

\sup|y(n)| \le \sum_n |h(n)|\,\sup|x(n)|, \qquad \sup|y(t)| \le \int_{\mathbb{R}}|h(t)|\,dt\,\sup|x(t)|, \qquad t \in \mathbb{R}

for every finite-amplitude input x (y is the output of the system).

Example

If the impulse response of a discrete system is h(n) = ab^n, n = 0, 1, 2, \ldots, then

\sum_{n=0}^{\infty}|h(n)| = \sum_{n=0}^{\infty}|a||b|^n = \begin{cases}\dfrac{|a|}{1-|b|}, & |b| < 1 \\ \infty, & |b| \ge 1\end{cases}
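The geometric bound in this example is easy to confirm numerically; the sketch below (Python, illustrative only) sums a long partial series of |h(n)| for a stable case and compares it with |a|/(1-|b|).

```python
# BIBO check for h(n) = a*b^n: for |b| < 1 the absolute sum is |a|/(1-|b|).
a, b = 2.0, 0.5
partial = sum(abs(a * b**n) for n in range(200))  # tail beyond n=200 is negligible
bound = abs(a) / (1 - abs(b))
print(partial, bound)  # both 4.0
```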
TABLE 1.6 Convolution Properties

1. Commutative: g(t) = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(t-\tau)h(\tau)\,d\tau
2. Distributive: g(t) = f(t)*[h_1(t)+h_2(t)] = f(t)*h_1(t) + f(t)*h_2(t)
3. Associative: [f(t)*h_1(t)]*h_2(t) = f(t)*[h_1(t)*h_2(t)]
4. Shift invariance: g(t-t_0) = f(t-t_0)*h(t) = \int_{-\infty}^{\infty} f(\tau-t_0)h(t-\tau)\,d\tau
5. Area property: A_g = A_f A_h, \quad K_g = K_f + K_h, where A_f = area of f(t), m_f = \int_{-\infty}^{\infty} t f(t)\,dt = first moment, K_f = m_f/A_f = center of gravity
6. Scaling: g(t) = f(t)*h(t), \quad f(t/a)*h(t/a) = |a|\,g(t/a)
7. Complex-valued functions: g(t) = [f_r(t)*h_r(t) - f_i(t)*h_i(t)] + j[f_r(t)*h_i(t) + f_i(t)*h_r(t)]
8. Derivative: g(t) = f(t)*\dfrac{d\delta(t)}{dt} = \dfrac{df(t)}{dt}
9. Moment expansion: g(t) = m_{h0}f(t) - m_{h1}f^{(1)}(t) + \dfrac{m_{h2}}{2!}f^{(2)}(t) - \cdots + \dfrac{(-1)^{n-1}}{(n-1)!}m_{h(n-1)}f^{(n-1)}(t) + E_n, with m_{hk} = \int_{-\infty}^{\infty}\tau^k h(\tau)\,d\tau and E_n = \dfrac{(-1)^n m_{hn}}{n!}f^{(n)}(t-t_0), t_0 = constant in the interval of integration
10. Fourier transform: \mathscr{F}\{f(t)*h(t)\} = F(\omega)H(\omega)
11. Inverse Fourier transform: \dfrac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)H(\omega)e^{j\omega t}\,d\omega = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau
12. Band-limited function: g(t) = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau = \sum_{n=-\infty}^{\infty} T f(nT)h_\sigma(t-nT), where h_\sigma(t) = \dfrac{1}{2\pi}\int_{-\sigma}^{\sigma} H(\omega)e^{j\omega t}\,d\omega and f(t) is \sigma-band-limited
13. Cyclical convolution: x(n)\circledast y(n) = \sum_{m=0}^{N-1} x((n-m)\bmod N)\,y(m)
14. Discrete-time: x(n)*y(n) = \sum_{m=-\infty}^{\infty} x(n-m)y(m)
15. Sampled: x(nT)*y(nT) = T\sum_{m=-\infty}^{\infty} x(nT-mT)y(mT)

The above indicates that for |b| < 1 the system is BIBO stable and for |b| \ge 1 the system is unstable.

Example

If h(t) = u(t), then \int_{-\infty}^{\infty}|h(t)|\,dt = \int_0^{\infty} u(t)\,dt = \infty, which indicates that the system is not BIBO stable.
1.3.2.1.2 Harmonic Inputs

If the input function is the complex exponential e^{j\omega t}, then the output is

y(t) = \int_{-\infty}^{\infty} h(\tau)e^{j\omega(t-\tau)}\,d\tau = e^{j\omega t}\int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau = H(\omega)e^{j\omega t}

The above equation indicates that the output is the same as the input e^{j\omega t} with its amplitude modified by |H(\omega)| and its phase by \tan^{-1}(H_i(\omega)/H_r(\omega)), where H_r(\omega) = \mathrm{Re}\{H(\omega)\} and H_i(\omega) = \mathrm{Im}\{H(\omega)\}. For the discrete case we have the relation

y(n) = e^{j\omega n}H(e^{j\omega})

where

H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h(n)e^{-j\omega n}
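The eigenfunction property of complex exponentials can be demonstrated numerically; the sketch below (Python, illustrative only, using a hypothetical impulse response h(n) = 0.5^n) shows that the output equals H(e^{j\omega})\,e^{j\omega n}.

```python
import cmath

# Input e^{jωn} through h(n) = 0.5^n u(n): output is H(e^{jω}) e^{jωn}.
w = 0.7
h = [0.5**n for n in range(60)]                       # truncated impulse response
H = sum(hn * cmath.exp(-1j*w*n) for n, hn in enumerate(h))

n0 = 10
y = sum(h[m] * cmath.exp(1j*w*(n0 - m)) for m in range(len(h)))
print(abs(y - H * cmath.exp(1j*w*n0)))  # essentially zero
```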
1.4 Correlation

The cross-correlation of two different functions is defined by the relation

R_{fh}(t) \stackrel{\Delta}{=} f(t)\star h(t) = \int_{-\infty}^{\infty} f(\tau)h(\tau-t)\,d\tau = \int_{-\infty}^{\infty} f(\tau+t)h(\tau)\,d\tau \qquad (1.95)

When f(t) = h(t) the correlation operation is called autocorrelation:

R_{ff}(t) \stackrel{\Delta}{=} f(t)\star f(t) = \int_{-\infty}^{\infty} f(\tau)f(\tau-t)\,d\tau = \int_{-\infty}^{\infty} f(\tau+t)f(\tau)\,d\tau \qquad (1.96)

For complex functions the correlation operations are given by

R_{fh}(t) \stackrel{\Delta}{=} f(t)\star h^*(t) = \int_{-\infty}^{\infty} f(\tau)h^*(\tau-t)\,d\tau \qquad (1.97)

R_{ff}(t) \stackrel{\Delta}{=} f(t)\star f^*(t) = \int_{-\infty}^{\infty} f(\tau)f^*(\tau-t)\,d\tau \qquad (1.98)

The two basic properties of correlation are

f(t)\star h(t) \ne h(t)\star f(t) \qquad (1.99)

|R_{ff}(t)| = \Big|\int_{-\infty}^{\infty} f(\tau)f^*(\tau-t)\,d\tau\Big| \le \Big[\int_{-\infty}^{\infty}|f(\tau)|^2\,d\tau\Big]^{1/2}\Big[\int_{-\infty}^{\infty}|f(\tau-t)|^2\,d\tau\Big]^{1/2} = \int_{-\infty}^{\infty}|f(t)|^2\,dt = R_{ff}(0) \qquad (1.100)

Example

The cross-correlation of the two functions f(t) = p(t), the unit pulse on (-1, 1), and h(t) = e^{-(t-3)}u(t-3) is given by

R_{fh}(t) = \int_{-\infty}^{\infty} p(\tau)\,e^{-(\tau-t-3)}u(\tau-t-3)\,d\tau

The ranges of \tau are:

1. t > -2: R_{fh}(t) = 0 (no overlap of the functions).
2. -4 < t < -2: R_{fh}(t) = \int_{3+t}^{1} e^{-(\tau-t-3)}\,d\tau = 1 - e^2 e^t
3. -\infty < t < -4: R_{fh}(t) = \int_{-1}^{1} e^{-(\tau-t-3)}\,d\tau = e^2(e^2-1)\,e^t

The discrete forms of correlation are given by

x(n)\star y(n) = \sum_{m=-\infty}^{\infty} x(m-n)y^*(m) \qquad \text{cross-correlation} \qquad (1.101)

x(n)\star x(n) = \sum_{m=-\infty}^{\infty} x(m-n)x^*(m) \qquad \text{autocorrelation} \qquad (1.102)

x(nT)\star y(nT) = T\sum_{m=-\infty}^{\infty} x(mT-nT)y^*(mT) \qquad \text{sampled cross-correlation} \qquad (1.103)

1.5 Orthogonality of Signals

1.5.1 Introduction

Modern analysis regards some classes of functions as multidimensional vectors, introducing the definition of inner products and expansions in terms of orthogonal (basis) functions. In this section, functions F(t), f(t), F(x), \ldots symbolize either functions of one independent variable t or, for brevity, functions of a set of n independent variables t_1, t_2, \ldots, t_n; hence dt = dt_1\cdots dt_n. A real or complex function f(t) defined on the measurable set E of elements \{t\} is quadratically integrable on E if and only if

\int_E |f(t)|^2\,dt

exists in the sense of Lebesgue. The class L^2 of all real or complex functions quadratically integrable on a given interval I becomes a vector space if one regards the functions f(t), h(t), \ldots as vectors and defines

the vector sum of f(t) and h(t) as f(t) + h(t)
the product of f(t) by a scalar \alpha as \alpha f(t)

The inner product of f(t) and h(t) is defined as

\langle f, h\rangle \stackrel{\Delta}{=} \int_I \gamma(t)f^*(t)h(t)\,dt \qquad (1.104)

where \gamma(t) is a real nonnegative function (weighting function) quadratically integrable on I.
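The discrete cross-correlation of Equation 1.101 can be sketched for short real sequences (Python, illustrative only; the sequences are hypothetical and taken as zero outside the listed samples):

```python
# x(n) ⊛ y(n) = Σ_m x(m-n) y*(m) for real x, y of finite length.
x = [1.0, 2.0, 3.0]
y = [0.5, -1.0, 2.0]

def xcorr(x, y, n):
    # x(m-n) is nonzero only for 0 <= m-n < len(x)
    return sum(x[m - n] * y[m] for m in range(len(y)) if 0 <= m - n < len(x))

for n in range(-2, 3):
    print(n, xcorr(x, y, n))
```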
Norm

The norm in L^2 is the quantity

\|f\| \stackrel{\Delta}{=} [\langle f, f\rangle]^{1/2} = \Big[\int_I \gamma(t)|f(t)|^2\,dt\Big]^{1/2} \qquad (1.105)

If \|f\| exists and is different from zero, the function is normalizable; normalization is the passage to f(t)/\|f\|, which has unit norm.

Inequalities

If f(t), h(t), and the nonnegative weighting function \gamma(t) are quadratically integrable on I, then the Cauchy–Schwarz inequality

|\langle f, h\rangle|^2 = \Big|\int_I \gamma f^* h\,dt\Big|^2 \le \int_I \gamma|f|^2\,dt\int_I \gamma|h|^2\,dt = \langle f, f\rangle\langle h, h\rangle \qquad (1.106)

and the Minkowski inequality

\|f+h\| = \Big(\int_I \gamma|f+h|^2\,dt\Big)^{1/2} \le \Big(\int_I \gamma|f|^2\,dt\Big)^{1/2} + \Big(\int_I \gamma|h|^2\,dt\Big)^{1/2} = \|f\| + \|h\| \qquad (1.107)

both hold.

Convergence in the mean

The space L^2 admits the distance function (metric)

d\langle f, h\rangle \stackrel{\Delta}{=} \|f-h\| = \Big[\int_I \gamma(t)|f(t)-h(t)|^2\,dt\Big]^{1/2} \qquad (1.108)

The root-mean-square difference between the two functions f(t) and h(t) is equal to zero if and only if f(t) = h(t) for almost all t in I. A sequence of functions r_0(t), r_1(t), r_2(t), \ldots in I converges in the mean to the limit r(t) if and only if

d^2\langle r_n, r\rangle = \|r_n - r\|^2 = \int_I \gamma(t)|r_n(t)-r(t)|^2\,dt \to 0 \quad \text{as } n \to \infty \qquad (1.109)

Therefore we define the limit in the mean

\mathrm{l.i.m.}_{n\to\infty}\,r_n(t) = r(t) \qquad (1.110)

Convergence in the mean does not necessarily imply convergence of the sequence at every point, nor does convergence at all points of I imply convergence in the mean.

Riesz–Fischer Theorem

The L^2 space on a given interval I is complete: every sequence of quadratically integrable functions r_0(t), r_1(t), r_2(t), \ldots such that \lim_{m\to\infty,\,n\to\infty}\|r_m - r_n\| = 0 (Cauchy sequence) converges in the mean to a quadratically integrable function r(t) and defines r(t) uniquely for almost all t in I.

Orthogonality

Two quadratically integrable functions f(t), h(t) are orthogonal on I if and only if

\langle f, h\rangle = \int_I \gamma(t)f^*(t)h(t)\,dt = 0 \qquad (1.111)

Orthonormal set

A set of functions r_i(t), i = 1, 2, \ldots is an orthonormal set if and only if

\langle r_i, r_j\rangle = \int_I \gamma(t)r_i^*(t)r_j(t)\,dt = \delta_{ij} = \begin{cases}0 & \text{if } i \ne j \\ 1 & \text{if } i = j\end{cases} \qquad (i, j = 1, 2, \ldots) \qquad (1.112)

Every set of normalizable mutually orthogonal functions is linearly independent.

Bessel's inequality

Given a finite or infinite orthonormal set w_1(t), w_2(t), w_3(t), \ldots and any function f(t) quadratically integrable over I,

\sum_i |\langle w_i, f\rangle|^2 \le \langle f, f\rangle \qquad (1.113)

The equal sign applies if and only if f(t) belongs to the space spanned by all w_i(t).
Complete orthonormal set of functions (orthonormal bases)

A set of functions \{w_i(t)\}, i = 1, 2, \ldots, in L^2 is a complete orthonormal set if and only if the set satisfies the following conditions:

1. Every quadratically integrable function f(t) can be expanded in the form

f(t) = \langle f, w_1\rangle w_1 + \langle f, w_2\rangle w_2 + \cdots + \langle f, w_i\rangle w_i + \cdots, \qquad i = 1, 2, \ldots

2. If (1) above is true, then

\langle f, f\rangle = |\langle f, w_1\rangle|^2 + |\langle f, w_2\rangle|^2 + \cdots

which is the completeness relation (Parseval's identity).

3. For any pair of functions f(t) and h(t) in L^2, the relation holds

\langle f, h\rangle = \langle f, w_1\rangle\langle h, w_1\rangle + \langle f, w_2\rangle\langle h, w_2\rangle + \cdots

4. The orthonormal set w_1(t), w_2(t), w_3(t), \ldots is not contained in any other orthonormal set in L^2.

The above conditions imply the following: given a complete orthonormal set \{w_i(t)\}, i = 1, 2, \ldots, in L^2 and a set of complex numbers \langle f, w_1\rangle, \langle f, w_2\rangle, \ldots such that \sum_{i=1}^{\infty}|\langle f, w_i\rangle|^2 < \infty, there exists a quadratically integrable function f(t) such that \langle f, w_1\rangle w_1 + \langle f, w_2\rangle w_2 + \cdots converges in the mean to f(t).

Series approximation

If f(t) is a quadratically integrable function and \{w_i(t)\}, i = 1, 2, \ldots, is orthonormal, then among the approximations

f_n(t) = a_1 w_1(t) + a_2 w_2(t) + \cdots + a_n w_n(t), \qquad n = 1, 2, \ldots

the choice a_i = \langle w_i, f\rangle yields the least mean square error \int_I |f_n(t) - f(t)|^2\,dt.

Gram–Schmidt orthonormalization process

Given any countable (finite or infinite) set of linearly independent functions r_1(t), r_2(t), \ldots normalizable on I, there exists an orthogonal set y_1(t), y_2(t), \ldots spanning the same space of functions. Hence

y_1 = r_1, \qquad y_2 = r_2 - \frac{\int_I y_1 r_2\,dt}{\int_I y_1^2\,dt}\,y_1, \qquad y_3 = r_3 - \frac{\int_I y_1 r_3\,dt}{\int_I y_1^2\,dt}\,y_1 - \frac{\int_I y_2 r_3\,dt}{\int_I y_2^2\,dt}\,y_2, \quad \text{etc.} \qquad (1.114)

For creating an orthonormal set, we proceed as follows:

y_1(t) = r_1(t), \qquad y_{i+1}(t) = r_{i+1}(t) - \sum_{k=1}^{i}\langle w_k, r_{i+1}\rangle w_k(t), \qquad i = 1, 2, \ldots \qquad (1.115)

w_i(t) = \frac{y_i(t)}{\|y_i(t)\|} = \frac{y_i(t)}{\sqrt{\langle y_i, y_i\rangle}}, \qquad i = 1, 2, \ldots \qquad (1.116)

1.5.2 Legendre Polynomials

1.5.2.1 Relations of Legendre Polynomials

Legendre polynomials are closely associated with physical phenomena for which spherical geometry is important. The polynomials P_n(t) are called Legendre polynomials in honor of their discoverer, and they are given by

P_n(t) = \sum_{k=0}^{[n/2]}\frac{(-1)^k(2n-2k)!\,t^{n-2k}}{2^n k!(n-k)!(n-2k)!}, \qquad [n/2] = \begin{cases}n/2, & n \text{ even} \\ (n-1)/2, & n \text{ odd}\end{cases} \qquad (1.117)

The generating function is

\frac{1}{\sqrt{1-2st+s^2}} = \begin{cases}\sum_{n=0}^{\infty} P_n(t)s^n, & |s| < 1 \\ \sum_{n=0}^{\infty} P_n(t)s^{-n-1}, & |s| > 1\end{cases} \qquad (1.117a)

Table 1.7 gives the first eight Legendre polynomials. Figure 1.4 shows the first six Legendre polynomials.

TABLE 1.7 Legendre Polynomials

P_0 = 1
P_1 = t
P_2 = \frac{3}{2}t^2 - \frac{1}{2}
P_3 = \frac{5}{2}t^3 - \frac{3}{2}t
P_4 = \frac{35}{8}t^4 - \frac{30}{8}t^2 + \frac{3}{8}
P_5 = \frac{63}{8}t^5 - \frac{70}{8}t^3 + \frac{15}{8}t
P_6 = \frac{231}{16}t^6 - \frac{315}{16}t^4 + \frac{105}{16}t^2 - \frac{5}{16}
P_7 = \frac{429}{16}t^7 - \frac{693}{16}t^5 + \frac{315}{16}t^3 - \frac{35}{16}t
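The Gram–Schmidt construction of Equation 1.114 can be sketched numerically; the code below (Python, illustrative only) orthogonalizes 1, t, t^2 on [-1, 1] with \gamma(t) = 1 using midpoint sums, and the resulting functions come out proportional to P_0, P_1, P_2.

```python
# Gram–Schmidt on r1=1, r2=t, r3=t² over [-1, 1]; y3 should match t² - 1/3 ∝ P2.
N = 4001
dt = 2.0 / N
ts = [-1 + (i + 0.5) * dt for i in range(N)]

def inner(f, g):                      # ⟨f, g⟩ with γ(t) = 1, midpoint rule
    return sum(f(t) * g(t) for t in ts) * dt

r1 = lambda t: 1.0
r2 = lambda t: t
r3 = lambda t: t * t

y1 = r1
c12 = inner(y1, r2) / inner(y1, y1)
y2 = lambda t: r2(t) - c12 * y1(t)
c13 = inner(y1, r3) / inner(y1, y1)
c23 = inner(y2, r3) / inner(y2, y2)
y3 = lambda t: r3(t) - c13 * y1(t) - c23 * y2(t)

print(inner(y1, y2), inner(y2, y3))   # both ~0 (orthogonal)
print(y3(0.5), 0.5**2 - 1/3)          # y3 ∝ P2: t² - 1/3
```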
FIGURE 1.4 The Legendre polynomials P_0(t) through P_4(t).

Rodrigues formula

P_n(t) = \frac{1}{2^n n!}\frac{d^n}{dt^n}(t^2-1)^n, \qquad n = 0, 1, 2, \ldots \qquad (1.118)

Recursive formulas

(n+1)P_{n+1}(t) - (2n+1)tP_n(t) + nP_{n-1}(t) = 0, \qquad n = 1, 2, \ldots \qquad (1.119)
P'_{n+1}(t) - tP'_n(t) = (n+1)P_n(t), \qquad n = 0, 1, 2, \ldots \quad (P'(t) = \text{derivative of } P(t)) \qquad (1.120)
tP'_n(t) - P'_{n-1}(t) = nP_n(t), \qquad n = 1, 2, \ldots \qquad (1.121)
P'_{n+1}(t) - P'_{n-1}(t) = (2n+1)P_n(t), \qquad n = 1, 2, \ldots \qquad (1.122)
(t^2-1)P'_n(t) = ntP_n(t) - nP_{n-1}(t) \qquad (1.123)
P_0(t) = 1, \qquad P_1(t) = t \qquad (1.124)

Example

From Equation 1.117, when n is even P_n(-t) = P_n(t), and when n is odd P_n(-t) = -P_n(t). Therefore

P_n(-t) = (-1)^n P_n(t) \qquad (1.125)

Example

From Equation 1.123, t = 1 implies 0 = nP_n(1) - nP_{n-1}(1), or P_n(1) = P_{n-1}(1). For n = 1 this implies P_1(1) = P_0(1) = 1; for n = 2, P_2(1) = P_1(1) = 1; and so forth. Hence P_n(1) = 1. From Equation 1.125, P_n(-1) = (-1)^n P_n(1). Hence

P_n(1) = 1, \qquad P_n(-1) = (-1)^n \qquad (1.126)

Also,

|P_n(t)| \le 1 \qquad \text{for } -1 \le t \le 1 \qquad (1.127)

Example

From Equation 1.123 we get

\frac{d}{dt}\big[(1-t^2)P'_n(t)\big] = nP'_{n-1}(t) - nP_n(t) - ntP'_n(t)

Use Equation 1.121 to find

\frac{d}{dt}\big[(1-t^2)P'_n(t)\big] + n(n+1)P_n(t) = 0

or

(1-t^2)P''_n(t) - 2tP'_n(t) + n(n+1)P_n(t) = 0 \qquad (1.128)

We have thus deduced the Legendre polynomials y = P_n(t) (n = 0, 1, 2, \ldots) as solutions of the linear second-order ordinary differential equation

(1-t^2)y''(t) - 2ty'(t) + n(n+1)y(t) = 0 \qquad (1.128a)

called the Legendre differential equation. If we let t = \cos\varphi, then the above equation transforms to the trigonometric form

y'' + (\cot\varphi)y' + n(n+1)y = 0 \qquad (1.128b)

It can be shown that Equation 1.128a has solutions of the first kind

y = C_0\Big[1 - \frac{n(n+1)}{2!}t^2 + \frac{n(n+1)(n-2)(n+3)}{4!}t^4 - \cdots\Big] + C_1\Big[t - \frac{(n-1)(n+2)}{3!}t^3 + \frac{(n-1)(n+2)(n-3)(n+4)}{5!}t^5 - \cdots\Big] \qquad (1.128c)

valid for |t| < 1, C_0 and C_1 being arbitrary constants.

Schläfli's integral formula

P_n(t) = \frac{1}{2\pi j}\oint_C \frac{(z^2-1)^n}{2^n(z-t)^{n+1}}\,dz \qquad (1.129)

where C is any regular, simple, closed curve surrounding t.

1.5.2.2 Complete Orthonormal System \big\{\big[\tfrac{1}{2}(2n+1)\big]^{1/2}P_n(t)\big\}

The Legendre polynomials are orthogonal in [-1, 1]:

\int_{-1}^{1} P_n(t)P_m(t)\,dt = 0, \qquad n \ne m \qquad (1.130)

\int_{-1}^{1}[P_n(t)]^2\,dt = \frac{2}{2n+1}, \qquad n = 0, 1, 2, \ldots \qquad (1.131)
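The recurrence of Equation 1.119 and the normalization of Equation 1.131 can be checked together; the sketch below (Python, illustrative only) builds P_n by the three-term recurrence and evaluates \int_{-1}^{1}P_4^2\,dt by midpoint quadrature.

```python
# Legendre P_n via (n+1)P_{n+1} = (2n+1) t P_n - n P_{n-1}  (Equation 1.119),
# with the check ∫_{-1}^{1} P_4² dt = 2/(2·4+1) = 2/9  (Equation 1.131).
def legendre(n, t):
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * t * p1 - k * p0) / (k + 1)
    return p1

N = 20001
dt = 2.0 / N
norm4 = sum(legendre(4, -1 + (i + 0.5) * dt)**2 for i in range(N)) * dt
print(norm4, 2/9)
print(legendre(6, 1.0))   # P_n(1) = 1
```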
and therefore the set

w_n(t) = \sqrt{\frac{2n+1}{2}}\,P_n(t), \qquad n = 0, 1, 2, \ldots \qquad (1.132)

is orthonormal.

Example

Suppose f(t) is given by

f(t) = \begin{cases}0, & -1 \le t < a \\ 1, & a < t \le 1\end{cases}

The Hermite polynomials H_n(t) satisfy, among other relations:

H_{2n}(t) are even functions; H_{2n+1}(t) are odd functions.
H_{n+1}(t) - 2tH_n(t) + 2nH_{n-1}(t) = 0, \qquad n = 1, 2, \ldots

1.5.4 Laguerre Polynomials

The Laguerre polynomials are given by

L_n(t) = \sum_{k=0}^{n}\frac{(-1)^k n!\,t^k}{(k!)^2(n-k)!}, \qquad n = 0, 1, 2, \ldots, \quad 0 \le t < \infty

The Rodrigues formula for creating Laguerre polynomials is given by

L_n(t) = \frac{e^t}{n!}\frac{d^n}{dt^n}\big(t^n e^{-t}\big)

and their generating function is

w(t,x) = \frac{1}{1-x}\,e^{-tx/(1-x)} = \sum_{n=0}^{\infty} L_n(t)x^n \qquad (1.164)

By expressing the exponential function in a series and finally making the change of index m = n - k, Equation 1.164 leads to the series for L_n(t) given above.

1.5.4.2 Recurrence Relations

The generating function w(t,x) of Equation 1.164 satisfies the identity

(1-x)^2\,\frac{\partial w}{\partial x} + (t-1+x)\,w = 0 \qquad (1.169)

Substituting Equation 1.164 in Equation 1.169 and equating the coefficients of x^n to zero, we obtain

(n+1)L_{n+1}(t) + (t-1-2n)L_n(t) + nL_{n-1}(t) = 0, \qquad n = 1, 2, \ldots \qquad (1.170)

TABLE 1.10 Laguerre Polynomials

L_0(t) = 1
L_1(t) = -t + 1
L_2(t) = \frac{1}{2!}(t^2 - 4t + 2)
L_3(t) = \frac{1}{3!}(-t^3 + 9t^2 - 18t + 6)
L_4(t) = \frac{1}{4!}(t^4 - 16t^3 + 72t^2 - 96t + 24)
FIGURE 1.6 The Laguerre polynomials L_0(t) through L_5(t).

Similarly, substituting Equation 1.164 into

(1-x)\frac{\partial w}{\partial t} + xw = 0 \qquad (1.171)

we obtain the relation

L'_n(t) - L'_{n-1}(t) + L_{n-1}(t) = 0, \qquad n = 1, 2, \ldots \qquad (1.172)

From this we obtain

L'_{n+1}(t) = L'_n(t) - L_n(t) \qquad (1.173)
L'_{n-1}(t) = L'_n(t) + L_{n-1}(t) \qquad (1.174)

From Equation 1.170, by differentiation we find

(n+1)L'_{n+1}(t) + (t-1-2n)L'_n(t) + L_n(t) + nL'_{n-1}(t) = 0 \qquad (1.175)

1.5.4.3 Orthogonality, Laguerre Series

The orthogonality relations for Laguerre polynomials are

\int_0^{\infty} e^{-t}L_n(t)L_m(t)\,dt = 0, \qquad n \ne m \qquad (1.179)

\int_0^{\infty} e^{-t}[L_n(t)]^2\,dt = \frac{\Gamma(n+1)}{n!} = 1, \qquad n = 0, 1, 2, \ldots \qquad (1.180)

For the generalized Laguerre polynomials, the orthogonality relations are

\int_0^{\infty} e^{-t}t^a L_m^a(t)L_n^a(t)\,dt = 0, \qquad n \ne m, \quad a > -1

\int_0^{\infty} e^{-t}t^a\big[L_n^a(t)\big]^2\,dt = \frac{\Gamma(n+a+1)}{n!}, \qquad a > -1, \quad n = 0, 1, 2, \ldots \qquad (1.181)

The orthonormal system for the generalized polynomials on the interval 0 \le t < \infty is

w_n^a(t) = \Big[\frac{n!}{\Gamma(n+a+1)}\Big]^{1/2} e^{-t/2}t^{a/2}L_n^a(t), \qquad n = 0, 1, 2, \ldots \qquad (1.182)

The Laguerre series is given by

f(t) = \sum_{n=0}^{\infty} C_n L_n(t), \qquad 0 \le t < \infty \qquad (1.183)

with coefficients

C_n = \int_0^{\infty} e^{-t}f(t)L_n(t)\,dt

and, for the generalized polynomials,

C_n = \frac{n!}{\Gamma(n+a+1)}\int_0^{\infty} e^{-t}t^a f(t)L_n^a(t)\,dt \qquad (1.185)

The Rodrigues formula for the generalized Laguerre polynomials is

L_n^m(t) = \frac{e^t t^{-m}}{n!}\frac{d^n}{dt^n}\big(e^{-t}t^{n+m}\big) \qquad (1.186)

Example

The function e^{-bt}, b > -1/2 and t > 0, is expanded as follows:

C_n = \frac{n!}{\Gamma(n+a+1)}\int_0^{\infty} e^{-t}t^a\,e^{-bt}L_n^a(t)\,dt = \frac{1}{\Gamma(n+a+1)}\int_0^{\infty} e^{-bt}\frac{d^n}{dt^n}\big(e^{-t}t^{n+a}\big)\,dt
= \frac{b^n}{\Gamma(n+a+1)}\int_0^{\infty} e^{-(b+1)t}t^{n+a}\,dt = \frac{b^n}{(b+1)^{n+a+1}}

(after n integrations by parts, using Equation 1.79), and therefore

e^{-bt} = \frac{1}{(b+1)^{a+1}}\sum_{n=0}^{\infty}\Big(\frac{b}{b+1}\Big)^n L_n^a(t), \qquad 0 \le t < \infty
The Chebyshev polynomials of the first kind, T_n(t), and of the second kind, U_n(t), satisfy the recurrence relations

T_{n+1}(t) - 2tT_n(t) + T_{n-1}(t) = 0 \qquad (1.190)

U_{n+1}(t) - 2tU_n(t) + U_{n-1}(t) = 0 \qquad (1.191)

The orthogonality properties are

\int_{-1}^{1}(1-t^2)^{-1/2}T_n(t)T_k(t)\,dt = 0, \qquad k \ne n \qquad (1.192)

\int_{-1}^{1}(1-t^2)^{1/2}U_n(t)U_k(t)\,dt = 0, \qquad k \ne n \qquad (1.193)

The governing differential equations for T_n(t) and U_n(t) are, respectively,

(1-t^2)y'' - ty' + n^2 y = 0 \qquad (1.194)

(1-t^2)y'' - 3ty' + n(n+2)y = 0 \qquad (1.195)

The following are relationships between the two Chebyshev types:

T_n(t) = U_n(t) - tU_{n-1}(t) \qquad (1.196)

(1-t^2)U_{n-1}(t) = tT_n(t) - T_{n+1}(t) \qquad (1.197)
TABLE 1.12 Properties of the Chebyshev Polynomials

1. (1-t^2)\dfrac{d^2y}{dt^2} - t\dfrac{dy}{dt} + n^2 y = 0; \quad y(t) = T_n(t)
2. T_n(t) = \dfrac{n}{2}\sum_{k=0}^{[n/2]}\dfrac{(-1)^k(n-k-1)!}{k!(n-2k)!}(2t)^{n-2k}, \qquad n = 1, 2, \ldots, \quad [n/2] = \text{largest integer} \le n/2
3. T_n(t) = \dfrac{(-2)^n n!}{(2n)!}\sqrt{1-t^2}\,\dfrac{d^n}{dt^n}\big[(1-t^2)^{n-1/2}\big], \quad \text{Rodrigues formula}
4. T_n(t) = \cos(n\cos^{-1}t)
5. \dfrac{1-st}{1-2st+s^2} = \sum_{n=0}^{\infty} T_n(t)s^n, \quad \text{generating function}
6. T_{n+1}(t) = 2tT_n(t) - T_{n-1}(t)
7. \int_{-1}^{1}\dfrac{T_n(t)T_m(t)}{\sqrt{1-t^2}}\,dt = \begin{cases}0 & n \ne m \\ \pi/2 & n = m \ne 0 \\ \pi & n = m = 0\end{cases}
8. T_n(1) = 1, \quad T_n(-1) = (-1)^n, \quad T_{2n}(0) = (-1)^n, \quad T_{2n+1}(0) = 0

FIGURE 1.7 The Chebyshev polynomials T_1(t) through T_5(t).

Table 1.12 gives relationships for the Chebyshev polynomials. If we set t = \cos\theta in Equation 1.194, we find that it reduces to

\frac{d^2y}{d\theta^2} + n^2 y = 0

with solutions \cos n\theta and \sin n\theta. Therefore, if we set T_n(\cos\theta) = C_n\cos n\theta, we find that C_n = 1 for all n because T_n(1) = 1 for all n. Hence

T_n(t) = \cos n\theta = \cos(n\cos^{-1}t) \qquad (1.198)

Similarly

U_n(t) = \frac{\sin[(n+1)\cos^{-1}t]}{\sqrt{1-t^2}} \qquad (1.199)

The generating function for the Chebyshev polynomials is

\frac{1-st}{1-2st+s^2} = \sum_{n=0}^{\infty} T_n(t)s^n \qquad (1.200)

The generalized Rodrigues formula is

T_n(t) = \frac{(-2)^n n!}{(2n)!}\sqrt{1-t^2}\,\frac{d^n}{dt^n}\big[(1-t^2)^{n-1/2}\big] \qquad (1.201)

1.5.6 Bessel Functions

1.5.6.1 Bessel Functions of the First Kind

General relations: One solution of Bessel's equation

y'' + \frac{1}{t}y' + \Big(1 - \frac{n^2}{t^2}\Big)y = 0, \qquad n = 0, 1, 2, \ldots \qquad (1.202)

is the function y = J_n(t), known as the Bessel function of the first kind and order n. The Bessel function is defined by the series

J_n(t) = \sum_{k=0}^{\infty}\frac{(-1)^k(t/2)^{n+2k}}{k!(n+k)!}, \qquad -\infty < t < \infty \qquad (1.203)

We can find Equation 1.203 by expanding the function

w(t,x) \stackrel{\Delta}{=} e^{\frac{t}{2}\left(x-\frac{1}{x}\right)} = \sum_{n=-\infty}^{\infty} J_n(t)x^n, \qquad x \ne 0 \qquad (1.204)

in series of the two exponentials \exp(tx/2) and \exp(-t/2x).

By setting n \to -n in Equation 1.203 we obtain

J_{-n}(t) = \sum_{k=0}^{\infty}\frac{(-1)^k(t/2)^{2k-n}}{k!(k-n)!} = \sum_{k=n}^{\infty}\frac{(-1)^k(t/2)^{2k-n}}{k!(k-n)!}

because 1/(k-n)! = 0 for k = 0, 1, 2, \ldots, n-1 (the reciprocal of the gamma function vanishes at nonpositive integers). Setting k = m+n, we obtain

J_{-n}(t) = \sum_{m=0}^{\infty}\frac{(-1)^{m+n}(t/2)^{2m+n}}{m!(m+n)!} \qquad (1.205)

from which it follows that

J_{-n}(t) = (-1)^n J_n(t), \qquad n = 0, 1, 2, \ldots \qquad (1.206)

Equating like terms in the expanded form of Equation 1.204, we obtain

J_0(0) = 1, \qquad J_n(0) = 0, \quad n \ne 0 \qquad (1.207)

Figure 1.8 shows several Bessel functions of the first kind.
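The closed form T_n(t) = \cos(n\cos^{-1}t) and the recurrence of Equation 1.190 describe the same polynomials; the sketch below (Python, illustrative only) confirms this numerically.

```python
import math

# Chebyshev T_n: recurrence T_{n+1} = 2t T_n - T_{n-1}  (Eq. 1.190)
# against the closed form T_n(t) = cos(n arccos t)      (Eq. 1.198).
def cheb_rec(n, t):
    t0, t1 = 1.0, t
    if n == 0:
        return t0
    for _ in range(1, n):
        t0, t1 = t1, 2*t*t1 - t0
    return t1

for n in range(8):
    for t in (-0.9, -0.3, 0.0, 0.5, 1.0):
        assert math.isclose(cheb_rec(n, t), math.cos(n * math.acos(t)), abs_tol=1e-9)
```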
FIGURE 1.8 The Bessel functions J_0(t) through J_4(t).

1.5.6.2 Bessel Functions of Nonintegral Order

The Bessel functions of noninteger order \nu are given by

J_\nu(t) = \sum_{k=0}^{\infty}\frac{(-1)^k(t/2)^{2k+\nu}}{k!\,\Gamma(k+\nu+1)}, \qquad \nu \ge 0 \qquad (1.208a)

J_{-\nu}(t) = \sum_{k=0}^{\infty}\frac{(-1)^k(t/2)^{2k-\nu}}{k!\,\Gamma(k-\nu+1)}, \qquad \nu \ge 0 \qquad (1.208b)

The two functions J_\nu(t) and J_{-\nu}(t) are linearly independent for noninteger values of \nu, and they do not satisfy any generating-function relation. For noninteger \nu > 0, J_\nu(t) remains finite at t = 0 while J_{-\nu}(t) does not. Both share most of the properties of J_n(t) and J_{-n}(t).

1.5.6.3 Recurrence Relations

\frac{d}{dt}\big[t^{\nu}J_\nu(t)\big] = \frac{d}{dt}\sum_{k=0}^{\infty}\frac{(-1)^k t^{2k+2\nu}}{2^{2k+\nu}k!\,\Gamma(k+\nu+1)} = t^{\nu}\sum_{k=0}^{\infty}\frac{(-1)^k(t/2)^{2k+\nu-1}}{k!\,\Gamma(k+\nu)} = t^{\nu}J_{\nu-1}(t) \qquad (1.209)

Similarly

\frac{d}{dt}\big[t^{-\nu}J_\nu(t)\big] = -t^{-\nu}J_{\nu+1}(t) \qquad (1.210)

Carrying out the differentiations in Equations 1.209 and 1.210 and dividing by t^{\nu} and t^{-\nu}, respectively, we find

J'_\nu(t) + \frac{\nu}{t}J_\nu(t) = J_{\nu-1}(t) \qquad (1.211)

J'_\nu(t) - \frac{\nu}{t}J_\nu(t) = -J_{\nu+1}(t) \qquad (1.212)

Set \nu = 0 in Equation 1.212 to obtain

J'_0(t) = -J_1(t) \qquad (1.213)

Add and subtract Equations 1.211 and 1.212 to find, respectively, the relations

2J'_\nu(t) = J_{\nu-1}(t) - J_{\nu+1}(t) \qquad (1.214)

\frac{2\nu}{t}J_\nu(t) = J_{\nu-1}(t) + J_{\nu+1}(t) \qquad (1.215)

The last relation is known as the three-term recurrence formula. Repeated operations result in

\Big(\frac{1}{t}\frac{d}{dt}\Big)^m\big[t^{\nu}J_\nu(t)\big] = t^{\nu-m}J_{\nu-m}(t) \qquad (1.216)

\Big(\frac{1}{t}\frac{d}{dt}\Big)^m\big[t^{-\nu}J_\nu(t)\big] = (-1)^m t^{-\nu-m}J_{\nu+m}(t), \qquad m = 1, 2, \ldots \qquad (1.217)

Example

We proceed to find the following derivative, with u = at:

\frac{d}{dt}\big[t^{\nu}J_\nu(at)\big] = \frac{d}{dt}\Big[\frac{u^{\nu}}{a^{\nu}}J_\nu(u)\Big] = \frac{1}{a^{\nu}}\frac{d}{du}\big[u^{\nu}J_\nu(u)\big]\frac{du}{dt} = a^{1-\nu}\big[u^{\nu}J_{\nu-1}(u)\big] = a^{1-\nu}(at)^{\nu}J_{\nu-1}(at) = a\,t^{\nu}J_{\nu-1}(at)

where Equation 1.209 was used.

Example

Differentiate Equation 1.214 to find

\frac{d^2J_\nu(t)}{dt^2} = \frac{1}{2}\Big[\frac{dJ_{\nu-1}(t)}{dt} - \frac{dJ_{\nu+1}(t)}{dt}\Big]

Then apply the same equation to each derivative on the right side to find

\frac{d^2J_\nu(t)}{dt^2} = \frac{1}{2}\Big\{\frac{1}{2}\big[J_{\nu-2}(t) - J_\nu(t)\big] - \frac{1}{2}\big[J_\nu(t) - J_{\nu+2}(t)\big]\Big\} = \frac{1}{2^2}\big[J_{\nu-2}(t) - 2J_\nu(t) + J_{\nu+2}(t)\big]

Similarly we find

\frac{d^3J_\nu(t)}{dt^3} = \frac{1}{2^3}\big[J_{\nu-3}(t) - 3J_{\nu-1}(t) + 3J_{\nu+1}(t) - J_{\nu+3}(t)\big]
1-37
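The recurrence relations can be verified numerically with a short sketch (not from the original text; `jn` is an ad hoc helper using only the standard library):

```python
import math

def jn(n, t, terms=30):
    # power series for J_n, Equation 1.203
    return sum((-1)**k * (t / 2)**(n + 2*k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

t = 3.1
for n in range(1, 5):
    # three-term recurrence, Equation 1.215: (2n/t) J_n = J_{n-1} + J_{n+1}
    assert abs((2*n/t) * jn(n, t) - (jn(n-1, t) + jn(n+1, t))) < 1e-10

# J_0'(t) = -J_1(t), Equation 1.213, checked with a central difference
h = 1e-6
d = (jn(0, t + h) - jn(0, t - h)) / (2*h)
assert abs(d + jn(1, t)) < 1e-8
```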
Signals and Systems
1.5.6.4 Integral Representation

Set x = e^{jφ} in Equation 1.204, multiply both sides by e^{−jnφ}, and integrate the result from 0 to π. Hence

  ∫₀^π e^{j(nφ − t sin φ)} dφ = Σ_{k=−∞}^∞ J_k(t) ∫₀^π e^{j(n−k)φ} dφ   (1.218)

Expand the exponentials on both sides via Euler's formula, equate the real and imaginary parts, and use the relation

  ∫₀^π cos(n − k)φ dφ = { 0, k ≠ n ; π, k = n }

to find that all terms of the infinite sum vanish except for k = n. Hence, we obtain

  J_n(t) = (1/π) ∫₀^π cos(nφ − t sin φ) dφ,  n = 0, 1, 2, ...   (1.219)

When n = 0, we find

  J₀(t) = (1/π) ∫₀^π cos(t sin φ) dφ   (1.220)

Example: If a > 0 and b > 0, then [see Equation 1.220]

  ∫₀^∞ e^{−at} J₀(bt) dt = (2/π) ∫₀^{π/2} dφ ∫₀^∞ e^{−at} cos(bt sin φ) dt
                         = (2/π) ∫₀^{π/2} a dφ / (a² + b² sin² φ) = 1/√(a² + b²)   (1.221)

For a Bessel function with nonintegral order, the Poisson formula is

  J_ν(t) = [(t/2)^ν / (√π Γ(ν + 1/2))] ∫_{−1}^{1} (1 − x²)^{ν−1/2} e^{jtx} dx,  ν > −1/2, t > 0   (1.222)

Set x = cos θ to obtain

  J_ν(t) = [(t/2)^ν / (√π Γ(ν + 1/2))] ∫₀^π cos(t cos θ) sin^{2ν} θ dθ,  ν > −1/2, t > 0

1.5.6.5 Integrals Involving Bessel Functions

Start with the identities

  d/dt [t^ν J_ν(t)] = t^ν J_{ν−1}(t)   (1.223)
  d/dt [t^{−ν} J_ν(t)] = −t^{−ν} J_{ν+1}(t)   (1.224)

and directly integrate to find

  ∫ t^ν J_{ν−1}(t) dt = t^ν J_ν(t) + C   (1.225)
  ∫ t^{−ν} J_{ν+1}(t) dt = −t^{−ν} J_ν(t) + C   (1.226)

where C is the constant of integration.

Example: We apply the integration procedure to find

  ∫ t² J₂(t) dt = ∫ t³ [t^{−1} J₂(t)] dt = −∫ t³ (d/dt)[t^{−1} J₁(t)] dt
               = −t² J₁(t) + 3 ∫ t J₁(t) dt = −t² J₁(t) − 3 ∫ t [−J₁(t)] dt
               = −t² J₁(t) − 3 ∫ t (dJ₀(t)/dt) dt = −t² J₁(t) − 3t J₀(t) + 3 ∫ J₀(t) dt

The last integral has no closed-form solution.

Example: For a > 0, b > 0, and ν > −1 (ν real),

  ∫₀^∞ e^{−a²t²} J_ν(bt) t^{ν+1} dt
    = Σ_{k=0}^∞ [(−1)^k / (k! Γ(k + ν + 1))] (b/2)^{ν+2k} ∫₀^∞ e^{−a²t²} t^{2ν+2k+1} dt
    = Σ_{k=0}^∞ [(−1)^k / (k! Γ(k + ν + 1))] (b/2)^{ν+2k} [1/(2a^{2ν+2k+2})] ∫₀^∞ e^{−r} r^{ν+k} dr
    = [b^ν / (2a²)^{ν+1}] Σ_{k=0}^∞ (1/k!) (−b²/4a²)^k
    = [b^ν / (2a²)^{ν+1}] e^{−b²/4a²}   (1.227)

where the last integral is the gamma function Γ(ν + k + 1) and the summation is the exponential series.
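The integral representation 1.219 lends itself to direct quadrature. The sketch below (an added check, not part of the original text; standard library only) evaluates it with the trapezoid rule, which is spectrally accurate here because the integrand extends to a smooth periodic function, and compares against the power series 1.203:

```python
import math

def jn_integral(n, t, steps=2000):
    # Equation 1.219: J_n(t) = (1/pi) * integral_0^pi cos(n*phi - t*sin(phi)) dphi
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))   # endpoints phi = 0 and phi = pi
    for i in range(1, steps):
        phi = i * h
        total += math.cos(n*phi - t*math.sin(phi))
    return total * h / math.pi

def jn_series(n, t, terms=30):
    # Equation 1.203
    return sum((-1)**k * (t / 2)**(n + 2*k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

assert abs(jn_integral(0, 2.0) - jn_series(0, 2.0)) < 1e-10
assert abs(jn_integral(3, 5.0) - jn_series(3, 5.0)) < 1e-10
```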
The usual method of finding definite integrals involving Bessel functions is to replace the Bessel function by its series representation. To illustrate the technique, let us find the value of the integral

  I = ∫₀^∞ e^{−at} t^p J_p(bt) dt,  p > −1/2, a > 0, b > 0

Inserting the series and integrating term by term,

  I = Σ_{k=0}^∞ [(−1)^k (b/2)^{2k+p} / (k! Γ(k + p + 1))] ∫₀^∞ e^{−at} t^{2k+2p} dt
    = b^p Σ_{k=0}^∞ [(−1)^k Γ(2k + 2p + 1) / (2^{2k+p} k! Γ(k + p + 1))] (a²)^{−(p+1/2)−k} (b²)^k   (1.228)

where the last integral is in the form of a gamma function. Using the Legendre duplication formula Γ(2z) = [2^{2z−1}/√π] Γ(z) Γ(z + 1/2) with z = k + p + 1/2, together with the binomial-coefficient identities

  C(−r, k) = (−1)^k C(r + k − 1, k),  C(n, k) = C(n, n − k),  C(n + 1, k + 1) = C(n, k + 1) + C(n, k)

we obtain

  (−1)^k Γ(2k + 2p + 1) / [2^{2k+p} k! Γ(k + p + 1)] = (−1)^k (2^p/√π) Γ(p + 1/2) C(p + k − 1/2, k)
    = (2^p/√π) Γ(p + 1/2) C(−(p + 1/2), k)   (1.229)

Therefore, Equation 1.228 becomes a binomial series and

  I = ∫₀^∞ e^{−at} t^p J_p(bt) dt
    = [(2b)^p Γ(p + 1/2)/√π] Σ_{k=0}^∞ C(−(p + 1/2), k) (a²)^{−(p+1/2)−k} (b²)^k
    = (2b)^p Γ(p + 1/2) / [√π (a² + b²)^{p+1/2}],  p > −1/2, a > 0, b > 0   (1.230)

Setting p = 0 in this equation we find

  ∫₀^∞ e^{−at} J₀(bt) dt = 1/(a² + b²)^{1/2},  a > 0, b > 0   (1.231)

Set a = 0+ in this equation to obtain

  ∫₀^∞ J₀(bt) dt = 1/b,  b > 0   (1.232)

By letting the real part approach zero and writing a as pure imaginary, Equation 1.231 becomes

  ∫₀^∞ e^{−jat} J₀(bt) dt = 1/(b² − a²)^{1/2},  b > a   (1.233)
                          = −j/(a² − b²)^{1/2},  b < a   (1.234)

A function f(t) defined on (0, a) can be expanded in a Fourier–Bessel series

  f(t) = Σ_{n=1}^∞ c_n J_ν(τ_n t),  ν > −1/2   (1.238)

where the c's are the expansion coefficients and the τ_n (n = 1, 2, 3, ...) are chosen so that the τ_n a are the zeros (positive roots) of J_ν:

  J_ν(τ_n a) = 0,  n = 1, 2, 3, ...   (1.239)

The functions J_ν(τ_n t) are orthogonal on (0, a) with weight t:

  ∫₀^a t J_ν(τ_m t) J_ν(τ_n t) dt = 0,  m ≠ n   (1.240)

Multiplying Equation 1.238 by t J_ν(τ_m t) and integrating, we obtain

  ∫₀^a t f(t) J_ν(τ_m t) dt = Σ_{n=1}^∞ c_n ∫₀^a t J_ν(τ_m t) J_ν(τ_n t) dt = c_m ∫₀^a t [J_ν(τ_m t)]² dt   (1.242)

because the integral is zero if n ≠ m (see Equation 1.240). Hence, from this equation we obtain

  c_n = {2 / (a² [J_{ν+1}(τ_n a)]²)} ∫₀^a t f(t) J_ν(τ_n t) dt,  n = 1, 2, 3, ...   (1.243)

Example: Find the Fourier–Bessel series for the function f(t) = t, 0 < t < a.
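The closed form 1.231 can be spot-checked by direct quadrature. In the sketch below (an added illustration, not from the original text; J₀ is evaluated through the integral representation 1.220, which stays numerically stable at large argument, and the outer integral uses Simpson's rule on a truncated range):

```python
import math

def j0(t, n=400):
    # J_0 via the integral representation (Equation 1.220), trapezoid rule
    h = math.pi / n
    s = 0.5 * (1.0 + math.cos(t * math.sin(math.pi)))
    for i in range(1, n):
        s += math.cos(t * math.sin(i * h))
    return s * h / math.pi

a, b = 1.5, 2.0
# Equation 1.231: integral_0^inf e^{-a t} J_0(b t) dt = (a^2 + b^2)^(-1/2)
N, T = 2000, 20.0            # Simpson's rule with N intervals on [0, T]
h = T / N
f = lambda t: math.exp(-a * t) * j0(b * t)
s = f(0.0) + f(T) \
    + 4 * sum(f((2*i - 1) * h) for i in range(1, N//2 + 1)) \
    + 2 * sum(f(2*i * h) for i in range(1, N//2))
approx = s * h / 3
assert abs(approx - 1 / math.sqrt(a*a + b*b)) < 1e-6   # exact value is 0.4 here
```

The tail beyond T = 20 is negligible because of the e^{−at} factor.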
TABLE 1.13 (continued) Properties of Bessel Functions of the First Kind

67. ∫₀^∞ e^{−at} t^{p+1} J_p(bt) dt = 2^{p+1} Γ(p + 3/2) a b^p / [√π (a² + b²)^{p+3/2}],  p > −1, a > 0, b > 0
68. ∫₀^∞ t² e^{−at} J₀(bt) dt = (2a² − b²)/(a² + b²)^{5/2},  a > 0, b > 0
69. ∫₀^∞ e^{−at²} t^{p+1} J_p(bt) dt = [b^p/(2a)^{p+1}] e^{−b²/4a},  p > −1, a > 0, b > 0
70. ∫₀^∞ e^{−at²} t^{p+3} J_p(bt) dt = [b^p/(2^{p+1} a^{p+2})] (p + 1 − b²/4a) e^{−b²/4a},  p > −1, a > 0, b > 0
71. ∫₀^∞ (1/t) sin t J₀(bt) dt = arcsin(1/b),  b > 1
72. ∫₀^{π/2} J₀(t cos φ) cos φ dφ = (sin t)/t
73. ∫₀^{π/2} J₁(t cos φ) dφ = (1 − cos t)/t
74. ∫₀^∞ e^{−t cos φ} J₀(t sin φ) tⁿ dt = n! P_n(cos φ),  0 ≤ φ < π/2
75. 1 = Σ_{n=1}^∞ 2 J₀(k_n t)/[k_n J₁(k_n)],  0 ≤ t ≤ 1, J₀(k_n) = 0, n = 1, 2, ...
76. t^p = Σ_{n=1}^∞ 2 J_p(k_n t)/[k_n J_{p+1}(k_n)],  0 < t < 1, p > −1/2, J_p(k_n) = 0, n = 1, 2, ...
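The Fourier–Bessel machinery (Equations 1.239 and 1.240) can be exercised numerically. The sketch below (added; standard library only, helper names are ad hoc) locates the first two zeros of J₀ by bisection and checks the weighted orthogonality and norm of J₀(k·t) on (0, 1):

```python
import math

def bessel(n, t, N=200):
    # J_n via the integral representation (Equation 1.219), trapezoid rule
    h = math.pi / N
    s = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, N):
        p = i * h
        s += math.cos(n*p - t*math.sin(p))
    return s * h / math.pi

def j0_zero(lo, hi):
    # bisection for a zero of J_0 bracketed by [lo, hi]
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if bessel(0, lo) * bessel(0, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

k1, k2 = j0_zero(2.0, 3.0), j0_zero(5.0, 6.0)
assert abs(k1 - 2.4048) < 1e-3 and abs(k2 - 5.5201) < 1e-3   # cf. Table 1.17

def inner(km, kn, steps=1000):
    # integral_0^1 t J_0(km t) J_0(kn t) dt, trapezoid rule
    h = 1.0 / steps
    s = 0.5 * bessel(0, km) * bessel(0, kn)   # t = 0 endpoint contributes 0
    for i in range(1, steps):
        t = i * h
        s += t * bessel(0, km*t) * bessel(0, kn*t)
    return s * h

assert abs(inner(k1, k2)) < 1e-4                                # Equation 1.240
assert abs(inner(k1, k1) - bessel(1, k1)**2 / 2) < 1e-4         # norm used in 1.243
```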
TABLE 1.14 Values of J₀(x)

The table lists J₀(x) for x from 0 to 15.9 in steps of 0.1 (the row gives the integer part of x, the column the first decimal). The first rows are

x     .0      .1      .2      .3      .4      .5      .6      .7      .8      .9
0   1.0000   .9975   .9900   .9776   .9604   .9385   .9120   .8812   .8463   .8075
1    .7652   .7196   .6711   .6201   .5669   .5118   .4554   .3980   .3400   .2818
2    .2239   .1666   .1104   .0555   .0025  −.0484  −.0968  −.1424  −.1850  −.2243
⋮

When x > 15.9,

  J₀(x) ≈ √(2/(πx)) [sin(x + π/4) + (1/(8x)) sin(x − π/4)]
        = (0.7979/√x) [sin(57.296x° + 45°) + (1/(8x)) sin(57.296x° − 45°)]
TABLE 1.14 (continued) Values of J₁(x)

x     .0      .1      .2      .3      .4      .5      .6      .7      .8      .9
0    .0000   .0499   .0995   .1483   .1960   .2423   .2867   .3290   .3688   .4059
1    .4401   .4709   .4983   .5220   .5419   .5579   .5699   .5778   .5815   .5812
2    .5767   .5683   .5560   .5399   .5202   .4971   .4708   .4416   .4097   .3754
⋮

(The tabulation continues to x = 15.9.) When x > 15.9,

  J₁(x) ≈ √(2/(πx)) [sin(x − π/4) + (3/(8x)) sin(x + π/4)]
        = (0.7979/√x) [sin(57.296x° − 45°) + (3/(8x)) sin(57.296x° + 45°)]
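The large-argument approximations quoted above can be checked against the integral representation (an added sketch, not from the original text; standard library only):

```python
import math

def bessel(n, t, N=800):
    # J_n via the integral representation (Equation 1.219), trapezoid rule
    h = math.pi / N
    s = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, N):
        p = i * h
        s += math.cos(n*p - t*math.sin(p))
    return s * h / math.pi

def j0_asym(x):
    # approximation under Table 1.14 for x > 15.9
    return math.sqrt(2/(math.pi*x)) * (math.sin(x + math.pi/4)
                                       + math.sin(x - math.pi/4)/(8*x))

def j1_asym(x):
    return math.sqrt(2/(math.pi*x)) * (math.sin(x - math.pi/4)
                                       + 3*math.sin(x + math.pi/4)/(8*x))

for x in (16.0, 25.0, 40.0):
    assert abs(bessel(0, x) - j0_asym(x)) < 1e-3
    assert abs(bessel(1, x) - j1_asym(x)) < 1e-3
```

The neglected correction terms are O(x^{−2}) relative to the leading term, which is why the agreement tightens as x grows.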
TABLE 1.15 Values of J₂(x), J₃(x), and J₄(x)

J₂(x):
x     .0      .1      .2      .3      .4      .5      .6      .7      .8      .9
0    .0000   .0012   .0050   .0112   .0197   .0306   .0437   .0588   .0758   .0946
1    .1149   .1366   .1593   .1830   .2074   .2321   .2570   .2817   .3061   .3299
2    .3528   .3746   .3951   .4139   .4310   .4461   .4590   .4696   .4777   .4832
3    .4861   .4862   .4835   .4780   .4697   .4586   .4448   .4283   .4093   .3879
4    .3641   .3383   .3105   .2811   .2501   .2178   .1846   .1506   .1161   .0813

When 0 ≤ x < 1, J₂(x) ≈ (x²/8)(1 − x²/12).

J₃(x) (same column layout):
0    .0000   .0000   .0002   .0006   .0013   .0026   .0044   .0069   .0102   .0144
1    .0196   .0257   .0329   .0411   .0505   .0610   .0725   .0851   .0988   .1134
2    .1289   .1453   .1623   .1800   .1981   .2166   .2353   .2540   .2727   .2911
3    .3091   .3264   .3431   .3588   .3734   .3868   .3988   .4092   .4180   .4250
4    .4302   .4333   .4344   .4333   .4301   .4247   .4171   .4072   .3952   .3811

When 0 ≤ x < 1, J₃(x) ≈ (x³/48)(1 − x²/16).

J₄(x) (same column layout):
0    .0000   .0000   .0000   .0000   .0001   .0002   .0003   .0006   .0010   .0016
1    .0025   .0036   .0050   .0068   .0091   .0118   .0150   .0188   .0232   .0283
2    .0340   .0405   .0476   .0556   .0643   .0738   .0840   .0950   .1067   .1190
3    .1320   .1456   .1597   .1743   .1891   .2044   .2198   .2353   .2507   .2661
4    .2811   .2958   .3100   .3236   .3365   .3484   .3594   .3693   .3780   .3853

When 0 ≤ x < 1, J₄(x) ≈ (x⁴/384)(1 − x²/20).
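The small-argument approximations under Table 1.15 follow from keeping the first two terms of the power series 1.203; a quick check (added sketch, standard library only):

```python
import math

def jn(n, t, terms=25):
    # power series, Equation 1.203
    return sum((-1)**k * (t / 2)**(n + 2*k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

for x in (0.2, 0.5, 0.9):
    assert abs(jn(2, x) - (x**2 / 8) * (1 - x**2 / 12)) < 5e-4
    assert abs(jn(3, x) - (x**3 / 48) * (1 - x**2 / 16)) < 5e-5
    assert abs(jn(4, x) - (x**4 / 384) * (1 - x**2 / 20)) < 5e-6
```

The residual in each case is the first omitted series term, of order x^{n+4}.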
TABLE 1.17 Zeros of J₀(x), J₁(x), J₂(x), J₃(x), J₄(x), J₅(x)

m      j0,m      j1,m      j2,m      j3,m      j4,m      j5,m
1     2.4048    3.8317    5.1356    6.3802    7.5883    8.7715
2     5.5201    7.0156    8.4172    9.7610   11.0647   12.3386
3     8.6537   10.1735   11.6198   13.0152   14.3725   15.7002
4    11.7915   13.3237   14.7960   16.2235   17.6160   18.9801
5    14.9309   16.4706   17.9598   19.4094   20.8269   22.2178
6    18.0711   19.6159   21.1170   22.5827   24.0190   25.4303
7    21.2116   22.7601   24.2701   25.7482   27.1991   28.6266
8    24.3525   25.9037   27.4206   28.9084   30.3710   31.8117
9    27.4935   29.0468   30.5692   32.0649   33.5371   34.9888
10   30.6346   32.1897   33.7165   35.2187   36.6990   38.1599

TABLE 1.16 The Radial Polynomials R_n^|l|(r) for |l| ≤ 8, n ≤ 8

n = 0:  R₀⁰ = 1
n = 1:  R₁¹ = r
n = 2:  R₂⁰ = 2r² − 1;  R₂² = r²
n = 3:  R₃¹ = 3r³ − 2r;  R₃³ = r³
n = 4:  R₄⁰ = 6r⁴ − 6r² + 1;  R₄² = 4r⁴ − 3r²;  R₄⁴ = r⁴
n = 5:  R₅¹ = 10r⁵ − 12r³ + 3r;  R₅³ = 5r⁵ − 4r³;  R₅⁵ = r⁵
n = 6:  R₆⁰ = 20r⁶ − 30r⁴ + 12r² − 1;  R₆² = 15r⁶ − 20r⁴ + 6r²;  R₆⁴ = 6r⁶ − 5r⁴;  R₆⁶ = r⁶
n = 7:  R₇¹ = 35r⁷ − 60r⁵ + 30r³ − 4r;  R₇³ = 21r⁷ − 30r⁵ + 10r³;  R₇⁵ = 7r⁷ − 6r⁵;  R₇⁷ = r⁷
n = 8:  R₈⁰ = 70r⁸ − 140r⁶ + 90r⁴ − 20r² + 1;  R₈² = 56r⁸ − 105r⁶ + 60r⁴ − 10r²;  R₈⁴ = 28r⁸ − 42r⁶ + 15r⁴;  R₈⁶ = 8r⁸ − 7r⁶;  R₈⁸ = r⁸
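The entries of Table 1.16 can be generated from the standard closed form for the radial polynomials (stated here as an assumption, since the defining equation falls outside this excerpt; cf. Equation 1.248 of the text):

```python
from math import factorial

def R(n, m, rho):
    # Zernike radial polynomial, standard closed form (assumed; cf. Equation 1.248):
    # R_n^m(rho) = sum_k (-1)^k (n-k)! / [k! ((n+m)/2 - k)! ((n-m)/2 - k)!] rho^(n-2k)
    assert m >= 0 and (n - m) % 2 == 0
    return sum((-1)**k * factorial(n - k)
               / (factorial(k) * factorial((n + m)//2 - k) * factorial((n - m)//2 - k))
               * rho**(n - 2*k)
               for k in range((n - m)//2 + 1))

rho = 0.7
# spot checks against Table 1.16
assert abs(R(1, 1, rho) - rho) < 1e-12
assert abs(R(4, 0, rho) - (6*rho**4 - 6*rho**2 + 1)) < 1e-12
assert abs(R(6, 4, rho) - (6*rho**6 - 5*rho**4)) < 1e-12
assert abs(R(8, 0, rho) - (70*rho**8 - 140*rho**6 + 90*rho**4 - 20*rho**2 + 1)) < 1e-12
```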
1.5.7.1 Expansion in Zernike Polynomials

If f(x, y) is a piecewise continuous function, we can expand this function in Zernike polynomials in the form

  f(x, y) = Σ_{n=0}^∞ Σ_{l=−∞}^∞ A_nl V_nl(x, y),  n − |l| even, |l| ≤ n   (1.251)

with the restrictions on the values of n and l as shown. Multiplying by V*_nl(x, y), integrating over the unit circle, and taking into consideration the orthogonality property, we obtain

  A_nl = [(n + 1)/π] ∫₀^{2π} ∫₀^1 V*_nl(ρ, θ) f(ρ cos θ, ρ sin θ) ρ dρ dθ
       = [(n + 1)/π] ∫∫_{x²+y²≤1} V*_nl(x, y) f(x, y) dx dy = A*_{n(−l)}   (1.252)

The A_nl's are also known as Zernike moments.

Example: Expand the function f(x, y) = x in Zernike polynomials.

SOLUTION We write f(ρ cos θ, ρ sin θ) = ρ cos θ and observe that ρ has exponent (degree) one. Therefore the values of n will be 0 and 1, and because n − |l| must be even, l will take the values 0, 1, and −1. We then write

  f(x, y) = Σ_{n=0}^1 Σ_l A_nl R_n^{|l|}(ρ) e^{jlθ}
          = Σ_{n=0}^1 [A_{n(−1)} R_n^1(ρ) e^{−jθ} + A_{n0} R_n^0(ρ) + A_{n1} R_n^1(ρ) e^{jθ}]
          = A₀₀ R₀⁰(ρ) + A_{1(−1)} R₁¹(ρ) e^{−jθ} + A₁₁ R₁¹(ρ) e^{jθ}   (1.253)

where three terms were dropped because they did not obey the condition that n − |l| is even. From Equation 1.248, R₁^{−1}(ρ) = R₁¹(ρ), and hence we obtain

  A₀₀ = (1/π) ∫₀^{2π} ∫₀^1 R₀⁰(ρ) ρ cos θ ρ dρ dθ = 0
  A_{1(−1)} = (2/π) ∫₀^{2π} ∫₀^1 R₁¹(ρ) ρ cos θ e^{jθ} ρ dρ dθ = 1/2
  A₁₁ = (2/π) ∫₀^{2π} ∫₀^1 R₁¹(ρ) ρ cos θ e^{−jθ} ρ dρ dθ = 1/2

Therefore, the expansion becomes

  f(x, y) = (1/2) ρ e^{−jθ} + (1/2) ρ e^{jθ} = ρ cos θ = R₁¹(ρ) cos θ = x

as was expected.
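The moment A₁₁ = 1/2 computed in the example can be reproduced by brute-force quadrature over the unit disc (an added sketch, not part of the original text; midpoint rule, standard library only):

```python
import math

# A_11 for f(x, y) = x: A_nl = (n+1)/pi * double-integral of
# R_n^|l|(rho) e^{-j l theta} f(rho cos th, rho sin th) rho drho dth, with R_1^1(rho) = rho.
# Only the real part is accumulated; the imaginary part vanishes by symmetry here.
n_r, n_t = 400, 400
dr, dt = 1.0 / n_r, 2 * math.pi / n_t
re = 0.0
for i in range(n_r):
    rho = (i + 0.5) * dr
    for k in range(n_t):
        th = (k + 0.5) * dt
        # Re{ rho * e^{-j th} * (rho cos th) } * rho  =  rho^3 cos^2(th)
        re += rho**3 * math.cos(th)**2 * dr * dt

A11 = (2 / math.pi) * re
assert abs(A11 - 0.5) < 1e-3
```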
The radial polynomials R_n^l(r) are real valued, and if f(x, y) is real (e.g., an image intensity) it is often convenient to expand it in a real-valued series. The real expansion corresponding to Equation 1.251 is

  f(x, y) = Σ_{n=0}^∞ Σ_{l=0}^∞ (C_nl cos lθ + S_nl sin lθ) R_n^l(r)   (1.254)

where n − l is even and l ≤ n. Observe that l takes only positive values. The unknown constants are found from

  C_n0 = A_n0 = (1/π) ∫₀^1 ∫₀^{2π} f(r cos θ, r sin θ) R_n^0(r) r dr dθ,  l = 0   (1.255)

  {C_nl; S_nl} = [(2n + 2)/π] ∫₀^1 ∫₀^{2π} f(r cos θ, r sin θ) R_n^l(r) {cos lθ; sin lθ} r dr dθ,  l ≠ 0   (1.256a)

  S_n0 = 0,  l = 0   (1.256b)

FIGURE 1.9 Zernike polynomials over the unit disc: (a) (n, ℓ) = (0, 0), (1, 1), (2, 0), (2, 2), (3, 1), (3, 3); (b) (n, ℓ) = (4, 0), (4, 2), (4, 4), (5, 1), (5, 3), (5, 5).
If the function is axially symmetric, only the cosine terms are needed. The connection between the real and complex Zernike coefficients is

  C_nl = 2 Re{A_nl}   (1.257a)
  S_nl = −2 Im{A_nl}   (1.257b)
  A_nl = (C_nl − jS_nl)/2 = (A_{n(−l)})*   (1.257c)

Figure 1.10 shows the reconstruction of the letter Z using different orders of Zernike moments.
1.6 Sampling of Signals

Two critical questions in signal sampling are: First, do the sampled values of a function adequately represent the original function? Second, what must the sampling interval be in order that an optimum recovery of the signal can be accomplished from the sampled values? The value of the function at the sampling points is the sampled value, the time that separates the sampling points is the sampling interval, and the reciprocal of the sampling interval is the sampling frequency or sampling rate. If the sampling interval T_s is chosen to be constant, and n = 0, ±1, ±2, ..., the sampled signal is

  f_s(t) = f(t) Σ_{n=−∞}^∞ δ(t − nT_s) = Σ_{n=−∞}^∞ f(nT_s) δ(t − nT_s)   (1.258)

Its Fourier transform is

  F_s(ω) = ^{f_s(t)} = Σ_{n=−∞}^∞ f(nT_s) ^{δ(t − nT_s)} = Σ_{n=−∞}^∞ f(nT_s) e^{−jnωT_s}   (1.259)

We can also represent the Fourier transform of a sampled function as follows:

  F_s(ω) = ^{f(t) Σ_n δ(t − nT_s)} = (1/2π) ^{f(t)} * ^{Σ_n δ(t − nT_s)}
         = (1/2π) F(ω) * [(2π/T_s) Σ_n δ(ω − nω_s)]
         = (1/T_s) Σ_n ∫_{−∞}^∞ F(x) δ(ω − nω_s − x) dx
         = (1/T_s) Σ_{n=−∞}^∞ F(ω + nω_s),  ω_s = 2π/T_s   (1.260)

F_s(ω) is periodic with period ω_s in the frequency domain.

Example:

  ^{e^{−|t|} Σ_{n=−∞}^∞ δ(t − nT_s)} = (1/T_s) Σ_{n=−∞}^∞ 2/[1 + (ω − nω_s)²]

1.6.1 The Sampling Theorem

It can be shown that it is possible for a band-limited signal f(t) to be exactly specified by its sampled values, provided that the time distance between sample values does not exceed a critical sampling interval.

THEOREM 1.4

A finite-energy function f(t) having a band-limited Fourier transform, F(ω) = 0 for |ω| ≥ ω_N, can be completely reconstructed from its sampled values f(nT_s) (see Figure 1.11), with

  f(t) = T_s Σ_{n=−∞}^∞ f(nT_s) sin[ω_s(t − nT_s)/2] / [π(t − nT_s)],  ω_s = 2π/T_s   (1.261)

provided that

  T_s ≤ T_N/2 = π/ω_N = 1/(2f_N)   (1.262)

Proof: Employ Equation 1.260 and Figure 1.11c to write

  F(ω) = p_{ω_s/2}(ω) T_s F_s(ω)

By Equation 1.259, the above equation becomes

  f(t) = ^{−1}{F(ω)} = ^{−1}{p_{ω_s/2}(ω) T_s Σ_n f(nT_s) e^{−jnωT_s}}
       = T_s Σ_{n=−∞}^∞ f(nT_s) ^{−1}{p_{ω_s/2}(ω) e^{−jnωT_s}}

By application of the frequency-shift property of the Fourier transform, this equation proves the theorem. The function within the braces, which is the sinc function, is often called the interpolation function, to indicate that it allows an interpolation between the sampled values to find f(t) for all t. The sampling time

  T_s = T_N/2 = 1/(2f_N)   (1.263)

is related to the Nyquist interval. It is the largest time interval that can be used for sampling of a band-limited signal while still allowing recovery of the signal without distortion. If, however, the sampling time is larger than the Nyquist interval, overlap of spectra takes place, known as aliasing, and no perfect reconstruction of the band-limited signal is possible.

FIGURE 1.11 (a) A signal f(t) and its sample values f(nT_s); (b) its band-limited spectrum F(ω); (c) the periodic spectrum F_s(ω) of the sampled signal and the ideal low-pass filter T_s p_{ω_s/2}(ω) used for recovery.

Figure 1.12 shows the delta sampling representation and recovery of a band-limited signal.
FIGURE 1.12 Delta sampling representation and recovery of a band-limited signal: time-domain sampling with comb_{T_s}(t), the periodic spectrum (1/T_s) F(ω) * COMB_{ω_s}(ω) for ω_s ≥ 2ω_N and at the Nyquist rate ω_s = 2ω_N = 2π/T_s, recovery of F(ω) with the ideal low-pass filter T_s p_{ω_s/2}(ω), and the equivalent sinc interpolation (T_s/π) sin(ω_s t/2)/t in the time domain.

The following definitions have been used in the figure:

  comb_{T_s}(t) = Σ_{n=−∞}^∞ δ(t − nT_s)   (1.264)
  COMB_{ω_s}(ω) = ^{comb_{T_s}(t)} = (2π/T_s) Σ_{n=−∞}^∞ δ(ω − nω_s)

1.6.1.2 Sampling with a Train of Rectangular Pulses

The Fourier transform of a band-limited function sampled with a periodic train of rectangular pulses f_p(t) is given by (see Figure 1.13)

  F_s(ω) = (1/2π) F(ω) * F_p(ω) = (τ/T_s) Σ_{n=−∞}^∞ [sin(nω_sτ/2)/(nω_sτ/2)] F(ω − nω_s)   (1.266)

where τ is the width of the pulse. The above expression indicates that as long as ω_s ≥ 2ω_N, the spectrum of the sampled signal contains no overlapping spectra of f(t), and f(t) can be recovered using a low-pass filter.

FIGURE 1.13 Sampling with a train of rectangular pulses f_p(t) of width τ and period T_s, the resulting spectrum F_s(ω) = (1/2π) F(ω) * F_p(ω), and recovery with a low-pass filter.

A dual result holds in the frequency domain: a time-limited function f(t), with f(t) = 0 for |t| > T_N, possesses a Fourier transform that can be uniquely determined from its samples at distances nπ/T_N, and is given by

  F(ω) = Σ_{n=−∞}^∞ F(nπ/T_N) sin(ωT_N − nπ)/(ωT_N − nπ)   (1.267)

where the sampling is at the Nyquist rate.
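When the condition ω_s ≥ 2ω_N is violated, aliasing makes the samples of a high-frequency tone indistinguishable from those of a lower-frequency one. A minimal sketch (added; standard library only):

```python
import math

# Sampling f(t) = cos(w0 t) at ws < 2*w0 yields samples identical to those of
# a cosine at the alias frequency |w0 - ws|, since ws*n*Ts = 2*pi*n.
w0 = 9.0
ws = 10.0                      # rad/s, below the required 2*w0 = 18
Ts = 2 * math.pi / ws
w_alias = abs(w0 - ws)         # 1 rad/s

for n in range(50):
    t = n * Ts
    assert abs(math.cos(w0 * t) - math.cos(w_alias * t)) < 1e-9
```

This is the identity cos(ω₀nT_s) = cos((ω₀ − ω_s)nT_s), exact at the sample instants.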
1.6.2 Extensions of the Sampling Theorem

The sampling theorem for a band-limited function of n variables is given by the following theorem:

THEOREM 1.6

Let f(t₁, t₂, ..., t_n) be a function of n real variables whose n-dimensional Fourier transform g(y₁, ..., y_n) exists and is identically zero outside an n-dimensional rectangle symmetric about the origin, that is,

  g(y₁, y₂, ..., y_n) ≡ 0,  |y_k| > ω_k, k = 1, 2, ..., n   (1.269)

Then

  f(t₁, t₂, ..., t_n) = Σ_{m₁=−∞}^∞ ··· Σ_{m_n=−∞}^∞ f(πm₁/ω₁, ..., πm_n/ω_n)
        × [sin(ω₁t₁ − m₁π)/(ω₁t₁ − m₁π)] ··· [sin(ω_n t_n − m_nπ)/(ω_n t_n − m_nπ)]   (1.270)

An additional theorem on the sampling of band-limited signals follows.

THEOREM 1.7

Let f(t) be a continuous function with finite Fourier transform F(ω) [F(ω) = 0 for |ω| > 2πf_N]. Then

  f(t) = Σ_{k=−∞}^∞ [φ(kh) + (t − kh)φ^{(1)}(kh) + ··· + ((t − kh)^R/R!) φ^{(R)}(kh)]
         × [sin(π(t − kh)/h) / (π(t − kh)/h)]^{R+1}   (1.271)

where R is the highest derivative order used, h = (R + 1)/(2f_N), φ^{(R)}(kh) is the Rth derivative of the function φ(·), and

  φ^{(j)}(kh) = Σ_{i=0}^{j} C(j, i) (π/h)^{j−i} G^{(j−i)}_{R+1} f^{(i)}(kh)

with the constants

  G_a^{(b)} = [d^b/dt^b (t/sin t)^a]_{t=0}
  G_a^{(0)} = 1,  G_a^{(2)} = a/3,  G_a^{(4)} = a(5a + 2)/15,  G_a^{(6)} = a(35a² + 42a + 16)/63, ...,
  G_a^{(b)} = 0 for odd b

1.6.2.1 Papoulis Extensions

The band-limited signal

  f(t) = (1/2π) ∫_{−ω₁}^{ω₁} F(ω) e^{jωt} dω   (1.272)

can be represented by

  f(t) = Σ_{n=−∞}^∞ f(nT) sin[ω₀(t − nT)] / [ω₂(t − nT)]   (1.273)

where

  ω₂ = π/T,  ω₁ ≤ ω₀ ≤ 2ω₂ − ω₁

THEOREM 1.8

Given an arbitrary sequence of numbers {a_n}, if we form the sum

  x(t) = Σ_{n=−∞}^∞ a_n sin[ω₀(t − nT)] / [ω₂(t − nT)]   (1.274)

then x(t) is band-limited by ω₀. The sampling expansion of f²(t) is given by

  f²(t) = Σ_{n=−∞}^∞ f²(nT) sin[ω₀(t − nT)] / [ω₂(t − nT)]   (1.275)

where

  ω₂ = π/T,  ω₂ ≥ 2ω₁,  2ω₁ ≤ ω₀ ≤ 2ω₂ − 2ω₁,  T ≤ π/(2ω₁)

The band-limited signal given in Equation 1.272 can also be expressed in terms of the sample values g(nT) of the output

  g(t) = (1/2π) ∫_{−ω₁}^{ω₁} F(ω) H(ω) e^{jωt} dω   (1.276)

of a system with transfer function H(ω) driven by f(t). The sampling expansion of f(t) is

  f(t) = Σ_{n=−∞}^∞ g(nT) y(t − nT)   (1.277)

where

  y(t) = (1/2ω₁) ∫_{−ω₁}^{ω₁} e^{jωt} / H(ω) dω   (1.278)

1.7 Asymptotic Series
Functions such as f(z) and w(z) are defined on a set R in the complex plane. By a neighborhood of z0 we mean an open disc
|z − z₀| < δ if z₀ is at a finite distance, and a region |z| > δ if z₀ is the point at infinity.

Notation: f = O(w) and f = o(w). We write f = O(w) if there exists a constant A such that |f| ≤ A|w| for all z in R. We also write f = O(w) as z → z₀ if there exist a constant A and a neighborhood U of z₀ such that |f| ≤ A|w| for all points z in the intersection of U and R. We write f = o(w) as z → z₀ if, for any positive number ε, there exists a neighborhood U of z₀ such that |f| ≤ ε|w| for all points z of the intersection of U and R. More simply, if w does not vanish on R, f = O(w) means that f/w is bounded, and f = o(w) means that f/w tends to zero as z → z₀.

1.7.1 Asymptotic Sequence

A sequence of functions {w_n(z)} is called an asymptotic sequence as z → z₀ if there is a neighborhood of z₀ in which none of the functions vanish (except possibly at z₀) and if, for all n,

  w_{n+1} = o(w_n) as z → z₀

For example, if z₀ is finite, {(z − z₀)ⁿ} is an asymptotic sequence as z → z₀, and {z^{−n}} is one as z → ∞.

1.7.2 Poincaré Sense Asymptotic Expansion

The formal series

  f(z) ≅ Σ_{n=0}^∞ a_n w_n(z)   (1.279)

which is not necessarily convergent, is an asymptotic expansion of f(z) in the Poincaré sense with respect to the asymptotic sequence {w_n(z)} if, for every value of m,

  f(z) − Σ_{n=0}^{m} a_n w_n(z) = o(w_m(z)) as z → z₀   (1.280)

Because

  f(z) − Σ_{n=0}^{m−1} a_n w_n(z) = a_m w_m(z) + o(w_m(z))   (1.281)

the partial sum

  Σ_{n=0}^{m−1} a_n w_n(z)   (1.282)

is an approximation to f(z) with an error O(w_m) as z → z₀; this error is of the same order of magnitude as the first term omitted. If such an asymptotic expansion exists, it is unique, and the coefficients are given successively by

  a_m = lim_{z→z₀} [f(z) − Σ_{n=0}^{m−1} a_n w_n(z)] / w_m(z)   (1.283)

Hence, for a function f(z) we write

  f(z) ≅ Σ_{n=0}^∞ a_n w_n(z)   (1.284)

1.7.3 Asymptotic Approximation

A partial sum of Equation 1.284 is called an asymptotic approximation to f(z). The first term is called the dominant term. The above definition applies equally well for a real variable z.

1.7.4 Asymptotic Power Series

We shall assume that the transformation z′ = 1/(z − z₀) has been applied when the limit point z₀ is at a finite distance. Hence we can always consider expansions as z approaches infinity in a sector α < ph z < β, or, for a real variable x, as x approaches +∞ or −∞. The divergent series

  f(z) ≅ Σ_{n=0}^∞ a_n/zⁿ = a₀ + a₁/z + a₂/z² + ··· + a_n/zⁿ + ···   (1.285)

in which the sum of the first (n + 1) terms is S_n(z), is said to be an asymptotic expansion of a function f(z) for a given range of values of arg z if the remainder R_n(z) = zⁿ[f(z) − S_n(z)] satisfies the condition

  lim_{|z|→∞} R_n(z) = 0  (n fixed)

even though

  lim_{n→∞} |R_n(z)| = ∞  (z fixed)

When this is true, we can make

  |zⁿ[f(z) − S_n(z)]| < ε   (1.286)

where ε is arbitrarily small, by making |z| sufficiently large. This definition is due to Poincaré.
Example: For real x, integration on the real axis and repeated integration by parts give

  f(x) = ∫_x^∞ (1/t) e^{x−t} dt
       = 1/x − 1/x² + 2!/x³ − ··· + (−1)^{n−1}(n−1)!/xⁿ + (−1)ⁿ n! ∫_x^∞ e^{x−t}/t^{n+1} dt

If we consider the expansion with terms

  u_{n−1} = (−1)^{n−1}(n−1)!/xⁿ

and partial sums

  S_n(x) = Σ_{m=0}^{n} u_m = 1/x − 1/x² + 2!/x³ − ··· + (−1)ⁿ n!/x^{n+1}

then |u_m/u_{m−1}| = m x^{−1} → ∞ as m → ∞. The series Σu_m is divergent for all values of x. However, the series can be used to calculate f(x). For a fixed n, we can calculate S_n from the relation

  f(x) − S_n(x) = (−1)^{n+1}(n + 1)! ∫_x^∞ e^{x−t}/t^{n+2} dt

Because exp(x − t) ≤ 1 for t ≥ x,

  |f(x) − S_n(x)| = (n + 1)! ∫_x^∞ e^{x−t} t^{−n−2} dt < (n + 1)! ∫_x^∞ t^{−n−2} dt = n!/x^{n+1}

For large values of x the right-hand member of the above relation is very small. This shows that the value of f(x) can be calculated with great accuracy for large values of x by taking the sum of a suitable number of terms of the series Σu_m. From the last relation we obtain

  |xⁿ[f(x) − S_n(x)]| < n! x^{−1} → 0 as x → ∞

which satisfies the asymptotic-expansion condition.

1.7.5 Operations on Asymptotic Power Series

Let the following two functions possess asymptotic expansions on the real axis as x → ∞:

  f(x) ≅ Σ_{n=0}^∞ a_n/xⁿ,  g(x) ≅ Σ_{n=0}^∞ b_n/xⁿ

1. If A is a constant,

  A f(x) ≅ Σ_{n=0}^∞ A a_n/xⁿ   (1.287)

2. Sums may be added term by term:

  f(x) + g(x) ≅ Σ_{n=0}^∞ (a_n + b_n)/xⁿ   (1.288)

3. Products may be multiplied out:

  f(x) g(x) ≅ Σ_{n=0}^∞ c_n/xⁿ,  c_n = a₀b_n + a₁b_{n−1} + ··· + a_{n−1}b₁ + a_n b₀   (1.289)

4. If a₀ ≠ 0, then

  1/f(x) ≅ 1/a₀ + Σ_{n=1}^∞ d_n/xⁿ,  x → ∞   (1.290)

The function 1/f(x) tends to the finite limit 1/a₀ as x approaches infinity. Hence,

  x [1/f(x) − 1/a₀] = x [a₀ − f(x)] / [a₀ f(x)] = [−a₁ + O(1/x)] / [a₀² + O(1/x)] → −a₁/a₀² = d₁

Similarly we obtain

  x² [1/f(x) − 1/a₀ + a₁/(a₀² x)] → (a₁² − a₀a₂)/a₀³ = d₂

and so on. In general, any rational function of f(x) has an asymptotic power-series expansion provided that the denominator does not tend to zero as x approaches infinity.

5. If f(x) is continuous for x > a > 0, then for x > a

  F(x) = ∫_x^∞ [f(t) − a₀ − a₁/t] dt ≅ a₂/x + a₃/(2x²) + ··· + a_{n+1}/(n xⁿ) + ···   (1.291)

6. If f(x) has a continuous derivative f′(x), and if f′(x) possesses an asymptotic power-series expansion as x approaches infinity, that expansion is

  f′(x) ≅ Σ_{n=2}^∞ −(n − 1) a_{n−1}/xⁿ   (1.292)
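The divergent-but-useful character of the expansion in the example above can be seen numerically. The sketch below (added; standard library only, trapezoid rule for the reference value) accumulates partial sums S_n for x = 10 and checks the error bound |f(x) − S_n(x)| ≤ n!/x^{n+1} derived in the text:

```python
import math

def f_num(x, T=60.0, steps=200000):
    # f(x) = integral_x^inf e^{x-t}/t dt, truncated at T, trapezoid rule
    h = (T - x) / steps
    s = 0.5 * (1.0/x + math.exp(x - T)/T)
    for i in range(1, steps):
        t = x + i*h
        s += math.exp(x - t) / t
    return s * h

x = 10.0
ref = f_num(x)
S = 0.0
for n in range(1, 8):
    S += (-1)**(n - 1) * math.factorial(n - 1) / x**n
    # error bound from the text: |f(x) - S_n(x)| <= n!/x^(n+1)
    assert abs(ref - S) <= math.factorial(n) / x**(n + 1) + 1e-9
```

The bound shrinks until n is comparable to x and then grows again, which is the practical rule for where to truncate an asymptotic series.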
7. It is permissible to integrate an asymptotic expansion term by term. The resulting series is the expansion of the integral of the function represented by the original series. Let

  f(x) ≅ Σ_{m=2}^∞ a_m/x^m and S_n(x) = Σ_{m=2}^{n} a_m/x^m

Then, given any positive number ε, we can find x₀ such that

  |f(x) − S_n(x)| < ε|x|^{−n} for x > x₀

and hence

  |∫_x^∞ f(x) dx − ∫_x^∞ S_n(x) dx| ≤ ∫_x^∞ |f(x) − S_n(x)| dx < ε / [(n − 1) x^{n−1}]

so that

  ∫_x^∞ f(t) dt ≅ Σ_{m=2}^∞ a_m / [(m − 1) x^{m−1}]

Example: Consider, for real x and a > 0, the integral

  f(x, a) = ∫_x^∞ e^{jt} t^{−a} dt

Integrating by parts we obtain

  f(x, a) = j e^{jx}/x^a − j a f(x, a + 1)

and, repeating the process,

  f(x, a) = (j e^{jx}/x^a) Σ_{r=0}^{n} Γ(a + r)/[Γ(a)(jx)^r] + remainder

so that

  f(x, a) ≅ (j e^{jx}/x^a) Σ_{r=0}^∞ Γ(a + r)/[Γ(a)(jx)^r]
For a > 0, the transform of the corresponding pulse function,

  p_a(s) = { 1, if |s| < a ; 0, if a < |s| }

is

  ^[p_a]|_x = ∫_{−a}^{a} e^{−jxs} ds = (e^{jax} − e^{−jax})/(jx) = (2/x) sin(ax)

The inverse transform is

  ^{−1}[φ]|_x = (1/2π) ∫_{−∞}^∞ φ(s) e^{jxs} ds   (2.2)

Example 2.1: If φ(s) = e^{−s}u(s), then

  ^[φ]|_x = ∫_{−∞}^∞ e^{−s}u(s) e^{−jxs} ds = ∫₀^∞ e^{−(1+jx)s} ds = 1/(1 + jx)

and

  ^{−1}[φ]|_x = (1/2π) ∫₀^∞ e^{−(1−jx)s} ds = 1/(2π − j2πx)

A function, ψ, is said to be "classically transformable" if either

1. ψ is absolutely integrable on the real line,
2. ψ is the Fourier transform (or Fourier inverse transform) of an absolutely integrable function, or
3. ψ is a linear combination of an absolutely integrable function and a Fourier transform (or Fourier inverse transform) of an absolutely integrable function.

If φ is classically transformable but not absolutely integrable, then it can be shown that formulas 2.1 and 2.2 can still be used to define ^[φ] and ^{−1}[φ] provided the limits are taken symmetrically; that is,

  ^[φ]|_x = lim_{a→∞} ∫_{−a}^{a} φ(s) e^{−jxs} ds

and

  ^{−1}[φ]|_x = (1/2π) lim_{a→∞} ∫_{−a}^{a} φ(s) e^{jxs} ds
In most applications involving Fourier transforms, the functions of time, t, or position, x, are denoted using lower case letters—for example: f and g. The Fourier transforms of these functions are denoted using the corresponding upper case letters—for example: F ¼ ^[ f] and G ¼ ^[g]. The transformed functions can be viewed as functions of angular frequency, v. Along these same lines it is standard practice to view a signal as a pair of functions, f(t) and F(v), with f(t) being the ‘‘time domain representation of the signal’’ and F(v) being the ‘‘frequency domain representation of the signal.’’
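The pulse-function transform above can be confirmed by evaluating the defining integral directly (an added sketch, not from the original text; a trapezoid-rule Riemann sum of the real and imaginary parts, standard library only):

```python
import math

# Check ^[p_a]|x = (2/x) sin(a x) by numerically integrating e^{-jxs} over [-a, a]
a, x = 1.0, 2.5
steps = 100000
h = 2 * a / steps
re = 0.5 * (math.cos(x * a) + math.cos(-x * a))   # endpoints s = -a and s = a
im = 0.5 * (math.sin(x * a) + math.sin(-x * a))
for i in range(1, steps):
    s = -a + i * h
    re += math.cos(-x * s)
    im += math.sin(-x * s)
re, im = re * h, im * h

assert abs(re - 2 * math.sin(a * x) / x) < 1e-8
assert abs(im) < 1e-8   # the imaginary part cancels by symmetry
```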
Fourier Transforms
2.1.2 Alternate Definitions

Pairs of formulas other than formulas 2.1 and 2.2 are often used to define ^[φ] and ^{−1}[φ]. Some of the other formula pairs commonly used are

  ^[φ]|_x = ∫_{−∞}^∞ φ(s) e^{−j2πxs} ds,  ^{−1}[φ]|_x = ∫_{−∞}^∞ φ(s) e^{j2πxs} ds   (2.3)

and

  ^[φ]|_x = (1/√(2π)) ∫_{−∞}^∞ φ(s) e^{−jxs} ds,  ^{−1}[φ]|_x = (1/√(2π)) ∫_{−∞}^∞ φ(s) e^{jxs} ds   (2.4)

Equivalent analysis can be performed using the theory arising from any of these pairs; however, the resulting formulas and equations will depend on which pair is used. For this reason care must be taken to ensure that, in any particular application, all the Fourier analysis formulas and equations used are derived from the same defining pair of formulas.

Example 2.3: Let f(t) = e^{−t}u(t) and let ψ₁, ψ₂, and ψ₃ be the Fourier transforms of f as defined, respectively, by formulas 2.1, 2.3, and 2.4. Then

  ψ₁(ω) = ∫_{−∞}^∞ e^{−t}u(t) e^{−jtω} dt = 1/(1 + jω)
  ψ₂(ω) = ∫_{−∞}^∞ e^{−t}u(t) e^{−j2πtω} dt = 1/(1 + j2πω)
  ψ₃(ω) = (1/√(2π)) ∫_{−∞}^∞ e^{−t}u(t) e^{−jtω} dt = (1/√(2π)) · 1/(1 + jω)

2.1.3 The Generalized Transforms

Many functions and generalized functions* arising in applications are not sufficiently integrable to apply the definitions given in Section 2.1.1 directly. For such functions it is necessary to employ a generalized definition of the Fourier transform constructed using the set of "rapidly decreasing test functions" and a version of Parseval's equation (see Section 2.2.14). A function, φ, is a "rapidly decreasing test function" if

1. Every derivative of φ exists and is a continuous function on (−∞, ∞), and
2. For every pair of nonnegative integers, n and p,

  φ^{(n)}(s) = O(|s|^{−p}) as |s| → ∞.

The set of all rapidly decreasing test functions is denoted by 6 and includes the Gaussian functions as well as all test functions that vanish outside of some finite interval (such as those discussed in Chapter 1). If φ is a rapidly decreasing test function, then it is easily verified that φ is classically transformable and that both ^[φ] and ^{−1}[φ] are also rapidly decreasing test functions. It can also be shown that ^{−1}[^[φ]] = φ. Moreover, if f and G are classically transformable, then

  ∫_{−∞}^∞ ^[f]|_x φ(x) dx = ∫_{−∞}^∞ f(y) ^[φ]|_y dy   (2.5)

and

  ∫_{−∞}^∞ ^{−1}[G]|_x φ(x) dx = ∫_{−∞}^∞ G(y) ^{−1}[φ]|_y dy   (2.6)

If f is a function or a generalized function for which the right-hand side of Equation 2.5 is well defined for every rapidly decreasing test function, φ, then the generalized Fourier transform of f, ^[f], is that generalized function satisfying Equation 2.5 for every φ in 6. Likewise, if G is a function or generalized function for which the right-hand side of Equation 2.6 is well defined for every rapidly decreasing test function, φ, then the generalized inverse Fourier transform of G, ^{−1}[G], is that generalized function satisfying Equation 2.6 for every φ in 6.

Example 2.4: Let a be any real number. Then, for every rapidly decreasing test function φ,

  ∫_{−∞}^∞ ^[e^{jay}]|_x φ(x) dx = ∫_{−∞}^∞ e^{jay} ^[φ]|_y dy = 2π [(1/2π) ∫_{−∞}^∞ ^[φ]|_y e^{jay} dy]
                                 = 2π ^{−1}[^[φ]]|_a = 2πφ(a) = ∫_{−∞}^∞ 2πδ(x − a) φ(x) dx

* For a detailed discussion of generalized functions, see Chapter 1.
where $\delta(x)$ is the delta function. This shows that, for every $\phi$ in $\mathcal{S}$,

$$\int_{-\infty}^{\infty} 2\pi\,\delta(x-a)\,\phi(x)\,dx = \int_{-\infty}^{\infty} e^{jay}\,\mathcal{F}[\phi]\big|_y\,dy,$$

and thus,

$$\mathcal{F}[e^{jay}]\big|_x = 2\pi\,\delta(x-a).$$

Any (generalized) function whose Fourier transform can be computed via the above generalized definition is called "transformable." The set of all such functions is sometimes called the set of "tempered generalized functions" or the set of "tempered distributions." This set includes any piecewise continuous function, f, that is also polynomially bounded, that is, that satisfies

$$|f(s)| = O(|s|^p) \quad\text{as } |s|\to\infty$$

for some $p < \infty$. Finally, it should also be noted that if f is classically transformable, then it is transformable, and the generalized definition of $\mathcal{F}[f]$ yields exactly the same function as the classical definition.

2.1.4 Further Generalization of the Generalized Transforms

Unfortunately, even with the theory discussed in Section 2.1.3, it is not possible to define or discuss the Fourier transform of the real exponential, $e^t$. It may be of interest to note, however, that a further generalization that does permit all exponentially bounded functions to be considered "Fourier transformable" is currently being developed using a recently discovered alternate set of test functions. This alternate set, denoted by $\mathcal{E}$, is the subset of the rapidly decreasing test functions satisfying the following two additional properties:

1. Each test function is analytic on the entire complex plane.
2. Each test function, $\phi(x+jy)$, satisfies $\phi(x+jy) = O(e^{-a|x|})$ as $x\to\pm\infty$ for every real value of y and a.

The second additional property ensures that all exponentially bounded functions are covered by this theory. The very same computations given in Example 2.4 can be used to show that, for any complex value, $a+jb$,

$$\mathcal{F}\big[e^{j(a+jb)t}\big]\Big|_\omega = 2\pi\,\delta_{a+jb}(\omega),$$

where $\delta_{a+jb}(t)$ is "the delta function at $a+jb$." This delta function, $\delta_{a+jb}(t)$, is the generalized function satisfying

$$\int_{-\infty}^{\infty} \delta_{a+jb}(t)\,\phi(t)\,dt = \phi(a+jb)$$

for every test function, $\phi(t)$, in $\mathcal{E}$. In particular, letting $a+jb = j$,

$$\mathcal{F}[e^{-t}]\big|_\omega = 2\pi\,\delta_j(\omega) \quad\text{and}\quad \mathcal{F}[\delta_j(t)]\big|_\omega = e^{\omega}.$$

In addition to allowing delta functions to be defined at complex points, the analyticity of the test functions allows a generalization of translation. Let $a+jb$ be any complex number and f(t) any (exponentially bounded) (generalized) function. The "generalized translation of f(t) by $a+jb$," denoted by $T_{a+jb}f(t)$, is that generalized function satisfying

$$\int_{-\infty}^{\infty} T_{a+jb}f(t)\,\phi(t)\,dt = \int_{-\infty}^{\infty} f(t)\,\phi(t+(a+jb))\,dt \tag{2.7}$$

for every test function, $\phi(t)$, in $\mathcal{E}$. So long as $b = 0$ or f(t) is, itself, an analytic function on the entire complex plane, the generalized translation is exactly the same as the classical translation,

$$T_{a+jb}f(t) = f(t-(a+jb)).$$

It may be observed, however, that Equation 2.7 defines the generalized function $T_{a+jb}f$ even when f(z) is not defined for nonreal values of z.
2.1.5 Use of the Residue Theorem

Often a Fourier transform or inverse transform can be described as an integral of a function that either is analytic on the entire complex plane, or else has a few isolated poles in the complex plane. Such integrals can often be evaluated through intelligent use of the residue theorem from complex analysis (see Appendix A). Two examples illustrating such use of the residue theorem will be given in this section. The first example illustrates its use when the function is analytic throughout the complex plane, while the second example illustrates its use when the function has poles off the real axis. The use of the residue theorem to compute transforms when the function has poles on the real axis will be discussed in Section 2.1.6.
Example 2.5: Transform of an Analytic Function

Consider computing the Fourier transform of $g(t) = e^{-t^2}$,

$$G(\omega) = \mathcal{F}[g(t)]\big|_\omega = \int_{-\infty}^{\infty} e^{-t^2}\,e^{-j\omega t}\,dt.$$

Because

$$t^2 + j\omega t = \left(t + j\frac{\omega}{2}\right)^{2} + \frac{\omega^2}{4},$$

it follows that

$$G(\omega) = e^{-\frac{1}{4}\omega^2}\int_{-\infty}^{\infty}\exp\!\left(-\left(t+j\frac{\omega}{2}\right)^{2}\right)dt = e^{-\frac{1}{4}\omega^2}\,\lim_{\gamma\to\infty}\int_{C_{3,\gamma}} e^{-z^2}\,dz, \tag{2.8}$$

where $C_{3,\gamma}$ is the segment of the line $y = \omega/2$ from $-\gamma + j\omega/2$ to $\gamma + j\omega/2$. Consider, now, the integral of $e^{-z^2}$ over the contour $C_\gamma = C_{1,\gamma} + C_{2,\gamma} + C_{3,\gamma} + C_{4,\gamma}$ sketched in Figure 2.1. Because $e^{-z^2}$ is analytic everywhere on the complex plane, the residue theorem (applied to the rectangle with corners $\pm\gamma$ and $\pm\gamma + j\omega/2$) gives

$$\int_{C_{3,\gamma}} e^{-z^2}\,dz = \int_{C_{1,\gamma}} e^{-z^2}\,dz + \int_{C_{2,\gamma}} e^{-z^2}\,dz - \int_{C_{4,\gamma}} e^{-z^2}\,dz, \tag{2.9}$$

where $C_{1,\gamma}$ is the segment of the real axis from $x = -\gamma$ to $x = \gamma$, and $C_{2,\gamma}$ and $C_{4,\gamma}$ are the vertical segments from $\gamma$ to $\gamma + j\omega/2$ and from $-\gamma$ to $-\gamma + j\omega/2$, respectively. Now,

$$\int_{C_{1,\gamma}} e^{-z^2}\,dz = \int_{x=-\gamma}^{\gamma} e^{-x^2}\,dx,$$

while

$$\left|\int_{C_{2,\gamma}} e^{-z^2}\,dz\right| = \left|\int_{y=0}^{\omega/2} e^{-(\gamma+jy)^2}\,j\,dy\right| = \left|e^{-\gamma^2}\int_{0}^{\omega/2} e^{y^2}\,e^{-j2\gamma y}\,dy\right| \le e^{-\gamma^2}\int_{0}^{\omega/2} e^{y^2}\,dy \;\longrightarrow\; 0 \quad\text{as } \gamma\to\infty.$$

Likewise,

$$\lim_{\gamma\to\infty}\int_{C_{4,\gamma}} e^{-z^2}\,dz = 0.$$

That remaining integral is well known:

$$\lim_{\gamma\to\infty}\int_{x=-\gamma}^{\gamma} e^{-x^2}\,dx = \sqrt{\pi}.$$

Combining Equations 2.8 and 2.9 with the above limits yields

$$\mathcal{F}\big[e^{-t^2}\big]\Big|_\omega = G(\omega) = e^{-\frac{1}{4}\omega^2}\sqrt{\pi} = \sqrt{\pi}\,e^{-\frac{1}{4}\omega^2}.$$

FIGURE 2.1  Contour for computing $\mathcal{F}[e^{-t^2}]$: the rectangle with horizontal sides $C_{1,\gamma}$ (on the real axis, from $x=-\gamma$ to $x=\gamma$) and $C_{3,\gamma}$ (on the line $y=\omega/2$), joined by the vertical segments $C_{2,\gamma}$ (at $x=\gamma$) and $C_{4,\gamma}$ (at $x=-\gamma$).

Example 2.6: Transform of a Function with a Pole Off the Real Axis

Consider computing the Fourier inverse transform of $F(\omega) = (1+\omega^2)^{-1}$,

$$f(t) = \mathcal{F}^{-1}[F(\omega)]\big|_t = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{jt\omega}}{1+\omega^2}\,d\omega. \tag{2.10}$$

For $t = 0$,

$$f(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{1+\omega^2}\,d\omega = \frac{1}{2\pi}\arctan\omega\,\Big|_{-\infty}^{\infty} = \frac{1}{2}. \tag{2.11}$$

To evaluate f(t) when $t \neq 0$, first observe that the integrand in formula 2.10, viewed as a function of the complex variable z,

$$F(z) = \frac{e^{jtz}}{1+z^2},$$
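The contour-integration result of Example 2.5 can be confirmed by direct numerical quadrature. The following sketch (an illustrative addition, using SciPy, not part of the handbook's text) compares $\int e^{-t^2}e^{-j\omega t}\,dt$ with $\sqrt{\pi}\,e^{-\omega^2/4}$ at a few frequencies:

```python
import numpy as np
from scipy.integrate import quad

def gauss_ft(w):
    # F[e^{-t^2}](w) = int e^{-t^2} e^{-jwt} dt.
    # The sine (imaginary) part vanishes by symmetry, so only the cosine
    # part is computed.
    return quad(lambda t: np.exp(-t**2) * np.cos(w * t), -np.inf, np.inf)[0]

for w in (0.0, 1.0, 2.5):
    exact = np.sqrt(np.pi) * np.exp(-w**2 / 4)
    assert abs(gauss_ft(w) - exact) < 1e-8
```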
has simple poles at $z = \pm j$. The residue at $z = j$ is

$$\operatorname{Res}_{j}[F] = \lim_{z\to j}(z-j)F(z) = \lim_{z\to j}(z-j)\frac{e^{jtz}}{(z-j)(z+j)} = \frac{1}{2j}\,e^{-t},$$

while the residue at $z = -j$ is

$$\operatorname{Res}_{-j}[F] = \lim_{z\to -j}(z+j)F(z) = -\frac{1}{2j}\,e^{t}.$$

For each $\gamma > 1$, let $C_\gamma$, $C_{+,\gamma}$, and $C_{-,\gamma}$ be the curves sketched in Figure 2.2. By the residue theorem:

$$\int_{C_\gamma}\frac{e^{jtz}}{1+z^2}\,dz + \int_{C_{+,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz = 2\pi j\operatorname{Res}_{j}[F] = \pi e^{-t} \tag{2.12}$$

and

$$\int_{C_\gamma}\frac{e^{jtz}}{1+z^2}\,dz + \int_{C_{-,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz = -2\pi j\operatorname{Res}_{-j}[F] = \pi e^{t},$$

the minus sign in the second equation arising because $C_\gamma$ followed by $C_{-,\gamma}$ encircles the pole at $z=-j$ in the clockwise direction. Thus,

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{jt\omega}}{1+\omega^2}\,d\omega = \frac{1}{2\pi}\lim_{\gamma\to\infty}\int_{C_\gamma}\frac{e^{jtz}}{1+z^2}\,dz = \frac{1}{2\pi}\left[\pi e^{-t} - \lim_{\gamma\to\infty}\int_{C_{+,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz\right]. \tag{2.13}$$

Now,

$$\left|\int_{C_{+,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz\right| = \left|\int_{0}^{\pi}\frac{e^{jt\gamma(\cos\theta + j\sin\theta)}}{1+\gamma^2 e^{j2\theta}}\,\gamma e^{j\theta}j\,d\theta\right| \le \int_{0}^{\pi}\frac{e^{-t\gamma\sin\theta}\,\gamma}{\gamma^2-1}\,d\theta,$$

and, because $0 < \gamma\sin\theta$ whenever $0 < \theta < \pi$,

$$0 \le e^{-t\gamma\sin\theta} \le 1 \quad\text{whenever } t > 0.$$

Thus, for $t > 0$,

$$\lim_{\gamma\to\infty}\left|\int_{C_{+,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz\right| \le \lim_{\gamma\to\infty}\int_{0}^{\pi}\frac{\gamma}{\gamma^2-1}\,d\theta = \lim_{\gamma\to\infty}\frac{\pi\gamma}{\gamma^2-1} = 0.$$

Combining this last result with Equation 2.13 gives

$$f(t) = \frac{1}{2\pi}\left[\pi e^{-t} - \lim_{\gamma\to\infty}\int_{C_{+,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz\right] = \frac{1}{2}\,e^{-t} \tag{2.14}$$

whenever $t > 0$.

FIGURE 2.2  Contours for computing $\mathcal{F}^{-1}[(1+\omega^2)^{-1}]$: $C_\gamma$ is the segment of the real axis from $-\gamma$ to $\gamma$; $C_{+,\gamma}$ and $C_{-,\gamma}$ are the semicircles of radius $\gamma$ in the upper and lower half-planes, enclosing the poles at $z = j$ and $z = -j$, respectively.

In a similar fashion, it is easy to show that, if $t < 0$,

$$\lim_{\gamma\to\infty}\left|\int_{C_{-,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz\right| \le \lim_{\gamma\to\infty}\int_{\pi}^{2\pi}\frac{e^{-t\gamma\sin\theta}\,\gamma}{\gamma^2-1}\,d\theta \le \lim_{\gamma\to\infty}\frac{\pi\gamma}{\gamma^2-1} = 0,$$

which, combined with the second residue equation above, yields

$$f(t) = \frac{1}{2\pi}\left[\pi e^{t} - \lim_{\gamma\to\infty}\int_{C_{-,\gamma}}\frac{e^{jtz}}{1+z^2}\,dz\right] = \frac{1}{2}\,e^{t} \tag{2.15}$$

whenever $t < 0$. Finally, it should be noted that formulas 2.11, 2.14, and 2.15 can be written more concisely as

$$f(t) = \frac{1}{2}\,e^{-|t|}.$$
2.1.6 Cauchy Principal Values

The Cauchy principal value (CPV) at $x = x_0$ of an integral, $\int_{-\infty}^{\infty}\phi(x)\,dx$, is

$$\mathrm{CPV}\!\int_{-\infty}^{\infty}\phi(x)\,dx = \lim_{\epsilon\to 0^{+}}\left[\int_{-\infty}^{x_0-\epsilon}\phi(x)\,dx + \int_{x_0+\epsilon}^{\infty}\phi(x)\,dx\right],$$

provided the limit exists. So long as $\phi$ is an integrable function, it should be clear that

$$\mathrm{CPV}\!\int_{-\infty}^{\infty}\phi(x)\,dx = \int_{-\infty}^{\infty}\phi(x)\,dx.$$

It is when $\phi$ is not an integrable function that the CPV is useful. In particular, the Fourier transform and Fourier inverse transform of any function with a singularity of the form $(x-x_0)^{-1}$ can be evaluated as the CPVs at $x = x_0$ of the integrals in formulas 2.1 and 2.2.

Example 2.7

Consider evaluating the inverse transform of $F(\omega) = \omega^{-1}$. Because of the $\omega^{-1}$ singularity, $f = \mathcal{F}^{-1}[F]$ is given by

$$f(t) = \frac{1}{2\pi}\,\mathrm{CPV}\!\int_{-\infty}^{\infty}\frac{1}{\omega}\,e^{j\omega t}\,d\omega \tag{2.16}$$

or, equivalently, by

$$f(t) = \frac{1}{2\pi}\lim_{\substack{\epsilon\to 0^{+}\\ R\to+\infty}}\left[\int_{-R}^{-\epsilon}\frac{1}{z}\,e^{jtz}\,dz + \int_{\epsilon}^{R}\frac{1}{z}\,e^{jtz}\,dz\right].$$

Because $\omega^{-1}$ is an odd function, f(0) is easily evaluated,

$$f(0) = \frac{1}{2\pi}\lim_{\substack{\epsilon\to 0^{+}\\ R\to+\infty}}\left[\int_{-R}^{-\epsilon}\frac{1}{\omega}\,d\omega + \int_{\epsilon}^{R}\frac{1}{\omega}\,d\omega\right] = 0. \tag{2.17}$$

To evaluate f(t) when $t > 0$, first observe that the only pole of the integrand in formula 2.16,

$$F(z) = \frac{1}{z}\,e^{jtz},$$

is at $z = 0$. For each $0 < \epsilon < R$, let $C_\epsilon$ and $C_R$ be the semicircles indicated in Figure 2.3. By the residue theorem,

$$\int_{-R}^{-\epsilon}\frac{1}{z}\,e^{jtz}\,dz + \int_{C_\epsilon}\frac{1}{z}\,e^{jtz}\,dz + \int_{\epsilon}^{R}\frac{1}{z}\,e^{jtz}\,dz + \int_{C_R}\frac{1}{z}\,e^{jtz}\,dz = 0.$$

This, combined with Equation 2.16, yields

$$f(t) = -\frac{1}{2\pi}\left[\lim_{\epsilon\to 0^{+}}\int_{C_\epsilon}\frac{1}{z}\,e^{jtz}\,dz + \lim_{R\to\infty}\int_{C_R}\frac{1}{z}\,e^{jtz}\,dz\right], \tag{2.18}$$

provided the limits exist. Now,

$$\lim_{\epsilon\to 0^{+}}\int_{C_\epsilon}\frac{1}{z}\,e^{jtz}\,dz = \lim_{\epsilon\to 0^{+}}\int_{\pi}^{0}\frac{1}{\epsilon e^{j\theta}}\,e^{jt\epsilon(\cos\theta+j\sin\theta)}\,j\epsilon e^{j\theta}\,d\theta = j\lim_{\epsilon\to 0^{+}}\int_{\pi}^{0} e^{\epsilon t(-\sin\theta+j\cos\theta)}\,d\theta = j\int_{\pi}^{0} e^{0}\,d\theta = -j\pi. \tag{2.19}$$

FIGURE 2.3  Contour for computing $\mathcal{F}^{-1}[\omega^{-1}]$: $C_\epsilon$ is the small semicircle of radius $\epsilon$ passing over the pole at the origin ($x=\epsilon$), and $C_R$ is the large semicircle of radius R in the upper half-plane ($x=R$).
Similarly,

$$\int_{C_R}\frac{1}{z}\,e^{jtz}\,dz = j\int_{0}^{\pi} e^{Rt(-\sin\theta+j\cos\theta)}\,d\theta.$$

Here, because $t > 0$, the integrand is uniformly bounded and vanishes as $R\to\infty$. Thus,

$$\lim_{R\to\infty}\int_{C_R}\frac{1}{z}\,e^{jtz}\,dz = 0. \tag{2.20}$$

With Equations 2.19 and 2.20, Equation 2.18 becomes

$$f(t) = -\frac{1}{2\pi}\left[\lim_{\epsilon\to 0^{+}}\int_{C_\epsilon}\frac{1}{z}\,e^{jtz}\,dz + \lim_{R\to\infty}\int_{C_R}\frac{1}{z}\,e^{jtz}\,dz\right] = \frac{j}{2}. \tag{2.21}$$

By replacing $C_\epsilon$ and $C_R$ with corresponding semicircles in the lower half-plane, the approach used to evaluate f(t) when $0 < t$ can be used to evaluate f(t) when $t < 0$. The computations are virtually identical, except for a reversal of the orientation of the contour of integration, and yield

$$f(t) = -\frac{j}{2} \tag{2.22}$$

when $t < 0$. Finally, it should be noted that formulas 2.17, 2.21, and 2.22 can be written more concisely as

$$\mathcal{F}^{-1}\!\left[\frac{1}{\omega}\right]\Big|_{t} = f(t) = \frac{j}{2}\,\mathrm{sgn}(t).$$

2.2 General Identities and Relations

Some of the more general identities commonly used in computing and manipulating Fourier transforms and inverse transforms are described here. Brief (nonrigorous) derivations of some are presented, usually employing the classical transforms (formulas 2.1 and 2.2). Unless otherwise stated, however, each identity may be assumed to hold for the generalized transforms as well.

2.2.1 Invertibility

The Fourier transform and the Fourier inverse transform, $\mathcal{F}$ and $\mathcal{F}^{-1}$, are operational inverses, that is,

$$\psi = \mathcal{F}[f] \iff \mathcal{F}^{-1}[\psi] = f.$$

Equivalently,

$$\mathcal{F}^{-1}\big[\mathcal{F}[f]\big] = f \quad\text{and}\quad \mathcal{F}\big[\mathcal{F}^{-1}[F]\big] = F.$$

Example 2.8

Because $\mathcal{F}[e^{-t}u(t)]\big|_\omega = (1+j\omega)^{-1}$ (see Example 2.1),

$$\mathcal{F}^{-1}\!\left[\frac{1}{1+j\omega}\right]\Big|_{t} = e^{-t}u(t).$$

2.2.2 Near-Equivalence (Symmetry of the Transforms)

Computationally, the classical formulas for $\mathcal{F}[\phi(s)]|_x$ and $\mathcal{F}^{-1}[\phi(s)]|_x$ (formulas 2.1 and 2.2) are virtually the same, differing only by the sign in the exponential and the factor of $(2\pi)^{-1}$ in Equation 2.2. Observing that

$$\int_{-\infty}^{\infty}\phi(s)\,e^{-jxs}\,ds = 2\pi\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}\phi(s)\,e^{j(-x)s}\,ds\right] = 2\pi\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}\phi(-s)\,e^{jxs}\,ds\right]$$

leads to the "near-equivalence" identity,

$$\mathcal{F}[\phi(s)]\big|_x = 2\pi\,\mathcal{F}^{-1}[\phi(s)]\big|_{-x} = 2\pi\,\mathcal{F}^{-1}[\phi(-s)]\big|_x. \tag{2.23}$$

Likewise,

$$\mathcal{F}^{-1}[\phi(s)]\big|_x = \frac{1}{2\pi}\,\mathcal{F}[\phi(s)]\big|_{-x} = \frac{1}{2\pi}\,\mathcal{F}[\phi(-s)]\big|_x. \tag{2.24}$$

Example 2.9

Using near-equivalence and the results of Example 2.1,

$$\mathcal{F}\big[e^{s}u(-s)\big]\Big|_x = 2\pi\,\mathcal{F}^{-1}\big[e^{-s}u(s)\big]\Big|_x = 2\pi\cdot\frac{1}{2\pi}\,\frac{1}{1-jx} = \frac{1}{1-jx}.$$

2.2.3 Conjugation of Transforms

Using $z^{*}$ to denote the complex conjugate of any complex quantity, z, it can be observed that

$$\left(\int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt\right)^{\!*} = \int_{-\infty}^{\infty} f^{*}(t)\,e^{j\omega t}\,dt.$$

Thus,

$$\mathcal{F}[f]^{*} = 2\pi\,\mathcal{F}^{-1}[f^{*}]. \tag{2.25}$$
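The principal-value result of Example 2.7 can be checked numerically. The sketch below (an illustrative addition, not from the handbook) uses the sine integral $\mathrm{Si}$ from SciPy: the cosine part of the integrand in Equation 2.16 is odd in $\omega$, so its principal value vanishes, and the sine part converges to $\pi\,\mathrm{sgn}(t)$:

```python
import numpy as np
from scipy.special import sici

def inv_ft_recip(t):
    # f(t) = (1/2pi) CPV int e^{jwt}/w dw.
    # cos(wt)/w is odd in w, so its principal value is 0; sin(wt)/w is even,
    # and int_0^X sin(wt)/w dw = Si(X|t|) * sgn(t) -> (pi/2) sgn(t).
    si, _ = sici(1e6 * abs(t))
    return 1j * np.sign(t) * 2 * si / (2 * np.pi)

for t in (-1.0, 0.5, 2.0):
    assert abs(inv_ft_recip(t) - 0.5j * np.sign(t)) < 1e-4
```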
Likewise,

$$\mathcal{F}^{-1}[f]^{*} = \frac{1}{2\pi}\,\mathcal{F}[f^{*}]. \tag{2.26}$$

2.2.4 Linearity

If $\alpha$ and $\beta$ are any two scalar constants, then it follows from the linearity of the integral that

$$\mathcal{F}[\alpha f + \beta g] = \alpha\,\mathcal{F}[f] + \beta\,\mathcal{F}[g]$$

and

$$\mathcal{F}^{-1}[\alpha F + \beta G] = \alpha\,\mathcal{F}^{-1}[F] + \beta\,\mathcal{F}^{-1}[G].$$

Example 2.10

Using linearity and the transforms computed in Examples 2.1 and 2.9,

$$\mathcal{F}\big[e^{-|t|}\big]\Big|_\omega = \mathcal{F}\big[e^{-t}u(t) + e^{t}u(-t)\big]\Big|_\omega = \frac{1}{1+j\omega} + \frac{1}{1-j\omega} = \frac{2}{1+\omega^2}$$

and

$$\mathcal{F}\big[\mathrm{sgn}(t)\,e^{-|t|}\big]\Big|_\omega = \mathcal{F}\big[e^{-t}u(t) - e^{t}u(-t)\big]\Big|_\omega = \frac{1}{1+j\omega} - \frac{1}{1-j\omega} = \frac{-2\omega j}{1+\omega^2}.$$

2.2.5 Scaling

If $\alpha$ is any nonzero real number, then, using the substitution $\tau = \alpha t$,

$$\int_{-\infty}^{\infty} f(\alpha t)\,e^{-jt\omega}\,dt = \frac{1}{|\alpha|}\int_{-\infty}^{\infty} f(\tau)\,e^{-j\tau\frac{\omega}{\alpha}}\,d\tau.$$

Letting $F(\omega) = \mathcal{F}[f(t)]|_\omega$, this can be rewritten as

$$\mathcal{F}[f(\alpha t)]\big|_\omega = \frac{1}{|\alpha|}\,F\!\left(\frac{\omega}{\alpha}\right). \tag{2.27}$$

Likewise,

$$\mathcal{F}^{-1}[F(\alpha\omega)]\big|_t = \frac{1}{|\alpha|}\,f\!\left(\frac{t}{\alpha}\right). \tag{2.28}$$

Example 2.11

Using identity 2.27 and the results from Example 2.10:

$$\mathcal{F}\big[e^{-|\alpha t|}\big]\Big|_\omega = \frac{1}{|\alpha|}\cdot\frac{2}{1+\left(\frac{\omega}{\alpha}\right)^{2}} = \frac{2|\alpha|}{\alpha^2+\omega^2}.$$

2.2.6 Translation and Multiplication by Exponentials

If $F(\omega) = \mathcal{F}[f(t)]|_\omega$ and $\alpha$ is any real number, then

$$\mathcal{F}[f(t-\alpha)]\big|_\omega = e^{-j\alpha\omega}F(\omega), \tag{2.29}$$
$$\mathcal{F}[e^{j\alpha t}f(t)]\big|_\omega = F(\omega-\alpha), \tag{2.30}$$
$$\mathcal{F}^{-1}[F(\omega-\alpha)]\big|_t = e^{j\alpha t}f(t), \tag{2.31}$$

and

$$\mathcal{F}^{-1}[e^{-j\alpha\omega}F(\omega)]\big|_t = f(t-\alpha). \tag{2.32}$$

These formulas are easily derived from the classical definitions. Identity 2.30, for example, comes directly from the observation that

$$\int_{-\infty}^{\infty} e^{j\alpha t}f(t)\,e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} f(t)\,e^{-j(\omega-\alpha)t}\,dt.$$

In general, identities 2.29 through 2.32 are not valid when $\alpha$ is not a real number. An exception to this occurs when f is an analytic function on the entire complex plane. Then identities 2.29 and 2.32 do hold for all complex values of $\alpha$. Likewise, identities 2.30 and 2.31 may be used whenever $\alpha$ is complex provided F is an analytic function on the entire complex plane.

Example 2.12

Let $g(t) = e^{-t^2}$. It can be shown that g(t) is analytic on the entire complex plane and that its Fourier transform is

$$G(\omega) = \sqrt{\pi}\,\exp\!\left(-\frac{1}{4}\omega^2\right)$$

(see Example 2.5 or Example 2.18). If b is any real value, then

$$\mathcal{F}\big[e^{-t^2+2bt}\big]\Big|_\omega = \mathcal{F}\big[e^{j(-j2b)t}\,e^{-t^2}\big]\Big|_\omega = \sqrt{\pi}\,\exp\!\left(-\frac{1}{4}\big(\omega-(-j2b)\big)^{2}\right) = \sqrt{\pi}\,e^{b^2}\exp\!\left(-\frac{1}{4}\omega^2 - jb\omega\right).$$

2.2.7 Complex Translation and Multiplication by Real Exponentials

Using the "generalized" notion of translation discussed in Section 2.1.4, it can be shown that for any complex value, $a+jb$,

$$\mathcal{F}\big[T_{a+jb}f(t)\big]\Big|_\omega = e^{-j(a+jb)\omega}F(\omega),$$
$$\mathcal{F}\big[e^{j(a+jb)t}f(t)\big]\Big|_\omega = T_{a+jb}F(\omega),$$
$$\mathcal{F}^{-1}\big[T_{a+jb}F(\omega)\big]\Big|_t = e^{j(a+jb)t}f(t),$$
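The scaling identity 2.27, together with the closed form of Example 2.11, is easy to verify numerically. This sketch (an illustrative addition, not part of the handbook's text) compares the quadrature of $\mathcal{F}[f(\alpha t)]$ with $\frac{1}{|\alpha|}F(\omega/\alpha)$ for $f(t)=e^{-|t|}$:

```python
import numpy as np
from scipy.integrate import quad

a = 3.0  # the (real, nonzero) scaling constant alpha

def ft_even(f, w):
    # For an even f, F(w) = int f(t) cos(wt) dt (the sine part cancels).
    return quad(lambda t: f(t) * np.cos(w * t), -np.inf, np.inf)[0]

F = lambda w: 2.0 / (1.0 + w**2)   # F[e^{-|t|}] from Example 2.10

for w in (0.0, 1.0, 4.0):
    lhs = ft_even(lambda t: np.exp(-abs(a * t)), w)   # F[f(at)](w)
    rhs = F(w / a) / abs(a)                           # (1/|a|) F(w/a)
    assert abs(lhs - rhs) < 1e-6
    # Both agree with the closed form 2|a|/(a^2 + w^2) of Example 2.11:
    assert abs(lhs - 2 * abs(a) / (a**2 + w**2)) < 1e-6
```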
and

$$\mathcal{F}^{-1}\big[e^{-j(a+jb)\omega}F(\omega)\big]\Big|_t = T_{a+jb}f(t).$$

Letting $a = 0$ and $b = \gamma$, these identities become

$$\mathcal{F}\big[T_{j\gamma}f(t)\big]\Big|_\omega = e^{\gamma\omega}F(\omega),$$
$$\mathcal{F}\big[e^{-\gamma t}f(t)\big]\Big|_\omega = T_{j\gamma}F(\omega),$$
$$\mathcal{F}^{-1}\big[T_{j\gamma}F(\omega)\big]\Big|_t = e^{-\gamma t}f(t),$$

and

$$\mathcal{F}^{-1}\big[e^{\gamma\omega}F(\omega)\big]\Big|_t = T_{j\gamma}f(t).$$

Caution must be exercised in the use of these formulas. It is true that $T_{a+jb}f(t) = f(t-(a+jb))$ whenever $b = 0$ or f(z) is analytic on the entire complex plane. However, if f(z) is not analytic and $b \neq 0$, then it is quite possible that $T_{a+jb}f(t) \neq f(t-(a+jb))$, even if $f(t-(a+jb))$ is well defined. In these cases $T_{a+jb}f(t)$ should be treated formally.

Example 2.13

By the above,

$$\mathcal{F}\big[e^{t}u(t)\big]\Big|_\omega = \mathcal{F}\big[e^{2t}\,e^{-t}u(t)\big]\Big|_\omega = T_{-2j}\!\left[\frac{1}{1+j\omega}\right].$$

Note, however, that

$$\mathcal{F}\big[-e^{t}u(-t)\big]\Big|_\omega = \frac{-1}{1-j\omega} = \frac{1}{1+j(\omega-(-2j))}.$$

Because $e^{t}u(t)$ and $-e^{t}u(-t)$ certainly are not equal, it follows that their transforms are not equal,

$$T_{-2j}\!\left[\frac{1}{1+j\omega}\right] \neq \frac{1}{1+j(\omega-(-2j))}.$$

2.2.8 Modulation

The "modulation formulas,"

$$\mathcal{F}[\cos(\omega_0 t)\,f(t)]\big|_\omega = \frac{1}{2}\big[F(\omega-\omega_0) + F(\omega+\omega_0)\big] \tag{2.33}$$

and

$$\mathcal{F}[\sin(\omega_0 t)\,f(t)]\big|_\omega = \frac{1}{2j}\big[F(\omega-\omega_0) - F(\omega+\omega_0)\big], \tag{2.34}$$

are easily derived from identity 2.30 using the well-known formulas

$$\cos(\omega_0 t) = \frac{1}{2}\big[e^{j\omega_0 t} + e^{-j\omega_0 t}\big] \quad\text{and}\quad \sin(\omega_0 t) = \frac{1}{2j}\big[e^{j\omega_0 t} - e^{-j\omega_0 t}\big].$$

Example 2.14

For $a > 0$, the function

$$f(t) = \begin{cases}\cos\!\left(\dfrac{\pi}{2a}\,t\right), & \text{if } -a \le t \le a\\[4pt] 0, & \text{otherwise,}\end{cases}$$

can be written as

$$f(t) = \cos\!\left(\frac{\pi}{2a}\,t\right) p_a(t).$$

Thus, using identity 2.33 and the results of Example 2.2,

$$F(\omega) = \mathcal{F}\!\left[\cos\!\left(\frac{\pi}{2a}\,t\right) p_a(t)\right]\Bigg|_\omega = \frac{1}{2}\left[\frac{2}{\omega-\frac{\pi}{2a}}\,\sin\!\left(a\!\left(\omega-\frac{\pi}{2a}\right)\right) + \frac{2}{\omega+\frac{\pi}{2a}}\,\sin\!\left(a\!\left(\omega+\frac{\pi}{2a}\right)\right)\right] = \frac{4a\pi\cos(a\omega)}{\pi^2 - 4a^2\omega^2}.$$

2.2.9 Products and Convolution

If $F = \mathcal{F}[f]$ and $G = \mathcal{F}[g]$, then the corresponding transforms of the products, fg and FG, can be computed using the identities

$$\mathcal{F}[fg] = \frac{1}{2\pi}\,F * G \tag{2.35}$$

and

$$\mathcal{F}^{-1}[FG] = f * g, \tag{2.36}$$

provided the convolutions, $F * G$ and $f * g$, exist. Conversely, as long as the convolutions exist,

$$\mathcal{F}[f * g] = FG \tag{2.37}$$

and

$$\mathcal{F}^{-1}[F * G] = 2\pi\,fg. \tag{2.38}$$

Identity 2.35 can be derived as follows:

$$\int_{-\infty}^{\infty} f(t)\,g(t)\,e^{-j\omega t}\,dt = \int_{-\infty}^{\infty}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} F(s)\,e^{jst}\,ds\right] g(t)\,e^{-j\omega t}\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(s)\int_{-\infty}^{\infty} g(t)\,e^{-j(\omega-s)t}\,dt\,ds = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(s)\,G(\omega-s)\,ds.$$

The other identities can be derived in a similar fashion.
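The convolution theorem (identity 2.37) can be sanity-checked with a pair whose convolution has a closed form. For $f = g = e^{-t}u(t)$, one finds $(f*g)(t) = t\,e^{-t}u(t)$, and identity 2.37 predicts $\mathcal{F}[f*g] = (1+j\omega)^{-2}$. The sketch below (an illustrative addition, not part of the handbook's text) verifies this by quadrature:

```python
import numpy as np
from scipy.integrate import quad

def ft_causal(f, w):
    # F(w) = int_0^inf f(t) e^{-jwt} dt for a function supported on t >= 0
    re =  quad(lambda t: f(t) * np.cos(w * t), 0, np.inf)[0]
    im = -quad(lambda t: f(t) * np.sin(w * t), 0, np.inf)[0]
    return re + 1j * im

# (f*g)(t) = int_0^t e^{-s} e^{-(t-s)} ds = t e^{-t} for t >= 0
for w in (0.0, 1.0, 3.0):
    lhs = ft_causal(lambda t: t * np.exp(-t), w)
    rhs = (1.0 / (1.0 + 1j * w)) ** 2   # F[f] * F[g] as a pointwise product
    assert abs(lhs - rhs) < 1e-6
```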
Example 2.15

From direct computation, if $b > 0$, then

$$\mathcal{F}^{-1}\big[e^{-b\omega}u(\omega)\big]\Big|_t = \frac{1}{2\pi}\int_{0}^{\infty} e^{(jt-b)\omega}\,d\omega = \frac{1}{2\pi}\,\frac{1}{b-jt}.$$

And so,

$$\mathcal{F}\!\left[\frac{1}{b-jt}\right]\Big|_\omega = 2\pi\,e^{-b\omega}u(\omega).$$

Applying identity 2.35,

$$\mathcal{F}\!\left[\frac{1}{2-jt}\cdot\frac{1}{5-jt}\right]\Big|_\omega = \frac{1}{2\pi}\big[2\pi e^{-2\omega}u(\omega)\big] * \big[2\pi e^{-5\omega}u(\omega)\big] = 2\pi\int_{-\infty}^{\infty} e^{-2s}u(s)\,e^{-5(\omega-s)}u(\omega-s)\,ds = \begin{cases}\dfrac{2\pi}{3}\big(e^{-2\omega}-e^{-5\omega}\big), & \text{if } 0<\omega\\[4pt] 0, & \text{otherwise.}\end{cases}$$

Example 2.16

By straightforward computations it is easily verified that, for $a > 0$,

$$\mathcal{F}\big[p_{a/2}(t)\big]\Big|_\omega = \frac{2}{\omega}\,\sin\!\left(\frac{a}{2}\,\omega\right)$$

and

$$p_{a/2}(t) * p_{a/2}(t) = a\,\Lambda_a(t),$$

where $p_{a/2}(t)$ is the pulse function,

$$p_{a/2}(t) = \begin{cases}1, & \text{if } |t| < \frac{a}{2}\\[2pt] 0, & \text{if } \frac{a}{2} < |t|,\end{cases}$$

and $\Lambda_a(t)$ is the triangle function,

$$\Lambda_a(t) = \begin{cases}1 - \dfrac{|t|}{a}, & \text{if } |t| < a\\[4pt] 0, & \text{if } a < |t|.\end{cases}$$

Using identity 2.37,

$$\mathcal{F}[\Lambda_a(t)]\big|_\omega = \frac{1}{a}\,\mathcal{F}\big[p_{a/2}(t) * p_{a/2}(t)\big]\Big|_\omega = \frac{1}{a}\left[\frac{2}{\omega}\,\sin\!\left(\frac{a}{2}\,\omega\right)\right]^{2} = \frac{4}{a\omega^2}\,\sin^{2}\!\left(\frac{a}{2}\,\omega\right).$$

2.2.10 Correlation

The cross-correlation of two functions, f(t) and g(t), is another function, denoted by $f(t)\star g(t)$, given by

$$f(t)\star g(t) = \int_{-\infty}^{\infty} f^{*}(s)\,g(t+s)\,ds. \tag{2.39}$$

In terms of transforms,

$$\mathcal{F}[f(t)\star g(t)]\big|_\omega = F^{*}(\omega)\,G(\omega) \tag{2.40}$$

and

$$\mathcal{F}[f^{*}(t)\,g(t)]\big|_\omega = \frac{1}{2\pi}\,F(\omega)\star G(\omega), \tag{2.41}$$

where $F = \mathcal{F}[f]$ and $G = \mathcal{F}[g]$. Derivations of these formulas are similar to those of the analogous identities involving convolution. For a given function, f(t), the corresponding autocorrelation function is simply the cross-correlation of f(t) with itself,

$$f(t)\star f(t) = \int_{-\infty}^{\infty} f^{*}(s)\,f(t+s)\,ds. \tag{2.42}$$

Often the autocorrelation is denoted by $\rho_f(t)$ instead of $f(t)\star f(t)$. For autocorrelation, formulas 2.40 and 2.41 simplify to

$$\mathcal{F}[f(t)\star f(t)]\big|_\omega = |F(\omega)|^{2}. \tag{2.43}$$

Example 2.17

Let $a > 0$ and $g(t) = e^{-at}u(t)$. Then

$$\rho_g(t) = g(t)\star g(t) = \int_{-\infty}^{\infty} e^{-as}u(s)\,e^{-a(t+s)}u(t+s)\,ds = \frac{1}{2a}\,e^{-a|t|},$$

and, in agreement with formula 2.43,

$$\mathcal{F}[\rho_g(t)]\big|_\omega = \frac{1}{2a}\cdot\frac{2a}{a^2+\omega^2} = \frac{1}{a^2+\omega^2} = \left|\frac{1}{a+j\omega}\right|^{2} = |G(\omega)|^{2}.$$

2.2.11 Differentiation

If f(t) is suitably smooth and suitably integrable, then

$$\mathcal{F}[f'(t)]\big|_\omega = j\omega\,F(\omega), \tag{2.45}$$

where $F = \mathcal{F}[f]$. By near-equivalence, if $G(\omega)$ is differentiable for all $\omega$ and vanishes as $\omega\to\pm\infty$, then

$$\mathcal{F}^{-1}[G'(\omega)]\big|_t = -jt\,g(t), \tag{2.46}$$

where $g = \mathcal{F}^{-1}[G]$. Similar derivations yield

$$\mathcal{F}[t\,f(t)]\big|_\omega = j\,F'(\omega) \tag{2.47}$$

and

$$\mathcal{F}^{-1}[\omega\,G(\omega)]\big|_t = -j\,g'(t). \tag{2.48}$$

Example 2.18

Let $a > 0$ and $g(t) = e^{-at^2}$. Then $g'(t) = -2at\,g(t)$ and, transforming both sides using identities 2.45 and 2.47,

$$j\omega\,G(\omega) = -2a\,j\,G'(\omega),$$

so that $G'(\omega) = -\frac{\omega}{2a}\,G(\omega)$ and, integrating,

$$G(\omega) = A\,\exp\!\left(-\frac{1}{4a}\,\omega^2\right).$$

The value of the constant of integration, A, can be determined* by noting that

$$A = G(0) = \int_{-\infty}^{\infty} g(t)\,e^{-j0t}\,dt = \int_{-\infty}^{\infty} e^{-at^2}\,dt.$$

The value of this last integral is well known to be $\sqrt{\pi/a}$. Thus,

$$\mathcal{F}\big[e^{-at^2}\big]\Big|_\omega = \sqrt{\frac{\pi}{a}}\,\exp\!\left(-\frac{1}{4a}\,\omega^2\right).$$

It should be noted that if f' and F' are assumed to be the classical derivatives of f and F, that is,

$$f'(t) = \lim_{\Delta t\to 0}\frac{f(t+\Delta t)-f(t)}{\Delta t} \quad\text{and}\quad F'(\omega) = \lim_{\Delta\omega\to 0}\frac{F(\omega+\Delta\omega)-F(\omega)}{\Delta\omega},$$

then application of the above identities is limited by requirements that the functions involved be suitably smooth and that they vanish at infinity. These limitations can be eliminated, however, by interpreting f' and F' in a more generalized sense. In this more generalized interpretation, f' and F' are defined to be the (generalized) functions satisfying the "generalized" integration by parts formulas,

$$\int_{-\infty}^{\infty} f'(t)\,\phi(t)\,dt = -\int_{-\infty}^{\infty} f(t)\,\phi'(t)\,dt,$$

for every test function, $\phi$ (with $\phi'$ denoting the classical derivative of $\phi$). As long as the function being differentiated is piecewise smooth and continuous, then there is no difference between the classical and the generalized derivative. If, however, the function, f(x), has jump discontinuities at $x = x_1, x_2, \ldots, x_N$, then

$$f'_{\text{generalized}} = f'_{\text{classical}} + \sum_{k} J_k\,\delta_{x_k},$$

* A method for determining A using Bessel's equality is described in Section 2.2.15.
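The autocorrelation relation of formula 2.43 can be checked numerically. This sketch (an illustrative addition, not part of the handbook's text) uses $f(t) = e^{-t}u(t)$, for which the autocorrelation works out to $\frac{1}{2}e^{-|t|}$ and $|F(\omega)|^2 = (1+\omega^2)^{-1}$:

```python
import numpy as np
from scipy.integrate import quad

# f(t) = e^{-t} u(t): rho_f(t) = int f*(s) f(t+s) ds, integrating over the
# region where both u(s) and u(t+s) are 1, i.e. s >= max(0, -t).
def rho(t):
    return quad(lambda s: np.exp(-s) * np.exp(-(t + s)), max(0.0, -t), np.inf)[0]

for t in (-1.5, 0.0, 2.0):
    assert abs(rho(t) - 0.5 * np.exp(-abs(t))) < 1e-9

# Its transform equals |F(w)|^2 = |1/(1+jw)|^2 = 1/(1+w^2)   (formula 2.43)
for w in (0.0, 1.0, 2.0):
    F_rho = quad(lambda t: 0.5 * np.exp(-abs(t)) * np.cos(w * t),
                 -np.inf, np.inf)[0]
    assert abs(F_rho - 1.0 / (1.0 + w**2)) < 1e-6
```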
where $J_k$ denotes the "jump" in f at $x = x_k$,

$$J_k = \lim_{\Delta x\to 0^{+}}\big[f(x_k+\Delta x) - f(x_k-\Delta x)\big].$$

It is not difficult to show that the product rule, $(fg)' = f'g + fg'$, holds for the generalized derivative as well as the classical derivative.

Example 2.19

Consider the step function, u(t). The classical derivative of u is clearly 0, because the graph of u consists of two horizontal half-lines (with slope zero). Using the generalized integration by parts formula, however,

$$\int_{-\infty}^{\infty} u'(t)\,\phi(t)\,dt = -\int_{-\infty}^{\infty} u(t)\,\phi'(t)\,dt = -\int_{0}^{\infty}\phi'(t)\,dt = \phi(0) = \int_{-\infty}^{\infty}\delta(t)\,\phi(t)\,dt,$$

showing that $\delta(t)$ is the generalized derivative of u(t).

Example 2.20

Using the generalized derivative and identity 2.47,

$$\mathcal{F}\!\left[\frac{t}{1-jt}\right]\Big|_\omega = j\,\frac{d}{d\omega}\,\mathcal{F}\!\left[\frac{1}{1-jt}\right]\Big|_\omega = j\,\frac{d}{d\omega}\big(2\pi e^{-\omega}u(\omega)\big) = 2\pi j\!\left[\frac{de^{-\omega}}{d\omega}\,u(\omega) + e^{-\omega}\,u'(\omega)\right] = 2\pi j\big[-e^{-\omega}u(\omega) + \delta(\omega)\big].$$

The extension of formulas 2.45 through 2.48 to the corresponding identities involving higher-order derivatives is straightforward. If n is any positive integer, then

$$\mathcal{F}[f^{(n)}(t)]\big|_\omega = (j\omega)^{n}\,F(\omega), \tag{2.50}$$
$$\mathcal{F}^{-1}[F^{(n)}(\omega)]\big|_t = (-jt)^{n}\,f(t), \tag{2.51}$$
$$\mathcal{F}[t^{n} f(t)]\big|_\omega = j^{n}\,F^{(n)}(\omega), \tag{2.52}$$

and

$$\mathcal{F}^{-1}[\omega^{n} F(\omega)]\big|_t = (-j)^{n}\,f^{(n)}(t). \tag{2.53}$$

Again, these identities hold for all transformable functions as long as the derivatives are interpreted in the generalized sense.

2.2.12 Moments

For any suitably integrable function, f(t), and nonnegative integer, n, the "nth moment of f" is the quantity

$$m_n(f) = \int_{-\infty}^{\infty} t^{n}\,f(t)\,dt.$$

Because

$$\int_{-\infty}^{\infty} t^{n}\,f(t)\,dt = \mathcal{F}[t^{n} f(t)]\big|_{0},$$

it is clear from identity 2.52 that

$$m_n(f) = j^{n}\,F^{(n)}(0).$$

2.2.13 Integration

If $F(\omega)$ and $G(\omega)$ are the Fourier transforms of f(t) and g(t), and $g(t) = t^{-1}f(t)$, then $t\,g(t) = f(t)$ and, by identity 2.47, $j\,G'(\omega) = F(\omega)$. Integrating this gives

$$G(\omega) - G(a) = -j\int_{a}^{\omega} F(s)\,ds,$$

where a can be any real number. This can be written

$$\mathcal{F}\!\left[\frac{f(t)}{t}\right]\Big|_\omega = -j\int_{a}^{\omega} F(s)\,ds + c_a, \tag{2.54}$$

where $c_a = G(a)$. For certain general types of functions and choices of a, the value of $c_a$ is easily determined. For example, if f(t) is also absolutely integrable, then

$$\mathcal{F}\!\left[\frac{f(t)}{t}\right]\Big|_\omega = -j\int_{-\infty}^{\omega} F(s)\,ds, \tag{2.55}$$

while if f(t) is an even function,

$$\mathcal{F}\!\left[\frac{f(t)}{t}\right]\Big|_\omega = -j\int_{0}^{\omega} F(s)\,ds. \tag{2.56}$$

It can also be shown that, as long as the limit of $\omega^{-1}F(\omega)$ exists as $\omega\to 0$, then for each real value of a there is a constant, $c_a$, such that

$$\mathcal{F}\!\left[\int_{a}^{t} f(s)\,ds\right]\Bigg|_\omega = -j\,\frac{F(\omega)}{\omega} + c_a\,\delta(\omega). \tag{2.57}$$
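The moment identity $m_n(f) = j^n F^{(n)}(0)$ can be checked numerically for the Gaussian $f(t) = e^{-t^2}$, whose transform $F(\omega) = \sqrt{\pi}\,e^{-\omega^2/4}$ is known from Example 2.5. The sketch below (an illustrative addition, not part of the handbook's text) compares the zeroth and second moments against values of $F$ and a finite-difference estimate of $F''(0)$:

```python
import numpy as np
from scipy.integrate import quad

# f(t) = e^{-t^2}, F(w) = sqrt(pi) e^{-w^2/4}; identity: m_n(f) = j^n F^(n)(0)
m0 = quad(lambda t: np.exp(-t**2), -np.inf, np.inf)[0]
m2 = quad(lambda t: t**2 * np.exp(-t**2), -np.inf, np.inf)[0]

F = lambda w: np.sqrt(np.pi) * np.exp(-w**2 / 4)
h = 1e-4
F2_0 = (F(h) - 2 * F(0.0) + F(-h)) / h**2   # central difference for F''(0)

assert abs(m0 - F(0.0)) < 1e-8              # m_0 = j^0 F(0) = F(0)
assert abs(m2 - (1j**2 * F2_0).real) < 1e-5 # m_2 = j^2 F''(0) = -F''(0)
```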
If f(t) is an even function, then

$$\mathcal{F}\!\left[\int_{0}^{t} f(s)\,ds\right]\Bigg|_\omega = -j\,\frac{F(\omega)}{\omega}, \tag{2.58}$$

while if f(t) and $\int_{-\infty}^{t} f(s)\,ds$ are absolutely integrable, then

$$\mathcal{F}\!\left[\int_{-\infty}^{t} f(s)\,ds\right]\Bigg|_\omega = -j\,\frac{F(\omega)}{\omega}. \tag{2.59}$$

Example 2.21

Let a and b be positive, $f(t) = e^{-a|t|} - e^{-b|t|}$, and

$$g(t) = \frac{f(t)}{t} = \frac{e^{-a|t|} - e^{-b|t|}}{t}.$$

Both functions are easily verified to be transformable, with

$$F(\omega) = \mathcal{F}\big[e^{-a|t|} - e^{-b|t|}\big]\Big|_\omega = \frac{2a}{a^2+\omega^2} - \frac{2b}{b^2+\omega^2}.$$

Because f(t) is even, formula 2.56 applies, and

$$G(\omega) = \mathcal{F}\!\left[\frac{e^{-a|t|}-e^{-b|t|}}{t}\right]\Bigg|_\omega = -j\int_{0}^{\omega}\left[\frac{2a}{a^2+s^2} - \frac{2b}{b^2+s^2}\right]ds = -2j\left[\arctan\frac{\omega}{a} - \arctan\frac{\omega}{b}\right]. \tag{2.60}$$

Example 2.22

Applying the same analysis done in the previous example, but using

$$f(t) = 1 - e^{-b|t|},$$

leads, formally, to

$$\mathcal{F}\!\left[\frac{1-e^{-b|t|}}{t}\right]\Bigg|_\omega = -j\int_{0}^{\omega}\left[2\pi\,\delta(s) - \frac{2b}{b^2+s^2}\right]ds = -2\pi j\int_{0}^{\omega}\delta(s)\,ds + 2j\arctan\frac{\omega}{b}.$$

Unfortunately, this is of little value because

$$\int_{0}^{\omega}\delta(s)\,ds$$

is not well defined when an endpoint of the interval of integration is 0. However, because

$$\lim_{a\to 0^{+}} e^{-a|t|} = 1$$

and

$$\lim_{a\to 0^{+}}\arctan\frac{\omega}{a} = \begin{cases}\dfrac{\pi}{2}, & \text{if } 0<\omega\\[4pt] -\dfrac{\pi}{2}, & \text{if } \omega<0\end{cases} \;=\; \frac{\pi}{2}\,\mathrm{sgn}(\omega),$$

it can be argued, using Equation 2.60, that

$$\mathcal{F}\!\left[\frac{1-e^{-b|t|}}{t}\right]\Bigg|_\omega = \lim_{a\to 0^{+}}\left(-2j\left[\arctan\frac{\omega}{a} - \arctan\frac{\omega}{b}\right]\right) = -j\pi\,\mathrm{sgn}(\omega) + 2j\arctan\frac{\omega}{b}. \tag{2.61}$$

2.2.14 Parseval's Equality

Parseval's equality is

$$\int_{-\infty}^{\infty} f(t)\,g^{*}(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,G^{*}(\omega)\,d\omega \tag{2.62}$$

and is valid whenever the integrals make sense. Closely related to Parseval's equality are the two "fundamental identities,"

$$\int_{-\infty}^{\infty} f(x)\,\mathcal{F}[h]\big|_x\,dx = \int_{-\infty}^{\infty} \mathcal{F}[f]\big|_y\,h(y)\,dy \tag{2.63}$$

and

$$\int_{-\infty}^{\infty} f(y)\,\mathcal{F}^{-1}[H]\big|_y\,dy = \int_{-\infty}^{\infty} \mathcal{F}^{-1}[f]\big|_x\,H(x)\,dx. \tag{2.64}$$

Derivations of these identities are straightforward. Identity 2.63, for example, follows immediately from
$$\int_{-\infty}^{\infty} f(x)\left[\int_{-\infty}^{\infty} h(y)\,e^{-jxy}\,dy\right]dx = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x)\,h(y)\,e^{-jxy}\,dy\,dx = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} f(x)\,e^{-jxy}\,dx\right]h(y)\,dy.$$

Parseval's equality can then, in turn, be derived from identity 2.63 and the observation that

$$g^{*}(t) = \left(\mathcal{F}^{-1}[G]\big|_t\right)^{*} = \left(\frac{1}{2\pi}\int_{-\infty}^{\infty} G(\omega)\,e^{j\omega t}\,d\omega\right)^{\!*} = \frac{1}{2\pi}\,\mathcal{F}[G^{*}]\big|_t.$$

2.2.15 Bessel's Equality

Bessel's equality,

$$\int_{-\infty}^{\infty} |f(t)|^{2}\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^{2}\,d\omega, \tag{2.65}$$

is obtained directly from Parseval's equality by letting $g = f$.

Example 2.23

Let $a > 0$ and $f(t) = p_a(t)$, where $p_a(t)$ is the pulse function. It is easily verified that

$$F(\omega) = \mathcal{F}[p_a(t)]\big|_\omega = \int_{-a}^{a} e^{-j\omega t}\,dt = \frac{2}{\omega}\,\sin(a\omega).$$

So, using Bessel's equality,

$$\int_{-\infty}^{\infty}\left|\frac{\sin(a\omega)}{a\omega}\right|^{2}d\omega = \frac{2\pi}{4a^2}\int_{-\infty}^{\infty}\big|p_a(t)\big|^{2}\,dt = \frac{2\pi}{4a^2}\int_{-a}^{a}dt = \frac{\pi}{a}.$$

Example 2.24

Let $a > 0$. In Example 2.18 it was shown that the Fourier transform of $g(t) = e^{-at^2}$ is $G(\omega) = A\exp\!\left(-\frac{1}{4a}\omega^2\right)$. The positive constant A can be determined by noting that, by Bessel's equality,

$$\int_{-\infty}^{\infty}\big|e^{-at^2}\big|^{2}\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|A\exp\!\left(-\frac{1}{4a}\omega^2\right)\right|^{2}d\omega.$$

Letting $\omega = 2at$, this becomes, after a little simplification,

$$\int_{-\infty}^{\infty} e^{-2at^2}\,dt = \frac{a}{\pi}\,A^{2}\int_{-\infty}^{\infty} e^{-2at^2}\,dt.$$

Dividing out the integrals and solving for A yields

$$A = \sqrt{\frac{\pi}{a}}\,,$$

where the positive square root is taken because

$$A = G(0) = \int_{-\infty}^{\infty} e^{-at^2}\,dt > 0.$$

2.2.16 The Bandwidth Theorem

If f(t) is a function whose value may be considered "negligible" outside of some interval, $(t_1, t_2)$, then the length of that interval, $\Delta t = t_2 - t_1$, is the effective duration of f(t). Likewise, if $F(\omega)$ is the Fourier transform of f(t), and $F(\omega)$ can be considered "negligible" outside of some interval, $(\omega_1, \omega_2)$, then $\Delta\omega = \omega_2 - \omega_1$ is the effective bandwidth of f(t). The essence of the bandwidth theorem is that there is a universal positive constant, $\gamma$, such that the effective duration, $\Delta t$, and effective bandwidth, $\Delta\omega$, of any function (with finite $\Delta t$ or finite $\Delta\omega$) satisfy

$$\Delta t\,\Delta\omega \ge \gamma.$$

Thus, it is not possible to find a function whose effective bandwidth and effective duration are both arbitrarily small. There are, in fact, several versions of the bandwidth theorem, each applicable to a particular class of functions. The two most important versions involve absolutely integrable functions and finite energy functions. They are described in greater detail in Sections 2.3.3 and 2.3.5, respectively. Also in these sections are appropriate precise definitions of effective duration and effective bandwidth. Because it is the basis of the Heisenberg uncertainty principle of quantum mechanics, the bandwidth theorem is often, itself, referred to as the uncertainty principle of Fourier analysis.

2.3 Transforms of Specific Classes of Functions

In many applications one encounters specific classes of functions in which either the functions or their transforms satisfy certain particular properties. Several such classes of functions are discussed below.
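Bessel's equality is easy to verify numerically. This sketch (an illustrative addition, not part of the handbook's text) uses the Gaussian of Example 2.5 rather than the pulse of Example 2.23, simply because both sides are then smooth, rapidly decaying integrals that quadrature handles very accurately:

```python
import numpy as np
from scipy.integrate import quad

# f(t) = e^{-t^2}, F(w) = sqrt(pi) e^{-w^2/4}  (Example 2.5)
energy_t = quad(lambda t: np.exp(-t**2) ** 2, -np.inf, np.inf)[0]
energy_w = quad(lambda w: (np.sqrt(np.pi) * np.exp(-w**2 / 4)) ** 2,
                -np.inf, np.inf)[0] / (2 * np.pi)

# Bessel's equality: the two "energies" agree, and both equal sqrt(pi/2)
assert abs(energy_t - energy_w) < 1e-10
assert abs(energy_t - np.sqrt(np.pi / 2)) < 1e-10
```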
2.3.1 Real/Imaginary Valued Even/Odd Functions

Let $F(\omega)$ be the Fourier transform of f(t). Then, assuming f(t) is integrable,

$$F(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} f(t)\big[\cos(\omega t) - j\sin(\omega t)\big]\,dt = \int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt - j\int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt. \tag{2.66}$$

If f(t) is an even function, then

$$\int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt = 0 \quad\text{and}\quad \int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt = 2\int_{0}^{\infty} f(t)\cos(\omega t)\,dt,$$

and Equation 2.66 becomes

$$F(\omega) = 2\int_{0}^{\infty} f(t)\cos(\omega t)\,dt, \tag{2.67}$$

which is clearly an even function of $\omega$ and is real valued whenever f is real valued. Likewise, if f(t) is an odd function, then

$$\int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt = 0 \quad\text{and}\quad \int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt = 2\int_{0}^{\infty} f(t)\sin(\omega t)\,dt,$$

and Equation 2.66 reduces to

$$F(\omega) = -2j\int_{0}^{\infty} f(t)\sin(\omega t)\,dt, \tag{2.68}$$

which is clearly an odd function of $\omega$ and is imaginary valued as long as f is real valued. These and related relations are summarized in Table 2.1.

TABLE 2.1
    f(t) is even                ⇔  F(ω) is even
    f(t) is real and even       ⇔  F(ω) is real and even
    f(t) is imaginary and even  ⇔  F(ω) is imaginary and even
    f(t) is odd                 ⇔  F(ω) is odd
    f(t) is real and odd        ⇔  F(ω) is imaginary and odd
    f(t) is imaginary and odd   ⇔  F(ω) is real and odd

On occasion it is convenient to decompose a function, f(t), into its even and odd components, $f_e(t)$ and $f_o(t)$,

$$f(t) = f_e(t) + f_o(t),$$

where

$$f_e(t) = \frac{1}{2}\big[f(t) + f(-t)\big] \quad\text{and}\quad f_o(t) = \frac{1}{2}\big[f(t) - f(-t)\big].$$

If f(t) is a real-valued function with Fourier transform

$$F(\omega) = R(\omega) + jI(\omega),$$

where $R(\omega)$ and $I(\omega)$ denote, respectively, the real and imaginary parts of $F(\omega)$, then, by the above discussion, it follows that

$$F_e(\omega) = R(\omega) = \mathcal{F}[f_e(t)]\big|_\omega \tag{2.69}$$

and

$$F_o(\omega) = jI(\omega) = \mathcal{F}[f_o(t)]\big|_\omega. \tag{2.70}$$

It is easily seen that

$$f_e(t) = \mathcal{F}^{-1}[F_e(\omega)]\big|_t = \frac{1}{\pi}\int_{0}^{\infty} R(\omega)\cos(\omega t)\,d\omega$$

and

$$f_o(t) = \mathcal{F}^{-1}[F_o(\omega)]\big|_t = -\frac{1}{\pi}\int_{0}^{\infty} I(\omega)\sin(\omega t)\,d\omega.$$

Rewriting $F(\omega)$ in terms of its amplitude, $A(\omega) = |F(\omega)|$, and phase, $\phi(\omega)$,

$$F(\omega) = A(\omega)\,e^{j\phi(\omega)},$$

one finds

$$R(\omega)\cos(\omega t) - I(\omega)\sin(\omega t) = A(\omega)\big[\cos\phi(\omega)\cos(\omega t) - \sin\phi(\omega)\sin(\omega t)\big] = A(\omega)\cos\big(\omega t + \phi(\omega)\big).$$

Thus, by Equations 2.69 and 2.70, if f(t) is real, then

$$f(t) = f_e(t) + f_o(t) = \frac{1}{\pi}\int_{0}^{\infty} A(\omega)\cos\big(\omega t + \phi(\omega)\big)\,d\omega. \tag{2.71}$$

2.3.2 Absolutely Integrable Functions

If f(t) is absolutely integrable (i.e., $\int_{-\infty}^{\infty}|f(t)|\,dt < \infty$), then the integral defining $F(\omega)$,

$$F(\omega) = \mathcal{F}[f(t)]\big|_\omega = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt,$$
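The symmetry relations of Table 2.1 can be spot-checked numerically. This sketch (an illustrative addition, not part of the handbook's text) confirms that a real even function has a real transform, while a real odd function has a purely imaginary, odd transform:

```python
import numpy as np
from scipy.integrate import quad

def ft(f, w):
    # F(w) = int f(t) cos(wt) dt - j int f(t) sin(wt) dt   (Eq. 2.66)
    re =  quad(lambda t: f(t) * np.cos(w * t), -np.inf, np.inf)[0]
    im = -quad(lambda t: f(t) * np.sin(w * t), -np.inf, np.inf)[0]
    return re + 1j * im

w = 1.3
Fe = ft(lambda t: np.exp(-t**2), w)       # real, even f
Fo = ft(lambda t: t * np.exp(-t**2), w)   # real, odd f

assert abs(Fe.imag) < 1e-8 and Fe.real > 0   # transform is real
assert abs(Fo.real) < 1e-8                   # transform is purely imaginary

Fo_neg = ft(lambda t: t * np.exp(-t**2), -w)
assert abs(Fo + Fo_neg) < 1e-8               # and odd in w
```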
is well defined for every $\omega$, and $F(\omega)$ is a reasonably well behaved function on $(-\infty,\infty)$. One immediate observation is that, for such functions,

$$F(0) = \int_{-\infty}^{\infty} f(t)\,dt.$$

It is also worth noting that, for any $\omega$,

$$|F(\omega)| = \left|\int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt\right| \le \int_{-\infty}^{\infty}\big|f(t)\,e^{-j\omega t}\big|\,dt = \int_{-\infty}^{\infty}|f(t)|\,dt.$$

The following can also be shown:

1. $F(\omega)$ is a continuous function of $\omega$, and for each $\omega_0 < \infty$,
$$\lim_{\omega\to\omega_0} F(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega_0 t}\,dt = F(\omega_0).$$

Analogous results hold when taking inverse transforms of absolutely integrable functions. If $F(\omega)$ is absolutely integrable and $f = \mathcal{F}^{-1}[F]$, then

$$f(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,d\omega,$$

and, for all real t,

$$|f(t)| \le \frac{1}{2\pi}\int_{-\infty}^{\infty}|F(\omega)|\,d\omega.$$

2.3.3 The Bandwidth Theorem for Absolutely Integrable Functions

Assume f(t) and its Fourier transform, $F(\omega)$, are both absolutely integrable. For chosen values $\bar t$ and $\bar\omega$, define the effective duration, $\Delta t$, and effective bandwidth, $\Delta\omega$, by

$$\Delta t = \frac{1}{|f(\bar t)|}\int_{-\infty}^{\infty}|f(t)|\,dt \quad\text{and}\quad \Delta\omega = \frac{1}{|F(\bar\omega)|}\int_{-\infty}^{\infty}|F(\omega)|\,d\omega.$$

To minimize the values used for the effective duration and effective bandwidth, $\bar t$ and $\bar\omega$ can be chosen to maximize the values of $|f(\bar t)|$ and $|F(\bar\omega)|$. Clearly, choosing $\bar t = 0$ and $\bar\omega = 0$ is especially appropriate if both f(t) and $F(\omega)$ are real valued, even functions with maximums at the origin.

This version of the bandwidth theorem is very easily derived. Because f(t) and $F(\omega)$ are both absolutely integrable,

$$|F(\bar\omega)| \le \int_{-\infty}^{\infty}|f(t)|\,dt = |f(\bar t)|\,\Delta t \quad\text{and}\quad |f(\bar t)| \le \frac{1}{2\pi}\int_{-\infty}^{\infty}|F(\omega)|\,d\omega = \frac{1}{2\pi}\,|F(\bar\omega)|\,\Delta\omega,$$

and so

$$\Delta t\,\Delta\omega \ge 2\pi.$$

Clearly, if both f(t) and $F(\omega)$ are real and nonnegative and neither f(0) nor F(0) vanishes, then, with $\bar t = \bar\omega = 0$, the above inequalities can be replaced with the equalities

$$F(0) = \int_{-\infty}^{\infty} f(t)\,dt = f(0)\,\Delta t \quad\text{and}\quad f(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,d\omega = \frac{1}{2\pi}\,F(0)\,\Delta\omega.$$

Example 2.26

Let $a > 0$ and $f(t) = e^{-a|t|}$. The transform of f(t) is

$$F(\omega) = \frac{2a}{a^2+\omega^2}.$$

Observe that both f(t) and $F(\omega)$ are even functions with maximums at the origin. It is therefore appropriate to use $\bar t = 0$ and $\bar\omega = 0$ to compute the effective duration and effective bandwidth,

$$\Delta t = \frac{1}{|f(0)|}\int_{-\infty}^{\infty} e^{-a|t|}\,dt = 2\int_{0}^{\infty} e^{-at}\,dt = \frac{2}{a}$$

and

$$\Delta\omega = \frac{1}{|F(0)|}\int_{-\infty}^{\infty}\frac{2a}{a^2+\omega^2}\,d\omega = a\pi.$$

The product of these measures of effective duration and effective bandwidth is

$$\Delta t\,\Delta\omega = \frac{2}{a}\,(a\pi) = 2\pi,$$

as predicted by the bandwidth theorem.

2.3.4 Square Integrable ("Finite Energy") Functions

A function, f(t), is square integrable if

$$\int_{-\infty}^{\infty}|f(t)|^{2}\,dt < \infty.$$

For many applications, it is natural to define the energy, E, in a function (or signal), f(t), by

$$E = E[f] = \int_{-\infty}^{\infty}|f(t)|^{2}\,dt.$$

For this reason, square integrable functions are also called finite energy functions. By Bessel's equality,

$$E[f] = \int_{-\infty}^{\infty}|f(t)|^{2}\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}|F(\omega)|^{2}\,d\omega, \tag{2.72}$$

where $F(\omega)$ is the Fourier transform of f(t). This shows that a function is square integrable if and only if its transform is also square integrable. It also indicates why $|F(\omega)|^{2}$ is often referred to as either the "energy spectrum" or the "energy spectral density" of f(t).

2.3.5 The Bandwidth Theorem for Finite Energy Functions

Assume that f(t) and its Fourier transform, $F(\omega)$, are finite energy functions, and let the effective duration, $\Delta t$, and the effective bandwidth, $\Delta\omega$, be given by the "standard deviations,"
2
(Dt) ¼
Ð1
if and only if f(t) is a Gaussian,
(t t )2 j f (t)j2 dt Ð1 2 1 j f (t)j dt
1
f (t) ¼ Ae
Ð1
(v v )2 jF(v)j2 dv , Ð1 2 1 jF(v)j dv
1
where t and v are the mean values of t and v, Ð1 t j f (t)j2 dt t ¼ Ð1 1 2 1 j f (t)j dt
and
Ð1
v ¼ Ð1 1
vjF(v)j2 dv
1
jF(v)j2 dv
Example 2.27 : Let a > 0 and f(t) ¼ e
Using the energy of f(t),
E¼
1 ð
1
ajtj
Letting

E = ∫_{−∞}^{∞} |f(t)|² dt = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω,

the effective duration and effective bandwidth can be written more concisely as

Δt = √[ (1/E) ∫_{−∞}^{∞} (t − t̄)² |f(t)|² dt ]

and

Δω = √[ (1/(2πE)) ∫_{−∞}^{∞} (ω − ω̄)² |F(ω)|² dω ].

The bandwidth theorem for finite energy functions states that, if the above quantities are well defined (and finite) and

lim_{t→±∞} t |f(t)|² = 0,

then

1/2 ≤ Δt Δω.

Moreover, when t̄ = 0 and ω̄ = 0, equality, Δt Δω = 1/2, holds only if f(t) is a Gaussian. The reader should be aware that the effective duration and effective bandwidth defined in this section are not the same as the effective duration and effective bandwidth previously defined in Section 2.3.3. Nor do these definitions necessarily agree with the definitions given for the analogous quantities defined later in the sections on reconstructing sampled functions.

Consider, for example, f(t) = e^{−a|t|} for some a > 0. The transform of f(t) is

F(ω) = 2a/(a² + ω²).

Because t f(t) and ω F(ω) are both odd functions, it is clear that t̄ = 0 and ω̄ = 0. The energy is

E = ∫_{−∞}^{∞} ( e^{−a|t|} )² dt = 2 ∫_{0}^{∞} e^{−2at} dt = 1/a.

Using integration by parts, the corresponding effective duration and effective bandwidth are easily computed,

Δt = √[ (1/E) ∫_{−∞}^{∞} (t − t̄)² |f(t)|² dt ] = √[ 2a ∫_{0}^{∞} t² e^{−2at} dt ] = √2/(2a)

and

Δω = √[ (1/(2πE)) ∫_{−∞}^{∞} (ω − ω̄)² |F(ω)|² dω ]
   = √[ (a/2π) ∫_{−∞}^{∞} ω² ( 2a/(a² + ω²) )² dω ]
   = √[ (2a³/π) ∫_{−∞}^{∞} ω²/(a² + ω²)² dω ]
   = a.

(By comparison, treating f(t) and F(ω) as absolutely integrable functions [Example 2.26] led to an effective duration of 2a⁻¹ and an effective bandwidth of aπ.)
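These computations are easy to check numerically. The sketch below is my own addition, not part of the handbook: it integrates |f(t)|² and ω²|F(ω)|² on truncated grids for f(t) = e^{−a|t|}, using the closed form F(ω) = 2a/(a² + ω²); the value a = 1.5 and the grid limits are arbitrary choices.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoid rule (avoids NumPy-version issues with np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

a = 1.5                                    # any a > 0 (arbitrary choice)

t = np.linspace(-40.0, 40.0, 400001)       # e^{-a|t|} is negligible beyond |t| ~ 40
f = np.exp(-a * np.abs(t))

E = trapz(f**2, t)                         # energy; should equal 1/a
dt_eff = np.sqrt(trapz(t**2 * f**2, t) / E)

w = np.linspace(-4000.0, 4000.0, 4000001)  # w^2 |F|^2 decays like w^-2, so go wide
F2 = (2.0 * a / (a**2 + w**2))**2          # |F(w)|^2 for F(w) = 2a/(a^2 + w^2)
dw_eff = np.sqrt(trapz(w**2 * F2, w) / (2.0 * np.pi * E))

print(E, dt_eff, dw_eff, dt_eff * dw_eff)  # ~ 1/a, sqrt(2)/(2a), a, sqrt(2)/2
```

With these values the product comes out near √2/2 ≈ 0.707, safely above the 1/2 bound of the bandwidth theorem.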
2-20
Transforms and Applications Handbook
The products of these measures of bandwidth and duration computed here are

Δt Δω = (√2/(2a)) · a = √2/2 > 1/2,

as predicted by the bandwidth theorem for finite energy functions.

2.3.6 Functions with Finite Duration

A function, f(t), has finite duration (with duration 2T) if there is a 0 < T < ∞ such that

f(t) = 0 whenever T < |t|.

The transform, F(ω), of such a function is given by a proper integral over a finite interval,

F(ω) = ∫_{−T}^{T} f(t) e^{−jωt} dt.  (2.73)

Any piecewise continuous function with finite duration is automatically absolutely integrable and automatically has finite energy, and, so, the discussions in Sections 2.3.2 through 2.3.5 apply to such functions. In addition, if f(t) is a piecewise continuous function of finite duration (with duration 2T), then, for every nonnegative integer, n, tⁿf(t) is also a piecewise continuous finite duration function with duration 2T, and, using identity 2.52,

F^{(n)}(ω) = F[(−jt)ⁿ f(t)]|_ω = ∫_{−T}^{T} (−jt)ⁿ f(t) e^{−jωt} dt.

From the discussion in Section 2.3.2, it is apparent that the transform of a piecewise continuous function with finite duration must be classically differentiable up to any order, and that every derivative is continuous. It should be noted that the integral defining F(ω) in formula 2.73 is, in fact, well defined for every complex ω = x + jy. It is not difficult to show that the real and imaginary parts of F(x + jy) satisfy the Cauchy–Riemann equations of complex analysis (see Appendix A). Thus, F(ω) is an analytic function on both the real line and the complex plane. As a consequence, it follows that the transform of a finite duration function cannot vanish (or be any constant value) over any nontrivial subinterval of the real line. In particular, no function of finite duration can also be band limited (see Section 2.3.7). Another important feature of finite duration functions is that their transforms can be reconstructed using a discrete sampling of the transforms. This is discussed more fully in Section 2.5.

2.3.7 Band-Limited Functions

Let f(t) be a function with Fourier transform F(ω). The function, f(t), is said to be band limited if there is a 0 < Ω < ∞ such that

F(ω) = 0 whenever Ω < |ω|.

The quantity 2Ω is called the bandwidth of f(t). By the near equivalence of the Fourier and inverse Fourier transforms, it should be clear that f(t) satisfies properties analogous to those satisfied by the transforms of finite duration functions. In particular,

f(t) = (1/2π) ∫_{−Ω}^{Ω} F(ω) e^{jωt} dω,  (2.74)

and, for any nonnegative integer, n, f^{(n)}(t) is a well-defined continuous function given by

f^{(n)}(t) = (1/2π) ∫_{−Ω}^{Ω} (jω)ⁿ F(ω) e^{jωt} dω.

Letting t = x + jy in Equation 2.74, it is easily verified that f(x + jy) is a well-defined analytic function on both the real line and on the entire complex plane. From this it follows that if f(t) is band limited, then f(t) cannot vanish (or be any constant value) over any nontrivial subinterval of the real line. Thus, no band-limited function can also be of finite duration. This fact must be considered in many practical applications where it would be desirable (but, as just noted, impossible) to assume that the functions of interest are both band limited and of finite duration. Another most important feature of band-limited functions is that they can be reconstructed using a discrete sampling of their values. This is discussed more thoroughly in Section 2.5.
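Formula 2.74 can be exercised numerically. As an illustration (my own, not the handbook's), take F(ω) ≡ 1 on |ω| < Ω and 0 elsewhere; the integral then reproduces the familiar sin(Ωt)/(πt):

```python
import numpy as np

Omega = 3.0
w = np.linspace(-Omega, Omega, 200001)

def f(t):
    # f(t) = (1/2 pi) * integral_{-Omega}^{Omega} F(w) e^{j w t} dw, with F = 1
    y = np.exp(1j * w * t)
    return (np.sum((y[1:] + y[:-1]) * np.diff(w)) / 2.0).real / (2.0 * np.pi)

for t in (0.0, 0.5, 2.0):
    exact = Omega / np.pi if t == 0.0 else np.sin(Omega * t) / (np.pi * t)
    print(t, f(t), exact)
```

The resulting f(t) is entire in t, illustrating why a band-limited function cannot vanish on any subinterval of the real line.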
2.3.8 Finite Power Functions

For a given function, f(t), the average autocorrelation function, ρ_f(τ), is defined by

ρ_f(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} f*(s) f(τ + s) ds,  (2.75)

or, equivalently, by

ρ_f(τ) = lim_{T→∞} (1/2T) f_T(τ) ★ f_T(τ),  (2.76)

where the ★ denotes correlation (see Section 2.2.10), and f_T(t) is the truncation of f(t) at t = ±T,
2-21
Fourier Transforms
f_T(t) = f(t) p_T(t) = { f(t), if −T ≤ t ≤ T; 0, otherwise }.

If ρ_f(τ) is a well-defined function (or generalized function), then f(t) is called a finite power function. The power spectrum or power spectral density, P(ω), of a finite power function, f(t), is defined to be the Fourier transform of its average autocorrelation,

P(ω) = F[ρ_f(τ)]|_ω = ∫_{−∞}^{∞} ρ_f(τ) e^{−jωτ} dτ.  (2.77)

Using formula 2.76 for ρ_f(τ) and recalling the Wiener–Khintchine theorem (Section 2.2.10),

P(ω) = lim_{T→∞} (1/2T) F[f_T(t) ★ f_T(t)]|_ω = lim_{T→∞} (1/2T) |F_T(ω)|²,  (2.78)

where F_T(ω) is the Fourier transform of f_T(t),

F_T(ω) = ∫_{−∞}^{∞} f(t) p_T(t) e^{−jωt} dt = ∫_{−T}^{T} f(t) e^{−jωt} dt.

Thus, an alternate formula for the power spectrum is

P(ω) = lim_{T→∞} (1/2T) | ∫_{−T}^{T} f(t) e^{−jωt} dt |².

The average power in f(t) is defined to be

ρ_f(0) = lim_{T→∞} (1/2T) ∫_{−T}^{T} |f(s)|² ds.  (2.79)

Because P(ω) = F[ρ_f(τ)]|_ω, this is equivalent to

ρ_f(0) = F^{−1}[P(ω)]|_0 = (1/2π) ∫_{−∞}^{∞} P(ω) dω.

A number of properties of the average autocorrelation should be noted. They are

1. ρ_f(τ) is invariant under a shift in f(t); that is, if g(t) = f(t − t₀), then ρ_g(τ) = ρ_f(τ).
2. ρ_f(τ) and |ρ_f(τ)| each has a maximum value at τ = 0.
3. (ρ_f(τ))* = ρ_f(−τ). Thus, as is often the case, if f(t) is a real-valued function, then ρ_f(τ) is an even real-valued function.

As a consequence of the second property above, any function, f(t), satisfying

lim_{T→∞} (1/2T) ∫_{−T}^{T} |f(s)|² ds < ∞

is a finite power function.

The three properties listed above are easily derived. For the first,

ρ_g(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} f*(s − t₀) f(s − t₀ + τ) ds
       = lim_{T→∞} (1/2T) ∫_{−T−t₀}^{T−t₀} f*(s) f(s + τ) ds
       = lim_{T→∞} (1/2T) ∫_{−T}^{T} f*(s) f(s + τ) ds + lim_{T→∞} (1/2T) ∫_{−T−t₀}^{−T} f*(s) f(s + τ) ds − lim_{T→∞} (1/2T) ∫_{T−t₀}^{T} f*(s) f(s + τ) ds.

The first limit in the last line above equals ρ_f(τ), while the other limits, involving integrals over intervals of fixed bounded length, must vanish. From an application of the Schwarz inequality,

| (1/2T) ∫_{−T}^{T} f*(s) f(s + τ) ds |² ≤ [ (1/2T) ∫_{−T}^{T} |f*(s)|² ds ] [ (1/2T) ∫_{−T}^{T} |f(s + τ)|² ds ],

it follows, after taking the limit, that

|ρ_f(τ)|² ≤ |ρ_f(0)|².  (2.80)

Hence, at τ = 0, |ρ_f(τ)| has a maximum (as does ρ_f(τ), because ρ_f(0) = |ρ_f(0)|). Finally, using the substitution σ = s + τ,

(ρ_f(τ))* = ( lim_{T→∞} (1/2T) ∫_{−T}^{T} f*(s) f(τ + s) ds )*
          = lim_{T→∞} (1/2T) ∫_{−T}^{T} f(s) f*(τ + s) ds
          = lim_{T→∞} (1/2T) ∫_{−T}^{T} f(σ − τ) f*(σ) dσ
          = ρ_f(−τ).

If f(t) is a finite energy function, then, trivially, it is also a finite power function (with zero average power). Nontrivial examples of finite power functions include periodic
functions, nearly periodic functions, constants, and step functions. Finite power functions also play a significant role in signal-processing problems dealing with noise.
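The limit defining ρ_f(τ) in formula 2.75 can be approximated with a large but finite T. The sketch below is my own addition: it does this for f(t) = sin(t)·u(t), for which the limit works out to (1/4)cos τ (this is derived in Example 2.29 below); T and the grid size are ad hoc choices.

```python
import numpy as np

def avg_autocorr(f, tau, T, n=2_000_001):
    # (1/2T) * integral_{-T}^{T} f(s) f(s + tau) ds, trapezoid rule
    s = np.linspace(-T, T, n)
    y = f(s) * f(s + tau)
    integral = np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2.0
    return integral / (2.0 * T)

f = lambda t: np.where(t > 0.0, np.sin(t), 0.0)   # causal sine

T = 20000.0   # stands in for the T -> infinity limit
for tau in (0.0, 1.0, 2.5):
    print(tau, avg_autocorr(f, tau, T), 0.25 * np.cos(tau))
```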
Example 2.28

Consider the step function,

u(t) = { 0, if t < 0; 1, if 0 < t }.

For 0 ≤ τ,

ρ_u(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} u(s) u(s + τ) ds = lim_{T→∞} (1/2T) ∫_{0}^{T} ds = 1/2.

Because the step function is a real function, its average autocorrelation function must be an even function. Thus, for all τ,

ρ_u(τ) = 1/2,

showing that the step function is a finite power function. Its average power, ρ_u(0), is equal to 1/2, and its power spectrum is

P(ω) = F[1/2]|_ω = π δ(ω).

Example 2.29

Consider now the function

f(t) = { 0, if t ≤ 0; sin t, if 0 ≤ t }.

For 0 ≤ τ,

ρ_f(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} f(s) f(s + τ) ds
       = lim_{T→∞} (1/2T) ∫_{0}^{T} sin(s) sin(s + τ) ds
       = lim_{T→∞} (1/2T) ∫_{0}^{T} sin(s) [sin(s) cos(τ) + cos(s) sin(τ)] ds
       = lim_{T→∞} (1/2T) [ cos(τ) ∫_{0}^{T} sin²(s) ds + sin(τ) ∫_{0}^{T} sin(s) cos(s) ds ]
       = lim_{T→∞} (1/2T) [ cos(τ) ( T/2 − sin(2T)/4 ) + sin(τ) sin²(T)/2 ]
       = (1/4) cos(τ).

Because ρ_f(τ) is even, ρ_f(τ) = (1/4) cos(τ) for all τ. The average power is

ρ_f(0) = 1/4,

and the power spectrum is

P(ω) = F[(1/4) cos(t)]|_ω = (π/4) [δ(ω − 1) + δ(ω + 1)].

2.3.9 Periodic Functions

Let 0 < p < ∞. A function, f(t), is periodic (with period p) if

f(t + p) = f(t)

for every real value of t. The Fourier series, FS[f], for such a function is given by

FS[f]|_t = Σ_{n=−∞}^{∞} c_n e^{jnΔωt},  (2.81)

where

Δω = 2π/p

and, for each n,

c_n = (1/p) ∫_{period} f(t) e^{−jnΔωt} dt.  (2.82)

(Because of the periodicity of the integrand, the integral in formula 2.82 can be evaluated over any interval of length p.) As long as f(t) is at least piecewise smooth, its Fourier series will converge, and at every value of t at which f(t) is continuous,

f(t) = Σ_{n=−∞}^{∞} c_n e^{jnΔωt}.

At points where f(t) has a "jump" discontinuity, the Fourier series converges to the midpoint of the jump. In any immediate neighborhood of a jump discontinuity, any finite partial sum of the Fourier series,

Σ_{n=−N}^{N} c_n e^{jnΔωt},
will oscillate wildly and will, at points, significantly over- and undershoot the actual value of f(t) ("ringing," or the Gibbs phenomenon).

Because periodic functions are not at all integrable over the entire real line, the standard integral formula, formula 2.1, cannot be used to find the Fourier transform of f(t). Using the generalized theory, however, it can be shown that, as generalized functions,

f(t) = Σ_{n=−∞}^{∞} c_n e^{jnΔωt},

and that the Fourier transform of f(t) is given by

F(ω) = F[ Σ_{n=−∞}^{∞} c_n e^{jnΔωt} ]|_ω = Σ_{n=−∞}^{∞} c_n F[e^{jnΔωt}]|_ω = Σ_{n=−∞}^{∞} c_n 2π δ(ω − nΔω).  (2.83)

It should be noted that F(ω) is a regular array of delta functions with spacing inversely proportional to the period of f(t) (see Section 2.3.10).

If f(t) is periodic (with period p), then f(t) is a finite power function (but is not, unless f(t) is the zero function, a finite energy function). The average autocorrelation, ρ_f(τ), will also be periodic and have period p. Formula 2.75 reduces to

ρ_f(τ) = (1/p) ∫_{period} f*(s) f(s + τ) ds.  (2.84)

Because ρ_f(τ) is periodic, it can also be expanded as a Fourier series,

ρ_f(τ) = Σ_{n=−∞}^{∞} a_n e^{jnΔωτ},  (2.85)

and the power spectrum is the regular array of delta functions,

P(ω) = Σ_{n=−∞}^{∞} a_n 2π δ(ω − nΔω).

A useful relation between the Fourier coefficients of ρ_f(τ),

a_n = (1/p) ∫_{period} ρ_f(τ) e^{−jnΔωτ} dτ,  (2.86)

and the Fourier coefficients of f(t),

c_n = (1/p) ∫_{period} f(t) e^{−jnΔωt} dt,  (2.87)

is easily derived. Inserting formula 2.84 for ρ_f(τ) into formula 2.86, rearranging, and using the substitution t = s + τ,

a_n = (1/p) ∫_{period} [ (1/p) ∫_{period} f*(s) f(s + τ) ds ] e^{−jnΔωτ} dτ
    = (1/p) ∫_{period} f*(s) [ (1/p) ∫_{period} f(s + τ) e^{−jnΔωτ} dτ ] ds
    = (1/p) ∫_{period} f*(s) [ (1/p) ∫_{period} f(t) e^{−jnΔω(t−s)} dt ] ds
    = [ (1/p) ∫_{period} f*(s) e^{jnΔωs} ds ] [ (1/p) ∫_{period} f(t) e^{−jnΔωt} dt ]
    = c_n* c_n.

Thus, a_n = |c_n|². In summary, if f(t) is periodic with period p, then so is its average autocorrelation function, ρ_f(τ). Moreover (as generalized functions),

f(t) = Σ_{n=−∞}^{∞} c_n e^{jnΔωt},  (2.88)

F(ω) = 2π Σ_{n=−∞}^{∞} c_n δ(ω − nΔω),  (2.89)

ρ_f(τ) = Σ_{n=−∞}^{∞} |c_n|² e^{jnΔωτ},  (2.90)

and

P(ω) = 2π Σ_{n=−∞}^{∞} |c_n|² δ(ω − nΔω),  (2.91)

where F(ω) is the Fourier transform of f(t), P(ω) is the power spectrum of f(t),

Δω = 2π/p,  (2.92)

and, for each n,

c_n = (1/p) ∫_{period} f(t) e^{−jnΔωt} dt.  (2.93)

Analogous formulas are valid if G(ω) is a periodic function with period P. In particular, its inverse transform is

g(t) = Σ_{k=−∞}^{∞} C_k δ(t − kΔt),  (2.94)
where

Δt = 2π/P

and, for each k,

C_k = (1/P) ∫_{period} G(ω) e^{jkΔtω} dω.

Again, because of periodicity, the integral can be evaluated over any interval of length P.

Example 2.30: Fourier Series and Transform of a Periodic Function

Consider the "saw" function,

saw(t) = { t, if −1 ≤ t < 1; saw(t + 2), for all t }.

The graph of this saw function is sketched in Figure 2.4. Here, because the period is p = 2, formula 2.92 becomes

Δω = 2π/p = π,

and formula 2.93 becomes

c_n = (1/2) ∫_{−1}^{1} t e^{−jnπt} dt = { 0, if n = 0; j(−1)ⁿ/(nπ), if n = ±1, ±2, ±3, ... }.

Using Equations 2.88 and 2.89,

saw(t) = Σ_{n≠0} (−1)ⁿ (j/(nπ)) e^{jnπt}

and

F[saw(t)]|_ω = j Σ_{n≠0} (−1)ⁿ (2/n) δ(ω − nπ).

The graph of the Nth partial sum approximation to saw(t),

Σ_{n=−N, n≠0}^{N} (−1)ⁿ (j/(nπ)) e^{jnπt},

is sketched in Figure 2.5 (with N = 20), and the graph of the imaginary part of F[saw(t)]|_ω is sketched in Figure 2.6. The Gibbs phenomenon is evident in Figure 2.5. Formulas 2.90 and 2.91 for the autocorrelation function, ρ_saw(τ), and the power spectrum, P(ω), yield

ρ_saw(τ) = (1/π²) Σ_{n≠0} (1/n²) e^{jnπτ}

and

P(ω) = (2/π) Σ_{n≠0} (1/n²) δ(ω − nπ).

[Figure 2.4 — The saw function. Figure 2.5 — Partial sum of the saw function's Fourier series. Figure 2.6 — Fourier transform of the saw function (imaginary part).]

2.3.10 Regular Arrays of Delta Functions

Let Δx > 0. A function φ(x) is called a regular array of delta functions (with spacing Δx) if

φ(x) = Σ_{n=−∞}^{∞} f_n δ(x − nΔx),
where the f_n's denote fixed values. Such arrays arise in sampling and as transforms of periodic functions. They are also useful in describing discrete probability distributions (see Examples 2.32 and 2.33 below).

Example 2.31

The transform of the saw function from Example 2.30,

F[saw(t)]|_ω = j Σ_{n≠0} (−1)ⁿ (2/n) δ(ω − nπ),

is a regular array of delta functions with spacing Δω = π.

Let f(t) be a function with Fourier transform F(ω). A straightforward extension and restatement of the results in Section 2.3.9 is that f(t) is periodic if and only if F(ω) is a regular array of delta functions. The period, p, of f(t), and the spacing, Δω, of F(ω) are related by

p Δω = 2π.

Moreover,

f(t) = (1/2π) Σ_{n=−∞}^{∞} F_n e^{jnΔωt}

and

F(ω) = Σ_{n=−∞}^{∞} F_n δ(ω − nΔω),

where, for each n,

F_n = (2π/p) ∫_{period} f(t) e^{−jnΔωt} dt.  (2.95)

Conversely, if g(t) is a function with Fourier transform G(ω), then g(t) is a regular array of delta functions if and only if G(ω) is periodic. The spacing, Δt, of g(t), and the period, P, of G(ω), are related by

P Δt = 2π.

Moreover,

g(t) = Σ_{k=−∞}^{∞} g_k δ(t − kΔt)

and

G(ω) = Σ_{k=−∞}^{∞} g_k e^{−jkΔtω},

which is clearly periodic with period P, where, for each k,

g_k = (1/P) ∫_{period} G(ω) e^{jkΔtω} dω.

Example 2.32

For any λ > 0, the corresponding Poisson probability distribution is given by

f_λ(t) = e^{−λ} Σ_{n=0}^{∞} (λⁿ/n!) δ(t − n).

Its Fourier transform, ψ_λ(ω), is given by

ψ_λ(ω) = e^{−λ} Σ_{n=0}^{∞} (λⁿ/n!) e^{−jnω}.

Recalling the Taylor series for the exponential,

ψ_λ(ω) = e^{−λ} Σ_{n=0}^{∞} (1/n!) (λ e^{−jω})ⁿ = e^{−λ} e^{λe^{−jω}} = e^{−λ(1 − cos ω + j sin ω)},

which is clearly a periodic function with period P = 2π. It can also be seen that the amplitude, A(ω), and the phase, Θ(ω), of ψ_λ(ω) are given by

A(ω) = e^{−λ(1 − cos ω)} and Θ(ω) = −λ sin ω.

Example 2.33

For any nonnegative integer, n, and 0 ≤ p ≤ 1, the corresponding binomial probability distribution is given by

b_{n,p}(t) = Σ_{k=0}^{n} (n choose k) p^k q^{n−k} δ(t − k),

where q = 1 − p. The Fourier transform of b_{n,p} is given by

B_{n,p}(ω) = Σ_{k=0}^{n} (n choose k) p^k q^{n−k} e^{−jkω} = Σ_{k=0}^{n} (n choose k) (p e^{−jω})^k q^{n−k}.

By the binomial theorem, this can be rewritten as

B_{n,p}(ω) = (p e^{−jω} + q)ⁿ,

which is clearly periodic with period P = 2π.
Example 2.34

A regular array of delta functions,

g(t) = Σ_{k=−∞}^{∞} g_k δ(t − kΔt),

cannot be a finite energy function (unless all the g_k's vanish) but, if the g_k's are bounded, can be treated as a finite power function with average autocorrelation function, ρ_g(τ), and power spectrum, P(ω), given by

ρ_g(τ) = Σ_{k=−∞}^{∞} A_k δ(τ − kΔt)

and

P(ω) = Σ_{k=−∞}^{∞} A_k e^{−jkΔtω},

where

A_k = lim_{M→∞} (1/(2MΔt)) Σ_{m=−M}^{M} g_m* g_{m+k}.

It should be noted, however, that if

Σ_{m=−∞}^{∞} |g_m|² < ∞,

then the A_k's will all be zero.

2.3.11 Periodic Arrays of Delta Functions

Regular periodic arrays of delta functions are of considerable importance because the formulas for the discrete Fourier transforms can be based directly on formulas derived in computing transforms of regular arrays that are also periodic. For an array with spacing Δx,

φ(x) = Σ_{k=−∞}^{∞} f_k δ(x − kΔx),

to also be periodic with period p,

φ(x + p) = φ(x),

it is necessary that there be a positive integer, N, called the index period, such that

f_{k+N} = f_k for all k.

The index period, spacing, and period of φ(x) are related by

period of φ(x) = (index period of φ(x)) × (spacing of φ(x)).

The regular periodic array,

f(t) = Σ_{k=−∞}^{∞} f_k δ(t − kΔt),

with spacing Δt = 1/2, index period N = 4, and (f₀, f₁, f₂, f₃) = (1, 2, 3, 3), is sketched in Figure 2.7. Note that f₄ = f₀, f₅ = f₁, ..., and that the period of f(t) is 4Δt = 2.

[Figure 2.7 — A regular periodic array of delta functions.]

Let

f(t) = Σ_{k=−∞}^{∞} f_k δ(t − kΔt)

be a regular periodic array with spacing Δt, index period N, and period p = NΔt. From the discussion in Section 2.3.10 on regular arrays, it is evident that the Fourier transform of f(t) is also a regular periodic array of delta functions,

F(ω) = Σ_{n=−∞}^{∞} F_n δ(ω − nΔω).  (2.96)

Also, f(t) can be expressed as a corresponding Fourier series,

f(t) = (1/2π) Σ_{n=−∞}^{∞} F_n e^{jnΔωt}.  (2.97)

The spacing, Δω, and period, P, of F(ω) are related to the spacing, Δt, and period, p, of f(t) by

Δω = 2π/p and P = 2π/Δt.

The index period, M, of F(ω) is given by

M = P/Δω = (2π/Δt)/(2π/p) = p/Δt = N.

Using Equation 2.95,

F_n = (2π/p) ∫_{t=−Δt/2}^{p−Δt/2} ( Σ_{k=−∞}^{∞} f_k δ(t − kΔt) ) e^{−jnΔωt} dt.  (2.98)
But, as is easily verified,

∫_{t=−Δt/2}^{p−Δt/2} δ(t − kΔt) e^{−jnΔωt} dt = { e^{−jnkΔωΔt}, if 0 ≤ k ≤ N − 1; 0, otherwise },

and

ΔωΔt = (2π/p)Δt = 2π/N.

Thus, Equation 2.98 reduces to

F_n = (2π/(NΔt)) Σ_{k=0}^{N−1} f_k e^{−j(2π/N)nk}.  (2.99)

A similar set of calculations yields the inverse relation,

f_k = (1/(NΔω)) Σ_{n=0}^{N−1} F_n e^{j(2π/N)kn}.  (2.100)

Formulas for the autocorrelation function, ρ_f(τ), and the power spectrum, P(ω), follow immediately from the above and the discussion in Sections 2.3.9 and 2.3.10. They are

ρ_f(τ) = Σ_{k=−∞}^{∞} A_k δ(τ − kΔt),  (2.101)

where

A_k = (1/(NΔt)) Σ_{m=0}^{N−1} f_m* f_{m+k},  (2.102)

and

P(ω) = (1/2π) Σ_{n=−∞}^{∞} |F_n|² δ(ω − nΔω).  (2.103)

Example 2.35: The Comb Function

For each Δx > 0, the corresponding comb function is

comb_Δx(x) = Σ_{k=−∞}^{∞} δ(x − kΔx).

With index period N = 1 and with the spacing equal to the period, the comb function is the simplest possible nonzero regular periodic array. By the above discussion,

F(ω) = F[comb_Δt(t)]|_ω

must also be a regular periodic array,

F(ω) = Σ_{n=−∞}^{∞} F_n δ(ω − nΔω),

where

Δω = 2π/Δt.

Because the index period of F(ω) must also be N = 1,

F_n = F_0 = (2π/(NΔt)) Σ_{k=0}^{0} f_k e^{−j(2π/N)·0·k} = 2π/Δt = Δω

for all n. Combining the last few equations gives

F[comb_Δt(t)]|_ω = Σ_{n=−∞}^{∞} Δω δ(ω − nΔω) = Δω comb_Δω(ω),

where Δω = 2π/Δt. From formulas 2.101 through 2.103, the average autocorrelation function and the power spectrum for comb_Δt(t) are given by

ρ(τ) = (1/Δt) Σ_{k=−∞}^{∞} δ(τ − kΔt) = (1/Δt) comb_Δt(τ)

and

P(ω) = (Δω/Δt) Σ_{n=−∞}^{∞} δ(ω − nΔω) = (Δω/Δt) comb_Δω(ω).

In addition, using Equation 2.97, the comb function can be expressed as a Fourier series,

comb_Δt(t) = (1/2π) Σ_{n=−∞}^{∞} Δω e^{jnΔωt} = (1/Δt) Σ_{n=−∞}^{∞} e^{jnΔωt}.
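Up to the leading constants, formulas 2.99 and 2.100 are the usual discrete Fourier transform pair. The sketch below (my addition) checks them on the array of Figure 2.7 (Δt = 1/2, N = 4, (f₀, ..., f₃) = (1, 2, 3, 3)), and cross-checks 2.99 against NumPy's FFT, which uses the same exponent convention:

```python
import numpy as np

dt = 0.5
fk = np.array([1.0, 2.0, 3.0, 3.0])   # one index period of the array
N = len(fk)
p = N * dt
dw = 2.0 * np.pi / p

n = np.arange(N)
k = np.arange(N)

# F_n = (2 pi / (N dt)) * sum_k f_k e^{-j (2 pi/N) n k}        (2.99)
Fn = (2.0 * np.pi / (N * dt)) * (np.exp(-2j * np.pi * np.outer(n, k) / N) @ fk)

# f_k = (1 / (N dw)) * sum_n F_n e^{+j (2 pi/N) k n}           (2.100)
fk_back = (np.exp(2j * np.pi * np.outer(k, n) / N) @ Fn) / (N * dw)

print(Fn)             # F_0 = (2 pi / 2) * (1+2+3+3) = 9 pi
print(fk_back.real)   # recovers (1, 2, 3, 3)
```

Since F_{n+N} = F_n, the four values above describe the entire array F(ω).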
2.3.12 Powers of Variables and Derivatives of Delta Functions

In Example 2.4 it was shown that, for any real value of a,

F[e^{jat}]|_ω = 2π δ(ω − a).

Letting a = 0, this gives

F[1]|_ω = 2π δ(ω),

and, by symmetry or near equivalence,

F[δ(t)]|_ω = 1.

Now, let n be any nonnegative integer. Because, trivially, tⁿ = tⁿ · 1, it immediately follows from an application of identities 2.50 through 2.53 that

F[tⁿ]|_ω = jⁿ 2π δ^{(n)}(ω),  (2.104)

F^{−1}[ωⁿ]|_t = (−j)ⁿ δ^{(n)}(t),  (2.105)

F[δ^{(n)}(t)]|_ω = (jω)ⁿ,  (2.106)

and

F^{−1}[δ^{(n)}(ω)]|_t = (−jt)ⁿ/(2π),  (2.107)

where δ^{(n)}(x) is the nth (generalized) derivative of the delta function.

2.3.13 Negative Powers and Step Functions

The basic relation between step functions and negative powers is

F[sgn(t)]|_ω = −j(2/ω),  (2.108)

where sgn(t) is the signum function,

sgn(t) = { −1, if t < 0; +1, if 0 < t }.

Because the step function,

u(t) = { 0, if t < 0; 1, if 0 < t },

can be written in terms of the signum function,

u(t) = (1/2)[sgn(t) + 1],

formula 2.108 is equivalent to

F[u(t)]|_ω = π δ(ω) − j(1/ω).  (2.109)

A number of useful formulas can be easily derived from Equations 2.108 and 2.109 with the aid of various identities from Section 2.2. Some of these formulas are

F[t^{−1}]|_ω = −jπ sgn(ω),  (2.110)

F[t^{−n}]|_ω = −jπ ((−jω)^{n−1}/(n − 1)!) sgn(ω),  (2.111)

F[|t|]|_ω = −2/ω²,  (2.112)

F[tⁿ sgn(t)]|_ω = (−j)^{n+1} (2·n!)/ω^{n+1},  (2.113)

F[ramp(t)]|_ω = jπ δ′(ω) − 1/ω²,  (2.114)

and

F[tⁿ u(t)]|_ω = jⁿ π δ^{(n)}(ω) + (−j)^{n+1} n!/ω^{n+1}.  (2.115)

In these formulas n denotes an arbitrary positive integer.

Derivations of formulas 2.108 and 2.109 are easily obtained. One derivation starts with the observation that, for any a < 0,

u(t) = ∫_{a}^{t} δ(s) ds.

By identity 2.57, with f(t) = δ(t) and F(ω) = F[δ(t)]|_ω = 1,

F[u(t)]|_ω = F[∫_{a}^{t} f(s) ds]|_ω = −j(1/ω) F(ω) + c δ(ω) = −j(1/ω) + c δ(ω),  (2.116)

where c is some constant. From this,

F[sgn(t)]|_ω = F[2u(t) − 1]|_ω = 2[−j(1/ω) + c δ(ω)] − 2π δ(ω) = −j(2/ω) + 2(c − π) δ(ω).  (2.117)

Because sgn(t) is an odd function, so is F[sgn(t)]|_ω and, hence, so is the right-hand side of Equation 2.117. But, because the delta function is even, this is possible only if c = π. Plugging this only possible choice for c into Equations 2.116 and 2.117 gives formulas 2.108 and 2.109.

Example 2.36: Derivation of Formulas 2.112 and 2.113

Using identity 2.52,

F[tⁿ sgn(t)]|_ω = jⁿ (dⁿ/dωⁿ) F[sgn(t)]|_ω = jⁿ (dⁿ/dωⁿ) (−j(2/ω)) = (−j)^{n+1} (2·n!)/ω^{n+1}.
Using this and the observation that

|t| = t sgn(t),

it immediately follows that

F[|t|]|_ω = F[t sgn(t)]|_ω = (−j)^{1+1} (2·1!)/ω^{1+1} = −2/ω².

One technical flaw in the above discussion should be noted. If f(x) is any function continuous at x = 0, and n ≥ 1, then, from a strict mathematical point of view, the function x^{−n} f(x) is not integrable over any interval containing x = 0. Because of this, it is not possible to define F[t^{−n}]|_ω or F^{−1}[ω^{−n}]|_t via the classical integral formulas. Neither is it possible for the function x^{−n} to be treated as a generalized function. However, the function ln|x| is integrable over any finite interval and can be treated as a legitimate generalized function, as can any of its generalized derivatives (as defined in Section 2.2.11). It is possible to justify rigorously the formulas given in this section, as well as any other standard use of x^{−n}, by agreeing that x^{−1} is actually a symbol for the generalized derivative of ln|x| and that, more generally, for any positive integer n, x^{−n} is a symbol for

((−1)^{n−1}/(n − 1)!) (dⁿ/dxⁿ) ln|x|,

where the derivatives are taken in the generalized sense as described in Section 2.2.11.

2.3.14 Rational Functions

Rational functions often turn out to be the transforms of functions of interest. The simplest nontrivial rational function is given by

F(ω) = 1/(ω − λ)^m,

where m is a positive integer and λ is some complex constant. Using the elementary identities and the material from the previous section, it can be directly verified that

F^{−1}[1/(ω − λ)^m]|_t = j e^{jλt} G_a(t) (jt)^{m−1}/(m − 1)!,  (2.118)

where a is the imaginary part of λ and

G_a(t) = { u(t), if 0 < a; (1/2) sgn(t), if a = 0; −u(−t), if a < 0 }.

More generally, if F(ω) is any rational function, then F(ω) can be written

F(ω) = P(ω) + R(ω),

where P(ω) is a polynomial,

P(ω) = Σ_{n=0}^{N} c_n ωⁿ,

and R(ω) is the quotient of two polynomials,

R(ω) = N(ω)/D(ω),

in which the degree of the numerator is strictly less than the degree of the denominator. According to formula 2.105, the inverse transform of P(ω) is simply a linear combination of derivatives of delta functions,

F^{−1}[P(ω)]|_t = Σ_{n=0}^{N} (−j)ⁿ c_n δ^{(n)}(t).

Letting λ₁, λ₂, ..., λ_K be the distinct roots of D(ω) and M₁, M₂, ..., M_K the corresponding multiplicities of the roots, R(ω) can be written in the partial fraction expansion,

R(ω) = Σ_{k=1}^{K} Σ_{m=1}^{M_k} a_{k,m}/(ω − λ_k)^m.

Thus, applying formula 2.118,

F^{−1}[R(ω)]|_t = j Σ_{k=1}^{K} e^{jλ_k t} G_{a_k}(t) Σ_{m=1}^{M_k} a_{k,m} (jt)^{m−1}/(m − 1)!,  (2.119)

where, for each k, a_k is the imaginary part of λ_k. Fourier transforms of rational functions can be computed using the same approach as just described for inverse transforms of rational functions.

Example 2.37

Let

F(ω) = N(ω)/D(ω) = (5ω + 9 − 10j)/(ω² − 4jω − 13).

Using the quadratic formula, the roots of D(ω) are found to be

λ = [4j ± √((4j)² + 4(13))]/2 = ±3 + 2j.
F(ω) can then be expanded,

F(ω) = (5ω + 9 − 10j)/(ω² − 4jω − 13) = A/(ω − (3 + 2j)) + B/(ω − (−3 + 2j)).

Solving for A and B gives

F(ω) = 4/(ω − (3 + 2j)) + 1/(ω − (−3 + 2j)),

whose inverse transform can be computed directly from formula 2.119,

f(t) = j[4 e^{j(3+2j)t} G₂(t) + e^{j(−3+2j)t} G₂(t)]
     = 4j e^{(−2+3j)t} u(t) + j e^{(−2−3j)t} u(t)
     = j[4 e^{j3t} + e^{−j3t}] e^{−2t} u(t).

2.3.15 Causal Functions

A function, f(t), is said to be "causal" if

f(t) = 0 whenever t < 0.

Such functions arise in the study of causal systems and are of obvious importance in describing phenomena that have well-defined "starting points." Let f(t) be a real causal function with Fourier transform F(ω), and let R(ω) and I(ω) be the real and imaginary parts of F(ω),

F(ω) = R(ω) + jI(ω).

Then R(ω) is even, I(ω) is odd, and, provided the integrals are suitably well defined,

f(t) = (2/π) ∫_{0}^{∞} R(ω) cos(ωt) dω for 0 < t  (2.120)

and

f(t) = −(2/π) ∫_{0}^{∞} I(ω) sin(ωt) dω for 0 < t.  (2.121)

In addition,

∫_{0}^{∞} |f(t)|² dt = (1/π) ∫_{−∞}^{∞} |R(ω)|² dω  (2.122)

and

∫_{0}^{∞} |f(t)|² dt = (1/π) ∫_{−∞}^{∞} |I(ω)|² dω.  (2.123)

In addition, if f(t) is bounded at the origin, then, provided the integrals exist,

R(ω) = (1/π) ∫_{−∞}^{∞} I(s)/(ω − s) ds  (2.124)

and

I(ω) = −(1/π) ∫_{−∞}^{∞} R(s)/(ω − s) ds.  (2.125)

The last two integrals are Hilbert transforms and may be defined using CPVs (see Section 2.1.6). Conversely, it can be shown that if R(ω) and I(ω) are real-valued functions (with R(ω) even and I(ω) odd) satisfying either Equation 2.124 or Equation 2.125, then f(t) = F^{−1}[R(ω) + jI(ω)]|_t must be a causal function.

Derivations of Equations 2.120 through 2.125 are quite straightforward. First, observe that, because f(t) vanishes for negative values of t,

f(t) = 2f_e(t) = 2f_o(t) for 0 < t,

where f_e(t) and f_o(t) are the even and odd components of f(t). Equations 2.120 and 2.121 then follow immediately from Equations 2.69 and 2.70, while Equations 2.122 and 2.123 are simply Bessel's equality combined with equations from Section 2.3.1 and the subsequent observation that

∫_{0}^{∞} |f(t)|² dt = 2 ∫_{−∞}^{∞} |f_e(t)|² dt = 2 ∫_{−∞}^{∞} |f_o(t)|² dt.

Finally, for Equation 2.124, observe that

f_e(t) = f_o(t) sgn(t) and f_o(t) = f_e(t) sgn(t).

Thus, using results from Sections 2.3.1, 2.2.9, and 2.3.13,

R(ω) = F[f_e(t)]|_ω
     = F[f_o(t) sgn(t)]|_ω
     = (1/2π) F[f_o(t)]|_ω * F[sgn(t)]|_ω
     = (1/2π) [jI(ω)] * [−j(2/ω)]
     = (1/π) ∫_{−∞}^{∞} I(s)/(ω − s) ds,

which is Equation 2.124. Similar computations yield Equation 2.125.
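For a concrete causal function, take f(t) = e^{−t} u(t), so that F(ω) = 1/(1 + jω), R(ω) = 1/(1 + ω²), and I(ω) = −ω/(1 + ω²). The sketch below is my own addition (the truncation limits are ad hoc): it checks the energy identity 2.122 and the reconstruction formula 2.120 at t = 1.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# f(t) = e^{-t} u(t)  =>  R(w) = 1/(1 + w^2)
t = np.linspace(0.0, 50.0, 500001)
energy = trapz(np.exp(-2.0 * t), t)          # integral_0^inf |f|^2 dt = 1/2

w = np.linspace(-2000.0, 2000.0, 4000001)
R_sq = (1.0 / (1.0 + w**2))**2
energy_from_R = trapz(R_sq, w) / np.pi       # (1/pi) * integral |R|^2 dw  (2.122)

w0 = np.linspace(0.0, 2000.0, 2000001)
f_at_1 = (2.0 / np.pi) * trapz(np.cos(w0) / (1.0 + w0**2), w0)   # (2.120), t = 1

print(energy, energy_from_R, f_at_1)         # ~ 0.5, 0.5, e^{-1}
```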
Example 2.38

Assume f(t) is a causal function whose transform, F(ω), has real part

R(ω) = δ(ω − a) + δ(ω + a),

for some a > 0. Then, according to formula 2.120, for t > 0,

f(t) = (2/π) ∫_{0}^{∞} [δ(ω − a) + δ(ω + a)] cos(ωt) dω = (2/π) cos(at),

and, by formula 2.125,

I(ω) = −(1/π) ∫_{−∞}^{∞} [δ(s − a) + δ(s + a)]/(ω − s) ds
     = −1/(π(ω − a)) − 1/(π(ω + a))
     = 2ω/(π(a² − ω²)).

Thus,

f(t) = (2/π) cos(at) u(t)

and

F(ω) = δ(ω − a) + δ(ω + a) + j 2ω/(π(a² − ω²)).

2.3.16 Functions on the Half-Line

Strictly speaking, functions defined only on the half-line, 0 < t < ∞, do not have Fourier transforms. Fourier analysis in problems involving such functions can be done by first extending the functions (i.e., systematically defining the values of the functions at negative values of t), and then taking the Fourier transforms of the extensions. The choice of extension will depend on the problem at hand and the preferences of the individual. Three of the most commonly used extensions are the null extension, the even extension, and the odd extension. Given a function, f(t), defined only for 0 < t, the null extension is

f_null(t) = { f(t), if 0 < t; 0, if t < 0 },

the even extension is

f_even(t) = { f(t), if 0 < t; f(−t), if t < 0 },

and the odd extension is

f_odd(t) = { f(t), if 0 < t; −f(−t), if t < 0 }.

If f(t) is reasonably well behaved (say, continuous and differentiable) on 0 < t, then any of the above extensions will be similarly well behaved on both 0 < t and t < 0. At t = 0, however, the extended function is likely to have singularities that must be taken into account, especially if transforms of the derivatives are to be taken. It is recommended that the generalized derivative be explicitly used.

Assume, for example, that f(t) and its first two derivatives are continuous on 0 < t, and that the limits

f(0) = lim_{t→0+} f(t) and f′(0) = lim_{t→0+} f′(t)

exist. Let f̂(t) be any of the above extensions of f(t), and, for convenience, let df̂/dt and Df̂ denote, respectively, the classical and generalized derivatives of f̂(t). Recalling the relation between the classical and generalized derivatives (see Section 2.2.11),

Df̂ = df̂/dt + J₀ δ(t)

and

D²f̂ = d²f̂/dt² + J₀ δ′(t) + J₁ δ(t),

where J₀ and J₁ are the "jumps" in f̂(t) and f̂′(t) at t = 0,

J₀ = lim_{t→0+} [f̂(t) − f̂(−t)] and J₁ = lim_{t→0+} [f̂′(t) − f̂′(−t)].

Computing these jumps for the extensions yields the following:

Df_null = df_null/dt + f(0) δ(t),  (2.126)

D²f_null = d²f_null/dt² + f(0) δ′(t) + f′(0) δ(t),  (2.127)

Df_even = df_even/dt,  (2.128)

D²f_even = d²f_even/dt² + 2f′(0) δ(t),  (2.129)

Df_odd = df_odd/dt + 2f(0) δ(t),  (2.130)
and

D²f_odd = d²f_odd/dt² + 2f(0) δ′(t).  (2.131)
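The jump coefficients appearing in formulas 2.126 through 2.131 can be checked with finite differences. The sketch below is mine, not the handbook's: it uses f(t) = e^{−t}, so f(0) = 1 and f′(0) = −1, with arbitrary step sizes.

```python
import math

f = lambda t: math.exp(-t)   # sample f on 0 < t:  f(0) = 1, f'(0) = -1

def f_null(t): return f(t) if t > 0 else 0.0
def f_even(t): return f(t) if t > 0 else f(-t)
def f_odd(t):  return f(t) if t > 0 else -f(-t)

def jump(g, h=1e-6):
    # J0: jump in g itself across t = 0
    return g(h) - g(-h)

def deriv_jump(g, h=1e-3, d=1e-5):
    # J1: jump in the classical derivative g' across t = 0
    right = (g(h + d) - g(h - d)) / (2.0 * d)
    left = (g(-h + d) - g(-h - d)) / (2.0 * d)
    return right - left

for name, g in (("null", f_null), ("even", f_even), ("odd", f_odd)):
    print(name, jump(g), deriv_jump(g))
# expected J0: f(0), 0, 2f(0)   and   J1: f'(0), 2f'(0), 0
```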
An example of the use of Fourier transforms in problems on the half-line is given in Section 2.8.4. This example also illustrates how the data in the problem determine the appropriate extension for the problem.

2.3.17 Functions on Finite Intervals

If a function, f(t), is defined only on a finite interval, 0 < t < L, then it can be expanded into any of a number of "Fourier series" over the interval. These series equal f(t) over the interval but are defined on the entire real line. Thus, each series corresponds to a particular extension of f(t) to a function defined for all real values of t, and, with care, Fourier analysis can be done using the series in place of the original functions. Among the best known "Fourier series" for such functions are the sine series and the cosine series.

The sine series for f(t) over 0 < t < L is

S[f]|_t = Σ_{k=1}^{∞} b_k sin(kπt/L),

where

b_k = (2/L) ∫_{0}^{L} f(t) sin(kπt/L) dt.

This series can be viewed as an odd periodic extension of f(t). The Fourier transform of the sine series is

F[S[f]|_t]|_ω = −jπ Σ_{k=1}^{∞} b_k [δ(ω − kπ/L) − δ(ω + kπ/L)] = Σ_{k=−∞}^{∞} B_k δ(ω − kπ/L),

where

B_k = { −jπ b_k, if 0 < k; 0, if k = 0; jπ b_{−k}, if k < 0 }.

The cosine series for f(t) over 0 < t < L is

C[f]|_t = a₀ + Σ_{k=1}^{∞} a_k cos(kπt/L),

where

a₀ = (1/L) ∫_{0}^{L} f(t) dt

and, for k ≠ 0,

a_k = (2/L) ∫_{0}^{L} f(t) cos(kπt/L) dt.

This series can be viewed as an even periodic extension of f(t). The Fourier transform of the cosine series is

F[C[f]|_t]|_ω = 2π a₀ δ(ω) + π Σ_{k=1}^{∞} a_k [δ(ω − kπ/L) + δ(ω + kπ/L)] = Σ_{k=−∞}^{∞} A_k δ(ω − kπ/L),

where

A_k = { π a_k, if 0 < k; 2π a₀, if k = 0; π a_{−k}, if k < 0 }.

The choice of which series to use depends strongly on the actual problem at hand. For example, because the sine functions in the sine series expansion vanish at t = 0 and t = L, sine series expansions tend to be most useful when the functions of interest are to vanish at both of the end points of the interval. For problems in which the first derivatives are expected to vanish at both end points, the cosine series tends to be a better choice. Other boundary conditions suggest other choices for the appropriate Fourier series. In addition, the equations to be satisfied must be considered in choosing the series to be used. Unfortunately, the development of a reasonably complete criterion for choosing the appropriate "Fourier series" in general goes beyond the scope of this chapter. It is recommended that texts covering eigenfunction expansions and Sturm–Liouville problems be consulted.*
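As a concrete instance of the sine series, take f(t) = t on 0 < t < L; a standard integration by parts (not worked in the text) gives b_k = 2L(−1)^{k+1}/(kπ). The sketch below, my own addition, confirms the coefficients by quadrature and evaluates a partial sum at an interior point:

```python
import numpy as np

L = 2.0
t = np.linspace(0.0, L, 200001)
fvals = t                          # f(t) = t on (0, L)

def b(k):
    # b_k = (2/L) * integral_0^L f(t) sin(k pi t / L) dt, trapezoid rule
    y = fvals * np.sin(k * np.pi * t / L)
    return (2.0 / L) * np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0

for k in (1, 2, 3):
    print(k, b(k), 2.0 * L * (-1.0) ** (k + 1) / (k * np.pi))

t0 = 0.6
S = sum(b(k) * np.sin(k * np.pi * t0 / L) for k in range(1, 400))
print(S)   # partial sum of S[f] at t0; approaches f(t0) = 0.6
```

Note that the convergence near t = L is slow: the odd periodic extension of f(t) = t has a jump there, so the Gibbs phenomenon appears at that end point.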
2.3.18 Bessel Functions

2.3.18.1 Solutions to Bessel's Equations

For ν ≥ 0, the νth-order Bessel equation can be written as

t² y″ + t y′ + (t² − ν²) y = 0.  (2.132)
* See, for example, Boyce and DiPrima (1977), Holland (1990), or Pinsky (1991).
"Power series" solutions to this equation can be found using the method of Frobenius. From these solutions, it can be shown that the general real-valued solution to this equation on t > 0 is y(t) = c1 Jν(t) + c2 y2(t), where c1 and c2 are arbitrary real constants, Jν is the νth-order Bessel function of the first kind (which is a bounded function), and y2 is any particular real-valued solution to the Bessel equation on t > 0 that is unbounded near t = 0. Typically, one is most interested in the bounded function part of the solution to Bessel's equation, c1 Jν.

For ν = 0, after dividing out a factor of t, the Bessel equation reduces to

ty″ + y′ + ty = 0.   (2.133)

Its solution on t > 0 is y(t) = c1 J0(t) + c2 y2(t). It is easily verified that the power series formula for J0(t) actually defines J0(t) as an even, analytic function on the entire real line, and that J0(t) satisfies Equation 2.133 everywhere. It is also easily verified from the series formula for y2(t) on t > 0 that y2(|t|) is an even function satisfying Equation 2.133 for all nonzero values of t and which behaves like ln|t| near t = 0. Consequently, we can seek the Fourier transform of

y(t) = c1 J0(|t|) + c2 y2(|t|)   (2.134)

for any pair c1 and c2 by treating J0(t) and y2(|t|) as even, real-valued solutions to the Bessel equation of order zero on the real line. Taking the Fourier transform of Equation 2.133 and using the differential identities of Section 2.2.11 results in the first-order linear equation

(1 − ω²)Y′(ω) − ωY(ω) = 0,   (2.135)

where Y = ℱ[y]. The general classical solution to this equation is easily obtained via standard methods for linear, first-order differential equations. Taking into account the possible discontinuities at ω = ±1, this general solution is given by

Y(ω) = A(ω² − 1)^{−1/2} if ω < −1;  B(1 − ω²)^{−1/2} if −1 < ω < 1;  C(ω² − 1)^{−1/2} if 1 < ω,

where A, B, and C are "arbitrary" constants. However, here Y(ω) must be even and real valued since it is the Fourier transform of an even, real-valued function. This forces A, B, and C to be real constants with A = C.
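As a sanity check on Equation 2.133, the sketch below verifies numerically that J0 satisfies the zeroth-order Bessel equation, using the standard derivative identities J0′ = −J1 and J1′ = J0 − J1/t (these identities are assumed from the general theory, not quoted from this section):

```python
import numpy as np
from scipy.special import j0, j1

# Verify t*y'' + y' + t*y = 0 for y = J0 on a grid away from t = 0.
t = np.linspace(0.5, 20.0, 200)
y = j0(t)
yp = -j1(t)                     # y'  = J0' = -J1
ypp = -(j0(t) - j1(t) / t)      # y'' = -(J1)' = -(J0 - J1/t)
residual = np.max(np.abs(t * ypp + yp + t * y))
```

The residual is zero analytically, so only floating-point rounding remains.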
TABLE 2.6 (continued) Graphical Representations of Some Fourier Transforms

Note: In the original table, each entry pairs the formulas below with sketches of f(x) and F(y); the sketches are omitted here.

(2.41) f(x) = A exp(−ax) [x > 0], −A exp(ax) [x < 0];  F(y) = −2iAy/(a² + y²)

(2.42) f(x) = A exp(iy0x − a|x|);  F(y) = 2Aa/[a² + (y − y0)²]

(2.43) f(x) = A cos y0x exp(−a|x|);  F(y) = Aa{1/[a² + (y − y0)²] + 1/[a² + (y + y0)²]} = 2Aa(a² + y0² + y²)/[(a² + y0² − y²)² + 4a²y²]

(2.44) f(x) = A sin y0x exp(−a|x|);  F(y) = iAa{1/[a² + (y + y0)²] − 1/[a² + (y − y0)²]} = −4iAa y y0/[(a² + y0² − y²)² + 4a²y²]

(2.45) f(x) = A exp(iy0x − ax) [x > 0], 0 [x < 0];  F(y) = A/[a + i(y − y0)] = A[a + i(y0 − y)]/[a² + (y0 − y)²]

(2.46) f(x) = A cos y0x exp(−ax) [x > 0], 0 [x < 0];  F(y) = (A/2){a/[a² + (y + y0)²] + a/[a² + (y − y0)²] + i(y0 − y)/[a² + (y0 − y)²] − i(y0 + y)/[a² + (y0 + y)²]} = A[a(a² + y0² + y²) − iy(a² + y² − y0²)]/[(a² + y0² − y²)² + 4a²y²]

(2.47) f(x) = A sin y0x exp(−ax) [x > 0], 0 [x < 0];  F(y) = (A/2){(y0 − y)/[a² + (y0 − y)²] + (y0 + y)/[a² + (y0 + y)²] + ia/[a² + (y0 + y)²] − ia/[a² + (y0 − y)²]} = Ay0/[(a² + y0² − y²) + i2ay]

(2.48) f(x) = A [|x| < L], 0 [|x| > L];  F(y) = 2A (sin Ly)/y

(2.49) f(x) = A [a < x < b], 0 [x < a; x > b];  F(y) = (A/y)[(sin by − sin ay) − i(cos ay − cos by)] = (2A/y)(sin Ly cos Sy − i sin Ly sin Sy) = (2A/y) sin Ly exp(−iSy) = (iA/y)[exp(−iby) − exp(−iay)], where S = (a + b)/2 and L = (b − a)/2

(2.50) f(x) = A [(S − L) < |x| < (S + L)], 0 [otherwise];  F(y) = (4A/y) cos Sy sin Ly

(2.51) f(x) = A exp(iy0x) [|x| < L], 0 [|x| > L];  F(y) = 2A sin[L(y − y0)]/(y − y0)

(2.52) f(x) = A cos y0x [|x| < L], 0 [|x| > L];  F(y) = A{sin[L(y − y0)]/(y − y0) + sin[L(y + y0)]/(y + y0)}

(2.53) f(x) = A sin y0x [|x| < L], 0 [|x| > L];  F(y) = iA{sin[L(y + y0)]/(y + y0) − sin[L(y − y0)]/(y − y0)}

(2.54) f(x) = A cos y0x [|x| < π/2y0], 0 [|x| > π/2y0];  F(y) = [2Ay0/(y0² − y²)] cos(πy/2y0)  [See (2.52) with L = π/2y0.]

(2.55) f(x) = A(1 − |x|/L) [|x| < L], 0 [|x| > L];  F(y) = AL [sin(Ly/2)/(Ly/2)]²

(2.56) f(x) = Ax/L [|x| < L], 0 [|x| > L];  F(y) = (2iA/y)[cos Ly − (sin Ly)/(Ly)]

(2.57) f(x) = A|x|/L [|x| < L], 0 [|x| > L];  F(y) = 2AL{(sin Ly)/(Ly) − 2[sin(Ly/2)/(Ly)]²}

(2.58) f(x) = A exp(iy0x);  F(y) = 2πA δ(y − y0)

(2.59) f(x) = A cos y0x;  F(y) = πA{δ(y − y0) + δ(y + y0)}

(2.60) f(x) = A sin y0x;  F(y) = πiA{δ(y + y0) − δ(y − y0)}

(2.61) f(x) = A cos² y0x;  F(y) = πA{½δ(y + 2y0) + δ(y) + ½δ(y − 2y0)}

(2.62) f(x) = A sin² y0x;  F(y) = πA{−½δ(y + 2y0) + δ(y) − ½δ(y − 2y0)}

(2.63) f(x) = A|cos y0x|;  F(y) = Σ_{n=−∞}^{+∞} 4A [y0²/(y0² − y²)] cos(πy/2y0) δ(y − 2ny0)  [n = 0, ±1, ±2, ...]

(2.64) f(x) = A|sin y0x|;  F(y) = Σ_{n=−∞}^{+∞} (−1)ⁿ 4A [y0²/(y0² − y²)] cos(πy/2y0) δ(y − 2ny0)  [n = 0, ±1, ±2, ...]

(2.65) f(x) = cos y0x {A + a cos y1x} (1), cos y0x {A + a sin y1x} (2), sin y0x {A + a cos y1x} (3), sin y0x {A + a sin y1x} (4);  F(y) consists of delta functions at y = ±y0 and y = ±y0 ± y1, as shown in the original sketches

(2.66) f(x) = exp(iy0x)(A + a cos y1x);  F(y) = 2π[A δ(y − y0) + (a/2) δ(y − y0 + y1) + (a/2) δ(y − y0 − y1)]

(2.67) f(x) = exp(iy0x)(A + a sin y1x);  F(y) = 2π[A δ(y − y0) + (ia/2) δ(y − y0 + y1) − (ia/2) δ(y − y0 − y1)]

(2.68) f(x) = A δ(x);  F(y) = A

(2.69) f(x) = A δ(x − x0);  F(y) = A exp(−ix0y)

(2.70) f(x) = A{δ(x − x0) + δ(x + x0)};  F(y) = 2A cos x0y

(2.71) f(x) = Σ_{n=0}^{N−1} A δ(x − nx0 − S + (N − 1)x0/2), a set of N delta functions symmetrically placed about x = S;  F(y) = A [sin(Nyx0/2)/sin(yx0/2)] exp(−iSy)  [Drawn for S = 0; N = 7 and N = 8.]

(2.72) f(x) = Σ_{n=−∞}^{+∞} A δ(x − nx0);  F(y) = (2πA/x0) Σ_{n=−∞}^{+∞} δ(y − 2πn/x0)

(2.73) f(x) = Σ_{n=−∞}^{+∞} A δ(x − x0/2 − nx0);  F(y) = (2πA/x0) Σ_{n=−∞}^{+∞} (−1)ⁿ δ(y − 2πn/x0)

(2.74) f(x) = Σ_n δ(x − nx0){A + a cos y0x} (1), Σ_n δ(x − nx0){A + a sin y0x} (2);  F(y) = (2π/x0) Σ_n {A δ(y − 2πn/x0) + (a/2) δ(y − 2πn/x0 + y0) + (a/2) δ(y − 2πn/x0 − y0)} (1);  (2π/x0) Σ_n {A δ(y − 2πn/x0) + (ia/2) δ(y − 2πn/x0 + y0) − (ia/2) δ(y − 2πn/x0 − y0)} (2)  [n = 0, ±1, ±2, ...]

(2.75) f(x) = A;  F(y) = 2πA δ(y)

(2.76) f(x) = A sgn(x) (= +A [x > 0], −A [x < 0]);  F(y) = −2iA/y

(2.77) f(x) = A U(x) (= A [x > 0], 0 [x < 0]);  F(y) = A[π δ(y) − i/y]

(2.78) f(x) = A{1 − exp(−ax)} [x > 0], 0 [x < 0];  F(y) = πA δ(y) − A{a/(a² + y²) + i a²/[y(a² + y²)]}

(2.79) f(x) = A [|x| > L], 0 [|x| < L];  F(y) = 2πA δ(y) − 2A (sin Ly)/y

(2.80) f(x) = A exp{i(a cos y0x + bx)};  F(y) = 2πA Σ_{n=−∞}^{+∞} (i)ⁿ Jn(a) δ(y − b − ny0)

(2.81) f(x) = A exp{i(a sin y0x + bx)};  F(y) = 2πA Σ_{n=−∞}^{+∞} Jn(a) δ(y − b − ny0)

(2.82) f(x) = A cos(a sin y0x + bx);  F(y) = πA Σ_{n=−∞}^{+∞} {Jn(a) δ(y − b − ny0) + Jn(a) δ(y + b + ny0)}

(2.83) f(x) = A cos(a cos y0x + bx);  F(y) = πA Σ_{n=−∞}^{+∞} {(+i)ⁿ Jn(a) δ(y − b − ny0) + (−i)ⁿ Jn(a) δ(y + b + ny0)}

(2.84) f(x) = A sin(a sin y0x + bx);  F(y) = iπA Σ_{n=−∞}^{+∞} {−Jn(a) δ(y − b − ny0) + Jn(a) δ(y + b + ny0)}

(2.85) f(x) = A sin(a cos y0x + bx);  F(y) = iπA Σ_{n=−∞}^{+∞} {−(i)ⁿ Jn(a) δ(y − b − ny0) + (−i)ⁿ Jn(a) δ(y + b + ny0)}

(2.86) f(x) = A exp(−a cos y0x);  F(y) = 2πA Σ_{n=−∞}^{+∞} (−1)ⁿ Jn(a) δ(y − ny0)

(2.87) f(x) = A exp(−a sin y0x);  F(y) = 2πA Σ_{n=−∞}^{+∞} (i)ⁿ Jn(a) δ(y − ny0)

(2.88) f(x) = A exp(±ia²x²);  F(y) = (A/a) √(π/2) (1 ± i) exp(∓iy²/4a²)

(2.89) f(x) = A Σ_n δ(x − nx0 + a sin y0x);  F(y) = (2πA/x0) Σ_{m,n} Jn(2πma/x0) δ(y − 2πm/x0 − ny0)  [m, n = 0, ±1, ±2, ±3, ...]

(2.90) f(x) = h(x) Σ_{n=−∞}^{+∞} g(x − nx0);  F(y) = (1/x0) Σ_{n=−∞}^{+∞} G(2πn/x0) H(y − 2πn/x0)

(2.91) f(x) = Σ_{n=−∞}^{+∞} h(nx0) g(x − nx0);  F(y) = (1/x0) G(y) Σ_{n=−∞}^{+∞} H(y − 2πn/x0)

Source: Champeney, D.C., Fourier Transforms and Their Physical Applications, Academic Press, New York, 1973. With permission.
Note: J₋n(a) = Jn(−a) = (−1)ⁿ Jn(a).
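One of the closed-form pairs in Table 2.6 can be spot-checked numerically. The sketch below checks entry (2.43), assuming the table's transform convention F(y) = ∫ f(x) exp(−ixy) dx; since f is even, F(y) reduces to twice a cosine integral over [0, ∞). The parameter values are arbitrary test values.

```python
import numpy as np
from scipy.integrate import quad

# Entry (2.43): f(x) = A cos(y0 x) exp(-a|x|)
# F(y) = 2 A a (a^2 + y0^2 + y^2) / ((a^2 + y0^2 - y^2)^2 + 4 a^2 y^2)
A, a, y0, y = 1.0, 1.0, 2.0, 1.0

# f even => F(y) = 2 * Int_0^inf f(x) cos(x y) dx (truncated upper limit)
numeric = 2 * quad(lambda x: A * np.cos(y0 * x) * np.exp(-a * x) * np.cos(x * y),
                   0.0, 60.0, limit=400)[0]
exact = 2 * A * a * (a**2 + y0**2 + y**2) / ((a**2 + y0**2 - y**2)**2
                                             + 4 * a**2 * y**2)
```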
References

Abramowitz, M. and Stegun, I. 1972. Handbook of Mathematical Functions. New York: Dover Publications.
Arsac, J. 1966. Fourier Transforms and the Theory of Distributions. Englewood Cliffs, NJ: Prentice-Hall.
Boyce, W. and DiPrima, R. 1977. Elementary Differential Equations and Boundary Value Problems. New York: John Wiley & Sons.
Bracewell, R. 1965. The Fourier Transform and Its Applications. New York: McGraw-Hill.
Briggs, W. L. and Henson, V. E. 1995. The DFT: An Owner's Manual for the Discrete Fourier Transform. Philadelphia, PA: Society for Industrial and Applied Mathematics.
Brown, J. W. and Churchill, R. V. 2008. Fourier Series and Boundary Value Problems (7th edn.). New York: McGraw-Hill.
Campbell, G. A. and Foster, R. M. 1948. Fourier Integrals for Practical Applications. New York: D. Van Nostrand.
Champeney, D. C. 1973. Fourier Transforms and Their Physical Applications. New York: Academic Press.
Champeney, D. C. 1987. A Handbook of Fourier Theorems. Cambridge, U.K.: Cambridge University Press.
Chu, E. and George, A. 2000. Inside the FFT Black Box: Serial and Parallel Fast Fourier Transform Algorithms. Boca Raton, FL: CRC Press LLC.
DeVito, C. L. 2007. Harmonic Analysis: A Gentle Introduction. Sudbury, MA: Jones and Bartlett Publishers.
Erdélyi, A. (Ed.) 1954. Tables of Integral Transforms (Bateman Manuscript Project). New York: McGraw-Hill.
Grafakos, L. 2004. Classical and Modern Fourier Analysis. Upper Saddle River, NJ: Pearson Education, Inc.
Holland, S. 1990. Applied Analysis by the Hilbert Space Method. New York: Marcel Dekker.
Howell, K. B. 2001. Principles of Fourier Analysis. Boca Raton, FL: Chapman & Hall/CRC.
Körner, T. W. 1988. Fourier Analysis. Cambridge, U.K.: Cambridge University Press.
Papoulis, A. 1962. The Fourier Integral and Its Applications. New York: McGraw-Hill.
Papoulis, A. 1986. Systems and Transforms with Applications in Optics. New York: McGraw-Hill. Reprinted, Malabar, FL: Robert E. Krieger Publishing Company.
Pinsky, M. 1991. Partial Differential Equations and Boundary-Value Problems with Applications. New York: McGraw-Hill.
Strichartz, R. 1994. A Guide to Distribution Theory and Fourier Transforms. Boca Raton, FL: CRC Press LLC.
Walker, J. S. 1988. Fourier Analysis. New York: Oxford University Press.
Walker, J. S. 1996. Fast Fourier Transforms (2nd edn.). Boca Raton, FL: CRC Press LLC.
3 Sine and Cosine Transforms

Pat Yip, McMaster University

3.1 Introduction ................................................................ 3-1
3.2 The Fourier Cosine Transform (FCT) ...................... 3-1
    Definitions and Relations to the Exponential Fourier Transforms . Basic Properties and Operational Rules . Selected Fourier Cosine Transforms . Examples on the Use of Some Operational Rules of FCT
3.3 The Fourier Sine Transform (FST) .......................... 3-11
    Definitions and Relations to the Exponential Fourier Transforms . Basic Properties and Operational Rules . Selected Fourier Sine Transforms
3.4 The Discrete Sine and Cosine Transforms (DST and DCT) ... 3-16
    Definitions of DCT and DST and Relations to FST and FCT . Basic Properties and Operational Rules . Relation to the Karhunen–Loeve Transform (KLT)
3.5 Selected Applications ................................................ 3-21
    Solution of Differential Equations . Cepstral Analysis in Speech Processing . Data Compression . Transform Domain Processing . Image Compression by the Discrete Local Sine Transform (DLS)
3.6 Computational Algorithms ....................................... 3-27
    FCT and FST Algorithms Based on FFT by Direct Matrix Factorization . Fast Algorithms for DST and DCT
3.7 Tables of Transforms ................................................ 3-31
    Notations and Definitions . Fourier Cosine Transforms . Fourier Sine Transforms
References ........................................................................... 3-34
3.1 Introduction

Transforms with cosine and sine functions as the transform kernels represent an important area of analysis. It is based on the so-called half-range expansion of a function over a set of cosine or sine basis functions. Because the cosine and the sine kernels lack the nice properties of an exponential kernel, many of the transform properties are less elegant and more involved than the corresponding ones for the Fourier transform kernel. In particular, the convolution property, which is so important in many applications, will be much more complex. Despite these basic mathematical limitations, sine and cosine transforms have their own areas of applications. In spectral analysis of real sequences, in solutions of some boundary value problems, and in transform domain processing of digital signals, both cosine and sine transforms have shown their special applicability. In particular, the discrete versions of these transforms have found favor among the digital signal-processing community. Many data compression techniques now employ, in one way or another, the discrete cosine transform (DCT), which has been found to be asymptotically equivalent to the optimal Karhunen–Loeve transform (KLT) for signal decorrelation. In this chapter, the basic properties of cosine and sine transforms are presented, together with some selected transforms. To show the versatility of these transforms, several applications are discussed. Computational algorithms are also presented. The chapter ends with a table of sine and cosine transforms, which is not meant to be exhaustive. The reader is referred to the References for more details and for more exhaustive listings of the cosine and sine transforms.
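The energy-compaction behavior behind DCT-based compression, mentioned above, can be illustrated with a short numerical sketch (not part of the original text). The test signal and the number of retained coefficients are arbitrary choices; the DCT-II of `scipy.fft` is used.

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth, correlated test signal: most of its DCT-II energy falls
# into a few low-index coefficients.
n = 64
t = np.linspace(0.0, 1.0, n)
x = np.cos(2 * np.pi * t) + 0.5 * t

X = dct(x, type=2, norm="ortho")

keep = 10                      # retain 10 of 64 coefficients
Xc = np.zeros_like(X)
Xc[:keep] = X[:keep]
x_rec = idct(Xc, type=2, norm="ortho")

rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
```

With only 10 of 64 coefficients the reconstruction error is small, which is the decorrelation/compaction property that makes the DCT a practical stand-in for the KLT.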
3.2 The Fourier Cosine Transform (FCT)

3.2.1 Definitions and Relations to the Exponential Fourier Transforms

Given a real- or complex-valued function f(t), which is defined over the positive real line t ≥ 0, the FCT of f(t) is defined, for ω ≥ 0, as

Fc(ω) = ∫_0^∞ f(t) cos ωt dt,  ω ≥ 0,   (3.1)

subject to the existence of the integral. The definition is sometimes more compactly represented as an operator ℱc applied to the function f(t), so that

ℱc[f(t)] = Fc(ω) = ∫_0^∞ f(t) cos ωt dt.   (3.2)
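As a quick numerical illustration of the definition (not part of the original text), the sketch below evaluates the FCT integral by quadrature for f(t) = exp(−at) and compares it against the closed-form pair a/(a² + ω²) given later in Equation 3.44. The truncation of the infinite range and the parameter values are assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import quad

def fct(f, w, upper=50.0):
    """Numerical Fourier cosine transform, truncating the infinite range."""
    val, _ = quad(lambda t: f(t) * np.cos(w * t), 0.0, upper, limit=200)
    return val

a = 2.0
f = lambda t: np.exp(-a * t)      # test function with a known FCT
w = 3.0

numeric = fct(f, w)
exact = a / (a**2 + w**2)         # Equation 3.44
```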
The subscript c is used to denote the fact that the kernel of the transformation is a cosine function. The unit normalization constant used here provides for a definition for the inverse FCT, given by

ℱc⁻¹[Fc(ω)] = (2/π) ∫_0^∞ Fc(ω) cos ωt dω,  t ≥ 0,   (3.3)

again subject to the existence of the integral used in the definition. The functions f(t) and Fc(ω), if they exist, are said to form a FCT pair. Because the cosine function is the real part of an exponential function of purely imaginary argument, that is,

cos ωt = Re[e^{jωt}] = ½[e^{jωt} + e^{−jωt}],   (3.4)

it is easy to understand that there exists a very close relationship between the Fourier transform and the cosine transform. To see this relation, consider an even extension of the function f(t) defined over the entire real line so that

fe(t) = f(|t|),  t ∈ R.   (3.5)

Its Fourier transform is defined as

ℱ[fe(t)] = ∫_{−∞}^{∞} fe(t) e^{−jωt} dt,  ω ∈ R.   (3.6)

The integral in Equation 3.6 can be evaluated in two parts over (−∞, 0] and [0, ∞). Then using Equation 3.5 and changing the integrating variable in the (−∞, 0] integral from t to −t, we have

ℱ[fe(t)] = ∫_0^∞ f(t) e^{jωt} dt + ∫_0^∞ f(t) e^{−jωt} dt = 2 ∫_0^∞ f(t) cos ωt dt,

by Equation 3.4, and thus

ℱ[fe(t)] = 2 ℱc[f(t)],  if fe(t) = f(|t|).   (3.7)

Many of the properties of the FCTs can be derived from the properties of Fourier transforms of symmetric, or even, functions. Some of the basic properties and operational rules are discussed in Section 3.2.2.

3.2.2 Basic Properties and Operational Rules

1. Inverse transformation: As stated in Equation 3.3, the inverse transformation is exactly the same as the forward transformation except for the normalization constant. This leads to the so-called Fourier cosine integral formula, which states that

f(t) = (2/π) ∫_0^∞ [∫_0^∞ f(t) cos ωt dt] cos ωt dω.   (3.8)

The sufficient conditions for the inversion formula (3.3) are that f(t) be absolutely integrable in [0, ∞) and that f′(t) be piecewise continuous in each bounded subinterval of [0, ∞). In the range where the function f(t) is continuous, Equation 3.8 represents f. At a point t0 where f(t) has a jump discontinuity, Equation 3.8 converges to the mean of f(t0 + 0) and f(t0 − 0), that is,

(2/π) ∫_0^∞ [∫_0^∞ f(t) cos ωt dt] cos ωt0 dω = ½[f(t0 + 0) + f(t0 − 0)].   (3.8′)

2. Transforms of derivatives: It is easy to show, because of the Fourier cosine kernel, that the transforms of even-order derivatives are reduced to multiplication by even powers of the conjugate variable ω, much as in the case of the Laplace transforms. For the second-order derivative, using integration by parts, we can show that

ℱc[f″(t)] = ∫_0^∞ f″(t) cos ωt dt = −ω² ∫_0^∞ f(t) cos ωt dt − f′(0) = −ω² Fc(ω) − f′(0),   (3.9)

where we have assumed that f(t) and f′(t) vanish as t → ∞. These form the sufficient conditions for Equation 3.9 to be valid. As the transform is applied to higher order derivatives, corresponding conditions for higher derivatives of f are required for the operational rule to be valid. Here, we also assume that the function f(t) and its derivative f′(t) are continuous everywhere in [0, ∞). If f(t) and f′(t) have a jump discontinuity at t0 of magnitudes d and d′, respectively, Equation 3.9 is modified to

ℱc[f″(t)] = −ω² Fc(ω) − f′(0) − d′ cos ωt0 − ωd sin ωt0.   (3.10)

Higher even-order derivatives of functions with jump discontinuities have similar operational rules that can be easily generalized from Equation 3.10. For example, the FCT of the fourth-order derivative is

ℱc[f^{(iv)}(t)] = ω⁴ Fc(ω) + ω² f′(0) − f‴(0)   (3.11)

if f(t) is continuous to order three everywhere in [0, ∞), and f, f′, and f″ vanish as t → ∞. If f(t) has a jump discontinuity at t0 to order three of magnitudes d, d′, d″, and d‴, then Equation 3.11 is modified to

ℱc[f^{(iv)}(t)] = ω⁴ Fc(ω) + ω² f′(0) − f‴(0) + ω³ d sin ωt0 + ω² d′ cos ωt0 − ω d″ sin ωt0 − d‴ cos ωt0.   (3.12)

Here, and in Equation 3.10, we have defined the magnitudes of the jump discontinuity at t0 as

d = f(t0 + 0) − f(t0 − 0);  d′ = f′(t0 + 0) − f′(t0 − 0);
d″ = f″(t0 + 0) − f″(t0 − 0);  d‴ = f‴(t0 + 0) − f‴(t0 − 0).   (3.13)

For derivatives of odd order, the operational rules require the definition of the Fourier sine transform (FST), given in Section 3.3. For example, the FCT of the first-order derivative is given by

ℱc[f′(t)] = ∫_0^∞ f′(t) cos ωt dt = −f(0) + ω ∫_0^∞ f(t) sin ωt dt = ω ℱs[f(t)] − f(0) = ω Fs(ω) − f(0),   (3.14)

if f vanishes as t → ∞, and where the operator ℱs and the function Fs(ω) are defined in Equation 3.78. When f(t) has a jump discontinuity of magnitude d at t = t0, Equation 3.14 is modified to

ℱc[f′(t)] = ω Fs(ω) − f(0) − d cos ωt0.   (3.15)

Generalization to higher odd-order derivatives with jump discontinuities is similar to that for even-order derivatives in Equation 3.12.

3. Scaling: Scaling in the t domain translates directly to scaling in the ω domain. Expansion by a factor of a in t results in the contraction by the same factor in ω, together with a scaling down of the magnitude of the transform by the factor a. Thus, as we can show,

ℱc[f(at)] = ∫_0^∞ f(at) cos ωt dt = (1/a) ∫_0^∞ f(τ) cos(ωτ/a) dτ = (1/a) Fc(ω/a),  a > 0,   (3.16)

by letting τ = at.
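The derivative rule (3.9) is easy to verify numerically. The sketch below (not part of the original text) uses the test function f(t) = exp(−t), for which f″ = f and f′(0) = −1, and compares both sides of ℱc[f″] = −ω²Fc(ω) − f′(0) by quadrature:

```python
import numpy as np
from scipy.integrate import quad

def fct(g, w, upper=60.0):
    """Truncated numerical Fourier cosine transform."""
    return quad(lambda t: g(t) * np.cos(w * t), 0.0, upper, limit=200)[0]

w = 1.7
f = lambda t: np.exp(-t)

# Left side: transform of f'' (which equals f for this test function).
lhs = fct(f, w)
# Right side: -w^2 Fc(w) - f'(0), with f'(0) = -1.
rhs = -w**2 * fct(f, w) - (-1.0)
```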
4. Shifting:
(a) Shifting in the t-domain: The shift-in-t property for the cosine transform is somewhat less direct compared with the exponential Fourier transform for two reasons. First, a shift to the left will require extending the definition of the function f(t) onto the negative real line. Secondly, a shift-in-t in the transform kernel does not result in a constant phase factor as in the case of the exponential kernel. If fe(t) is defined as the even extension of the function f(t) such that fe(t) = f(|t|), and if f(t) is piecewise continuous and absolutely integrable over [0, ∞), then

ℱc[fe(t + a) + fe(t − a)] = ∫_0^∞ [fe(t + a) + fe(t − a)] cos ωt dt = ∫_{−a}^∞ fe(t) cos ω(t + a) dt + ∫_a^∞ fe(t) cos ω(t − a) dt.

By expanding the compound cosine functions and using the fact that the function fe(t) is even, these combine to give

ℱc[fe(t + a) + fe(t − a)] = 2 Fc(ω) cos aω,  a > 0.   (3.17)

This is sometimes called the kernel-product property of the cosine transform. In terms of the function f(t), it can be written as

ℱc[f(t + a) + f(|t − a|)] = 2 Fc(ω) cos aω.   (3.18)

Similarly, the kernel-product 2 Fc(ω) sin aω is related to the FST:

ℱs[f(|t − a|) − f(t + a)] = 2 Fc(ω) sin aω,  a > 0.   (3.19)
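The kernel-product rule (3.18) can be checked numerically. The sketch below (an illustration, not from the text) again uses f(t) = exp(−t), whose FCT is 1/(1 + ω²) by Equation 3.44; the shift a and frequency ω are arbitrary test values.

```python
import numpy as np
from scipy.integrate import quad

# Check Fc[f(t+a) + f(|t-a|)] = 2 Fc(w) cos(a w) for f(t) = exp(-t).
a, w = 1.0, 1.3
g = lambda t: np.exp(-(t + a)) + np.exp(-abs(t - a))   # f(t+a) + f(|t-a|)

# points=[a] flags the kink of |t-a| for the quadrature routine.
lhs = quad(lambda t: g(t) * np.cos(w * t), 0.0, 60.0,
           points=[a], limit=200)[0]

Fc = 1.0 / (1.0 + w**2)        # FCT of exp(-t), Equation 3.44
rhs = 2.0 * Fc * np.cos(a * w)
```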
(b) Shifting in the ω-domain: To consider the effect of shifting in ω by the amount b (>0), we examine the following:

Fc(ω + b) = ∫_0^∞ f(t) cos (ω + b)t dt = ∫_0^∞ f(t) cos bt cos ωt dt − ∫_0^∞ f(t) sin bt sin ωt dt = ℱc[f(t) cos bt] − ℱs[f(t) sin bt].   (3.20)

Similarly,

Fc(ω − b) = ℱc[f(t) cos bt] + ℱs[f(t) sin bt].   (3.20′)

Combining Equations 3.20 and 3.20′ produces a shift-in-ω operational rule involving only the FCT as

ℱc[f(t) cos bt] = ½[Fc(ω + b) + Fc(ω − b)].   (3.21)

More generally, for a, b > 0, we have

ℱc[f(at) cos bt] = (1/2a)[Fc((ω + b)/a) + Fc((ω − b)/a)].   (3.22)

Similarly, we can easily derive

ℱc[f(at) sin bt] = (1/2a)[Fs((ω + b)/a) − Fs((ω − b)/a)].   (3.22′)

5. Differentiation in the ω domain: Similar to differentiation in the t domain, the transform operation reduces a differentiation operation into multiplication by an appropriate power of the conjugate variable. In particular, even-order derivatives in the ω domain are transformed as

Fc^{(2n)}(ω) = ℱc[(−1)^n t^{2n} f(t)].   (3.23)

We show here, briefly, the derivation for n = 1:

Fc^{(2)}(ω) = (d²/dω²) ∫_0^∞ f(t) cos ωt dt = ∫_0^∞ f(t) (∂²/∂ω²) cos ωt dt = ∫_0^∞ f(t)(−1)t² cos ωt dt = ℱc[(−1)t² f(t)].

For odd orders, these are related to FSTs:

Fc^{(2n+1)}(ω) = ℱs[(−1)^{n+1} t^{2n+1} f(t)].   (3.24)

In both Equations 3.23 and 3.24, the existence of the integrals in question is assumed. This means that f(t) should be piecewise continuous and that t^{2n} f(t) and t^{2n+1} f(t) should be absolutely integrable over [0, ∞).

6. Asymptotic behavior: When the function f(t) is piecewise continuous and absolutely integrable over the region [0, ∞), the Riemann–Lebesgue theorem for Fourier series* can be invoked to provide the following asymptotic behavior of its cosine transform:

lim_{ω→∞} Fc(ω) = 0.   (3.25)

7. Integration:
(a) Integration in the t domain: Integration in the t domain is transformed to division by the conjugate variable, very similar to the cases of Laplace transforms and Fourier transforms, except the resulting transform is a FST. Thus,

ℱc[∫_t^∞ f(τ) dτ] = ∫_0^∞ [∫_t^∞ f(τ) dτ] cos ωt dt = ∫_0^∞ f(τ) [∫_0^τ cos ωt dt] dτ,

by reversing the order of integration. The inner integral results in a sine function and is the kernel for the FST. Therefore,

ℱc[∫_t^∞ f(τ) dτ] = (1/ω) ℱs[f(t)] = (1/ω) Fs(ω).   (3.26)

Here, again, f(t) is subject to the usual sufficient conditions of being piecewise continuous and absolutely integrable in [0, ∞).
(b) Integration in the ω domain: A similar and symmetric relation exists for integration in the ω-domain:

ℱs[(1/t) f(t)] = ∫_0^ω Fc(β) dβ.   (3.27)

Note that the integration transform inversion is of the Fourier sine type instead of the cosine type. Also, the asymptotic behavior of Fc(ω) has been invoked.

* The Riemann–Lebesgue theorem states that if a function f(t) is piecewise continuous over an interval a < t < b, then lim_{γ→∞} ∫_a^b f(t) cos γt dt = lim_{γ→∞} ∫_a^b f(t) sin γt dt = 0.

8. The convolution property: Let f(t) and g(t) be defined over [0, ∞) and satisfy the sufficiency condition for the existence of Fc and Gc. If fe(t) = f(|t|) and ge(t) = g(|t|) are the
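The shift-in-ω rule (3.21) is straightforward to verify numerically. In the sketch below (not from the text), f(t) = exp(−t) with known FCT Fc(ω) = 1/(1 + ω²) serves as the test function; b and ω are arbitrary values.

```python
import numpy as np
from scipy.integrate import quad

def fct(g, w, upper=60.0):
    """Truncated numerical Fourier cosine transform."""
    return quad(lambda t: g(t) * np.cos(w * t), 0.0, upper, limit=200)[0]

b, w = 2.0, 0.7

# Left side of (3.21): transform of f(t) cos(bt).
lhs = fct(lambda t: np.exp(-t) * np.cos(b * t), w)

# Right side: (Fc(w+b) + Fc(w-b)) / 2 with the closed-form Fc.
Fc = lambda w: 1.0 / (1.0 + w**2)
rhs = 0.5 * (Fc(w + b) + Fc(w - b))
```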
even extensions of f and g, respectively, over the entire real line, then the convolution of fe and ge is given by
fe * ge ¼
1 ð
fe (t)ge (t
t)dt
(3:28)
1 ð
ða
^c [ f (t)] ¼ cos vt dt ¼
1 sin va: v
(3:34)
0
1
where * has been used to denote the convolution operation. It is easy to see that in terms of f and g, we have
fe * g e ¼
is the Heaviside unit step function.
f (t)[ g(t þ t) þ g(jt
tj)]dt
(3:29)
0
which is an even function. Applying the exponential Fourier transform on both sides and using Equation 3.7 and convolution property of the exponential Fourier transform, we obtain the convolution property for the cosine transform:
2. The unit height tent function: f (t) ¼ t=a
0 < t < a,
¼ (2a t)=a a < t < 2a, ¼ 0 t > 2a: ða
t cos vt dt þ ^c [ f (t)] ¼ a 0
¼
1 [2 cos av av2
2ða
t
2a
a
a
cos 2av
cos vt dt
1]:
(3:35)
3. Delayed inverse: 2Fc (v)Gc (v) 81 0
(3:33)
1 ð 0
2
e jt dt ¼
1 2
rffiffiffiffi p (1 þ j): 2
3-6
Transforms and Applications Handbook
a t aþt Imjaj < Re(b), 2þ 2 b þ (a t) b þ (a þ t)2 1 ð a t aþt ^c [f (t)] ¼ þ cos vt dt b2 þ (a t)2 b2 þ (a þ t)2
5. Inverse linear function:
(d) f (t) ¼
f (t) ¼ (a þ t) 1 j arg (a)j < p: ^c [f (t)] ¼
1 ð
(a þ t)
1
2
0
cos vt dt
cos av Ci(av)
¼
sin av si(av):
(3:38)
Equation 3.38 is obtained by shifting the integrating variable to a þ t, and then expanding the compound cosine function. Here, si(y) is related to the sine integral function Si(y), and is defined as 1 ð
si(y) ¼ ¼
y
ðy
sin x dx x
sin x dx x
3.2.3.2 FCT of Exponential and Logarithmic Functions
^c [ f (t)] ¼
sin x dx ¼ Si(y) x
(p=2):
(3:39)
¼
e
at
cos vt dt ¼
1 ð
(a2 þ t 2 )
p e 2a
^c [ f (t)] ¼ P:V:
1
(3:44)
which is identical to the Laplace transform of cos vt. 1 2. f (t) ¼ [e bt e at ] Re(a), Re(b) > 0. t 1 ð
1 [e t
bt
at
e
] cos vt dt
2 1 a þ v2 ¼ ln 2 : 2 b þ v2
cos vt dt
av
,
(3:40)
1 ð 0
(a2 þ t 2 )
1
The result is easily obtained using the integration property of the Laplace transform in the phase plane. 2 3. f (t) ¼ e at Re(a) > 0. ^c [ f (t)] ¼
cos vt dt
1 ð
1 ¼ 2 (3:41)
where ‘‘P.V.’’ stands for ‘‘principal value’’ and the integral can be obtained by a proper contour integration in the complex plane. b b Imjaj < Re(b), (c) f (t) ¼ 2 2þ 2 b þ (a t) b þ (a þ t)2 b b2 þ (a
(3:45)
1 e t
at 2
cos vt dt
0
p sin av ¼ 2a
¼ p cosav e
a a2 þ v2
0
which is obtained also by a properly chosen contour integration over the upper half-plane. (b) f (t) ¼ (a2 t 2 ) 1 a > 0,
0
1 ð
^c [ f (t)] ¼
0
^c [f (t)] ¼
Re(a) > 0.
0
6. Inverse quadratic functions: (a) f (t) ¼ (a2 þ t 2 ) 1 Re(a) > 0.
1 ð
at
^c [ f (t)] ¼
0
0
(3:43)
which can be considered as the imaginary part of the contour integral needed in Equation 3.42 when a and b are real and positive.
1. f (t) ¼ e
1 ð
bv
¼ p sin av e
0
b þ cos vt dt t)2 b2 þ (a þ t)2 bv
rffiffiffiffi p e a
v2 =4a
(3:46)
This is easily seen as the result of the exponential Fourier transform of a Gaussian distribution. 4. f (t) ¼ ln t[1 U(t 1)] ð1
^c [ f (t)] ¼ ln t cos vt dt 0
¼
1 v
ðv
sin t dt ¼ t
1 Si(v): v
(3:47)
0
(3:42)
where the integral can be obtained easily by considering a shift in t, applied to the result in Equation 3.40.
The result is obtained by integration by parts and a change of variables. The function Si(v) is defined as the sine integral function given by
3-7
Sine and Cosine Transforms
ðy
sin x Si(y) ¼ dx: x
(3:48)
0
ln bt 5. f (t) ¼ 2 (t þ a2 ) ^c [f (t)] ¼
j arg (y)j < p,
Ei(y) ¼ (1=2)[Ei(y þ j0) þ Ei(y
j0)]:
(3:50)
The integral in Equation 3.49 is evaluated using contour integration. t þ a 6. f (t) ¼ ln , a > 0: t a 1 ð 0
¼
t þ a ln cos vt dt t a
2 [si(av) cos av þ ci(av) sin av] v
bt
3.2.3.3 FCT of Trigonometric Functions
1 ð
e
bt
1 ð
^c [ f (t)] ¼
1 ð 0
p ¼ e ab cosh bv if v < a 2 p bv sinh ab if v > a: ¼ e 2
1 ð
cos at cos vt dt (t 2 þ b2 )
p e 2b p ¼ e 2b (3:52)
The result is obtained easily after some algebraic manipulations. It is, however, better understood as the result of the inverse Fourier transform of a sinc function, which is simply a rectangular window function, as is evident in Equation 3.52.
(3:55)
The result is obtained by contour integration, as is the next cosine transform. cos at a, Re(b) > 0: 5. f (t) ¼ 2 (t þ b2 )
if v < a,
if v > a:
(3:54)
t sin at cos vt dt (t 2 þ b2 )
¼
if v ¼ a,
1 þ , v)2 b2 þ (a þ v)2
which is the Laplace transform of the function þ v)t þ cos (a v)t]: t sin at a, Re(b) > 0: 4. f (t) ¼ 2 (t þ b2 )
0
¼0
cos at cos vt dt
1 2 [ cos (a
sin at cos vt dt t
¼ p=4
v)t]:
Re(b) > jIm(a)j:
0
¼ p=2
(3:53)
0
^c [ f (t)] ¼
a > 0:
^c [ f (t)] ¼
cos at,
b 1 ¼ 2 b2 þ (a
(3:51)
where si(y) and ci(y) ¼ Ci(y) are defined (3.36) and (3.39), respectively. The result is obtained through integration by parts, and manifests the shift property of the cosine transform.
sin at 1. f (t) ¼ t
3. f (t) ¼ e
^c [ f (t)] ¼
and
^c [ f (t)] ¼ P:V:
1 [ sin (a þ v)t þ sin (a 2
(3:49)
t
t
g
sin at cos vt dt
The result can be easily understood as Laplace transform of the function:
where Ei(y) is the exponential integral function defined by, dt,
bt
1 aþv a v þ ¼ 2 b2 þ (a þ v)2 b2 þ (a v)2
av
e
e
0
ln (ab) þ eav Ei( av):
e av Ei(av) 1 ð
sin at, a, Re(b) > 0: 1 ð
^c [ f (t)] ¼
ln bt cos vt dt (t 2 þ a2 )
p ¼ 2e 4a
Ei(y) ¼
bt
Re(a) > 0
1 ð 0
2. f (t) ¼ e
6. f (t) ¼ e
bt 2 cos at
,
^c [ f (t)] ¼
ab
cosh bv
bv
cosh ab if v > a:
if v < a, (3:56)
Re(b) > 0: 1 ð
e
bt 2
cos at cos vt dt
0
¼
1 2
rffiffiffiffi p e b
(a2 þv2 )=4b
cosh
av : 2b
(3:57)
3-8
Transforms and Applications Handbook
3.2.3.4 FCT of Orthogonal Polynomials
where Hen(x) is the Hermite polynomial given by,
1. Legendre polynomials: Hen (x) ¼ ( 1)n ex
2
f (t) ¼ Pn (1 2t ) 0 < t < 1, ¼ 0 t > 1,
n
^c [ f (t)] ¼
for jxj < 1
1) ,
n
( 1) p Jnþ12 (v=2)J 2
n
1 2
(v=2),
(3:58)
where Jy(z) is the Bessel function of the first kind, and order y, defined by Jy (z) ¼
1 X m¼0
( 1)m (z=2)yþ2m , G(m þ 1)G(y þ m þ 1)
jzj < 1, jarg zj < p:
f (t) ¼ (a
2
t )
1=2
t > a,
¼ 0,
He2n (t) cos vt dt
rffiffiffiffi p e 2
v2 =2
v2n
(3:61)
1 ð
^c [f (t)] ¼
e
t 2 =2
{Hen (t)}2 cos vt dt
¼ n!
rffiffiffiffi p e 2
v2 =2
Ln (v2 ),
(3:62)
which shows a rare symmetry with Equation 3.60. 3.2.3.5 FCT of Some Special Functions
T2n (t=a) 0 < t < a, n ¼ 0, 1, 2, . . .
Tn (x) ¼ cos (n cos ^c [f (t)] ¼ (a2
t 2 =2
which is obtained using the Rodriques formula for the Hermite polynomial given in (3) above. 2=2 (b) f (t) ¼ e t {Hen (t)}2 ,
(3:580 )
1. The complementary error function: f (t) ¼ t Erfc(at)
1
x), 1=2
t2)
n ¼ 0, 1, 2, . . . T2n (t=a) cos vt dt (3:59)
where J2n(x) is the Bessel function defined in Equation 3.580 with y ¼ 2n. 3. Laguerre polynomial: f (t) ¼ e
2
t =2
Ln (t 2 )
ex d n n x (x e ), n ¼ 0, 1, 2, . . . n! dxn 1 ð 2 ^c [f (t)] ¼ e t =2 Ln (t 2 ) cos vt dt
¼
dt:
x
t Erfc(at) cos vt dt
0
1 1 þ e 2a2 v2
1 : v2
(3:63)
v 6¼ a:
(3:64)
v2 =4a2
2. The sine integral function:
^c [f (t)] ¼
1 ð
si(at) cos vt dt
0
¼
0
{Hen (v)}2 ,
1 ð
t2
e
where si(x) is defined in Equation 3.39.
Ln (x) ¼
v2 =2
^c [f (t)] ¼
1 ð
f (t) ¼ si(at) a > 0,
where Ln(x) is the Laguerre polynomial defined by,
rffiffiffiffi p1 ¼ e 2 n!
2 Erf (x) ¼ p p
Erfc(x) ¼ 1
0
¼ ( 1)n (p=2)J2n (av),
a > 0:
Here the complementary error function is defined as
where the Chebyshev polynomial is defined by,
ða
n ¼ 0, 1, 2, . . .
),
0
2. Chebyshev polynomials: 2
e
2t ) cos vt dt
0
¼
1 ð
¼ ( 1)n
2
^c [ f (t)] ¼ Pn (1
x2 =2
0
and n ¼ 0, 1, 2, . . . ð1
dn (e dxn
=2
4. Hermite polynomials: 2 (a) f (t) ¼ e t =2 He2n (t) n ¼ 0, 1, 2, . . .
where the Legendre polynomial Pn(x) is defined as 1 dn 2 Pn (x) ¼ n (x 2 n! dxn
2
(3:60)
v þ a (1=2v) ln , v a
Note certain amount of symmetry with Equation 3.51.
3-9
Sine and Cosine Transforms
3. The cosine integral function:

  f(t) = Ci(at) = ci(at),  a > 0,

where ci(x) is defined in Equation 3.36.

  ℱc[f(t)] = ∫₀^∞ Ci(at) cos ωt dt = 0 for 0 < ω < a;  = −π/(2ω) for ω > a.   (3.65)

4. The exponential integral function:

  f(t) = Ei(−at),  a > 0,

where Ei(−x) is defined by

  Ei(−x) = −∫_x^∞ (e^{−t}/t) dt,  |arg(x)| < π.

  ℱc[f(t)] = ∫₀^∞ Ei(−at) cos ωt dt = −(1/ω) tan^{−1}(ω/a).   (3.66)

5. Bessel functions: We list only a few here since a more comprehensive table is available in Chapter 9.
(a) f(t) = J₀(at), a > 0, where J_n(x) is the Bessel function of the first kind defined in Equation 3.58′.

  ℱc[f(t)] = ∫₀^∞ J₀(at) cos ωt dt = (a² − ω²)^{−1/2} for 0 < ω < a;  = ∞ for ω = a;  = 0 for ω > a.   (3.67)

(b) f(t) = J_{2n}(at), a > 0.

  ℱc[f(t)] = ∫₀^∞ J_{2n}(at) cos ωt dt = (−1)^n (a² − ω²)^{−1/2} T_{2n}(ω/a) for 0 < ω < a;  = ∞ for ω = a;  = 0 for ω > a.   (3.68)

Here, T_{2n}(x) is the Chebyshev polynomial defined in Equation 3.59. Note the symmetry between this and Equation 3.29.
(c) f(t) = t^{−n} J_n(at),  a > 0, n = 1, 2, . . .

  ℱc[f(t)] = ∫₀^∞ t^{−n} J_n(at) cos ωt dt = [√π (2a)^{−n}/Γ(n + 1/2)] (a² − ω²)^{n−1/2} for 0 < ω < a;  = 0 for ω > a.   (3.69)

Here, Γ(x) is the gamma function defined by

  Γ(x) = ∫₀^∞ e^{−t} t^{x−1} dt.

(d) f(t) = Y₀(at), a > 0, where Y_ν(x) is the Bessel function of the second kind defined by

  Y_ν(x) = cosec(νπ)[J_ν(x) cos(νπ) − J_{−ν}(x)].   (3.70)

  ℱc[f(t)] = ∫₀^∞ Y₀(at) cos ωt dt = 0 for 0 < ω < a;  = −(ω² − a²)^{−1/2} for ω > a.   (3.70′)

(e) f(t) = t^ν Y_ν(at),  |Re(ν)| < 1/2, a > 0.

  ℱc[f(t)] = ∫₀^∞ t^ν Y_ν(at) cos ωt dt = −√π (2a)^ν [Γ(1/2 − ν)]^{−1} (ω² − a²)^{−ν−1/2} for ω > a;  = 0 for 0 < ω < a.   (3.71)

3.2.4 Examples on the Use of Some Operational Rules of FCT

In this section, some simple examples on the use of operational rules of the FCT are presented. The examples are based on very simple functions and are intended to illustrate the procedure and the features in the FCT operational rules that have been discussed in Section 3.2.2.

3.2.4.1 Differentiation-in-t

Let f(t) be defined as f(t) = e^{−at}, where Re(a) > 0. Then, according to Equation 3.44, its FCT is given by

  Fc(ω) = a/(a² + ω²).
3-10
Transforms and Applications Handbook
To obtain the FCT for f″(t), we have, according to the differentiation-in-t property, Equation 3.9,

  ℱc[f″(t)] = −ω² Fc(ω) − f′(0) = −ω² a/(a² + ω²) + a = a³/(a² + ω²).   (3.72)

This result is verified by noting that f″(t) = a² e^{−at}, and that its FCT is given directly also by Equation 3.72.

3.2.4.2 Differentiation-in-t of Functions with Jump Discontinuities

Consider the function f(t) = tU(1 − t), which is sometimes called a ramp function. It has a jump discontinuity of d = −1 at t = 1. Its derivative is given by f′(t) = U(1 − t), which also has a jump discontinuity at t = 1. Using the definition for FCT, we obtain

  ℱc[f′(t)] = ℱc[U(1 − t)] = sin ω/ω.   (3.73)

The FCT rule of differentiation with jump discontinuity (3.14) can also be applied to get

  ℱc[f′(t)] = ω Fs(ω) − f(0) − d cos(ωt₀)
            = ω [sin ω/ω² − cos ω/ω] − (−1) cos ω = sin ω/ω,

as required (because d = −1, f(0) = 0, and t₀ = 1).

3.2.4.3 Differentiation-in-ω Property

This property, Equation 3.23, can often be used to generate FCTs for functions that are not listed in the tables. As an example, consider again the function f(t) = e^{−at}, where Re(a) > 0. To obtain the FCT for the function g(t) = t² e^{−at}, we can use Equation 3.23 on Fc(ω) for f(t) = e^{−at}. Thus,

  ℱc[t² e^{−at}] = −F″c(ω) = 2a (a² − 3ω²)/(a² + ω²)³,  a > 0,   (3.74)

which is much easier than direct evaluation.

3.2.4.4 Shift-in-t, Shift-in-ω, and the Kernel Product Property

Consider the same function f(t) = e^{−at}, Re(a) > 0. The FCT of a positive shift in the t-domain is easy to obtain,

  ℱc[f(t + a)] = ℱc[e^{−a(t+a)}] = e^{−aa} a/(a² + ω²),

and the kernel product property of Equation 3.18 then gives the FCT of the shifted function f(|t − a|):

  ℱc[e^{−a|t−a|}] = 2Fc(ω) cos aω − ℱc[e^{−a(t+a)}] = [2 cos aω − e^{−aa}] a/(a² + ω²).   (3.75)

Equation 3.21 typifies the shift-in-ω property and, when it is applied to the same function f(t) above, we obtain

  ℱc(e^{−at} cos bt) = (1/2) [a/(a² + (ω + b)²) + a/(a² + (ω − b)²)].   (3.76)

The convolution property for FCT is closely related to its kernel product property, as illustrated by the following example. Let f(t) = e^{−at}, Re(a) > 0, and g(t) = U(t) − U(t − a), a > 0. The FCTs of these functions are given, respectively, by

  Fc(ω) = a/(a² + ω²)  and  Gc(ω) = sin aω/ω.

The convolution property states that

  2Fc(ω)Gc(ω) = ℱc{ ∫₀^∞ [U(τ) − U(τ − a)] [e^{−a(t+τ)} + e^{−a|t−τ|}] dτ }.   (3.77)

Applying the operator ℱc to Equation 3.77 and integrating over τ first, the kernel product property in the shift-in-t operation in Equation 3.18 can be invoked to give

  2Fc(ω)Gc(ω) = 2 ∫₀^∞ [U(τ) − U(τ − a)] [a/(a² + ω²)] cos ωτ dτ = 2 [a/(a² + ω²)] (sin aω/ω),

as required.
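The differentiation rules above can be checked directly. The following sketch (an illustration with arbitrarily chosen a and ω, assuming NumPy is available) compares Equation 3.74, and the f″ identity behind Equation 3.72, against brute-force quadrature.

```python
import numpy as np

a, w = 1.0, 2.0
t = np.linspace(0.0, 60.0, 600001)   # exp(-t) is ~1e-26 at t = 60
dt = t[1] - t[0]

def cos_transform(vals):
    # Trapezoidal approximation of the cosine-transform integral.
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt

# Equation 3.74: FCT of t^2 exp(-a t).
num = cos_transform(t**2 * np.exp(-a * t) * np.cos(w * t))
exact = 2 * a * (a**2 - 3 * w**2) / (a**2 + w**2)**3

# Equation 3.72 check: FCT of f''(t) = a^2 exp(-a t) should be a^3/(a^2 + w^2).
num2 = cos_transform(a**2 * np.exp(-a * t) * np.cos(w * t))
exact2 = a**3 / (a**2 + w**2)
```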
3.3 The Fourier Sine Transform (FST)

3.3.1 Definitions and Relations to the Exponential Fourier Transforms

Similar to the FCT, the FST of a function f(t), which is piecewise continuous and absolutely integrable over [0, ∞), is defined by application of the operator ℱs as

  Fs(ω) = ℱs[f(t)] = ∫₀^∞ f(t) sin ωt dt,  ω > 0.   (3.78)

The inverse operator ℱs^{−1} is similarly defined,

  f(t) = ℱs^{−1}[Fs(ω)] = (2/π) ∫₀^∞ Fs(ω) sin ωt dω,  t ≥ 0,   (3.79)

subject to the existence of the integral. Functions f(t) and Fs(ω) defined by Equations 3.79 and 3.78, respectively, are said to form a FST pair. It is noted in Equations 3.3 and 3.79 for the inverse FCT and inverse FST that both transform operators have symmetric kernels and that they are involutory, or unitary, up to a factor of (2/π). FSTs are also very closely related to the exponential Fourier transform defined in Equation 3.6. Using the property that

  sin ωt = Im[e^{jωt}] = (1/2j)[e^{jωt} − e^{−jωt}],   (3.80)

one can consider the odd extension of the function f(t) defined over [0, ∞) as

  fo(t) = f(t),  t ≥ 0;  = −f(−t),  t < 0.

Then the Fourier transform of fo(t) is

  ℱ[fo(t)] = ∫_{−∞}^∞ fo(t) e^{−jωt} dt = ∫₀^∞ f(t) e^{−jωt} dt − ∫₀^∞ f(t) e^{jωt} dt = −2j ∫₀^∞ f(t) sin ωt dt = −2j ℱs[f(t)],

and therefore,

  ℱs[f(t)] = −(1/2j) ℱ[fo(t)].   (3.81)

Equation 3.81 provides the relation between the FST and the exponential Fourier transform. As in the case for cosine transforms, many properties of the sine transform can be related to those for the Fourier transform through this equation. We shall present some properties and operational rules for FST in the next section.

3.3.2 Basic Properties and Operational Rules

1. Inverse transformation: The inverse transformation is exactly the same as the forward transformation except for the normalization constant. Combining the forward and inverse transformations leads to the Fourier sine integral formula, which states that

  f(t) = (2/π) ∫₀^∞ [∫₀^∞ f(τ) sin ωτ dτ] sin ωt dω.   (3.82)

The sufficient conditions for the inversion formula 3.79 are the same as for the cosine transform. Where f(t) has a jump discontinuity at t = t₀, Equation 3.82 converges to the mean of f(t₀ + 0) and f(t₀ − 0).

2. Transforms of derivatives: Derivatives transform in a fashion similar to FCT, even orders involving sine transforms only and odd orders involving cosine transforms only. Thus, for example,

  ℱs[f″(t)] = −ω² Fs(ω) + ω f(0)   (3.83)

and

  ℱs[f′(t)] = −ω Fc(ω),   (3.84)

where f(t) is assumed continuous at least to the first order. For the fourth-order derivative, we apply Equation 3.83 twice to obtain

  ℱs[f^{(iv)}(t)] = ω⁴ Fs(ω) − ω³ f(0) + ω f″(0),   (3.85)

if f(t) is continuous at least to order three. When the function f(t) and its derivatives have jump discontinuities at t = t₀, Equation 3.85 is modified to become

  ℱs[f^{(iv)}(t)] = ω⁴ Fs(ω) − ω³ f(0) + ω f″(0) − ω³ d cos ωt₀ + ω² d′ sin ωt₀ + ω d″ cos ωt₀ − d‴ sin ωt₀,   (3.86)

where the jump discontinuities d, d′, d″, and d‴ are as defined in Equation 3.13. Similarly, for odd-order derivatives, when the function f(t) has jump discontinuities, the operational rule must be modified. For example, Equation 3.84 will become

  ℱs[f′(t)] = −ω Fc(ω) + d sin ωt₀.   (3.84′)

Generalization to other orders and to more than one location for the jump discontinuities is straightforward.
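The derivative rules 3.83 and 3.84 can be verified numerically for the simple pair f(t) = e^{−at}, Fs(ω) = ω/(a² + ω²). The sketch below (not from the handbook; a and ω are arbitrary, NumPy assumed) checks both rules by quadrature.

```python
import numpy as np

a, w = 1.5, 0.7
t = np.linspace(0.0, 40.0, 400001)
dt = t[1] - t[0]

def fst(g):
    # Trapezoidal approximation of the sine-transform integral at frequency w.
    vals = g * np.sin(w * t)
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt

f = np.exp(-a * t)
Fs = fst(f)                      # should equal w / (a^2 + w^2)

# Equation 3.83: FST of f''(t) = a^2 e^{-a t} versus -w^2 Fs(w) + w f(0).
lhs = fst(a**2 * f)
rhs = -w**2 * Fs + w * 1.0       # f(0) = 1

# Equation 3.84: FST of f'(t) = -a e^{-a t} versus -w Fc(w), Fc = a/(a^2 + w^2).
lhs1 = fst(-a * f)
rhs1 = -w * a / (a**2 + w**2)
```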
3. Scaling: Scaling in the t-domain for the FST has exactly the same effect as in the case of FCT, giving

  ℱs[f(at)] = (1/a) Fs(ω/a),  a > 0.   (3.87)

4. Shifting: (a) Shift in the t-domain: As in the case of the FCT, we first define the even and odd extensions of the function f(t) as

  fe(t) = f(|t|)  and  fo(t) = (t/|t|) f(|t|).   (3.88)

Then it can be shown that

  ℱs[fo(t + a) + fo(t − a)] = 2Fs(ω) cos aω   (3.89)

and

  ℱc[fo(t + a) − fo(t − a)] = 2Fs(ω) sin aω,  a > 0.   (3.90)

These, together with Equations 3.18 and 3.19, form a complete set of kernel-product relations for the cosine and the sine transforms.
(b) Shift in the ω-domain: For a positive b shift in the ω-domain, it is easily shown that

  Fs(ω + b) = ℱs[f(t) cos bt] + ℱc[f(t) sin bt],   (3.91)

and combining with the result for a negative shift, we get

  ℱs[f(t) cos bt] = (1/2)[Fs(ω + b) + Fs(ω − b)].   (3.92)

More generally, for a, b > 0, we have

  ℱs[f(at) cos bt] = (1/2a)[Fs((ω + b)/a) + Fs((ω − b)/a)].   (3.93)

Similarly, we can easily show that

  ℱs[f(at) sin bt] = (1/2a)[Fc((ω − b)/a) − Fc((ω + b)/a)].   (3.94)

The shift-in-ω properties are useful in deriving some FCTs and FSTs. As well, because the quantities being transformed are modulated sinusoids, these are useful in applications to communication problems.

5. Differentiation in the ω-domain: The sine transform behaves in a fashion similar to the cosine transform when it comes to differentiation in the ω-domain. Even-order derivatives involve only sine transforms and odd-order derivatives involve only cosine transforms. Thus,

  Fs^{(2n)}(ω) = ℱs[(−1)^n t^{2n} f(t)]  and  Fs^{(2n+1)}(ω) = ℱc[(−1)^n t^{2n+1} f(t)].   (3.95)

It is again assumed that the integrals in Equation 3.95 exist.

6. Asymptotic behavior: The Riemann–Lebesgue theorem guarantees that any FST converges to zero as ω tends to infinity, that is,

  lim_{ω→∞} Fs(ω) = 0.   (3.96)

7. Integration: (a) Integration in the t-domain. In analogy to Equation 3.26, we have

  ℱs[∫₀^t f(τ) dτ] = (1/ω) Fc(ω),   (3.97)

provided f(t) is piecewise smooth and absolutely integrable over [0, ∞). (b) Integration in the ω-domain. As in the FCT, integration in the ω-domain results in division by t in the t-domain, giving

  ℱc^{−1}[∫_ω^∞ Fs(β) dβ] = (1/t) f(t),   (3.98)

in parallel with Equation 3.27.

8. The convolution property: If functions f(t) and g(t) are piecewise continuous and absolutely integrable over [0, ∞), a convolution property involving Fs(ω) and Gc(ω) is

  2Fs(ω)Gc(ω) = ℱs{ ∫₀^∞ g(τ)[fo(t + τ) + fo(t − τ)] dτ },

where fo is the odd extension of f defined in Equation 3.88.

  f(t) = P.V. 1/(a² − t²),  a > 0:

  ℱs[f(t)] = P.V. ∫₀^∞ [1/(a² − t²)] sin ωt dt = [sin aω Ci(aω) − cos aω Si(aω)]/a,   (3.111)

where Ci(x) and Si(x) are the cosine and sine integral functions defined in Equations 3.36 and 3.39 and "P.V." denotes the principal value of the integral. Again, we
note that Equation 3.111 is related to the FCT of the function

  f(t) = t(a² − t²)^{−1}.

Thus,

  ℱc[t(a² − t²)^{−1}] = cos aω Ci(aω) + sin aω Si(aω).   (3.112)

(c) f(t) = b/(b² + (a − t)²) − b/(b² + (a + t)²),  Re(b) > 0:

  ℱs[f(t)] = ∫₀^∞ [b/(b² + (a − t)²) − b/(b² + (a + t)²)] sin ωt dt = π sin aω e^{−bω}.   (3.113)

(d) f(t) = (a + t)/(b² + (a + t)²) − (a − t)/(b² + (a − t)²),  Re(b) > 0:

  ℱs[f(t)] = ∫₀^∞ [(a + t)/(b² + (a + t)²) − (a − t)/(b² + (a − t)²)] sin ωt dt = π cos aω e^{−bω}.   (3.114)

We note here the symmetry among the transforms in Equations 3.113 and 3.114, and those in Equations 3.43 and 3.42.

3.3.3.2 FST of Exponential and Logarithmic Functions

1. f(t) = e^{−at},  Re(a) > 0:

  ℱs[f(t)] = ∫₀^∞ e^{−at} sin ωt dt = ω/(a² + ω²),   (3.115)

which is also seen to be the Laplace transform of sin ωt.

2. f(t) = (e^{−bt} − e^{−at})/t²,  Re(b), Re(a) > 0:

  ℱs[f(t)] = ∫₀^∞ [(e^{−bt} − e^{−at})/t²] sin ωt dt
           = (ω/2) ln[(a² + ω²)/(b² + ω²)] + a tan^{−1}(ω/a) − b tan^{−1}(ω/b).   (3.116)

Equation 3.116 is seen to be related to the result (3.45) through the differentiation-in-ω property of the sine transform as defined in Equation 3.95.

3. f(t) = t e^{−at²},  |arg(a)| < π/2:

  ℱs[f(t)] = ∫₀^∞ t e^{−at²} sin ωt dt = (1/4)√(π/a³) ω e^{−ω²/4a},   (3.117)

which can also be related to the cosine transform in Equation 3.46, using again the differentiation-in-ω property 3.95 of the sine transform.

4. f(t) = ln t [1 − U(t − 1)]:

  ℱs[f(t)] = ∫₀^∞ ln t [1 − U(t − 1)] sin ωt dt = −(1/ω)[C + ln ω − Ci(ω)],   (3.118)

which is obtained easily through integration by parts. Here C = 0.5772156649. . . is the Euler constant and Ci(x) is the cosine integral function.

5. f(t) = t ln bt/(t² + a²),  a, b > 0:

  ℱs[f(t)] = ∫₀^∞ [t ln bt/(t² + a²)] sin ωt dt = (π/4)[2 e^{−aω} ln ab + e^{aω} Ei(−aω) − e^{−aω} Ei(aω)].   (3.119)

Note that Equation 3.119 is related to Equation 3.49 through the differentiation-in-ω property of the FCT as defined in Equation 3.24.

6. f(t) = ln |(t + a)/(t − a)|,  a > 0:

  ℱs[f(t)] = ∫₀^∞ ln |(t + a)/(t − a)| sin ωt dt = (π/ω) sin aω.   (3.120)

The result is obtained using integration by parts and the shift-in-t properties 3.88 through 3.90 of the sine transform.

3.3.3.3 FST of Trigonometric Functions

1. f(t) = sin at/t,  a > 0:

  ℱs[f(t)] = ∫₀^∞ (sin at/t) sin ωt dt = (1/2) ln |(ω + a)/(ω − a)|.   (3.121)

This result is immediately understood when compared to Equation 3.120, taking into account the normalization used in Equations 3.78 and 3.79 for the definition of the FST.
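A pair such as Equation 3.116 is a good candidate for a numerical sanity check, since its integrand decays exponentially. The sketch below is an illustration only (parameter values are arbitrary; SciPy is assumed available); the small-t guard uses the limiting value of the integrand, (a − b)ω, to avoid a 0/0 evaluation.

```python
from math import exp, sin, atan, log
from scipy.integrate import quad

a, b, w = 2.0, 1.0, 1.5

def integrand(t):
    if t < 1e-8:
        return (a - b) * w    # limit of the integrand as t -> 0
    return (exp(-b * t) - exp(-a * t)) / t**2 * sin(w * t)

num, _ = quad(integrand, 0.0, 60.0, limit=200)
exact = (w / 2) * log((a**2 + w**2) / (b**2 + w**2)) + a * atan(w / a) - b * atan(w / b)
```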
2. f(t) = e^{−bt} sin at/t,  Re(b) > |Im(a)|:

  ℱs[f(t)] = ∫₀^∞ (e^{−bt} sin at/t) sin ωt dt = (1/4) ln [(b² + (ω + a)²)/(b² + (ω − a)²)].   (3.122)

This result follows easily from the integration-in-ω property 3.27 as applied to the cosine transform in Equation 3.53.

3. f(t) = e^{−bt} cos at,  Re(b) > |Im(a)|:

  ℱs[f(t)] = ∫₀^∞ e^{−bt} cos at sin ωt dt = (1/2) [(ω − a)/(b² + (ω − a)²) + (ω + a)/(b² + (ω + a)²)],   (3.123)

which is also recognized as the Laplace transform of the function cos at sin ωt.

4. f(t) = t cos at/(t² + b²),  a, Re(b) > 0:

  ℱs[f(t)] = ∫₀^∞ [t cos at/(t² + b²)] sin ωt dt
           = −(π/2) e^{−ab} sinh bω,  ω < a;
           = (π/2) e^{−bω} cosh ab,  ω > a.   (3.124)

Note the symmetry of Equation 3.124 with Equation 3.55.

5. f(t) = sin at/(t² + b²),  a, Re(b) > 0:

  ℱs[f(t)] = ∫₀^∞ [sin at/(t² + b²)] sin ωt dt
           = (π/2b) e^{−ab} sinh bω,  ω < a;
           = (π/2b) e^{−bω} sinh ab,  ω > a.   (3.125)

The symmetry of Equation 3.125 with Equation 3.56 is apparent.

6. f(t) = e^{−bt²} sin at,  Re(b) > 0:

  ℱs[f(t)] = ∫₀^∞ e^{−bt²} sin at sin ωt dt = (1/2)√(π/b) e^{−(ω²+a²)/4b} sinh(aω/2b),   (3.126)

similar to Equation 3.57 for the cosine transform.

3.3.3.4 FST of Orthogonal Polynomials

1. Legendre polynomial (defined in Equation 3.58):

  f(t) = P_n(1 − 2t²)[1 − U(t − 1)],  n = 0, 1, 2, . . .

  ℱs[f(t)] = ∫₀¹ P_n(1 − 2t²) sin ωt dt = (π/2)[J_{n+1/2}(ω/2)]²,   (3.127)

where J_ν(x) is the Bessel function of the first kind defined in Equation 3.58′.

2. Chebyshev polynomial (defined in Equation 3.59):

  f(t) = (a² − t²)^{−1/2} T_{2n+1}(t/a)[1 − U(t − a)],  n = 0, 1, 2, . . .

  ℱs[f(t)] = ∫₀^a (a² − t²)^{−1/2} T_{2n+1}(t/a) sin ωt dt = (−1)^n (π/2) J_{2n+1}(aω).   (3.128)

3. Laguerre polynomials:

  f(t) = t^{2m+1} e^{−t²/2} L_n^{2m+1}(t²),  m, n = 0, 1, 2, . . .

  ℱs[f(t)] = ∫₀^∞ t^{2m+1} e^{−t²/2} L_n^{2m+1}(t²) sin ωt dt = (n!)^{−1} (−1)^m √(π/2) e^{−ω²/2} He_n(ω) He_{n+2m+1}(ω),   (3.129)

where L_n^a(x) = (e^x x^{−a}/n!) d^n/dx^n (e^{−x} x^{n+a}) is a Laguerre polynomial, with L_n^0(x) = L_n(x) as defined in Equation 3.60. Here, He_n(x) is the Hermite polynomial defined in Equation 3.61.

4. Hermite polynomials (defined in Equation 3.61):

  f(t) = e^{−t²/2} He_{2n+1}(√2 t):

  ℱs[f(t)] = ∫₀^∞ e^{−t²/2} He_{2n+1}(√2 t) sin ωt dt = (−1)^n √(π/2) e^{−ω²/2} He_{2n+1}(√2 ω).   (3.130)
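The Hermite pair 3.130 can be spot-checked for n = 1, where He₃(x) = x³ − 3x. The sketch below is an illustration (not from the handbook; ω is arbitrary, NumPy assumed), using simple trapezoidal quadrature on a truncated grid.

```python
import numpy as np
from math import sqrt, pi, exp

w = 0.9
r2 = sqrt(2.0)
t = np.linspace(0.0, 12.0, 200001)   # exp(-t^2/2) is negligible beyond t = 12
dt = t[1] - t[0]

he3 = lambda x: x**3 - 3 * x         # He_3(x)
vals = np.exp(-t**2 / 2) * he3(r2 * t) * np.sin(w * t)
num = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt

# Equation 3.130 with n = 1.
exact = -sqrt(pi / 2) * exp(-w**2 / 2) * he3(r2 * w)
```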
3.3.3.5 FST of Some Special Functions

1. The complementary error function (defined in Equation 3.63): f(t) = Erfc(at), a > 0:

  ℱs[f(t)] = ∫₀^∞ Erfc(at) sin ωt dt = (1/ω)[1 − e^{−ω²/4a²}].   (3.131)

2. The sine integral function (defined in Equation 3.39): f(t) = si(at), a > 0:

  ℱs[f(t)] = ∫₀^∞ si(at) sin ωt dt = 0,  0 < ω < a;  = −π/(2ω),  ω > a.   (3.132)

Note the symmetry of Equation 3.132 with Equation 3.65.

3. The cosine integral function (defined in Equation 3.36): f(t) = Ci(at), a > 0:

  ℱs[f(t)] = ∫₀^∞ Ci(at) sin ωt dt = −(1/2ω) ln |1 − ω²/a²|.   (3.133)

4. The exponential integral function: f(t) = Ei(−at), a > 0:

  ℱs[f(t)] = ∫₀^∞ Ei(−at) sin ωt dt = −(1/2ω) ln (ω²/a² + 1).   (3.134)

5. Bessel functions (defined in Equation 3.58′):
(a) f(t) = J₀(at), a > 0:

  ℱs[f(t)] = ∫₀^∞ J₀(at) sin ωt dt = 0,  0 < ω < a;  = (ω² − a²)^{−1/2},  ω > a.   (3.135)

(b) f(t) = J_{2n+1}(at), a > 0:

  ℱs[f(t)] = ∫₀^∞ J_{2n+1}(at) sin ωt dt = (−1)^n (a² − ω²)^{−1/2} T_{2n+1}(ω/a),  0 < ω < a;  = 0,  ω > a,   (3.136)

where T_n(x) is the Chebyshev polynomial defined in Equation 3.59.
(c) f(t) = t^{−n} J_{n+1}(at),  a > 0 and n = 0, 1, 2, . . . :

  ℱs[f(t)] = ∫₀^∞ t^{−n} J_{n+1}(at) sin ωt dt = [√π/(Γ(n + 1/2) 2^n a^{n+1})] ω (a² − ω²)^{n−1/2},  0 < ω < a;  = 0,  ω > a.   (3.137)

(d) f(t) = Y₀(at), a > 0:

  ℱs[f(t)] = ∫₀^∞ Y₀(at) sin ωt dt = (2/π)(a² − ω²)^{−1/2} sin^{−1}(ω/a),  0 < ω < a;
           = (2/π)(ω² − a²)^{−1/2} ln {(ω/a) − [(ω/a)² − 1]^{1/2}},  ω > a.   (3.138)

(e) f(t) = t^ν Y_{ν−1}(at),  a > 0, |Re(ν)| < 1/2:

  ℱs[f(t)] = ∫₀^∞ t^ν Y_{ν−1}(at) sin ωt dt = √π 2^ν a^{ν−1} [Γ(1/2 − ν)]^{−1} ω (ω² − a²)^{−ν−1/2},  ω > a;  = 0,  0 < ω < a.   (3.139)

As with the cosine transforms, more detailed results are found in the sections covering Hankel transforms.
3.4 The Discrete Sine and Cosine Transforms (DST and DCT)

In practical applications, the computations of the Fourier sine and cosine transforms are done with sampled data of finite duration. Because of the finite duration and the discrete nature of the data, much can be gained in theory and in ease of computation by formulating the corresponding discrete sine and cosine transforms (DST and DCT) directly. In what follows, we discuss the definitions and properties of the discrete sine and cosine transforms. It is possible to define four different types of each of the DCT and the DST (for details, see Rao and Yip, 1990). We shall concentrate on Type I, which can be defined by simply discretizing the FST and FCT within a finite rectangular window of unit height.
3.4.1 Definitions of DCT and DST and Relations to FST and FCT

Consider the transform kernel of the FCT given by

  Kc(ω, t) = cos ωt.   (3.140)

Let ωm = 2πmΔf and tn = nΔt be the sampled angular frequency and time, respectively. Here, Δf and Δt are the sample intervals for frequency and time, respectively, and m and n are positive integers. The kernel in Equation 3.140 can now be discretized as

  Kc(m, n) = Kc(ωm, tn) = cos(2πmnΔfΔt).   (3.141)

If we further let ΔfΔt = 1/(2N), where N is a positive integer, we obtain the DCT kernel:

  Kc(m, n) = cos(πmn/N),  m, n = 0, 1, . . . , N.   (3.142)

The transform kernel in Equation 3.142 is the DCT kernel of Type I. It represents the mnth element in an (N + 1) × (N + 1) transformation matrix, which, with the proper normalization, provides the definition for the DCT transformation matrix [C]. These elements are

  [C]mn = √(2/N) km kn cos(mnπ/N),  m, n = 0, 1, . . . , N,   (3.143)

where

  ki = 1 for i ≠ 0 or N,  and  ki = 1/√2 for i = 0 or N.

The discretization can be viewed as taking a finite time duration and dividing it into N intervals of Δt each. Including the boundary points, there are N + 1 sample points to be considered. If the discrete N + 1 sample points are represented by a vector x, the DCT of this vector is a vector Xc given by

  Xc = [C] x,   (3.144)

which, in an element-by-element form, means

  Xc(m) = √(2/N) Σ_{n=0}^{N} km kn cos(mnπ/N) x(n).   (3.145)

Vectors Xc and x are said to be a DCT pair. It can be shown that [C] is a unitary matrix. Thus, the inverse transformation is given by

  x(n) = √(2/N) Σ_{m=0}^{N} km kn cos(mnπ/N) Xc(m).   (3.146)

Similar consideration in discretizing the FST kernel

  Ks(ω, t) = sin ωt   (3.147)

will lead to the definition of the (N − 1) × (N − 1) DST transform matrix, whose elements are given by

  [S]mn = √(2/N) sin(mnπ/N),  m, n = 1, 2, . . . , N − 1.   (3.148)

This matrix is also unitary and, when it is applied to a data vector x of length N − 1, it produces a vector Xs, whose elements are given by

  Xs(m) = √(2/N) Σ_{n=1}^{N−1} sin(mnπ/N) x(n).   (3.149)

The vectors x and Xs are said to form a DST pair. The inverse DST is given by

  x(n) = √(2/N) Σ_{m=1}^{N−1} sin(mnπ/N) Xs(m).   (3.150)

It is evident in Equations 3.146 and 3.150 that both DCT and DST are symmetric transforms. Both are obtained by discretizing a finite time duration into N equal intervals of Δt each, resulting in an (N + 1) × (N + 1) matrix for [C] because the boundary elements are not zero, and in an (N − 1) × (N − 1) matrix for [S] because the boundary elements are zero.
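The definitions 3.143 and 3.148 can be coded directly. The sketch below (an illustration, not part of the handbook; NumPy assumed) builds both Type-I matrices and exercises the symmetry and self-inverse behavior discussed in the following subsections.

```python
import numpy as np

def dct1_matrix(N):
    # (N+1) x (N+1) DCT-I matrix of Equation 3.143.
    k = np.ones(N + 1)
    k[0] = k[N] = 1.0 / np.sqrt(2.0)
    m, n = np.meshgrid(np.arange(N + 1), np.arange(N + 1), indexing="ij")
    return np.sqrt(2.0 / N) * k[m] * k[n] * np.cos(m * n * np.pi / N)

def dst1_matrix(N):
    # (N-1) x (N-1) DST-I matrix of Equation 3.148.
    m, n = np.meshgrid(np.arange(1, N), np.arange(1, N), indexing="ij")
    return np.sqrt(2.0 / N) * np.sin(m * n * np.pi / N)

N = 16
C = dct1_matrix(N)
S = dst1_matrix(N)
```

Because both matrices are real, symmetric, and unitary, each is its own inverse, so a transform followed by the same transform recovers the data exactly.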
3.4.2 Basic Properties and Operational Rules

3.4.2.1 The Unitarity Property

Let cm denote the mth column vector in the matrix [C]. Consider the inner product of two such vectors:

  ⟨cm, cn⟩ = (2/N) Σ_{p=0}^{N} km kp cos(mpπ/N) kp kn cos(pnπ/N).   (3.151)

The summation can be carried out by defining the 2Nth primitive root of unity as

  W_{2N} = e^{−jπ/N} = cos(π/N) − j sin(π/N),   (3.152)

and applying it to the summation in Equation 3.151. This gives

  ⟨cm, cn⟩ = (km kn/N) Re[ Σ_{p=0}^{N−1} (W_{2N})^{p(n−m)} + Σ_{p=1}^{N} (W_{2N})^{p(n+m)} ],   (3.153)

where Re[·] denotes the real part.
Considering the first summation in Equation 3.153, and letting k = (n − m), the power series can be written as

  Σ_{p=0}^{N−1} W_{2N}^{pk} = (1 − W_{2N}^{Nk})/(1 − W_{2N}^{k})
                            = {2[1 − cos(kπ/N)]}^{−1} {1 − W_{2N}^{Nk} − W_{2N}^{k} + W_{2N}^{(N−1)k}}.   (3.154)

Similarly, the second series in Equation 3.153 can be summed by letting l = (n + m),

  Σ_{p=1}^{N} W_{2N}^{pl} = {2[1 − cos(lπ/N)]}^{−1} {W_{2N}^{l} − W_{2N}^{(N+1)l} − 1 + W_{2N}^{Nl}}.   (3.155)

Hence, for m ≠ n (i.e., k ≠ 0), the real part of Equation 3.154 is

  Re[ Σ_{p=0}^{N−1} W_{2N}^{pk} ] = [1 − (−1)^k][1 − cos(kπ/N)] / {2[1 − cos(kπ/N)]} = [1 − (−1)^k]/2,

and the real part of Equation 3.155 is

  Re[ Σ_{p=1}^{N} W_{2N}^{pl} ] = −[1 − (−1)^l][1 − cos(lπ/N)] / {2[1 − cos(lπ/N)]} = −[1 − (−1)^l]/2.

Combining these, and noting that k and l differ by 2m, we obtain the orthogonality property for the inner product,

  ⟨cm, cn⟩ = 0 for m ≠ n.   (3.156)

For m = n ≠ 0 or N, the inner product is

  ⟨cm, cn⟩ = (1/N) Re[ Σ_{p=0}^{N−1} 1 + Σ_{p=1}^{N} W_{2N}^{2mp} ] = 1,

and for m = n = 0 or N, the inner product is

  ⟨cm, cn⟩ = (1/2N) Re[ Σ_{p=0}^{N−1} 1 + Σ_{p=1}^{N} 1 ] = 1.

Therefore, the inner product satisfies the orthonormality condition,

  ⟨cm, cn⟩ = δmn,   (3.157)

where δmn is the Kronecker delta, and the DCT matrix [C] is shown to be unitary. Similar considerations can be applied to the DST matrix [S] to show that it is also unitary.

3.4.2.2 Inverse Transformation

As alluded to in Section 3.4.1, the unitary matrices [C] and [S] are symmetric and, therefore, the inverse transformations are exactly the same as the forward transformations, based on the above unitarity properties. Therefore,

  [C]^{−1} = [C]  and  [S]^{−1} = [S].   (3.158)

3.4.2.3 Scaling

Recall that in the discretization of the FCT, the time and frequency intervals are related by

  ΔfΔt = 1/(2N)  or  Δf = 1/(2NΔt).   (3.159)

Because the DCT and DST deal with discrete sample points, a scaling in time has no effect in the transform, except in changing the unit frequency interval in the transform domain. Thus, as Δt changes to aΔt, Δf changes to Δf/a, provided the number of divisions N remains the same. Hence, the properties 3.16 and 3.87 for the FCT and FST are retained, except for the 1/a factor, which is absent in the cases of DCT and DST. Equation 3.159 may also be interpreted as giving the frequency resolution of a set of discrete data points, sampled at a time interval of Δt. Using T = NΔt as the time duration of the sequence of data points, the frequency resolution for the transforms is

  Δf = 1/(2T).   (3.160)

3.4.2.4 Shift-in-t

Because the data are sampled, we obtain the shift-in-time properties of DCT and DST by examining time shifts in units of Δt. Thus, if x = [x(0), x(1), . . . , x(N)]^T, we define the right-shifted sequence as x⁺ = [x(1), x(2), . . . , x(N + 1)]^T. Their corresponding DCTs are given by

  Xc = [C] x  and  Xc⁺ = [C] x⁺.   (3.161)

The shift-in-time property seeks to relate Xc⁺ with Xc. It turns out that it relates not only to Xc but also to Xs, the DST of x. This is to be expected because the shift-in-time properties of FCT and FST are similarly related. It can be shown that the elements of Xc⁺ are given by

  Xc⁺(m) = cos(mπ/N) Xc(m) + km sin(mπ/N) Xs(m)
         + √(2/N) km { −(1/√2) cos(mπ/N) x(0) + (1/√2 − 1) x(1)
                        + (1 − 1/√2)(−1)^m x(N) + (1/√2)(−1)^m x(N + 1) }.   (3.162)
In Equation 3.162, Xc(m) and Xs(m) are, respectively, the mth element of the DCT of the vector [x(0), x(1), . . . , x(N)]^T and the mth element of the DST of the vector [x(1), x(2), . . . , x(N − 1)]^T. While properties analogous to the so-called kernel-product properties for FCT in Section 3.2.2 may be developed, Equation 3.162 is more practical in that it provides a way of updating a DCT of a given dimension without having to recompute all the components. The corresponding result for the DST is

  Xs⁺(m) = cos(mπ/N) Xs(m) − sin(mπ/N) Xc(m)
         + √(2/N) sin(mπ/N) [ (1/√2) x(0) − (1 − 1/√2)(−1)^m x(N) ].   (3.163)

Here, it is noted that the Xc(m) are the elements of the DCT of the vector [x(0), . . . , x(N)]^T.

3.4.2.5 The Difference Property

For discrete sequences, the difference operator replaces the differential operator for continuous functions. The DCT and the DST of the difference operator are therefore analogous to the FCT and the FST of a derivative. We can define a difference vector d as

  d = x⁺ − x,   (3.164)

where x⁺ is the right-shifted version of x. It is clear that the DCT and the DST of d are simply given by

  Dc = Xc⁺ − Xc  and  Ds = Xs⁺ − Xs.   (3.165)

As we can see from Equation 3.165, the main operational advantage of the FCT and FST, namely that in the differentiation properties, has not carried over to the discrete cases. As well, properties with both integration-in-t and integration-in-ω are also lost in the discrete cases. We conclude this section by mentioning that no simple convolution properties exist in the cases of DCT and DST. For finite sequences, it is possible to define a circular convolution for two periodic sequences or a linear convolution of two nonperiodic sequences. With these, certain convolution properties for some of the DCTs may be developed. (For more details, the reader is referred to Rao and Yip, 1990.) The results, however, are neither simple nor easy to apply.

3.4.3 Relation to the Karhunen–Loeve Transform (KLT)

While the DCT and the DST discussed here are derived by discretizing the FCT and the FST, based on some unit time interval Δt and some unit frequency interval Δf, their forms are closely related to the KLT in digital signal processing. The KLT is an optimal transform for digital signals in that it diagonalizes the auto-covariance matrix of a data vector. It completely decorrelates the signal in the transform domain, minimizes the mean squared errors (MSEs) in data compression, and packs the most energy (variance) into the fewest number of transform coefficients.

Consider a Markov-1 signal with correlation coefficient ρ. The N × N covariance matrix is a matrix [A], which is real, symmetric, and Toeplitz. It is well known that a nonsingular symmetric Toeplitz matrix has an inverse of tri-diagonal form. In the case of the covariance matrix [A] for a Markov-1 signal, we can write

  [A]^{−1} = (1 − ρ²)^{−1} ×
    |  1    −ρ       0       . . .        |
    | −ρ    1 + ρ²  −ρ       . . .        |
    | . . .          . . .   −ρ           |
    | . . .          −ρ       1           |   (3.166)

This matrix can be decomposed into a sum of two simpler matrices,

  [A]^{−1} = [B] + [R],

where

  [B] = (1 − ρ²)^{−1} ×
    | 1 + ρ²   −√2 ρ    0        . . .          |
    | −√2 ρ    1 + ρ²   −ρ       . . .          |
    | . . .    −ρ       1 + ρ²   . . .          |
    | . . .                       −√2 ρ         |
    | . . .             −√2 ρ    1 + ρ²         |

and

  [R] = (1 − ρ²)^{−1} ×
    | −ρ²         (√2 − 1)ρ   0   . . .                       |
    | (√2 − 1)ρ   0           0   . . .                       |
    | . . .                                                    |
    | . . .                   0           (√2 − 1)ρ           |
    | . . .                   (√2 − 1)ρ   −ρ²                 |   (3.167)

We note that [R] is almost a null matrix and can be considered so when N is very large. Thus, the diagonalization of the matrix [B] is asymptotically equivalent to the diagonalization of the matrix [A]^{−1}. Furthermore, it is well known that the similarity transformation that diagonalizes [A]^{−1} will also diagonalize [A]. From these arguments, it is concluded that the transformation that diagonalizes [B] will, asymptotically, diagonalize [A]. The transformation that diagonalizes [B] depends on a three-term recurrence relation that is exactly satisfied by the Chebyshev polynomials. With these, it can be shown that the matrix [V] that will diagonalize [B] and, in turn, also [A] asymptotically, is defined by

  [V]mn = km kn √(2/(N − 1)) cos(mnπ/(N − 1)),  m, n = 0, 1, . . . , N − 1.   (3.168)
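The tridiagonal inverse 3.166 and the corner-only structure of [R] in 3.167 are easy to verify numerically. The sketch below is an illustration (NumPy assumed; ρ and N are the values used later in Table 3.1): it builds the Markov-1 covariance A with entries ρ^{|i−j|}, forms the tridiagonal matrix of Equation 3.166, and splits it into [B] + [R].

```python
import numpy as np

rho, N = 0.9, 16
i = np.arange(N)
A = rho ** np.abs(i[:, None] - i[None, :])   # Markov-1 covariance, A_ij = rho**|i-j|

# Tridiagonal inverse of Equation 3.166.
Ainv = np.zeros((N, N))
Ainv[np.arange(N), np.arange(N)] = 1 + rho**2
Ainv[0, 0] = Ainv[N - 1, N - 1] = 1.0
off = np.arange(N - 1)
Ainv[off, off + 1] = -rho
Ainv[off + 1, off] = -rho
Ainv /= (1 - rho**2)

# Decomposition [A]^{-1} = [B] + [R] of Equation 3.167.
B = Ainv.copy()
B[0, 0] = B[-1, -1] = (1 + rho**2) / (1 - rho**2)
B[0, 1] = B[1, 0] = B[-1, -2] = B[-2, -1] = -np.sqrt(2.0) * rho / (1 - rho**2)
R = Ainv - B
```

The residual matrix R is nonzero only in the 2 × 2 corner blocks, which is why it becomes negligible for large N.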
As can be seen in Equation 3.168, these are the elements of the DCT matrix [C], except that N has been replaced by N − 1. For large N, these are identical. The foregoing has briefly demonstrated that for a Markov-1 signal, the diagonalization of the covariance matrix, which leads to the KLT, is provided by a transformation matrix [V] that is almost identical to the DCT matrix [C]. This explains why the DCT performs so well in signal decorrelation, even though it is signal independent. Similar arguments can be applied to the DST.

In Figure 3.1, the basis functions forming the KLT for N = 16 are shown. The signal is a Markov-1 signal with a correlation coefficient of ρ = 0.95. It is clear that the set of basis functions and, hence, the KLT is signal dependent, because they are the eigenvectors of the autocovariance matrix of the signal vector. In Figures 3.2 and 3.3, the basis functions for N = 16 of DCT and DST are shown. It is evident that they are very similar to the KLT basis functions. While it is true that the dimensions of the spaces spanned by the KLT and the DCT and DST are different, it can be shown that as N increases, both discrete transforms will asymptotically approach the KLT.

FIGURE 3.1 KLT basis functions for a Markov-1 signal, ρ = 0.95, N = 16.

FIGURE 3.2 DCT basis functions, N = 16.

However, it is true that the similarity of the basis functions does not guarantee the asymptotic behavior of the DCT and the DST, nor does it assure good performance. In applications such as data compression and transform domain coding, the "variance distribution" of the transform coefficients is an important criterion of performance. The variance of a transform coefficient is basically a measure of the information content of that coefficient. Therefore, the higher the variances are in a few transform coefficients, the more room there is for data compression in that transform domain. Let [A] be the data covariance matrix and let [T] be the transformation. Then, the covariance matrix in the transform domain, [A]T, is given by

  [A]T = [T][A][T]^{−1}.   (3.169)

The diagonal elements of [A]T are the variances of the transform coefficients. In Table 3.1, comparisons are shown for the variance distributions of the DCT, the DST, and the discrete Fourier
15
1 2 3
10 %
4 5
DFT DCT*
5
6
DST
7 8
0 0.1
9
FIGURE 3.4
10
When the transformation [T] in Equation 3.169 is not the KLT, [A]T will not be diagonal. The nonzero off-diagonal elements in [A]T form a measure of the ‘‘residual correlation.’’ The smaller the amount of residual correlation, the closer is the transform to being optimal. Figure 3.4 shows the residual correlation as a percentage of the total amount of correlation, for the transforms DCT, DST, and DFT, in a Markov-1 signal with N ¼ 16. As can be seen, again DCT and DST outperform DFT generally. There are other criteria of performance for a given transform, depending on what kind of signal processing is being done. However, using the KLT as a benchmark, DCT and DST are extremely good alternatives as signal independent, fast implementable transforms, because they are both asymptotic to the KLT. This asymptotic property of the discrete trigonometric transforms (particularly the DCT) has made them very important tools in digital signal processing. Although they are suboptimal, in the sense that they will not exactly diagonalize the data covariance matrix, they are signal independent and are computable using fast algorithms. KLT, though exactly optimal, is signal dependent and possesses no fast computational algorithm. Some typical applications are discussed in the next section.
11 12 13 14 15
FIGURE 3.3 DST N ¼ 16. TABLE 3.1 Variance Distributions for N ¼ 16, r ¼ 0.9 DCTa
DST
DFT
0
9.835
9.218
9.835
1
2.933
2.640
1.834
2 3
1.211 0.581
1.468 0.709
1.834 0.519
4
0.348
0.531
0.519
5
0.231
0.314
0.250
6
0.166
0.263
0.250
7
0.129
0.174
0.155
8
0.105
0.153
0.155
9
0.088
0.110
0.113
10 11
0.076 0.068
0.099 0.078
0.113 0.091
12
0.062
0.071
0.091
13
0.057
0.061
0.081
14
0.055
0.057
0.081
15
0.053
0.054
0.078
i
a
DCT is DCT-II here.
transform (DFT), based on a Markov-1 signal of r ¼ 0.9 and N ¼ 16. It is clearly seen that both DCT and DST outperform DFT is using variance distribution as a performance criterion.
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
Percent residual correlation as a function of r, N ¼ 16.
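The variance-distribution comparison above can be reproduced numerically. The following sketch (not from the book; NumPy assumed, with the Markov-1 covariance taken as A[i, j] = ρ^|i−j|) forms Equation 3.169 for an orthonormal DCT-II and a unitary DFT and compares how much variance each packs into its four largest coefficients:

```python
import numpy as np

# Markov-1 (AR(1)) covariance, as used for Table 3.1
N, rho = 16, 0.9
i = np.arange(N)
A = rho ** np.abs(i[:, None] - i[None, :])

# Orthonormal DCT-II matrix (rows indexed by frequency k)
n, k = np.meshgrid(np.arange(N), np.arange(N))
T = np.sqrt(2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
T[0, :] /= np.sqrt(2.0)

var_dct = np.diag(T @ A @ T.T)                 # T is orthogonal, so T^{-1} = T^T
F = np.fft.fft(np.eye(N)) / np.sqrt(N)         # unitary DFT matrix
var_dft = np.abs(np.diag(F @ A @ F.conj().T))  # diagonal of a Hermitian form is real

# Fraction of total variance packed into the 4 largest-variance coefficients
pack = lambda v: np.sort(v)[::-1][:4].sum() / v.sum()
print(f"DCT packs {pack(var_dct):.3f}, DFT packs {pack(var_dft):.3f} of the variance")
assert pack(var_dct) > pack(var_dft)
```

The total variance (the trace of [A]_T) is the same for both transforms; only its distribution across coefficients differs, which is exactly what Table 3.1 tabulates.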
3.5 Selected Applications This section contains some typical applications. We begin with fairly general applications to differential equations and conclude with quite specific applications in the area of data compression. (See Churchill, 1958 and Sneddon, 1972 for more applications.)
Transforms and Applications Handbook

3.5.1 Solution of Differential Equations

3.5.1.1 One-Dimensional Boundary Value Problem

Consider the second-order differential equation

    y''(t) − h² y(t) = F(t),   t ≥ 0,    (3.170)

with boundary conditions y'(0) = 0 and y(∞) = 0, and with

    F(t) = A   for 0 < t < b,
         = 0   otherwise.

Here, we assume h, A, and b to be constants. We note that F(t) can be expressed in terms of a Heaviside step function:

    F(t) = A[1 − U(t − b)].

Applying the operator ℱ_c to the differential equation and using the results in Equations 3.9 and 3.34, we get

    −ω² Y_c − y'(0) − h² Y_c = (A/ω) sin ωb.    (3.171)

Applying the boundary condition and solving for Y_c, we obtain

    Y_c = −(A sin ωb)/[ω(ω² + h²)]    (3.172)
        = −(A/h²)[(sin ωb)/ω − (ω sin ωb)/(ω² + h²)].    (3.173)

The inversion of Y_c can be accomplished with the use of Equations 3.34, 3.55, and 3.3. Noting that the inverse FCT has a normalization factor of 2/π, the solution of the original boundary value problem is given by

    y(t) = −(A/h²)[1 − U(t − b) − e^(−hb) cosh ht]   for t < b,
         = −(A/h²)[1 − U(t − b) + e^(−ht) sinh hb]   for t > b.

These can be rewritten as

    y(t) = (A/h²)(e^(−hb) cosh ht − 1)   for t < b,
         = −(A/h²) e^(−ht) sinh hb       for t > b.    (3.174)

3.5.1.2 Two-Dimensional Boundary Value Problem

Consider a function v(x, y), which is bounded for x ≥ 0, y ≥ 0. Let v(x, y) satisfy the boundary value problem

    ∂²v/∂x² + ∂²v/∂y² = −h(x);   ∂v/∂x = 0 at x = 0;   v(x, 0) = f(x).    (3.175)

We further assume that ∫₀^∞ h(x) dx = 0, that the function

    p(x) = −∫₀^x [ ∫₀^r h(t) dt ] dr    (3.176)

exists, and that the functions p(x) and f(x) have FCTs. We note from Equation 3.176 that

    p''(x) = −h(x)  and  p'(0) = 0,

leading to the following relation between their FCTs:

    ω² P_c(ω) = H_c(ω).    (3.177)

Applying ℱ_c for the x variable in Equation 3.175 reduces the partial differential equation to

    ∂²V_c(ω, y)/∂y² = ω² V_c(ω, y) − ω² P_c(ω).    (3.178)

Because V_c(ω, y) is bounded for y > 0, Equation 3.178 has the following solution:

    V_c(ω, y) = C e^(−ωy) + P_c(ω),    (3.179)

where C is an arbitrary constant, to be determined by v(x, 0) = f(x). In the ω-domain, this means

    V_c(ω, 0) = F_c(ω).    (3.180)

Thus,

    V_c(ω, y) = [F_c(ω) − P_c(ω)] e^(−ωy) + P_c(ω).    (3.181)

This can be inverted, and the solution in the (x, y) domain is then given by

    v(x, y) = p(x) + (1/π) ∫₀^∞ [f(t) − p(t)] { y/[(x + t)² + y²] + y/[(x − t)² + y²] } dt.    (3.182)

Here, we have made use of Equation 3.44 and the convolution result of Equation 3.20.

3.5.1.3 Time-Dependent One-Dimensional Boundary Value Problem

Consider the function u(x, t), which is bounded for x, t ≥ 0. Let this function satisfy the partial differential equation

    ∂u/∂t = h(x, t) + ∂²u/∂x²,    (3.183)

with u(x, 0) = f(x) and u(0, t) = g(t) as the initial and boundary conditions. Applying the FST for the variable x to Equation 3.183, and assuming the existence of all the integrals involved, we obtain

    ∂U_s/∂t + ω² U_s = ω g(t) + H_s(ω, t).    (3.184)

The solution of Equation 3.184 is

    U_s(ω, t) e^(ω²t) = ∫₀^t [ω g(τ) + H_s(ω, τ)] e^(ω²τ) dτ + C.    (3.185)

C is easily found to be F_s(ω), using the condition U_s(ω, 0) = F_s(ω). With this, Equation 3.185 can be inverse transformed by applying the operator ℱ_s⁻¹ to get

    u(x, t) = (2/π) ∫₀^∞ U_s(ω, t) sin ωx dω.    (3.186)

We note that, depending on the forms of the functions F_s and H_s, the inverse FST may be obtained by table look-up.
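As a quick sanity check — not from the book — the closed-form solution (3.174) can be verified against the differential equation by finite differences (NumPy assumed; the values of A, h, and b are arbitrary choices):

```python
import numpy as np

# Verify that y of Equation 3.174 satisfies y'' − h²y = F(t),
# with F = A on (0, b) and 0 beyond, via a second-difference stencil.
A, h, b = 1.0, 2.0, 1.0
t = np.linspace(0.01, 3.0, 2000)
y = np.where(t < b,
             (A / h**2) * (np.exp(-h * b) * np.cosh(h * t) - 1.0),
             -(A / h**2) * np.exp(-h * t) * np.sinh(h * b))
F = np.where(t < b, A, 0.0)

dt = t[1] - t[0]
ypp = (y[2:] - 2 * y[1:-1] + y[:-2]) / dt**2     # finite-difference y''
resid = ypp - h**2 * y[1:-1] - F[1:-1]

# Away from the corner at t = b (where F jumps), the residual is at the
# discretization-error level.
mask = np.abs(t[1:-1] - b) > 5 * dt
assert np.max(np.abs(resid[mask])) < 1e-3
```

The two branches of (3.174) also match in value and first derivative at t = b, so the solution is C¹ there, as the step discontinuity in F requires.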
3.5.2 Cepstral Analysis in Speech Processing

In cepstral analysis, a sequence is converted by a transform T, the logarithm of the absolute value of the transform is taken, and the cepstrum is then obtained by the inverse transform T⁻¹. Figure 3.5 shows the essential steps in cepstral analysis. Here, {x(n)} is the input speech sequence, {X(k)} is the transform sequence, and the output {x_R(n)} is called the real cepstrum. The transform may be any invertible transform. When T is an N-point DFT, the scheme can be implemented using the DCT. In the computation of the real cepstrum using the DFT, the input sequence has to be padded with trailing zeros to double its length. However, a simple relation between the DFT and the DCT for real even sequences reduces the DFT to a DCT.

[FIGURE 3.5 Block diagram for cepstral analysis of x(n): x(n) → T → X(k) → log | · | → T⁻¹ → x_R(n).]

Let x(n), n = 0, 1, 2, . . . , M, be the input speech sequence to be analyzed. To obtain the real cepstrum x_R(n) using the DFT, the sequence is padded with zeros so that x(n) = 0 for n = M + 1, . . . , 2M − 1. If we consider a symmetric sequence s(n) defined by

    s(n) = x(n)        0 < n < M,
         = 2x(n)       n = 0, M,
         = x(2M − n)   M < n ≤ 2M − 1,    (3.187)

then the DFT of s(n) can be obtained as

    S_F(k) = 2 [ x(0) + (−1)^k x(M) + Σ_{n=1}^{M−1} x(n) cos(nkπ/M) ].    (3.188)

Equation 3.188 is clearly in the form of a DCT of the sequence {x(n)}, up to a constant factor of normalization. Now, because {s(n)} is a symmetric real sequence constructed out of {x(n)}, S_F(k) is, up to the same kind of constant, Re[X_F(k)], where {X_F(k)} is the 2M-point DFT of the zero-padded sequence. Combining this with Equation 3.188, we see that

    Re[X_F(k)] = 2[X_c(k)],    (3.189)

where X_c is the (M + 1)-point DCT of the speech sequence {x(n)}; Equation 3.189 is valid up to a normalization constant. Because direct sparse matrix factorization of the (M + 1) × (M + 1) DCT matrix is possible, fast algorithms exist for the computation of the DCT. This means that, to obtain the real cepstrum of {x(n)}, there is no need to pad the sequence with trailing zeros; the computation of x_R(n) can be achieved through the DCT of the sequence {x(n)}.

Rather than using the DCT as a means of computing the DFT, the transform T in cepstral analysis can directly be a DCT or a DST. It has been found that the performance of speech cepstral analysis using the DCT and the DST is comparable to that of the traditional DFT cepstral analysis.
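The symmetric-extension identity behind Equations 3.187 and 3.188 is easy to check numerically. A minimal sketch (NumPy assumed; the sequence values are synthetic):

```python
import numpy as np

# The 2M-point DFT of the symmetric extension s(n) of Equation 3.187 is real,
# and equals 2·Re[X_F(k)], where X_F is the 2M-point DFT of the zero-padded x.
rng = np.random.default_rng(0)
M = 8
x = rng.standard_normal(M + 1)          # x(0), ..., x(M)

s = np.empty(2 * M)
s[0], s[M] = 2 * x[0], 2 * x[M]         # doubled end points, per Equation 3.187
s[1:M] = x[1:M]
s[M + 1:] = x[1:M][::-1]                # s(n) = x(2M − n), M < n ≤ 2M − 1

S_F = np.fft.fft(s)
X_F = np.fft.fft(x, n=2 * M)            # zero-padded to length 2M

assert np.allclose(S_F.imag, 0, atol=1e-12)   # DFT of a real even sequence is real
assert np.allclose(S_F.real, 2 * X_F.real)    # S_F(k) = 2 Re[X_F(k)]
```

Since S_F(k) is, term by term, the bracketed cosine sum of Equation 3.188, the real part of the padded DFT is indeed a DCT of the unpadded sequence up to normalization, which is the point of Equation 3.189.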
3.5.3 Data Compression

Data compression is an important application of transform coding when retrieval of a signal from a large database is required. Transform coefficients with large variances can be retained to represent significant features, for pattern recognition, for example; those with small variances, below a certain threshold, can be discarded. Such a scheme can be used to reduce the bandwidth required for transmission or storage. The transforms used for data compression must decorrelate the data maximally, with the highest energy-packing efficiency possible (efficiency here means how much energy can be packed into the fewest transform coefficients). The ideal, or optimal, transform is the KLT, which diagonalizes the data covariance matrix and packs the most energy into the fewest transform coefficients. Unfortunately, the KLT is data dependent, has no known fast computational algorithm, and is therefore not practical. On the other hand, Markov models describe most data systems quite well, and suboptimal but asymptotically equivalent transforms such as the DCT and the DST are data independent and implementable using fast algorithms. Therefore, in many applications, such as storage of electrocardiogram (ECG) or vectorcardiogram (VCG) data, or video data transmission over telephone lines for video phones, suboptimal transforms such as the DCT are preferred over the optimal KLT. For such applications, depending upon the
[FIGURE 3.6 (a) Data compression for storage: sampled ECG → one-dimensional DCT → thresholding to keep 1/m of the coefficients → storage. (b) Reconstruction from compressed data: stored coefficients → padding with zeros to original length → inverse DCT → reconstructed ECG.]
[FIGURE 3.7 Adaptive transform-domain LMS filtering: the input x(n) and its delayed samples x(n − 1), . . . , x(n − N + 1) feed an N × N DCT; the DCT outputs are weighted by the adaptive coefficients a_n0, . . . , a_n,N−1 and summed to form the output y(n); the error r(n) − y(n) drives the adaptive LMS algorithm.]
required fidelity of the reconstructed data, compression ratios of up to 10:1 have been reported, and compression ratios of 3:1 to 5:1 using the DCT for both ECG (one-dimensional) and VCG (two-dimensional) data are commonplace. Figure 3.6a and b show the block diagrams for processing, storage, and retrieval of a one-dimensional ECG, using an m:1 compression ratio.
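The thresholding scheme of Figure 3.6 can be sketched in a few lines. This is an illustration only, not the book's implementation: a synthetic smooth signal stands in for the ECG, an orthonormal DCT-II is used, and the 1/m largest-magnitude coefficients are kept:

```python
import numpy as np

N, m = 256, 4                            # signal length, m:1 compression
t = np.linspace(0, 1, N)
sig = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# Orthonormal DCT-II matrix
n, k = np.meshgrid(np.arange(N), np.arange(N))
T = np.sqrt(2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
T[0, :] /= np.sqrt(2.0)

coef = T @ sig
keep = np.argsort(np.abs(coef))[::-1][: N // m]   # indices of the kept 1/m coefficients
comp = np.zeros_like(coef)
comp[keep] = coef[keep]                  # "padding with zeros" of Figure 3.6b
rec = T.T @ comp                         # inverse DCT (T is orthogonal)

snr_db = 10 * np.log10(np.sum(sig**2) / np.sum((sig - rec) ** 2))
assert snr_db > 20                       # a smooth signal survives 4:1 compression well
```

For a signal whose energy is concentrated in few DCT coefficients — the case the Markov-1 analysis above predicts — the reconstruction error after 4:1 thresholding is small.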
3.5.4 Transform Domain Processing

While discarding low-variance coefficients in the DCT domain provides data compression, certain details or desired features of the original data may be lost in the reconstruction. It is possible to remedy this partially by processing the transform coefficients before reconstruction. Adaptive processing can be applied based on subjective criteria, as in video phone applications. Coefficient quantization is another means of processing, used to minimize the effect of noise. Other processing techniques, such as subsampling (decimation) and up-sampling (interpolation), can also be performed in the DCT domain, effectively combining the operations of filtering and transform coding. Such processing techniques have been successfully employed to convert high-definition TV signals to standard NTSC TV signals.

One of the most popular digital signal processing tools is adaptive least-mean-square (LMS) filtering. This can be done either in the time domain or in the transform domain. Figure 3.7 shows the block diagram for adaptive DCT transform-domain LMS filtering. Here a_n0, a_n1, . . . , a_n,N−1 are the adaptive weights for the transform-domain filter. The desired response is {r(n)} and {y(n)} is the filtered output. It has been found that such transform-domain filtering speeds up the convergence of the LMS algorithm for speech-related applications such as spectral analysis and echo cancellation.
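The structure of Figure 3.7 can be sketched as follows. This is a minimal illustration under assumed conditions (NumPy, white-noise input, a synthetic unknown FIR system as the source of the desired response r(n)), not the book's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
# Orthonormal DCT-II applied to the tap vector, as in Figure 3.7
n, k = np.meshgrid(np.arange(N), np.arange(N))
T = np.sqrt(2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
T[0, :] /= np.sqrt(2.0)

w_true = rng.standard_normal(N)          # unknown FIR system to identify
a = np.zeros(N)                          # adaptive weights a_n0..a_n,N-1 (DCT domain)
mu = 0.05
x = rng.standard_normal(4000)
for i in range(N, len(x)):
    u = x[i - N + 1 : i + 1][::-1]       # tap vector x(n), x(n−1), ..., x(n−N+1)
    z = T @ u                            # DCT of the tap vector
    y = a @ z                            # filter output y(n)
    e = w_true @ u - y                   # error r(n) − y(n)
    a += 2 * mu * e * z                  # LMS update on the transform-domain weights

# The effective time-domain filter T^T a converges to the unknown system
assert np.linalg.norm(T.T @ a - w_true) < 0.05
```

With white input the DCT rotation changes nothing; its convergence advantage appears for correlated (e.g., speech-like) inputs, where the transform approximately decorrelates the taps and evens out the modal time constants.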
3.5.5 Image Compression by the Discrete Local Sine Transform (DLS)

3.5.5.1 Introduction

The DCT has long been recognized as one of the best substitutes for the optimal, but data-dependent, KLT in image processing. Many standards, such as JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group), have adopted the DCT as a standard transform technique for image compression.
While both the KLT and the DCT satisfy the perfect reconstruction (PR) condition when no compression (no dropping of transform coefficients) takes place in the transform domain, both suffer from the artifact of "blocking" whenever compression is done. The severity of the artifact depends on the amount of compression. In speech and audio processing, it appears as a clicking sound in the reconstructed speech. In image compression, it appears as "tiles" overlaying the reconstructed picture. The blocking artifact can be attributed to the fact that two-dimensional image processing by transform generally takes place on blocks of pixels, the most common sizes being 8 × 8 and 16 × 16. When modification of the transform coefficients occurs in compression or other transform-domain processing, the PR condition is violated, and the mismatching of the edges of the reconstructed blocks produces this artifact. Efforts to counter this compression artifact led to the development of lapped transforms (see Malvar, 1992). These transforms are based on basis functions with a wider support in the data domain than in the transform domain, leading to overlaps of the basis functions in the edge region of each block; hence the name "lapped" transform. Many such lapped transforms can be constructed using different criteria. There are lapped orthogonal transforms (LOTs), modulated lapped transforms (MLTs), and hierarchical lapped transforms (HLTs). There are also lapped transforms based on the discrete sine or cosine basis functions. In this section, one such lapped transform, based on the discrete sine basis functions, is described; it is called the DLS. The transform is applied to image compression at different compression ratios, and the results are compared with those of other lapped transforms.
3.5.5.2 Elements of the Lapped Orthogonal Transform (LOT)

In general, a lapped transform takes N sample points in the data domain and transforms them into M coefficients in the conjugate domain, where N > M. Very often, N can be as much as twice the size of M. In matrix–vector notation, a data vector x_m of length N is transformed into a vector X_m of length M, and the transform is represented by the M × N matrix F^T in the equation

    X_m = F^T x_m.    (3.190)

Here F is the lapped transform matrix, of dimension N × M. One might interpret such a matrix as an M-dimensional subspace spanned by M N-dimensional vectors. Specifically, if M = 2 and N = 4, the two-dimensional vector space is spanned by two linearly independent four-dimensional vectors. As can be imagined, such a scheme provides additional flexibility in the design of the transform basis functions.

When a data sequence is to be processed by a lapped transform, the basic block transform matrix F is of dimension N × M, whereas the overall transform matrix C is in block-diagonal form, given by

    C = [ F              ]
        [     F        O ]
        [ O       ...    ]    (3.191)
        [              F ]

where consecutive blocks F overlap by L rows. If

    F^T = [ a11  a21  a31  a41 ]
          [ a12  a22  a32  a42 ],

the matrix C will appear as

    C = [ a11  a12                 ]
        [ a21  a22                 ]
        [ a31  a32  a11  a12       ]
        [ a41  a42  a21  a22       ]    (3.192)
        [           a31  a32  ...  ]
        [           a41  a42       ]

when the length of the overlap is 2. For a data sequence x_m of dimension K, the lapped-transformed sequence X_m is given by

    X_m = C^T x_m.    (3.193)

Evidently, in the segmented form of x_m (each segment of length N), the data points located at the segment ends, in the overlapped regions, are processed in two consecutive block transforms. One can visualize this as a sliding window of size N moving over the data sequence in shifts of size M.

When compression or other processing is not applied, all invertible transforms should satisfy the PR condition. In terms of the transformation matrix, this PR condition is stated simply as

    C C^T = I_K   and   C^T C = I_K,    (3.194)

where I_K is a K × K identity matrix. From Equation 3.194, conditions for the component block matrix F can be stated:

    F^T F = I_M    (3.195)

and

    F^T W F = O_M,    (3.196)

where W is a "one block shift" matrix defined by

    W = [ O1  I_L ]
        [ O2  O1 ].

Here, L is the length of the overlap region, O1 is an L × (M − L) null matrix, O2 is an (M − L) × (M − L) null matrix, and O_M is an M × M null matrix. Thus, in addition to the usual orthonormality condition 3.195, lapped transforms require the additional "lapped orthogonality" condition 3.196 to preserve the overall PR requirement.
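The sliding-window interpretation of Equations 3.190 through 3.193 can be checked directly. The sketch below (NumPy assumed; F is a random block, not a valid PR basis) builds the overlapped block matrix C of Equation 3.192 and confirms that C^T x equals the per-window products F^T x:

```python
import numpy as np

N, M = 4, 2                     # block length and number of coefficients per block
L = N - M                       # overlap length
rng = np.random.default_rng(2)
F = rng.standard_normal((N, M))  # placeholder block matrix (illustration only)

num_blocks = 5
K = M * num_blocks + L           # data length covered by the overlapped blocks
C = np.zeros((K, M * num_blocks))
for b in range(num_blocks):
    # consecutive copies of F are offset by M rows, so they overlap by L rows
    C[b * M : b * M + N, b * M : b * M + M] = F

x = rng.standard_normal(K)
X = C.T @ x                      # Equation 3.193

# Same computation as a window of size N sliding in steps of M (Equation 3.190)
X_win = np.concatenate([F.T @ x[b * M : b * M + N] for b in range(num_blocks)])
assert np.allclose(X, X_win)
```

A real lapped transform would additionally choose F so that Equations 3.195 and 3.196 hold, which is what makes C^T C = I_K despite the overlap.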
3.5.5.3 The Discrete Local Sine Transform (DLS)

By properly choosing a "core" and a "lapped" region together with a specified bell function, a lapped transform basis set can be constructed to satisfy the PR condition. The DLS is just such a set, based on the continuous bases of Coifman and Meyer (see Coifman and Meyer, 1991). Let F_s be the DLS transform matrix, so that

    F_s = [f_0, f_1, . . . , f_{M−1}].    (3.197)

Then the basis functions f_r are defined by

    f_r(n) = √(2/M) b(n) sin[ (2r + 1)(π/2)(n/M − ε) ],
    n ∈ [0, M + L − 1],   r ∈ [0, M − 1],    (3.198)

where n and r are, respectively, the index of the data sample and the index of the basis function; ε = (L − 1)/2M; M is the number of basis functions in the set; and L is the length of the lapped portion. b(n) is called a bell function, and it controls the rolloff over the lapped portion of the basis function. It is given by

    b(n) = S_e(n) = sin[ nπ/(2(L − 1)) − (1/4) sin(2nπ/(L − 1)) ]               for n = 0, . . . , L − 1,
         = 1                                                                    for n = L, . . . , M − 1,
         = C_e(n − M) = cos[ (n − M)π/(2(L − 1)) − (1/4) sin(2(n − M)π/(L − 1)) ]   for n = M, . . . , M + L − 1.

Figure 3.8 shows the DLS basis functions in the time and frequency domains for M = 8, L = 8. These basis functions are very similar to those of the MLT developed by Malvar (1992).

[FIGURE 3.8 DLS basis functions in the time and frequency domains (r = 0, . . . , 7), L = M = 8.]
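The bell definition above is a reconstruction of a badly garbled passage, so treat the exact rolloff as an interpretation. The sketch below (NumPy assumed) builds it and checks the property that makes such bells usable for PR: the rising part S_e and the falling part C_e use the same argument, so they are exactly power complementary:

```python
import numpy as np

M = L = 8
n = np.arange(L)
# Nested-sine argument (reconstructed form): flat at both ends of the ramp
theta = n * np.pi / (2 * (L - 1)) - 0.25 * np.sin(2 * n * np.pi / (L - 1))
Se, Ce = np.sin(theta), np.cos(theta)

b = np.concatenate([Se, np.ones(M - L), Ce])     # bell over n = 0, ..., M + L − 1

assert np.allclose(Se**2 + Ce**2, 1.0)           # power complementarity
assert abs(b[0]) < 1e-12 and abs(b[-1]) < 1e-12  # bell rises from 0 and decays to 0
```

The flat derivative of theta at both ends of the ramp gives a smooth taper, which is what suppresses the blocking artifact that a hard rectangular window would produce.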
3.5.5.4 Simulation Results (For Details, See Li, 1997.)

The standard Lena image of 256 × 256 pixels is used in the simulations for image compression. The original image is represented by 8 bits/pixel (8 bpp) and is shown in Figure 3.9a. Compressions based on a 16 × 16 block transform (M = L = 16 for the lapped transforms) result in reconstructed images represented by 0.4 bpp, 0.24 bpp, and 0.16 bpp. A signal-to-noise ratio is calculated for the compressed image, based on the energy (variance) of the original image and the energy of the residual image, where the residual image is defined as the difference between the original image and the compressed image. For the lapped transforms, zeros are padded at the border of the image to enable the transform. Table 3.2 shows a comparison of the final signal-to-noise ratios for the several lapped transforms against the more conventional DCT at different compression ratios. It is obvious that the lapped transforms are superior in performance to the DCT. Figures 3.9 through 3.11 depict the various reconstructed images using different lapped transforms at different compression ratios. It is seen that serious "block" artifacts are absent from the compressed images even at the very low bits-per-pixel rates. The performance of the DLS lies between those of the LOT and the MLT.
TABLE 3.2 Comparison of Signal-to-Noise Ratio (dB)

                DLS     LOT     MLT     DCT
    0.4 bpp     16.3    15.8    16.5    13.9
    0.24 bpp    13.8    13.6    14.3    12.2
    0.16 bpp    12.2    12.2    12.7    11.2

[FIGURE 3.9 Comparison of original and reconstructed images, M = L = 16, at 0.4 bpp: (a) original at 8 bpp, (b) DLS, (c) LOT, (d) MLT.]

[FIGURE 3.10 Comparison of original and reconstructed images, M = L = 16, at 0.24 bpp: (a) original at 8 bpp, (b) DLS, (c) LOT, (d) MLT.]

[FIGURE 3.11 Comparison of original and reconstructed images, M = L = 16, at 0.16 bpp: (a) original at 8 bpp, (b) DLS, (c) LOT, (d) MLT.]
3.6 Computational Algorithms In actual computations of FCT and FST, the basic integrations are performed with quadratures. Because the data are sampled and the duration is finite, most of the quadratures can be implemented via matrix computations. The fact that the FST and the FCT are closely related to the Fourier transform translates directly to the close relations between the computation of the DCT and the DST with that of the DFT. Many algorithms have been developed for the DFT. The most well known among them is the
Cooley–Tukey fast Fourier transform (FFT), which is often regarded as the single most important development in modern digital signal processing. More recently, there have been other algorithms such as the Winograd algorithm, which are based on prime-factor decomposition and polynomial factorization. While DST and DCT can be computed using relations with DFT (thus, fast algorithms such as the Cooley–Tukey or the Winograd), the transform matrices have sufficient structure to be exploited directly, so that sparse factorizations can be applied to realize the transforms. The sparse factorization depends on the size of the transform, as well as the way permutations are applied to the data sequence. As a result, there are two distinct types of sparse factorizations, the decimation-in-time (DIT) algorithms and the decimation-in-frequency (DIF) algorithms. (DIT algorithms are of the Cooley–Tukey type while DIF algorithms are of the Sande–Tukey type). In Section 3.6.1, the computations of FST and FCT using FFT are discussed. In Section 3.6.2, the direct fast computations of DCT and DST are presented. Both DIT and DIF algorithms are discussed. All algorithms discussed are radix-2 algorithms, where N, which is related to the sample size, is an integer power of two.
3.6.1 FCT and FST Algorithms Based on the FFT

3.6.1.1 FCT of a Real Data Sequence

Let {x(n), n = 0, 1, . . . , N} be an (N + 1)-point sequence. Its DCT as defined in Equation 3.145 is given by

    X_c(m) = √(2/N) k_m Σ_{n=0}^{N} k_n cos(mnπ/N) x(n),   m = 0, 1, 2, . . . , N,

where

    k_n = 1      for n ≠ 0 or N,
        = 1/√2   for n = 0 or N.

Construct an even, or symmetric, sequence using {x(n)} in the following way:

    s(n) = x(n)        0 < n < N,
         = 2x(n)       n = 0, N,
         = x(2N − n)   N < n ≤ 2N − 1.    (3.199)

Based on the fact that the Fourier transform of a real symmetric sequence is real and is related to the cosine transform of the half-sequence, it can be shown that the DFT of {s(n)} is given by

    S_F(m) = 2 [ x(0) + (−1)^m x(N) + Σ_{n=1}^{N−1} cos(mnπ/N) x(n) ].    (3.200)

Thus, the (N + 1)-point DCT of {x(n)} is the same as the 2N-point DFT of the sequence {s(n)}, up to a normalization constant as indicated by Equation 3.145. This means that the DCT of {x(n)} can be computed using a 2N-point FFT of {s(n)}. We note here that

    S_F(m) = Σ_{n=0}^{2N−1} s(n) W_{2N}^{mn},    (3.201)

where W_{2N} = e^(−j2π/2N), the principal 2Nth root of unity, is used in defining the DFT. It should be pointed out that the direct 2N-point DFT of a real even sequence may be considered inefficient, because inherently complex arithmetic is used to produce real coefficients in the transform. However, it is well known that a real 2N-point DFT can be implemented using an N-point DFT of a complex sequence. For details, the reader is referred to Chapter 2.

3.6.1.2 FST of a Real Data Sequence

Let {x(n), n = 1, 2, . . . , N − 1} be an (N − 1)-point data sequence. Its DST as defined in Equation 3.149 is given by

    X_s(m) = √(2/N) Σ_{n=1}^{N−1} sin(mnπ/N) x(n).

Construct a (2N − 1)-point odd, or skew-symmetric, sequence {s(n)} using {x(n)}:

    s(n) = x(n)         0 < n < N,
         = 0            n = 0, N,
         = −x(2N − n)   N < n ≤ 2N − 1.    (3.202)

The Fourier transform of a real skew-symmetric sequence is purely imaginary and is related to the sine transform of the half-sequence. From this, it can be shown that the 2N-point DFT of {s(n)} in Equation 3.202 is given by

    S_F(m) = −2j Σ_{n=1}^{N−1} sin(mnπ/N) x(n).    (3.203)

Thus, the 2N-point DFT of {s(n)} is the same as the (N − 1)-point DST of {x(n)}, up to a normalization constant. Again, S_F(m) is as defined in Equation 3.201, and the 2N-point DFT of the real sequence can be implemented using an N-point DFT of a complex sequence.
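The odd-extension relation of Equations 3.202 and 3.203 can be checked in a few lines (NumPy assumed; the data are synthetic):

```python
import numpy as np

# The 2N-point DFT of the odd extension s(n) is purely imaginary,
# with S_F(m) = −2j Σ sin(mnπ/N) x(n), per Equation 3.203.
rng = np.random.default_rng(3)
N = 8
x = rng.standard_normal(N - 1)          # x(1), ..., x(N−1)

s = np.zeros(2 * N)
s[1:N] = x
s[N + 1:] = -x[::-1]                    # s(n) = −x(2N − n), N < n ≤ 2N − 1

S_F = np.fft.fft(s)
n = np.arange(1, N)
dst = np.array([np.sum(np.sin(m * n * np.pi / N) * x) for m in range(2 * N)])

assert np.allclose(S_F.real, 0, atol=1e-12)   # DFT of a real odd sequence is imaginary
assert np.allclose(S_F, -2j * dst)
```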
3.6.2 Fast Algorithms for DST and DCT by Direct Matrix Factorization

3.6.2.1 Decimation-in-Time Algorithms

These are Cooley–Tukey-type algorithms, in which the time ordering of the input data sequence is permuted to allow for the sparse factorization of the transformation matrix. The essential idea is to reduce a size-N transform matrix into a block-diagonal form, in which each block is related to the same transform of size N/2. Recursively applying this procedure, one finally arrives at the basic 2 × 2 "butterfly." We present here the essential equations for this reduction, and also the flow diagrams for the DIT computations of the DCT and the DST, in block form.

1. DIT algorithm for the DCT: Let

    X_c(m) = Σ_{n=0}^{N} C_N^{mn} x̃(n),   m = 0, 1, 2, . . . , N,    (3.204)

be the DCT of the sequence {x(n)} (i.e., x̃(n) is x(n) scaled by the normalization constant and the factor k_n, while X_c(m) is scaled by k_m, as in Equation 3.145). Here we have simplified the notation using the definition

    C_N^{mn} = cos(mnπ/N).    (3.205)

Equation 3.204 can be reduced to

    X_c(m) = g_c(m) + h_c(m),        for m = 0, 1, . . . , N/2,
    X_c(N − m) = g_c(m) − h_c(m),    for m = 0, 1, . . . , N/2,
    and X_c(N/2) = g_c(N/2).    (3.206)

Here, g_c and h_c are related to DCTs of size N/2, defined by the following equations:

    g_c(m) = Σ_{n=0}^{N/2} C_{N/2}^{mn} x̃(2n),   for m = 0, 1, . . . , N/2,

    h_c(m) = [1/(2C_N^m)] Σ_{n=0}^{N/2} C_{N/2}^{mn} [x̃(2n + 1) + x̃(2n − 1)],
             for m = 0, 1, . . . , N/2 − 1, with h_c(N/2) = 0,
             and where x̃(−1) and x̃(N + 1) are set to zero.    (3.207)

We note that both g_c(m) and h_c(m) are DCTs of half the original size. This way, the size of the transform is reduced by a factor of two at each stage. Some combination of inputs to the lower-order DCT is required, as shown by the definition of h_c(m), as well as some scaling of the outputs. Figure 3.12 shows a signal flow graph for an N = 16 DCT; note the reduction into two N = 8 DCTs in the flow diagram.

[FIGURE 3.12 DIT DCT N = 16 flow graph (two N = 8 DCT blocks; scale factors of the form (2C_N^m)^(−1) on the h_c branch; dashed arrows carry a factor of −1).]

2. DIT algorithm for the DST: Let

    X_s(m) = Σ_{n=1}^{N−1} S_N^{mn} x̃(n),   m = 1, 2, . . . , N − 1,    (3.208)

be the DST of the sequence {x(n)} (i.e., x̃(n) is x(n) scaled with the proper normalization constant as required in Equation 3.149), where we have defined

    S_N^{mn} = sin(mnπ/N).    (3.209)

Following the same reasoning as for the DIT algorithm for the DCT, Equation 3.208 can be reduced to

    X_s(m) = g_s(m) + h_s(m),        for m = 1, 2, . . . , N/2 − 1,
    X_s(N − m) = g_s(m) − h_s(m),    for m = 1, 2, . . . , N/2 − 1, and
    X_s(N/2) = Σ_{n=0}^{N/2−1} (−1)^n x̃(2n + 1).    (3.210)

Here, g_s(m) and h_s(m) are defined as

    g_s(m) = [1/(2C_N^m)] Σ_{n=1}^{N/2−1} S_{N/2}^{mn} [x̃(2n + 1) + x̃(2n − 1)], and
    h_s(m) = Σ_{n=1}^{N/2−1} S_{N/2}^{mn} x̃(2n).    (3.211)

As before, g_s(m) and h_s(m) are DSTs of half the original size, one involving only the odd input samples and the other only the even input samples. Figure 3.13 shows a DIT signal flow graph for the N = 16 DST; note that it reduces to two blocks of N = 8 DSTs.

[FIGURE 3.13 DIT DST N = 16 flow graph (two N = 8 DST blocks; dashed arrows carry a factor of −1).]

3.6.2.2 Decimation-in-Frequency Algorithms

These are Sande–Tukey-type algorithms, in which the order of the input sample sequence is not permuted. Again, the basic principle is to reduce the size of the transform, at each stage of the computation, by a factor of two. It should be no surprise that these algorithms are simply the conjugate versions of the DIT algorithms.

1. The DIF algorithm for the DCT: In Equation 3.204, consider the even-ordered and the odd-ordered output points:

    X_c(2m) = G_c(m),                    for m = 0, 1, . . . , N/2, and
    X_c(2m + 1) = H_c(m) + H_c(m + 1),   for m = 0, 1, . . . , N/2 − 1.    (3.212)

Here,

    G_c(m) = Σ_{n=0}^{N/2−1} [x̃(n) + x̃(N − n)] C_{N/2}^{mn} + (−1)^m x̃(N/2), and
    H_c(m) = Σ_{n=0}^{N/2−1} [1/(2C_N^n)] [x̃(n) − x̃(N − n)] C_{N/2}^{mn}.    (3.213)

As can be seen, both G_c(m) and H_c(m) are DCTs of size N/2. Therefore, at each stage of the computation, the size of the transform is reduced by a factor of two; the overall result is a sparse factorization of the original transform matrix. Figure 3.14 shows the signal flow graph for an N = 16 DIF-type DCT.

[FIGURE 3.14 DIF DCT N = 16 flow graph (two N = 8 DCT blocks; dashed arrows carry a factor of −1).]

2. The DIF algorithm for the DST: Equation 3.208 can likewise be split into even-ordered and odd-ordered output points:

    X_s(2m) = G_s(m),                                          for m = 1, 2, . . . , N/2 − 1,
    X_s(2m − 1) = H_s(m) + H_s(m − 1) + (−1)^{m+1} x̃(N/2),    for m = 1, 2, . . . , N/2 − 1, and
    X_s(N − 1) = H_s(N/2 − 1) + (−1)^{N/2+1} x̃(N/2).    (3.214)

Here, the outputs G_s(m) and H_s(m) are defined by DSTs of half the original size as

    G_s(m) = Σ_{n=1}^{N/2−1} [x̃(n) − x̃(N − n)] S_{N/2}^{mn}, and
    H_s(m) = Σ_{n=1}^{N/2−1} [1/(2C_N^n)] [x̃(n) + x̃(N − n)] S_{N/2}^{mn}.    (3.215)

Figure 3.15 shows the signal flow graph for an N = 16 DIF-type DST. Note that this flow graph is the conjugate of the flow graph shown in Figure 3.13.

[FIGURE 3.15 DIF DST N = 16 flow graph (two N = 8 DST blocks; dashed arrows carry a factor of −1).]
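The DIF split of Equations 3.212 and 3.213 can be verified numerically. A small sketch (NumPy assumed; the normalization factors k_n, k_m are omitted since they do not affect the identity):

```python
import numpy as np

N = 8
rng = np.random.default_rng(4)
x = rng.standard_normal(N + 1)           # x̃(0), ..., x̃(N)

def dct(v, size):
    # Unnormalized DCT of Equation 3.204, C_size^{mn} = cos(mnπ/size)
    return np.array([sum(np.cos(m * j * np.pi / size) * v[j] for j in range(size + 1))
                     for m in range(size + 1)])

Xc = dct(x, N)

M = N // 2
n = np.arange(M)                         # n = 0, ..., N/2 − 1
Gc = np.array([np.sum((x[n] + x[N - n]) * np.cos(m * n * np.pi / M)) + (-1) ** m * x[M]
               for m in range(M + 1)])
Hc = np.array([np.sum((x[n] - x[N - n]) / (2 * np.cos(n * np.pi / N))
                      * np.cos(m * n * np.pi / M))
               for m in range(M + 1)])

assert np.allclose(Xc[0::2], Gc)                 # X_c(2m)   = G_c(m)
assert np.allclose(Xc[1::2], Hc[:-1] + Hc[1:])   # X_c(2m+1) = H_c(m) + H_c(m+1)
```

Each of G_c and H_c is itself a size-N/2 DCT of a combined input, so applying the same split recursively yields the sparse factorization described above.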
3.7 Tables of Transforms

This section contains tables of transforms for the FCT and the FST. They are not meant to be complete. For more details and a more complete listing of transforms, especially those of orthogonal and special functions, the reader is referred to the Bateman manuscripts (Erdelyi, 1954). Section 3.7.3 contains a list of conventions and definitions of some special functions that are referred to in the tables.
3.7.1 Fourier Cosine Transforms

3.7.1.1 General Properties

      f(t)                                         F_c(ω) = ∫₀^∞ f(t) cos ωt dt,  ω > 0
  1   F_c(t)                                       (π/2) f(ω)
  2   f(at), a > 0                                 (1/a) F_c(ω/a)
  3   f(at) cos bt, a, b > 0                       (1/2a)[F_c((ω + b)/a) + F_c((ω − b)/a)]
  4   f(at) sin bt, a, b > 0                       (1/2a)[F_s((ω + b)/a) − F_s((ω − b)/a)]
  5   t^(2n) f(t)                                  (−1)^n d^(2n) F_c(ω)/dω^(2n)
  6   t^(2n+1) f(t)                                (−1)^n d^(2n+1) F_s(ω)/dω^(2n+1)
  7   ∫₀^∞ f(r)[g(t + r) + g(|t − r|)] dr          2 F_c(ω) G_c(ω)
  8   ∫_t^∞ f(r) dr                                (1/ω) F_s(ω)
  9   f(t + a) − f_o(t − a), a > 0                 2 F_s(ω) sin aω
 10   ∫₀^∞ f(r)[g(t + r) + g_o(t − r)] dr          2 F_s(ω) G_s(ω)

3.7.1.2 Algebraic Functions

      f(t)                                         F_c(ω)
  1   1/√t                                         (π/2ω)^(1/2)
  2   (1/√t)[1 − U(t − 1)]                         (2π/ω)^(1/2) C(ω)
  3   (1/√t) U(t − 1)                              (2π/ω)^(1/2)[1/2 − C(ω)]
  4   (t + a)^(−1/2), |arg a| < π                  (π/2ω)^(1/2){cos aω [1 − 2C(aω)] + sin aω [1 − 2S(aω)]}
  5   (t − a)^(−1/2) U(t − a)                      (π/2ω)^(1/2)[cos aω − sin aω]
  6   a(t² + a²)^(−1), a > 0                       (π/2) e^(−aω)
  7   t(t² + a²)^(−1), a > 0                       −(1/2)[e^(−aω) Ei(aω) + e^(aω) Ei(−aω)]
  8   (1 − t²)(1 + t²)^(−2)                        (π/2) ω e^(−ω)
  9   t(t² − a²)^(−1), a > 0                       −cos aω Ci(aω) − sin aω [Si(aω) − π/2]

3.7.1.3 Exponential and Logarithmic Functions

      f(t)                                         F_c(ω)
  1   e^(−at), Re a > 0                            a(a² + ω²)^(−1)
  2   (1 + t) e^(−t)                               2(1 + ω²)^(−2)
  3   t^(−1/2) e^(−at), Re a > 0                   (π/2)^(1/2) (a² + ω²)^(−1/2) [(a² + ω²)^(1/2) + a]^(1/2)
  4   t^(1/2) e^(−at), Re a > 0                    (√π/2)(a² + ω²)^(−3/4) cos[(3/2) tan⁻¹(ω/a)]
  5   t^n e^(−at), Re a > 0                        n! [a/(a² + ω²)]^(n+1) Σ_{2m ≤ n+1} (−1)^m (n+1 choose 2m) (ω/a)^(2m)
  6   t^(−1/2) exp(−at²), Re a > 0                 π(ω/8a)^(1/2) exp(−ω²/8a) I_{−1/4}(ω²/8a)
  7   t^(2n) exp(−a²t²), |arg a| < π/4             (−1)^n √π 2^(−n−1) a^(−2n−1) exp[−(ω/2a)²] He_{2n}(2^(−1/2) ω/a)
  8   t^(−3/2) exp(−a/t), Re a > 0                 (π/a)^(1/2) exp[−(2aω)^(1/2)] cos (2aω)^(1/2)
  9   t^(−1/2) exp(−a/√t), Re a > 0                (π/2ω)^(1/2)[cos(2a√ω) − sin(2a√ω)]
 10   t^(−1/2) ln t                                −(π/2ω)^(1/2)[ln(4ω) + C + π/2]
 11   (t² − a²)^(−1) ln t, a > 0                   (π/2a){sin aω [Ci(aω) − ln a] − cos aω [Si(aω) − π/2]}
 12   t^(−1) ln(1 + t)                             (1/2){[ci(ω)]² + [si(ω)]²}
 13   exp(−t/√2) sin(π/4 + t/√2)                   (1 + ω⁴)^(−1)
 14   exp(−t/√2) cos(π/4 + t/√2)                   ω²(1 + ω⁴)^(−1)
 15   ln[(a² + t²)/(1 + t²)], a > 0                (π/ω)[e^(−ω) − e^(−aω)]
 16   ln[1 + (a/t)²], a > 0                        (π/ω)[1 − e^(−aω)]

3.7.1.4 Trigonometric Functions

      f(t)                                         F_c(ω)
  1   t^(−2) sin²(at), a > 0                       (π/2)(a − ω/2) for ω < 2a;  0 for ω > 2a
  2   (sin t/t)^n, n = 2, 3, . . .                 (nπ/2^n) Σ_{r=0}^{r₀} (−1)^r (n − ω − 2r)^(n−1)/[r!(n − r)!],
                                                   0 < ω < n, r₀ = ⌊(n − ω)/2⌋
  3   exp(−bt²) cos at, Re b > 0                   (1/2)(π/b)^(1/2) exp[−(a² + ω²)/4b] cosh(aω/2b)
  4   (a² + t²)^(−1)(1 − 2b cos t + b²)^(−1),      (π/2a)(1 − b²)^(−1)(e^a − b)^(−1)(e^(a−aω) + b e^(aω)),
      Re a > 0, |b| < 1                            0 ≤ ω < 1
  5   sin(at²), a > 0                              (1/4)(2π/a)^(1/2)[cos(ω²/4a) − sin(ω²/4a)]
  6   sin[a(1 − t²)], a > 0                        −(1/2)(π/a)^(1/2) cos[a + π/4 + ω²/(4a)]
  7   cos(at²), a > 0                              (1/4)(2π/a)^(1/2)[cos(ω²/4a) + sin(ω²/4a)]
  8   cos[a(1 − t²)], a > 0                        (1/2)(π/a)^(1/2) sin[a + π/4 + ω²/(4a)]
  9   tan⁻¹(a/t), a > 0                            (1/2ω)[e^(−aω) Ei(aω) − e^(aω) Ei(−aω)]
3-33
Sine and Cosine Transforms
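Closed forms such as those in the cosine-transform tables above can be spot-checked numerically. The sketch below is an illustration, not part of the original tables: the helper name `fc` and the truncation point and step count are arbitrary choices, and a plain composite-Simpson rule is assumed to be adequate because the chosen integrands decay exponentially. It checks the pair Fc{e^(−at)} = a/(a² + ω²) and the n = 1 case of the tⁿe^(−at) entry, (a² − ω²)/(a² + ω²)².

```python
import math

def fc(f, w, upper=40.0, n=20000):
    # Composite-Simpson estimate of Fc(w) = ∫_0^upper f(t) cos(wt) dt.
    # The integrands below decay like e^(-2t), so the truncated tail is negligible.
    h = upper / n
    total = f(0.0) + f(upper) * math.cos(w * upper)
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * f(t) * math.cos(w * t)
    return total * h / 3.0

a, w = 2.0, 3.0
err1 = abs(fc(lambda t: math.exp(-a * t), w) - a / (a**2 + w**2))
# n = 1 case of t^n e^{-at}: Fc = (a^2 - w^2) / (a^2 + w^2)^2
err2 = abs(fc(lambda t: t * math.exp(-a * t), w)
           - (a**2 - w**2) / (a**2 + w**2)**2)
print(err1 < 1e-8, err2 < 1e-8)
```

The same quadrature sketch extends to any table entry whose integrand decays fast enough for naive truncation; slowly decaying or oscillatory-tail entries would need a dedicated oscillatory-integral method.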
3.7.2 Fourier Sine Transforms

3.7.2.1 General Properties

Fs(ω) = ∫₀^∞ f(t) sin ωt dt, ω > 0

f(t) → Fs(ω):
1. Fs(t) → (π/2) f(ω)
2. f(at), a > 0 → (1/a) Fs(ω/a)
3. f(at) cos bt, a, b > 0 → (1/2a)[Fs((ω + b)/a) + Fs((ω − b)/a)]
4. f(at) sin bt, a, b > 0 → (1/2a)[Fc((ω − b)/a) − Fc((ω + b)/a)]
5. t^(2n) f(t) → (−1)ⁿ d^(2n) Fs(ω)/dω^(2n)
6. t^(2n+1) f(t) → (−1)^(n+1) d^(2n+1) Fc(ω)/dω^(2n+1)
7. ∫₀^∞ f(r) ∫_{|t−r|}^{t+r} g(s) ds dr → (2/ω) Fs(ω) Gs(ω)
8. fo(t + a) + fo(t − a) → 2Fs(ω) cos aω
9. fe(t − a) − fe(t + a) → 2Fc(ω) sin aω
10. ∫₀^∞ f(r)[g(|t − r|) − g(t + r)] dr → 2Fs(ω) Gc(ω)

3.7.2.2 Algebraic Functions

f(t) → Fs(ω):
1. 1/t → π/2
2. 1/√t → (π/2ω)^(1/2)
3. (1/√t)[1 − U(t − 1)] → (2π/ω)^(1/2) S(ω)
4. (1/√t) U(t − 1) → (2π/ω)^(1/2)[1/2 − S(ω)]
5. (t + a)^(−1/2), |arg a| < π → (π/2ω)^(1/2){cos aω[1 − 2S(aω)] − sin aω[1 − 2C(aω)]}
6. (t − a)^(−1/2) U(t − a), a > 0 → (π/2ω)^(1/2)(sin aω + cos aω)
7. t(t² + a²)^(−1), a > 0 → (π/2) exp(−aω)
8. t(a² − t²)^(−1), a > 0 → −(π/2) cos aω
9. t(a² + t²)^(−2), a > 0 → (πω/4a) exp(−aω)
10. a²[t(a² + t²)]^(−1), a > 0 → (π/2)[1 − exp(−aω)]
11. t(4 + t⁴)^(−1) → (π/4) exp(−ω) sin ω

3.7.2.3 Exponential and Logarithmic Functions (entries 8 to 13)

f(t) → Fs(ω):
8. t^(−3/2) exp(−a/t), |arg a| < π/2 → (π/a)^(1/2) exp[−(2aω)^(1/2)] sin[(2aω)^(1/2)]
9. t^(−3/4) exp(−a/√t), |arg a| < π/2 → (π/2)(a/ω)^(1/2)[J_{1/4}(a²/8ω) cos(π/8 + a²/8ω) + Y_{−1/4}(a²/8ω) sin(π/8 + a²/8ω)]
10. t^(−1) ln t → −(π/2)(C + ln ω)
11. t(t² − a²)^(−1) ln t, a > 0 → (π/2){cos aω[Ci(aω) − ln a] + sin aω[Si(aω) − π/2]}
12. t^(−1) ln(1 + a²t²), a > 0 → −π Ei(−ω/a)
13. ln[(t + a)/|t − a|], a > 0 → (π/ω) sin aω

3.7.2.4 Trigonometric Functions

f(t) → Fs(ω):
1. t^(−1) sin²(at), a > 0 → π/4 for 0 < ω < 2a; π/8 for ω = 2a; 0 for ω > 2a
2. t^(−2) sin²(at), a > 0 → (1/4)(ω + 2a) ln|ω + 2a| + (1/4)(ω − 2a) ln|ω − 2a| − (1/2)ω ln ω
3. t^(−2)[1 − cos at], a > 0 → (ω/2) ln|(ω² − a²)/ω²| + (a/2) ln|(ω + a)/(ω − a)|
4. sin(at²), a > 0 → (π/2a)^(1/2){cos(ω²/4a) C[ω/(2πa)^(1/2)] + sin(ω²/4a) S[ω/(2πa)^(1/2)]}
5. cos(at²), a > 0 → (π/2a)^(1/2){sin(ω²/4a) C[ω/(2πa)^(1/2)] − cos(ω²/4a) S[ω/(2πa)^(1/2)]}
6. tan^(−1)(a/t), a > 0 → (π/2ω)[1 − exp(−aω)]
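The general properties in Section 3.7.2.1 can also be checked numerically. The sketch below is an illustration under stated assumptions: the helper name `fs` is mine, the quadrature parameters are arbitrary, and the standard pair Fs{e^(−t)} = ω/(1 + ω²) is taken as known. It verifies property 3, Fs{f(at) cos bt} = (1/2a)[Fs((ω+b)/a) + Fs((ω−b)/a)], for f(t) = e^(−t).

```python
import math

def fs(g, w, upper=40.0, n=20000):
    # Composite-Simpson estimate of Fs(w) = ∫_0^upper g(t) sin(wt) dt.
    h = upper / n
    total = g(upper) * math.sin(w * upper)  # g(0)*sin(0) contributes 0
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * g(t) * math.sin(w * t)
    return total * h / 3.0

a, b, w = 2.0, 1.5, 2.5
Fs_e = lambda x: x / (1 + x**2)             # standard pair: Fs of e^{-t}
lhs = fs(lambda t: math.exp(-a * t) * math.cos(b * t), w)
rhs = (Fs_e((w + b) / a) + Fs_e((w - b) / a)) / (2 * a)
print(abs(lhs - rhs) < 1e-8)  # both sides equal 0.2 for these parameters
```

Note that the scaled argument (ω − b)/a may be negative; since Fs of a real function is odd in ω, the closed form handles that case automatically.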
3-34    Transforms and Applications Handbook

3.7.2.3 Exponential and Logarithmic Functions (entries 1 to 7)

f(t) → Fs(ω):
1. e^(−at), Re a > 0 → ω(a² + ω²)^(−1)
2. t e^(−at), Re a > 0 → 2aω(a² + ω²)^(−2)
3. t(1 + at) e^(−at), Re a > 0 → 8a³ω(a² + ω²)^(−3)
4. e^(−at)/√t, Re a > 0 → (π/2)^(1/2)(a² + ω²)^(−1/2)[(a² + ω²)^(1/2) − a]^(1/2)
5. t^(−3/2) e^(−at), Re a > 0 → (2π)^(1/2)[(a² + ω²)^(1/2) − a]^(1/2)
6. exp(−at²), Re a > 0 → −j(1/2)(π/a)^(1/2) exp(−ω²/4a) Erf[jω/(2√a)]
7. t exp(−t²/4a) → 2aω(πa)^(1/2) exp(−aω²)

3.7.3 Notations and Definitions
1. f(t): piecewise smooth and absolutely integrable function on the positive real line.
2. Fc(ω): the FCT of f(t).
3. Fs(ω): the FST of f(t).
4. fo(t): the odd extension of the function f over the entire real line.
5. fe(t): the even extension of the function f over the entire real line.
6. C(ω) is defined as the integral (2π)^(−1/2) ∫₀^ω t^(−1/2) cos t dt.
7. S(ω) is defined as the integral (2π)^(−1/2) ∫₀^ω t^(−1/2) sin t dt.
8. Ei(x) is the exponential integral function, defined by −Ei(−x) = ∫_x^∞ t^(−1) e^(−t) dt, |arg(x)| < π.
9. Ei(x) for x > 0 is taken as the principal value (1/2)[Ei(x + j0) + Ei(x − j0)].
10. Ci(x) is the cosine integral function, defined as −∫_x^∞ t^(−1) cos t dt.
11. Si(x) is the sine integral function, defined as ∫₀^x t^(−1) sin t dt.
12. Iν(z) is the modified Bessel function of the first kind, defined as Σ_{m=0}^∞ (z/2)^(ν+2m)/[m! Γ(ν + m + 1)], |z| < ∞, |arg(z)| < π.
13. Heₙ(x) is the Hermite polynomial, defined as (−1)ⁿ exp(x²/2) dⁿ/dxⁿ[exp(−x²/2)].
14. C is the Euler constant, defined as lim_{m→∞} [Σ_{n=1}^m (1/n) − ln m] = 0.5772156649…
15. ci(x) and si(x) are related to Ci(x) and Si(x) by ci(x) = Ci(x) and si(x) = Si(x) − π/2.
16. Erf(x) is the error function, defined by (2/√π) ∫₀^x exp(−t²) dt.
17. Jν(x) and Yν(x) are the Bessel functions of the first and second kind, respectively: Jν(x) = Σ_{m=0}^∞ (−1)^m (x/2)^(ν+2m)/[m! Γ(ν + m + 1)] and Yν(x) = cosec(νπ)[Jν(x) cos νπ − J₋ν(x)].
18. U(t) is the Heaviside step function, defined as U(t) = 0 for t < 0 and U(t) = 1 for t > 0.
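Entries of the sine-transform table above can be verified the same way. The sketch below is an illustration (the helper name `fs` and the quadrature parameters are my choices): it checks entry 1, Fs{e^(−at)} = ω/(a² + ω²), and entry 3, Fs{t(1 + at)e^(−at)} = 8a³ω/(a² + ω²)³.

```python
import math

def fs(g, w, upper=40.0, n=20000):
    # Composite-Simpson estimate of Fs(w) = ∫_0^upper g(t) sin(wt) dt;
    # both integrands decay like e^(-1.5t), so truncation at t = 40 is safe.
    h = upper / n
    total = g(upper) * math.sin(w * upper)
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * g(t) * math.sin(w * t)
    return total * h / 3.0

a, w = 1.5, 2.0
e1 = abs(fs(lambda t: math.exp(-a * t), w) - w / (a**2 + w**2))
e3 = abs(fs(lambda t: t * (1 + a * t) * math.exp(-a * t), w)
         - 8 * a**3 * w / (a**2 + w**2)**3)
print(e1 < 1e-8, e3 < 1e-8)
```

Entry 3 also follows analytically from entries 1 and 2 together with the derivative property (row 5 of 3.7.2.1), which is a useful cross-check on the table.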
References
Churchill, R.V. 1958. Operational Mathematics, 3rd ed. New York: McGraw-Hill.
Coifman, R.R. and Meyer, Y. 1991. Remarques sur l'analyse de Fourier à fenêtre, série I. C.R. Acad. Sci. Paris, 312, 259–261.
Erdelyi, A. 1954. Bateman Manuscript, Vol. 1. New York: McGraw-Hill.
Li, J. 1997. Lapped Transforms Based on DLS and DLC Basic Function and Applications. Ph.D. dissertation, McMaster University, Hamilton, Ontario, Canada.
Malvar, H. 1992. Signal Processing with Lapped Transforms. Boston: Artech House.
Rao, K.R. and Yip, P. 1990. Discrete Cosine Transform: Algorithms, Advantages, Applications. Boston: Academic Press.
Sneddon, I.N. 1972. The Uses of Integral Transforms. New York: McGraw-Hill.
4
Hartley Transform

Kraig J. Olejniczak
University of Arkansas

4.1 Introduction .................................................. 4-1
4.2 Historical Background ....................................... 4-1
4.3 Fundamentals of the Hartley Transform ...................... 4-2
    The Relationship between the Hartley and the Sine and Cosine Transforms · The Relationship between the Hartley and Fourier Transforms · The Relationship between the Hartley and Hilbert Transforms · The Relationship between the Hartley and Laplace Transforms · The Relationship between the Hartley and Real Fourier Transforms · The Relationship between the Hartley and the Complex and Real Mellin Transforms
4.4 Elementary Properties of the Hartley Transform ............. 4-7
4.5 The Hartley Transform in Multiple Dimensions ............... 4-10
4.6 Systems Analysis Using a Hartley Series Representation of a Temporal or Spatial Function ... 4-10
    Transfer Function Methodology · The Hartley Series Applied to Electric Power Quality Assessment
4.7 Application of the Hartley Transform via the Fast Hartley Transform ... 4-17
    Convolution in the Time and Transform Domains · An Illustrative Example · Solution Method for Transient or Aperiodic Excitations
4.8 Table of Hartley Transforms ................................ 4-25
Appendix: A Sample FHT Program ................................. 4-28
Acknowledgments ................................................ 4-31
References ..................................................... 4-31
4.1 Introduction

The Hartley transform is an integral transformation that maps a real-valued temporal or spatial function into a real-valued function of frequency via the kernel cas(νx) = cos(νx) + sin(νx). This novel symmetrical formulation of the traditional Fourier transform (FT), attributed to Ralph Vinton Lyon Hartley in 1942 [1], leads to a parallelism that exists between the function of the original variable and that of its transform. Furthermore, the Hartley transform permits a function to be decomposed into two independent sets of sinusoidal components; these sets are represented in terms of positive and negative frequency components, respectively. This is in contrast to the complex exponential, exp(jνx), used in classical Fourier analysis.

For periodic power signals, various mathematical forms of the familiar Fourier series (FS) come to mind. For aperiodic energy and power signals of either finite or infinite duration, the Fourier integral can be used. In either case, signal and systems analysis and design in the frequency domain using the Hartley transform may be deserving of increased awareness due to the existence of a fast algorithm that can substantially lessen the computational burden when compared to the classical complex-valued fast Fourier transform (FFT).
Throughout the remainder of this chapter, it is assumed that the function to be transformed is real valued. In most engineering applications of practical interest, this is indeed the case. However, in the case where complex-valued functions are of interest, they may be analyzed using the novel complex Hartley transform formulation presented in Ref. [10].
4.2 Historical Background

Ralph V. L. Hartley was born in Spruce Mountain, approximately 50 miles south of Wells, Nevada, in 1888. After graduating with the A.B. degree from the University of Utah in 1909, he studied at Oxford for 3 years as a Rhodes Scholar, where he received the B.A. and B.Sc. degrees in 1912 and 1913, respectively. Upon completing his education, Hartley returned from England and began his professional career with the Western Electric Company engineering department (New York) in September of the same year. It was here at AT&T's R&D unit that he became an expert on receiving sets and was in charge of the early development of radio receivers for the transatlantic radio telephone tests of 1915. His famous oscillating circuit, known as the Hartley oscillator, was invented during this work, as well as a neutralizing circuit to offset the internal coupling of triodes that tended to cause singing.
During World War I, Hartley performed research on the problem of binaural location of a sound source. He formulated the accepted theory that direction was perceived by the phase difference of sound waves caused by the longer path to one ear than to the other. After the war, Hartley headed the research effort on repeaters and voice and carrier transmission. During this period, Hartley advanced Fourier analysis methods so that AC measurement techniques could be applied to telegraph transmission studies. In his effort to ensure some privacy for radio, he also developed the frequency-inversion system known to some as greyqui hoy. In 1925, Hartley and his fellow research scientists and engineers became founding members of the Bell Telephone Laboratories when a corporate restructuring set R&D off as a separate entity. This change affected neither Hartley's position nor his work.

R. V. L. Hartley was well known for his ability to clarify and arrange ideas into patterns that could be easily understood by others. In his paper entitled "Transmission of Information," presented at the International Congress of Telegraphy and Telephony in Commemoration of Volta at Lake Como, Italy, in 1927, he stated the law that was implicitly understood by many transmission engineers at that time, namely, "the total amount of information which may be transmitted over such a system is proportional to the product of the frequency-range which it transmits by the time during which it is available for the transmission." [2] This contribution to information theory was later known by his name.

In 1929, Hartley gave up leadership of his research group due to illness. In 1939, he returned as a research consultant on transmission problems. During World War II, he acted as a consultant on servomechanisms as applied to radar and fire control.

Hartley, a fellow of the Institute of Radio Engineers (I.R.E.), the American Association for the Advancement of Science, the Physical and Acoustical Societies, and a member of the A.I.E.E., was awarded the I.R.E. Medal of Honor on January 24, 1946, "For his early work on oscillating circuits employing triode tubes and likewise for his early recognition and clear exposition of the fundamental relationship between the total amount of information which may be transmitted over a transmission system of limited band and the time required." Hartley was the holder of 72 patents that documented his contributions and developments. A transmission expert, he retired from Bell Laboratories in 1950 and died at the age of 81 on May 1, 1970.

4.3 Fundamentals of the Hartley Transform

Perhaps one of Hartley's most long-lasting contributions was a more symmetrical Fourier integral originally developed for steady-state and transient analysis of telephone transmission system problems [1]. Although this transform remained in a quiescent state for over 40 years, the Hartley transform was rediscovered more than a decade ago by Wang [3–6] and Bracewell [7–9], who authored definitive treatises on the subject. The Hartley transform of a function f(x) can be expressed as either

H(ν) = (1/√(2π)) ∫_{−∞}^{∞} f(x) cas(νx) dx    (4.1a)

or

H(f) = ∫_{−∞}^{∞} f(x) cas(2πfx) dx    (4.1b)

where the angular or radian frequency variable ν is related to the frequency variable f by ν = 2πf and

H(f) = √(2π) H(2πf) = √(2π) H(ν).    (4.2)

Here the integral kernel, known as the cosine-and-sine or cas function, is defined as

cas(νx) = cos(νx) + sin(νx) = √2 sin(νx + π/4) = √2 cos(νx − π/4).    (4.3)

Figure 4.1 depicts the cas function on the interval [0, 2π]. Additional properties of the cas function are shown in Tables 4.1 through 4.5 below. The inverse Hartley transform may be defined as either

f(x) = (1/√(2π)) ∫_{−∞}^{∞} H(ν) cas(νx) dν    (4.4a)

FIGURE 4.1 The cas function, cas ξ = cos ξ + sin ξ, and the complementary cas function, cas′ ξ = cos ξ − sin ξ, on the interval [0, 2π].
TABLE 4.1 Selected Trigonometric Properties of the cas Function

The cas function: cas ξ = cos ξ + sin ξ
The cas function: cas ξ = (1/2)[(1 − j)exp(jξ) + (1 + j)exp(−jξ)]
The complementary cas function: cas′ ξ = cas(−ξ) = cos ξ − sin ξ
The complementary cas function: cas′ ξ = √2 cos(ξ + π/4) = √2 sin(ξ + 3π/4)
Relation to cos: cos ξ = (1/2)[cas ξ + cas(−ξ)]
Relation to sin: sin ξ = (1/2)[cas ξ − cas(−ξ)]
Reciprocal relation: cas ξ = (csc ξ + sec ξ)/(sec ξ csc ξ)
Quotient relation: cas ξ = (cot ξ sec ξ + tan ξ csc ξ)/(csc ξ sec ξ)
Product relation: cas ξ = cot ξ sin ξ + tan ξ cos ξ
Function product relation: cas t cas y = cos(t − y) + sin(t + y)
Double angle relation: cas 2ξ = cas ξ cas(−ξ) + (1/2)[cas² ξ − cas²(−ξ)]
Indefinite integral relation: ∫ cas(t) dt = −cas(−t) = −cas′ t
Derivative relation: (d/dt) cas t = cas(−t) = cas′ t
Angle–sum relation: cas(t + y) = cos t cas y + sin t cas′ y
Angle–difference relation: cas(t − y) = cos t cas′ y + sin t cas y
Function–sum relation: cas t + cas y = 2 cas (1/2)(t + y) cos (1/2)(t − y)
Function–difference relation: cas t − cas y = 2 cas′ (1/2)(t + y) sin (1/2)(t − y)

TABLE 4.2 Signs of the cas Function

Quadrant I: +
Quadrant II: + and −
Quadrant III: −
Quadrant IV: + and −

TABLE 4.3 Variations of the cas Function

Quadrant I: +1 → +1, with a maximum at π/4
Quadrant II: +1 → −1
Quadrant III: −1 → −1, with a minimum at 5π/4
Quadrant IV: −1 → +1

TABLE 4.4 Trigonometric Functions of Some Special Angles

Angle: cas value
0° = 0: 1
30° = π/6: (1/2)(√3 + 1)
45° = π/4: √2
60° = π/3: (1/2)(1 + √3)
90° = π/2: 1
120° = 2π/3: (1/2)(√3 − 1)
150° = 5π/6: (1/2)(1 − √3)
180° = π: −1
270° = 3π/2: −1

TABLE 4.5 The Trigonometric Function of an Arbitrary Angle

(Diagram: a point p(x, y) on the terminal side of an angle θ measured from the x axis at the origin O, at positive distance r from the origin.)
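The algebraic identities collected in Table 4.1 are easy to confirm by direct computation. The sketch below is an illustration (the helper names `cas` and `casp` are my choices) that exercises a few of them at randomly drawn angles.

```python
import math, random

def cas(x):  return math.cos(x) + math.sin(x)
def casp(x): return math.cos(x) - math.sin(x)   # complementary cas, cas'

random.seed(0)
for _ in range(1000):
    t, y = random.uniform(-10, 10), random.uniform(-10, 10)
    # cas as a single shifted sinusoid, and cas' as cas of the negated angle
    assert abs(cas(t) - math.sqrt(2) * math.sin(t + math.pi / 4)) < 1e-9
    assert abs(casp(t) - cas(-t)) < 1e-12
    # angle-sum and function-product relations from Table 4.1
    assert abs(cas(t + y) - (math.cos(t) * cas(y) + math.sin(t) * casp(y))) < 1e-9
    assert abs(cas(t) * cas(y) - (math.cos(t - y) + math.sin(t + y))) < 1e-9
print("cas identities hold at 1000 random angles")
```

The angle-sum relation checked here is the one used later to derive the shift theorem (Property 5 of Section 4.4).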
or

f(x) = ∫_{−∞}^{∞} H(f) cas(2πfx) df.    (4.4b)
The angular frequency variable ν, with units of radians per second, is equivalent to the frequency variable ω in the Fourier domain; however, it is used here to further distinguish H(ν), the Hartley transform of f(x), from the FT of f(x), F(ω). From Hartley's original formulation expressed in Equations 4.1a and 4.4a, it is clear that the inverse transformation (synthesis equation) calls for the identical integral operation as the direct transformation (analysis equation). The peculiar scaling coefficient 1/√(2π), chosen by Hartley for the direct and inverse transformations, is used to satisfy the self-inverse condition depicted in Figure 4.2. When the independent variable is angular frequency with units of radians per second, other coefficients may be used provided that the product of the direct and inverse transform coefficients is 1/2π.

FIGURE 4.2 The self-inverse property associated with the Hartley transform: the HT maps f(x) to H(f), and applying the HT again returns f(x).
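The self-inverse property has an exact discrete analogue, where a 1/N scale factor plays the role that the 1/2π coefficient product plays above. The sketch below is an illustration (the function name `dht` and the O(N²) direct summation are my choices; a fast Hartley transform would be used in practice, as discussed in Section 4.7): applying the same cas-kernel sum twice and dividing by N recovers the original sequence.

```python
import math, random

def dht(x):
    # Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*n*k/N)
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N)
                        + math.sin(2 * math.pi * n * k / N))
                for n in range(N)) for k in range(N)]

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(16)]
H = dht(x)
x_back = [v / len(x) for v in dht(H)]   # identical kernel again, scaled by 1/N
err = max(abs(u - v) for u, v in zip(x, x_back))
print(err < 1e-9)  # forward and inverse are the same operation
```

The same routine therefore serves as both analysis and synthesis equation, which is the computational appeal of the Hartley formulation for real data.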
If one lets θ be any angle in the x–y plane and p(x, y) denotes any point on the terminal side of that angle, then denoting the positive distance from the origin to p as r,

cas θ = cos θ + sin θ = x/r + y/r = (x + y)/√(x² + y²).

The existence of the Hartley transform of f(x) given by Equations 4.1a and b is equivalent to the existence of the FT of f(x) given by

f(x) = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(z) cos[ω(x − z)] dz dω.    (4.5)
Equation 4.5 can also be equivalently expressed by the following three equations:

f(x) = (1/√(2π)) ∫_{−∞}^{∞} [C(ω) cos(ωx) + S(ω) sin(ωx)] dω    (4.6)

C(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) cos(ωx) dx = He(ν)    (4.7)

S(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) sin(ωx) dx = Ho(ν)    (4.8)

where He(ν) and Ho(ν) are the even and odd parts of the Hartley transform H(ν), respectively. Alternatively, Equation 4.5 can be expressed as

f(x) = (1/√(2π)) ∫_{−∞}^{∞} F(ω) e^{jωx} dω    (4.9a)

where

F(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−jωx} dx.    (4.9b)

Although the transform pair defined by Equations 4.1a and 4.4a is equivalent to either Equations 4.6 through 4.8 or Equations 4.9a and b, note that the variables x and ω are symmetrically embedded in the former but in neither of the latter. To derive Equations 4.1a and 4.4a, let

H(ν) = C(ω) + S(ω)|_{ω=ν},    (4.10)

the linear combination of the cosine and sine transforms. Then Equation 4.1a follows by linearity applied to Equations 4.7 and 4.8. Because C(ω) and S(ω) are an even and an odd function of ω, respectively,

(1/√(2π)) ∫_{−∞}^{∞} [C(ν) sin(νx) + S(ν) cos(νx)] dν = 0.    (4.11)

When Equation 4.11 is added to the right-hand side of Equation 4.6, Equation 4.4a results. It is interesting to note that Equations 4.6 through 4.8 are similar to Equations 4.1a and 4.4a in that, when f(x) is real, C(ω) and S(ω) are real, as is H(ν) via Equation 4.10. This is in stark contrast to the complex nature of Equation 4.9b when f(x) is real.

The following expressions are used to further explain the physical nature of the Hartley transform. The functions f(x) = fe(x) + fo(x), x > 0, and H(ν) = He(ν) + Ho(ν), ν > 0, can be resolved into their even and odd components as follows [1]:

fe(x) = (1/2)[f(x) + f(−x)], x > 0    (4.12)

fo(x) = (1/2)[f(x) − f(−x)], x > 0    (4.13)

fe(x) = (1/√(2π)) ∫_{−∞}^{∞} He(ν) cos(νx) dν = (1/√(2π)) ∫_{−∞}^{∞} H(ν) cos(νx) dν    (4.14)

fo(x) = (1/√(2π)) ∫_{−∞}^{∞} Ho(ν) sin(νx) dν = (1/√(2π)) ∫_{−∞}^{∞} H(ν) sin(νx) dν    (4.15)

He(ν) = (1/2)[H(ν) + H(−ν)], ν > 0    (4.16)

 = (1/√(2π)) ∫_{−∞}^{∞} fe(x) cos(νx) dx = (1/√(2π)) ∫_{−∞}^{∞} f(x) cos(νx) dx    (4.17)

Ho(ν) = (1/2)[H(ν) − H(−ν)], ν > 0    (4.18)

 = (1/√(2π)) ∫_{−∞}^{∞} fo(x) sin(νx) dx = (1/√(2π)) ∫_{−∞}^{∞} f(x) sin(νx) dx.    (4.19)
It is well known that when the function to be transformed is real valued, its FT exhibits Hermitian symmetry. That is,

F(−ω) = F*(ω)    (4.20)

where the superscript * denotes complex conjugation. This implies that the FT is overspecified, because a dependency exists between transform values for positive and negative values of ω, respectively. This inherent redundancy is not present in the Hartley transform. Observe the effect of positive and negative values of ν in Equation 4.1a. Specifically, for negative values of ν,

cas(−νx) = cos(νx) − sin(νx) = √2 cos(νx + π/4).    (4.21)

For positive values of ν,

cas(νx) = √2 sin(νx + π/4).    (4.22)

From Equation 4.4a it is clear that the function f(x) is composed of an equal number of positive and negative frequency components. In light of the two equations above, it is seen that any two components, one at ν and the other at −ν, vary as the cosine and sine of the same angle. Thus, whereas Equations 4.7 and 4.8 represent a resolution into sine and cosine components, each of which is further decomposed into positive and negative frequencies, the Hartley transform of Equation 4.1a amalgamates these two resolutions into one.

Equation 4.6 alludes to the fact that although C(ω) and S(ω) are each defined for positive and negative values of ω, because of their respective symmetry properties they are completely specified by their values over either half range alone. This is due to the Hermitian symmetry existing in the FT, as shown by Equation 4.20. Note in Equation 4.1a that H(ν) is a single function that contains no redundancy; the value of H(ν) for ν < 0 is independent of that for ν > 0. Therefore, H(ν) must be specified over the entire range of ν.

Although not all time functions can be represented via the Fourier integral, for those functions where such a representation exists, there is a unique relationship between the function and its FT. This is possible if and only if the integral is convergent. Sufficient conditions (although not necessary) to guarantee convergence of the Fourier integral are the well-known Dirichlet conditions, which are stated below for convenience.

1. ∫_{−∞}^{∞} |f(x)| dx < ∞; that is, f(x) is absolutely integrable.
2. f(x) has a finite number of discontinuities over any finite interval.
3. f(x) has a finite number of local maximum and local minimum points over any finite interval.

The above sufficient conditions include most finite-energy signals of engineering interest. Unfortunately, important signals such as periodic signals and the unit step function are not absolutely integrable. If we allow the FT, and thus the Hartley transform, to include the Dirac delta function, then even these signals can be handled using methods similar to those for finite-energy signals. This should not be surprising because the Hartley transform is simply a symmetrical representation of the FT.

4.3.1 The Relationship between the Hartley and the Sine and Cosine Transforms

The Hartley transform is trivially related to the cosine and sine transforms (see also Chapter 3) by the linear combination in Equation 4.10, and to each transform individually using the fifth and sixth entries of Table 4.1, respectively.

4.3.2 The Relationship between the Hartley and Fourier Transforms

The Hartley transform is closely related to the familiar FT. It can be easily shown via Equation 4.9b that these transforms are related in a very simple way:

H(ν) = [Re{F(ω)} − Im{F(ω)}]_{ω=ν}    (4.23)

where

Re{F(ω)} = R(ω) = He(ν) = He(−ν)    (4.24)

Im{F(ω)} = I(ω) = −Ho(ν) = Ho(−ν)    (4.25)

and

H(f) = ((1 + j)/2) F(f) + ((1 − j)/2) F(−f) = (1/√2)[e^{jπ/4} F(f) + e^{−jπ/4} F*(f)].    (4.26)

The FT expressed in terms of the Hartley transform is

F(ω) = [ (H(ν) + H(−ν))/2 − j (H(ν) − H(−ν))/2 ]_{ν=ω}    (4.27)

F(ω) = [He(ν) − jHo(ν)]_{ν=ω}    (4.28)

or alternatively as

F(f) = (1/√2)[e^{−jπ/4} H(f) + e^{jπ/4} H(−f)].    (4.29)

To summarize, the FT is the even part of the Hartley transform plus negative j times the odd part; similarly, the Hartley transform is the real part plus the negative imaginary part of the FT. Equation 4.23 will be used most often by the engineer when computing the Hartley transform of an arbitrary time or spatial function when the FT is known or readily available via a table lookup; when this is not the case, direct evaluation of Equation 4.1a or 4.1b is required.
4.3.3 The Relationship between the Hartley and Hilbert Transforms

The Hilbert transform (see also Chapter 7), f̂(x), of a function f(x) is obtained by convolving f(x) with the function 1/πx. That is,

f̂(x) = f(x) * (1/πx) = (1/π) ∫_{−∞}^{∞} f(λ)/(x − λ) dλ

where the integral is assumed to be taken as its principal value. Here * denotes linear convolution (see Property 7 in Section 4.4 and Section 1.3). The FT of f̂(x) is found by transforming the convolution of 1/πx with f(x); applying Property 7 yields −j sgn(f) F(f). The Hartley transform of f̂(x) is then found via Equation 4.23. Thus, a Hilbert transform simply shifts all positive-frequency components by −90° and all negative-frequency components by +90°. The amplitude always remains constant throughout this transformation.

4.3.4 The Relationship between the Hartley and Laplace Transforms

Because the Hartley transform is the symmetrical form of the classical FT defined in Equations 4.9a and b, it is most convenient to review how the FT relates to the one-sided or unilateral Laplace transform (LT). Although the unilateral LT is concerned with time functions for t > 0, the FT includes both positive and negative time but falters with functions having finite average power, because the concept of the Dirac delta function must be introduced. For most functions of practical engineering significance, the conversion from the Laplace to the FT of f(x) is quite straightforward. However, more difficult situations do exist but are rarely encountered in practical engineering problems; thus, these situations will not be discussed any further.

4.3.4.1 F(s) with Poles in the Left-Half Plane (LHP) Only [10]

When the LT of a function f(x) has no poles on the jω axis and poles only in the LHP, the FT may be computed from the LT by simply substituting s = jω. These transforms include all finite-energy signals defined for positive time only. As an example, because

F(s) = L{e^{−at} u(t)} = 1/(s + a)

for all values of a, then if a is positive, the single pole of F(s) resides in the LHP at s = −a. Thus,

F(ω) = F{e^{−at} u(t)} = F(s)|_{s=jω} = 1/(jω + a).

Lastly, to obtain the Hartley transform of f(x), apply Equation 4.23 to F(ω) above, or evaluate Equation 4.1a directly. Thus,

H(ν) = Re{1/(a + jν)} − Im{1/(a + jν)} = (a + ν)/(a² + ν²).

4.3.4.2 F(s) with Poles in the LHP and on the jω Axis [10]

When the LT of a function f(x) has poles in the LHP and on the jω axis, those terms with LHP poles are treated in the same manner as described above in Section 4.3.4.1. Each simple pole on the imaginary axis will result in two terms in the Fourier domain: one is obtained by substituting s = jω, and the other is found by the method of residues. The latter term results in a δ function having strength of π times the residue at the pole. Mathematically, this is expressed as

F(ω) = F(s)|_{s=jω} + π Σ_n k_n δ(ω − ω_n).    (4.30)

For example, consider the LT of the function cos(ω₀t) u(t). Via partial fraction expansion, F(s) can be written as

F(s) = s/(s² + ω₀²) = (1/2)/(s + jω₀) + (1/2)/(s − jω₀).

Invoking Equation 4.30 leads to the following expression in the Fourier domain:

F(ω) = jω/(ω₀² − ω²) + (π/2)[δ(ω + ω₀) + δ(ω − ω₀)].

Once again, to obtain the Hartley transform of f(x), apply Equation 4.23 to F(ω).

4.3.5 The Relationship between the Hartley and Real Fourier Transforms

The real Fourier transform (RFT) of a real signal f(x) of finite energy can be defined as

F(V) = 2 ∫_{−∞}^{∞} f(x) cos[2πVx + Θ(V)] dx    (4.31)

where

Θ(V) = 0 if V ≥ 0;  π/2 if V < 0    (4.32)

and V = f is the frequency variable with units of hertz. The inverse RFT is given by

f(x) = ∫_{−∞}^{∞} F(V) cos[2πVx + Θ(V)] dV.    (4.33)
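The Laplace-to-Hartley route described above can be checked numerically for the decaying exponential. The sketch below is an illustration (quadrature parameters are arbitrary; the unnormalized kernel of Equation 4.1b is used with a radian-frequency argument): it compares direct integration of e^{−at}u(t) against the cas kernel with the closed form (a + ν)/(a² + ν²) obtained from F(ω) = 1/(a + jω) via Equation 4.23.

```python
import math

a, nu = 2.0, 3.0
upper, n = 40.0, 20000           # e^{-2t} is ~1e-35 at t = 40: safe truncation

def g(t):
    # integrand of the forward Hartley transform of e^{-at} u(t)
    return math.exp(-a * t) * (math.cos(nu * t) + math.sin(nu * t))

h = upper / n                    # composite Simpson rule
s = g(0.0) + g(upper)
for k in range(1, n):
    s += (4 if k % 2 else 2) * g(k * h)
H_num = s * h / 3.0

H_ref = (a + nu) / (a**2 + nu**2)   # Re{1/(a+j nu)} - Im{1/(a+j nu)}
print(abs(H_num - H_ref) < 1e-8)    # both are 5/13 here
```

The single real-valued function H(ν) carries the same information as the complex F(ω); its values at ν and −ν would separately recover the real and imaginary parts.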
The transform pair (4.31) and (4.33) can also be written for V ≥ 0 as

Fe(V) = 2 ∫_{−∞}^{∞} f(x) cos(2πVx) dx    (4.34)

Fo(V) = 2 ∫_{−∞}^{∞} f(x) sin(2πVx) dx    (4.35)

and

f(x) = ∫₀^{∞} [Fe(V) cos(2πVx) + Fo(V) sin(2πVx)] dV.    (4.36)

Thus, F(V) equals Fe(V) for V ≥ 0, and Fo(V) for V < 0. Note the similarity between Equations 4.34 and 4.35 and Equations 4.7 and 4.8. The Hartley transform of f(x) is related to the RFT by

H(f) = (1/2)[Fe(V) + Fo(V)],  H(−f) = (1/2)[Fe(V) − Fo(V)].    (4.37)

4.3.6 The Relationship between the Hartley and the Complex and Real Mellin Transforms

The Mellin transform is useful in scale-invariant image and speech recognition applications [11]. The complex Mellin transform is given by

FM(s) = ∫₀^{∞} f(x) x^{s−1} dx    (4.38)

where the complex variable s = σ + jω. If one substitutes exp(x) for the variable x, then Equation 4.38 becomes

FM(s) = ∫_{−∞}^{∞} f′(x) e^{xs} dx    (4.39)

where f′(x) = f(e^x). Thus, from Equation 4.39, the complex Mellin transform is the two-sided or bilateral LT of f′(x). Equation 4.39 can also be written as

FM(σ + jω) = ∫_{−∞}^{∞} f″(x) e^{jωx} dx    (4.40)

where f″(x) = f(e^x) e^{σx}. Thus, the complex Mellin transform is the FT of f″(x). The Hartley transform of f″(x) = f(e^x)e^{σx} can then be found by direct application of Equation 4.23. The inverse complex Mellin transform can be written as

f(e^x) = e^{−σx} ∫_{−∞}^{∞} FM(σ + jω) e^{jωx} df.    (4.41)

The real Mellin transform can be written as

Fe(σ, ω) = 2 ∫_{−∞}^{∞} f″(x) cos(ωx) dx    (4.42)

and

Fo(σ, ω) = 2 ∫_{−∞}^{∞} f″(x) sin(ωx) dx.    (4.43)

By analogy to Equations 4.34 and 4.35, the Hartley transform of f″(x) is related to the real Mellin transform by

H(f) = (1/2)[Fe(σ, ω) + Fo(σ, ω)],  H(−f) = (1/2)[Fe(σ, ω) − Fo(σ, ω)].    (4.44)

The inverse real Mellin transform is given by

f(e^x) = e^{−σx} ∫₀^{∞} [Fe(σ, ω) cos(ωx) + Fo(σ, ω) sin(ωx)] df.    (4.45)
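The exponential-substitution link between the Mellin and Fourier readings can be checked on a classical pair. The sketch below is an illustration under stated assumptions (f(x) = e^{−x}, whose Mellin transform is the gamma function, and ω = 0 so that the stdlib real `math.gamma` suffices): the integral of f(e^u) e^{σu} over the whole line reproduces Γ(σ).

```python
import math

sigma = 2.5
lo, hi, n = -30.0, 5.0, 20000   # integrand is vanishingly small at both ends

def g(u):
    # f''(u) of Eq. 4.40 at omega = 0, with f(x) = exp(-x)
    return math.exp(-math.exp(u)) * math.exp(sigma * u)

h = (hi - lo) / n               # composite Simpson rule
s = g(lo) + g(hi)
for k in range(1, n):
    s += (4 if k % 2 else 2) * g(lo + k * h)
FM = s * h / 3.0
print(abs(FM - math.gamma(sigma)) < 1e-8)
```

With ω ≠ 0 the same integral produces Γ(σ + jω), i.e., the FT of f″(x), from which the Hartley transform follows via Equation 4.23.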
4.4 Elementary Properties of the Hartley Transform

In this section, several Hartley transform theorems are presented. These theorems are very useful for generating Hartley transform pairs as well as in signal and systems analysis. In most cases, proofs are presented; examples to illustrate their application are left to specific example problems contained later in this chapter.

Property 1: Linearity
If f₁(x) and f₂(x) have the Hartley transforms H₁(f) and H₂(f), respectively, then the sum af₁(x) + bf₂(x) has the Hartley transform aH₁(f) + bH₂(f). This property is established as follows:

∫_{−∞}^{∞} [af₁(x) + bf₂(x)] cas(2πfx) dx = a ∫_{−∞}^{∞} f₁(x) cas(2πfx) dx + b ∫_{−∞}^{∞} f₂(x) cas(2πfx) dx = aH₁(f) + bH₂(f).    (4.46)
Property 2: Power spectrum and phase
The power spectrum for a signal f(x) can be expressed in the Fourier domain as

P(f) = |F(f)|² = Re{F(f)}² + Im{F(f)}².

The power spectrum can be obtained directly from the Hartley transform using Equations 4.16 through 4.19 as follows:

P(f) = |F(f)|² = Re{F(f)}² + Im{F(f)}² = [He(f)]² + [−Ho(f)]²
 = (1/4)[H(f) + H(−f)]² + (1/4)[H(f) − H(−f)]²

P(f) = [H(f)² + H(−f)²]/2.    (4.47)

Note that the power spectrum P(f) will always be even. The phase associated with the FT of f(x) is well known; this is expressed as

∠F(f) = tan⁻¹[Im{F(f)}/Re{F(f)}] = tan⁻¹[−Ho(f)/He(f)]

∠F(f) = tan⁻¹[(H(−f) − H(f))/(H(f) + H(−f))].    (4.48)

Property 3: Scaling/Similarity
If the Hartley transform of f(x) is H(f), then the Hartley transform of f(kx), where k is a real constant greater than zero, is determined by

∫_{−∞}^{∞} f(kx) cas(2πfx) dx = ∫_{−∞}^{∞} f(x′) cas(2πfx′/k) (dx′/k) = (1/k) H(f/k).    (4.49)

For k negative, the limits of integration for the new variable x′ = kx are interchanged. Therefore, when k is negative, the last term in Equation 4.49 becomes −(1/k)H(f/k). The amalgamation of these two solutions can be expressed as follows: if f(x) has the Hartley transform H(f), then f(kx) has the Hartley transform (1/|k|)H(f/k).

Property 4: Function reversal
If f(x) and H(f) are a Hartley transform pair, then the Hartley transform of f(−x) is H(−f). This is clearly seen when k = −1 is substituted into the last expression appearing in Property 3.

Property 5: Function shift/delay
If f(x) is shifted in time by a constant T, then by substituting x′ = x − T the Hartley transform becomes

H{f(x − T)} = ∫_{−∞}^{∞} f(x′) cas[2πf(x′ + T)] dx′.    (4.50)

Notice that the basis function in Equation 4.50 can be expanded using the appropriate entry of Table 4.1 in the following manner:

cas[2πf(x′ + T)] = cos(2πfx′)cos(2πfT) + cos(2πfx′)sin(2πfT) + sin(2πfx′)cos(2πfT) − sin(2πfx′)sin(2πfT).

Expanding Equation 4.50 into four integrals and grouping the first and third and the second and fourth integrals, respectively, the final result is

H{f(x − T)} = cos(2πfT) H(f) + sin(2πfT) H(−f).    (4.51)

Property 6: Modulation
If f(x) is modulated by the sinusoid cos(2πf₀x), then transforming to the Hartley space via Equation 4.1b yields

H{f(x) cos(2πf₀x)} = ∫_{−∞}^{∞} f(x) cos(2πf₀x) cos(2πfx) dx + ∫_{−∞}^{∞} f(x) cos(2πf₀x) sin(2πfx) dx.    (4.52)

Notice that if the function–product relations (i.e., cos a cos b and cos a sin b) are expanded and grouped accordingly, the following relation results:

H{f(x) cos(2πf₀x)} = (1/2) H(f − f₀) + (1/2) H(f + f₀).    (4.53)

Property 7: Convolution (*)
If f₁(x) has the Hartley transform H₁(f) and f₂(x) has the Hartley transform H₂(f), then f₁(x) * f₂(x) has the Hartley transform

(1/2)[H₁(f)H₂(f) + H₁(−f)H₂(f) + H₁(f)H₂(−f) − H₁(−f)H₂(−f)].    (4.54)

To obtain this result directly, simply substitute the convolution integral

f₁(x) * f₂(x) = ∫_{−∞}^{∞} f₁(λ) f₂(x − λ) dλ    (4.55)
Hartley Transform
into Equation 4.1b and utilize Property 5. The result is as follows:

H(f) = ∫_{−∞}^{∞} [f₁(x) * f₂(x)] cas(2πfx) dx
     = ∫_{−∞}^{∞} [∫_{−∞}^{∞} f₁(λ) f₂(x − λ) dλ] cas(2πfx) dx
     = ∫_{−∞}^{∞} f₁(λ) [∫_{−∞}^{∞} f₂(x − λ) cas(2πfx) dx] dλ.  (4.56)

Invoking the function shift/delay property (i.e., Property 5),

H(f) = ∫_{−∞}^{∞} f₁(λ)[cos(2πfλ) H₂(f) + sin(2πfλ) H₂(−f)] dλ.

Factoring the H₂(·) term to the right and utilizing Equations 4.12 through 4.19, the result follows. Note that Equation 4.54 simplifies for the following symmetries:

• If f₁(x) and/or f₂(x) is even, then f₁(x) * f₂(x) has the Hartley transform H₁(f)H₂(f)
• If f₁(x) is odd, then f₁(x) * f₂(x) has the Hartley transform H₁(f)H₂(−f)
• If f₂(x) is odd, then f₁(x) * f₂(x) has the Hartley transform H₁(−f)H₂(f)
• If both functions are odd, then f₁(x) * f₂(x) has the Hartley transform −H₁(f)H₂(f)

In most practical situations, it is possible to shift one of the functions entering into the convolution such that it exhibits even or odd symmetry. When this is possible, Equation 4.54 simplifies to one real multiplication vs. the single complex multiplication (four real multiplications and three real additions) in the Fourier domain.
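This even-symmetry shortcut can be checked numerically. The sketch below is a hypothetical NumPy check (it is not the chapter's FHT appendix program): the discrete Hartley transform is obtained from the FFT via H = Re{F} − Im{F}, one sequence is forced to be cyclically even, and the Hartley transform of the cyclic convolution is then the single real product H₁(f)H₂(f).

```python
import numpy as np

def dht(x):
    # Discrete Hartley transform of a real sequence: H(k) = Re F(k) - Im F(k).
    F = np.fft.fft(x)
    return F.real - F.imag

rng = np.random.default_rng(3)
N = 64
k = np.arange(N)
f1 = rng.standard_normal(N)
f2 = rng.standard_normal(N)
f2_even = 0.5 * (f2 + f2[(-k) % N])      # force cyclic even symmetry

# cyclic convolution of f1 with the even sequence
conv = np.real(np.fft.ifft(np.fft.fft(f1) * np.fft.fft(f2_even)))

# with one even factor, the four-term theorem collapses to one real product
assert np.allclose(dht(conv), dht(f1) * dht(f2_even))
```

Running the same check with the unsymmetrized f2 fails, which is exactly why the four-term form of Equation 4.54 is needed in general.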
Property 8: Autocorrelation (⋆)
If f₁(x) has the Hartley transform H₁(f), then the autocorrelation of f₁(x), described by the equation below,

f₁(x) ⋆ f₁(x) = f₁(−x) * f₁(x) = ∫_{−∞}^{∞} f₁(λ) f₁(x + λ) dλ,  (4.57)

has the Hartley transform

½[H₁²(f) + H₁²(−f)] = [Hᵉ(f)]² + [Hᵒ(f)]².  (4.58)

Comparing Equations 4.55 through 4.57, it is evident that the convolution and correlation integrals are closely related. Substituting the correlation integral of Equation 4.57 into the direct Hartley transform and utilizing Property 5, the result is as follows:

H(f) = ∫_{−∞}^{∞} [∫_{−∞}^{∞} f₁(λ) f₁(x + λ) dλ] cas(2πfx) dx
     = ∫_{−∞}^{∞} f₁(λ) [∫_{−∞}^{∞} f₁(x + λ) cas(2πfx) dx] dλ.  (4.59)

Invoking the function shift/delay property with T = −λ,

H(f) = ∫_{−∞}^{∞} f₁(λ)[cos(2πfλ) H₁(f) − sin(2πfλ) H₁(−f)] dλ.

Factoring H₁(·) to the right and utilizing Equations 4.12 through 4.19, the desired result follows.

Property 9: Product
If f₁(x) is multiplied by a second function f₂(x), then the Hartley transform of the product f₁(x)f₂(x) is

½[H₁(f) * H₂(f) + H₁(−f) * H₂(f) + H₁(f) * H₂(−f) − H₁(−f) * H₂(−f)]
   = H₁ₑ(f) * H₂ₑ(f) − H₁ₒ(f) * H₂ₒ(f) + H₁ₑ(f) * H₂ₒ(f) + H₁ₒ(f) * H₂ₑ(f).

Property 10: nth derivative of a function f⁽ⁿ⁾(x)
The Hartley transform of the nth derivative of a function f(x) is

cas′(nπ/2) (2πf)ⁿ H[(−1)ⁿ f],  (4.60)

where cas′(t) = cos(t) − sin(t) and H[(−1)ⁿ f] denotes H(f) for n even and H(−f) for n odd. This property is derived by recursive application of Equation 4.23 to the FT of the function df(x)/dx and its higher-order derivatives. A summary of the above properties appears in Table 4.6.

TABLE 4.6 A Summary of Hartley Transform Theorems

Theorem              f(x)                   H(f)
Linearity            f₁(x) + f₂(x)          H₁(f) + H₂(f)
Power spectrum                              P(f) = ½{H²(f) + H²(−f)}
Phase                                       Φ(f) = tan⁻¹[(H(−f) − H(f)) / (H(f) + H(−f))]
Scaling/similarity   f(kx)                  (1/|k|) H(f/k)
Reversal             f(−x)                  H(−f)
Shift                f(x − T)               cos(2πfT) H(f) + sin(2πfT) H(−f)
Modulation           f(x) cos(2πf₀x)        ½H(f − f₀) + ½H(f + f₀)
Convolution          f₁(x) * f₂(x)          ½[H₁(f)H₂(f) + H₁(−f)H₂(f) + H₁(f)H₂(−f) − H₁(−f)H₂(−f)]
Autocorrelation      f₁(x) ⋆ f₁(x)          ½[H₁²(f) + H₁²(−f)]
Product              f₁(x) f₂(x)            ½[H₁(f)*H₂(f) + H₁(−f)*H₂(f) + H₁(f)*H₂(−f) − H₁(−f)*H₂(−f)]
nth derivative       f⁽ⁿ⁾(x)                cas′(nπ/2)(2πf)ⁿ H[(−1)ⁿ f]
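Several rows of Table 4.6 can be verified numerically in their cyclic (DHT) form. The following is a hypothetical NumPy sketch; the O(N²) matrix implementation of the cas kernel stands in for a fast FHT, and H(−f) becomes the index-reversed sequence H[(−k) mod N].

```python
import numpy as np

def dht(x):
    # H[k] = sum_n x[n] * cas(2*pi*k*n/N), with cas(u) = cos(u) + sin(u)
    N = len(x)
    arg = 2 * np.pi * np.outer(np.arange(N), np.arange(N)) / N
    return (np.cos(arg) + np.sin(arg)) @ x

rng = np.random.default_rng(0)
N = 64
k = np.arange(N)
f = rng.standard_normal(N)
H = dht(f)
Hrev = H[(-k) % N]                     # plays the role of H(-f)

# Reversal theorem: f(-x) <-> H(-f)
assert np.allclose(dht(f[(-k) % N]), Hrev)

# Shift theorem: f(x - T) <-> cos(2*pi*f*T) H(f) + sin(2*pi*f*T) H(-f)
T = 5
w = 2 * np.pi * k * T / N
assert np.allclose(dht(np.roll(f, T)), np.cos(w) * H + np.sin(w) * Hrev)

# Convolution theorem: the four-term product of Equation 4.54 (cyclic form)
g = rng.standard_normal(N)
G = dht(g)
Grev = G[(-k) % N]
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))   # cyclic f * g
assert np.allclose(dht(conv), 0.5 * (H * G + Hrev * G + H * Grev - Hrev * Grev))
```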
Transforms and Applications Handbook
4.5 The Hartley Transform in Multiple Dimensions

The Hartley transform also exists in higher dimensions. For a function f(x, y), the two-dimensional Hartley transform and its inverse are

H(ν, υ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) cas[2π(νx + υy)] dx dy  (4.61)

f(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} H(ν, υ) cas[2π(νx + υy)] dν dυ.  (4.62)

Although a three-dimensional (3D) Hartley transform exists, it is beyond the scope of this treatise; the user will not typically utilize the higher-dimension continuous-time integral. The reader is referred to Ref. [9] for details.
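A discrete analog of Equations 4.61 and 4.62 can be sketched in NumPy. The 2D cas kernel cas[2π(νx + υy)] does not factor into a product of two 1D cas kernels, so the hypothetical sketch below builds the transform from the 2D FFT via H = Re{F} − Im{F} and confirms both the kernel definition and the self-inverse property.

```python
import numpy as np

def dht2(f):
    # 2-D discrete Hartley transform: H[k,l] = sum f[x,y] cas(2*pi*(k*x/M + l*y/N))
    F = np.fft.fft2(f)
    return F.real - F.imag

def idht2(H):
    # the cas kernel is self-inverse up to the factor 1/(M*N)
    M, N = H.shape
    return dht2(H) / (M * N)

rng = np.random.default_rng(1)

# brute-force check of the cas-kernel definition on a tiny grid
M, N = 4, 4
g = rng.standard_normal((M, N))
x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
brute = np.zeros((M, N))
for k in range(M):
    for l in range(N):
        th = 2 * np.pi * (k * x / M + l * y / N)
        brute[k, l] = np.sum(g * (np.cos(th) + np.sin(th)))
assert np.allclose(brute, dht2(g))

# self-inverse property, the discrete analog of Equation 4.62
f = rng.standard_normal((8, 16))
assert np.allclose(idht2(dht2(f)), f)
```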
4.6 Systems Analysis Using a Hartley Series Representation of a Temporal or Spatial Function

The Hartley series (HS) is an infinite series expansion of a periodic signal in which the orthogonal basis functions in the series are the cosine-and-sine functions, cas(kω₀t), where ω₀ = 2πf₀ = 2π/T₀ is the fundamental radian frequency. This series formulation differs from the FS in the selection of the basis functions; namely, the cas function vs. the complex exponential, φₖ(t) = exp(j2πkt/T₀), k = 0, ±1, ±2, . . . over the interval t₀ ≤ t ≤ t₀ + T₀, where T₀ is the fundamental period of the periodic function. The HS, so named as a result of the analogy drawn by Hartley to the FS,1 is capable of representing all functions in that interval provided they satisfy certain mathematical conditions developed by Dirichlet (see Section 4.3).

If a system is linear and its impulse response is available, then the response of this system to applied inputs can be found using the principles of linearity and superposition. If the forcing function or excitation is represented as a weighted sum of individual components, called basis functions, then it is only necessary to calculate the response of the system to each of these components and add them together. This method leads to the convolution integral that was presented in Chapter 1. Before proceeding with a mathematical description of a set of basis functions φₖ(t), consider a desired forcing function being represented as a sum of weighted (i.e., having different strengths) impulse functions. These impulse functions produce responses that are amplitude-scaled and time-shifted versions of the response to a unit impulse. Summing all responses to each impulse results in the total response of the system to the forcing function. It seems that the impulse function may be a type of basis function, and indeed it is.

There are a variety of basis functions that can be used for linear systems analysis. In addition to the impulse function, δ(t), one of the most familiar basis functions is the complex exponential φ(t) = exp(jω₀t) corresponding to the FS. Another frequently used basis function is the complex exponential φ(t) = exp(st), where s = σ + jω is a complex number. Clearly, the Fourier basis function is a specialization of exp(st) with σ = 0. When applications involve linear systems analysis, sinusoidal functions are a convenient choice for basis functions. The reason for this choice is that the sum or difference of two sinusoids of the same frequency is still a sinusoid, and the derivative or integral of a sinusoid is still a sinusoid. These characteristics lend themselves well to sinusoidal steady-state analysis using the phasor concept.

Before proceeding further, it is helpful to summarize briefly the properties and characteristics of basis functions. A most desirable quality of a set of basis functions is known as finality of coefficients. Referring to the equation below,

x(t) = Σ_{n=−N}^{N} aₙφₙ(t),  (4.63)

a function represented by a finite number of coefficients and basis functions in the form of a linear combination can always be more accurately described by adding additional terms (i.e., increasing N) to the linear combination without affecting any of the earlier coefficients. This desirable quality can be achieved if the basis functions are orthogonal over the time interval of interest (see also Section 1.5).
Definition 4.1: A set of functions {φₙ}, n = 0, ±1, ±2, . . . is an orthogonal set on the interval a ≤ t ≤ b if for every i ≠ k, (φᵢ, φₖ) = 0, where (·, ·) denotes the inner product. Here, the inner product of two functions f and g is defined as

(f, g) = ∫ₐᵇ f(t) g*(t) dt.

Using the integral relationship for an inner product, the condition for orthogonality of basis functions is that for all k,

∫_t^{t+T₀} φₙ(t) φₖ*(t) dt = λₖ if k = n, and 0 if k ≠ n,  (4.64)

where φₖ*(t) is the complex conjugate of φₖ(t) and the λₖ are real and λₖ ≠ 0. If the basis functions are real, then φₖ*(t) can be replaced by φₖ(t). Note that Equation 4.64 can be expressed more compactly by the following notation:

(φₙ(t), φₖ(t)) = λₖ δₙₖ,  (4.65)
where δₙₖ is the Kronecker delta function. In order to calculate the coefficients aₙ appearing in Equation 4.63, the orthogonality property of the basis functions really demonstrates its desirable quality. If Equation 4.63 is multiplied on both sides by φᵢ*(t), for any i, and then integrated over the specified interval t to t + T₀, the following results:

∫_t^{t+T₀} φᵢ*(t) x(t) dt = ∫_t^{t+T₀} φᵢ*(t) [Σ_{n=−N}^{N} aₙφₙ(t)] dt = Σ_{n=−N}^{N} aₙ ∫_t^{t+T₀} φᵢ*(t) φₙ(t) dt.  (4.66)

From Equation 4.64 above,

aᵢ = (1/λᵢ) ∫_t^{t+T₀} φᵢ*(t) x(t) dt  (4.67)

when the basis functions are orthogonal. When the basis functions are complex, as in the case of the FS, a complex-valued coefficient aᵢ will result. For real-valued signals of interest, the imaginary terms will always cancel.

Now that the coefficients of Equation 4.63 have been calculated, is it possible to find a different set of coefficients that yield a better approximation to x(t) for the same value of N? To investigate this question, it is necessary to measure the closeness of the approximation of Equation 4.63 when N is finite and when N approaches infinity. One measure that is frequently used is the mean-squared error. This approach is generalized in detail for complex basis functions by minimizing the mean-squared error of the N-term truncation approximation to an infinite series. The decomposition of a time function into a weighted linear combination of basis functions is an exact representation when the function is described by

f(t) = Σ_{k=−∞}^{∞} aₖφₖ(t).  (4.68)

However, for practical numerical calculations it is computationally necessary to truncate the above sum to 2N terms. In this way, an approximation to the signal f(t) may be calculated; this is guaranteed by the convergence properties of the FS via the Riemann–Lebesgue lemma. If we now denote the truncated linear combination of 2N basis functions by

f̂(t) = Σ_{k=−N}^{N} âₖφₖ(t),  (4.69)

how can the possibly complex-valued weighting factors, âₖ, be selected in order to minimize the mean-squared error between f(t) and f̂(t)? Let the mean-squared error be represented by ε; then

ε = ‖x − Σₖ âₖφₖ‖² = (x − Σₖ âₖφₖ, x − Σⱼ âⱼφⱼ)
  = (x, x) − Σⱼ âⱼ*(x, φⱼ) − Σₖ âₖ(φₖ, x) + Σₖ Σⱼ âₖâⱼ*(φₖ, φⱼ)
  = (x, x) − Σⱼ âⱼ*aⱼ − Σₖ âₖaₖ* + Σₖ âₖ Σⱼ âⱼ*δₖⱼ.  (4.70)

Note that in the above step, the following results were used:

aⱼ = (x, φⱼ)  (4.71)

aⱼ* = (x, φⱼ)* = (φⱼ, x)  (4.72)

(φₖ, φⱼ) = δₖⱼ.  (4.73)

Utilizing only one set of subscripts and adding Σⱼ aⱼaⱼ* − Σⱼ |aⱼ|² (which is zero) to the right-hand side of the previous equation,

ε = (x, x) − Σⱼ |aⱼ|² − Σⱼ âⱼaⱼ* − Σⱼ âⱼ*aⱼ + Σⱼ âⱼâⱼ* + Σⱼ aⱼaⱼ* = (x, x) − Σⱼ |aⱼ|² + Σⱼ |âⱼ − aⱼ|²,  (4.74)

where

Σⱼ |âⱼ − aⱼ|² = Σⱼ (âⱼ − aⱼ)(âⱼ − aⱼ)*.

In Equation 4.74, the first and second terms are independent of âⱼ, and the third term is greater than or equal to zero. The "best choice" of âⱼ, j = 1, . . . , N is that which makes ‖x − Σ_{j=1}^{N} âⱼφⱼ‖ as small as possible; therefore, choose âⱼ = aⱼ. This results in the following:

0 ≤ ‖x − Σ_{j=1}^{N} âⱼφⱼ‖² = ‖x‖² − Σⱼ |aⱼ|².
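The best-coefficient argument can be illustrated numerically. The sketch below is a hypothetical NumPy check using the cas-based orthonormal basis introduced later in this section on [−π, π]; perturbing any single coefficient away from aⱼ = (x, φⱼ) raises the mean-squared error by exactly the squared perturbation, as Equation 4.74 predicts.

```python
import numpy as np

# orthonormal basis on [-pi, pi]: phi_n(t) = cas(n t) / sqrt(2*pi)
M = 50_000
t = np.linspace(-np.pi, np.pi, M, endpoint=False)
dt = t[1] - t[0]

def cas(u):
    return np.cos(u) + np.sin(u)

def inner(f, g):
    return np.sum(f * g) * dt          # real inner product by Riemann sum

phis = [cas(n * t) / np.sqrt(2 * np.pi) for n in range(-3, 4)]
x = np.exp(-t**2) * np.sin(2 * t)      # arbitrary test signal

a = np.array([inner(x, p) for p in phis])   # a_j = (x, phi_j), Equation 4.71

def mse(coeffs):
    xhat = sum(c * p for c, p in zip(coeffs, phis))
    return inner(x - xhat, x - xhat)

eps_best = mse(a)
a_pert = a.copy()
a_pert[2] += 0.1                       # perturb one coefficient
# Equation 4.74: the error grows by |a_hat_j - a_j|^2 = 0.01
assert mse(a_pert) > eps_best
assert abs(mse(a_pert) - (eps_best + 0.01)) < 1e-6

# residual identity: 0 <= ||x||^2 - sum |a_j|^2
assert np.sum(a**2) <= inner(x, x) + 1e-9
```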
From the above expression, the well-known Bessel's inequality is formed when N in the sum over j approaches ∞:

Σ_{j=1}^{∞} |aⱼ|² ≤ ‖x‖².

When Bessel's inequality is an exact equality, the familiar Parseval's equality results. From the results presented there, it can be concluded that the aⱼ of Equation 4.71 are the best coefficients from the standpoint of minimizing the approximation error, ε, when only a finite number of terms are used. Thus, the use of orthogonal basis functions provides two desirable qualities: they guarantee the finality of coefficients, and the same coefficients minimize the mean-squared error of the function representation.

An additional property that is vitally important in the discussion of the FS is the Riemann–Lebesgue lemma. Briefly, this lemma states that if the function f(t) is absolutely integrable on the interval (a, b), then

∫ₐᵇ f(t) e^{−jωt} dt → 0 as |ω| → ∞.

Because this also implies that

∫ₐᵇ f(t) cos(ωt) dt → 0 as |ω| → ∞

and

∫ₐᵇ f(t) sin(ωt) dt → 0 as |ω| → ∞,

it follows by linearity that

∫ₐᵇ f(t) cas(ωt) dt → 0 as |ω| → ∞.

Note that (a, b) may range from −∞ to ∞. The importance of this result foreshadows the concept of a complete set of basis functions. A set of basis functions is termed complete in the sense of mean convergence if the error in the approximation of f(t) can be made arbitrarily small by making the value of N in Equation 4.63 sufficiently large. That is,

lim_{N→∞} ‖f − S_N‖ = 0,

where S_N(·), N = 1, 2, . . . is the partial sum of piecewise continuous functions defined on the open interval (a, b). Also, it can be shown that a necessary and sufficient condition for an orthonormal set {φₙ(t)} to be complete is that for each function x considered, Parseval's equation

Σ_{n=1}^{∞} (x, φₙ)² = ‖x‖²

must be satisfied. Note that (x, φₙ) = aₙ. Now, attention turns to the analog of the complex FS represented as follows:

x(t) = Σ_{n=−∞}^{∞} aₙ e^{jnω₀t}  (4.75)

where

aₙ = (1/T₀) ∫_t^{t+T₀} x(t) e^{−jnω₀t} dt,  (4.76)

which can also be written as a single-sided series

x(t) = A₀/2 + Σ_{n=1}^{∞} [Aₙ cos(nω₀t) + Bₙ sin(nω₀t)]  (4.77)

by noting that

a₋ₙ = aₙ* and aₙ = ½(Aₙ − jBₙ),

from which

Aₙ = aₙ + aₙ*,  Bₙ = j(aₙ − aₙ*).

The properties and use of the FS are well known and well documented in the literature. The set of basis functions used by the Hartley transform and in the HS is the set {φₙ(t)}, n = 0, ±1, ±2, . . . where φₙ(t) = cas(nω₀t). This is an orthogonal set over the interval t₀ ≤ t ≤ t₀ + T₀ and is capable of representing any time function that the FS can in that interval. This set of time functions possesses a FS or HS if the well-known Dirichlet conditions are met, as presented in Section 4.3. Let {φₙ(t)} = cas(nω₀t)/√(2π), n = 0, ±1, ±2, . . . on the interval [−π, π].

Definition 4.2: A set of functions {φₙ}, n = 0, ±1, ±2, . . . is an "orthonormal set" on the interval a ≤ t ≤ b if

(φᵢ, φₖ) = δᵢₖ = 1 if i = k, and 0 if i ≠ k.
Claim
The set of functions {φₙ(t)}, n = 0, ±1, ±2, . . . where φₙ(t) = cas(nω₀t)/√(2π) is an orthonormal set on the interval −π ≤ t ≤ π (i.e., T₀ = 2π and ω₀ = 1).

Proof

(φᵢ, φₖ) = ∫_{−π}^{π} φᵢ(t) φₖ*(t) dt = ∫_{−π}^{π} φᵢ(t) φₖ(t) dt = (1/2π) ∫_{−π}^{π} cas(iω₀t) cas(kω₀t) dt.

If each function of the integrand in the above equation is expanded to cos(·) + sin(·) and then multiplied together, four terms result: cos()cos(), sin()sin(), and two cross products, cos()sin() and sin()cos(). The integrals of the two cross products are zero by the familiar orthogonality property for the cosine and sine functions, respectively. The other two integrands, when evaluated on the interval from −π to π, each equal 0 for i ≠ k and π when i = k. Therefore,

(φᵢ, φₖ) = 1 if i = k, and 0 if i ≠ k.

Thus, the basis functions {φₙ} are an orthonormal system on the interval [−π, π].

Let the periodic signal x(t) with period T₀,

x(t + T₀) = x(t)  ∀t,

be written as an orthogonal series expansion (i.e., a linear combination possessing an orthogonal set of basis functions)

x(t) = Σ_{i=−∞}^{∞} γᵢφᵢ(t),  (4.78)

where the φᵢ(t) are orthogonal basis functions. It has been shown previously that φᵢ(t) = cas(iω₀t) is an orthogonal basis function over the interval [t, t + T₀]:

∫_t^{t+T₀} cas(iω₀t) cas(kω₀t) dt = T₀ if i = k, and 0 if i ≠ k,

where ω₀ = 2π/T₀. Therefore, the γᵢ in Equation 4.78 are readily obtained using the orthogonality property:

x(t) cas(kω₀t) = Σ_{i=−∞}^{∞} γᵢ cas(iω₀t) cas(kω₀t)

(1/T₀) ∫_t^{t+T₀} x(t) cas(kω₀t) dt = 0 + 0 + 0 + · · · + γₖ + 0 + 0 + · · · ;

all terms vanish except for i = k, and the result follows. This gives what will be termed the HS,

x(t) = Σ_{i=−∞}^{∞} γᵢ cas(iω₀t),

where

γᵢ = (1/T₀) ∫_t^{t+T₀} x(t) cas(iω₀t) dt.  (4.79)

It is a simple matter to show that

γₖ = ℜ{aₖ} − ℑ{aₖ} for k ≠ 0, and γ₀ = a₀.  (4.80)

Specifically, from Equation 4.76 let

ℜ{aₖ} = (1/T₀) ∫_t^{t+T₀} x(t) cos(kω₀t) dt

ℑ{aₖ} = −(1/T₀) ∫_t^{t+T₀} x(t) sin(kω₀t) dt;

then

ℜ{aₖ} − ℑ{aₖ} = (1/T₀) ∫_t^{t+T₀} x(t) cas(kω₀t) dt.

The FS coefficients are also related to the HS coefficients by

aᵢ = ℰ{γᵢ} − j𝒪{γᵢ},  (4.81)

where ℰ{·} and 𝒪{·} are the even and odd parts of a function,

ℰ{uᵢ} = ½(uᵢ + u₋ᵢ)  (4.82)

𝒪{uᵢ} = ½(uᵢ − u₋ᵢ).  (4.83)
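Equations 4.80 and 4.81 can be checked numerically. The sketch below is a hypothetical NumPy check with a smooth trigonometric test signal (so Riemann-sum quadrature over one period is essentially exact): aₖ is computed from Equation 4.76, γₖ from Equation 4.79, and both coefficient relations are verified.

```python
import numpy as np

T0 = 2.0
w0 = 2 * np.pi / T0
M = 20_000
t = np.linspace(0.0, T0, M, endpoint=False)
dt = T0 / M
x = np.cos(w0 * t) + 0.5 * np.sin(2 * w0 * t) + 0.25 * np.cos(3 * w0 * t + 1.0)

def fs_coeff(k):
    # complex FS coefficient, Equation 4.76
    return np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T0

def hs_coeff(k):
    # HS coefficient, Equation 4.79, with cas(u) = cos(u) + sin(u)
    th = k * w0 * t
    return np.sum(x * (np.cos(th) + np.sin(th))) * dt / T0

for k in range(-4, 5):
    a_k, g_k = fs_coeff(k), hs_coeff(k)
    # Equation 4.80: gamma_k = Re{a_k} - Im{a_k}
    assert abs(g_k - (a_k.real - a_k.imag)) < 1e-8
    # Equation 4.81: a_k = E{gamma_k} - j O{gamma_k}
    g_mk = hs_coeff(-k)
    even, odd = 0.5 * (g_k + g_mk), 0.5 * (g_k - g_mk)
    assert abs(a_k - (even - 1j * odd)) < 1e-8
```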
As an example, consider the two-sided FS for the square wave.

The impedance Z may be an open-circuit driving-point or a transfer impedance. Other problems are frequently encountered in electric circuit analysis, and many of these are of the same form as Equation 4.101 with Z(s) replaced by a transfer function, frequency-response matrix, or similar parameter. This is the case, for example, in the presence of power electronic loads and sources characterized by nonsinusoidal waveforms. "Quasiperiodic" transient inputs energizing a relaxed electric power system (i.e., zero initial conditions) at time 0⁻ produce responses throughout the system that may be superimposed upon the sinusoidal steady-state solution. In the time domain, the impedance Z(ω) is, in fact, an impulse response, z(t). In this context, z(t) is the voltage response to an input that is a unit current impulse. The responses to these quasiperiodic inputs are found analytically via the convolution integral of Equation 4.102.

In Equations 4.98 and 4.102, (*) denotes conventional or linear convolution. The limits of integration may be changed to [0, t] if z(t) and i(t) are causal signals, that is, if z(t) = 0 and i(t) = 0 for t < 0. Convolution in the time domain becomes a simple complex multiplication in the s-domain; this property makes the LT particularly attractive for systems analysis. The familiar FT also possesses a similar convolution property:

V(ω) = (1/√(2π)) ∫_{−∞}^{∞} v(t) e^{−jωt} dt  (4.104)

I(ω) = (1/√(2π)) ∫_{−∞}^{∞} i(t) e^{−jωt} dt  (4.105)

Z(ω) = (1/√(2π)) ∫_{−∞}^{∞} z(t) e^{−jωt} dt  (4.106)

F{v(t)} = F{z(t) * i(t)} = V(ω) = Z(ω) I(ω).  (4.107)

Note that in Equations 4.104 through 4.107 above, the factor 1/√(2π) is often omitted in engineering work; when this factor is included in the transform, the inverse transform is

v(t) = (1/√(2π)) ∫_{−∞}^{∞} V(ω) e^{jωt} dω.  (4.108)

Analogously, a salient property of the Hartley transform for this application is that convolution is rendered to a simple sum of real products under the transform,

H{v(t)} = H{z(t) * i(t)} = V(ν).  (4.109)

Specifically,

V(ν) = ½[Z(ν)I(ν) + Z(−ν)I(ν) + Z(ν)I(−ν) − Z(−ν)I(−ν)]  (4.110)

     = Z(ν) [I(ν) + I(−ν)]/2 + Z(−ν) [I(ν) − I(−ν)]/2  (4.111)

     = Z(ν) Iᵉ(ν) + Z(−ν) Iᵒ(ν)  (4.112)

     = ½[Vₐ(ν) − Vₐ(−ν) + V_b(ν) + V_b(−ν)],  (4.113)

where Vₐ(ν) = Z(ν)I(ν) and V_b(ν) = Z(ν)I(−ν). Thus, it is possible to solve a certain class of electric circuit problems using the Hartley transform.

As with the DFT/FFT, the DHT/FHT can be readily used for performing convolution. The DHT assumes periodicity of the function being transformed; that is, H(kΩ_ω) = H[(N + k)Ω_ω]. Therefore, H(−kΩ_ω), for 1 ≤ k ≤ N, is equivalent to H[(N − k)Ω_ω]. When convolution is represented by (*), linear or time-domain convolution is implied. In the frequency domain, as a result of the characteristic modulo-N operations inherent in the DFT or DHT, a different form of convolution results in the time domain. Circular or cyclic convolution, denoted by (⊛), in the time domain is the result of multiplication of two functions in the frequency domain. Let n represent the nth point of some finite-duration sequence; then cyclic convolution in the time domain is expressed as

f₁(n) ⊛ f₂(n) = Σ_{τ=0}^{N−1} f₁((τ))_N f₂((n − τ))_N  (4.114)
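The cyclic product formula and the zero-padding recipe can be sketched together. The helper below is a hypothetical NumPy implementation (the DHT is obtained from the FFT via H = Re{F} − Im{F}); it applies the four-term product in the form of Equation 4.115 and shows that, with N ≥ M + L − 1 and zero padding, circular convolution reproduces linear convolution.

```python
import numpy as np

def dht(x):
    F = np.fft.fft(x)
    return F.real - F.imag             # H(k) = Re F(k) - Im F(k) for real x

def idht(H):
    return dht(H) / len(H)             # the DHT is self-inverse up to 1/N

def dht_cyclic_convolve(z, i):
    # frequency-domain product of Equation 4.115 (cyclic Hartley convolution)
    N = len(z)
    k = np.arange(N)
    Z, I = dht(z), dht(i)
    Zr, Ir = Z[(-k) % N], I[(-k) % N]  # the (N - k) "reversed" samples
    V = 0.5 * (Z * I + Zr * I + Z * Ir - Zr * Ir)
    return idht(V)

rng = np.random.default_rng(2)
f1 = rng.standard_normal(20)           # length M = 20
f2 = rng.standard_normal(13)           # length L = 13
N = 32                                 # N >= M + L - 1 avoids time-domain aliasing

padded = dht_cyclic_convolve(np.pad(f1, (0, N - len(f1))),
                             np.pad(f2, (0, N - len(f2))))
assert np.allclose(padded, np.convolve(f1, f2))   # linear convolution recovered
```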
where τ and n − τ are evaluated modulo N. The equivalent form of Equations 4.110 through 4.113 in the Hartley domain is expressed as

V(kΩ_ν) = ½[Z(kΩ_ν)I(kΩ_ν) + Z((N − k)Ω_ν)I(kΩ_ν) + Z(kΩ_ν)I((N − k)Ω_ν) − Z((N − k)Ω_ν)I((N − k)Ω_ν)]  (4.115)

        = Z(kΩ_ν) Iᵉ(kΩ_ν) + Z((N − k)Ω_ν) Iᵒ(kΩ_ν)  (4.116)

        = ½[Vₐ(kΩ_ν) − Vₐ((N − k)Ω_ν) + V_b(kΩ_ν) + V_b((N − k)Ω_ν)],  (4.117)

where Vₐ(kΩ_ν) = Z(kΩ_ν)I(kΩ_ν) and V_b(kΩ_ν) = Z(kΩ_ν)I((N − k)Ω_ν).

There are times when cyclic convolution is desired and other times when linear convolution is needed. Because both the DFT and DHT perform cyclic convolution, it would be unfortunate if methods for obtaining linear convolution by cyclic convolution were nonexistent. Fortunately, this is not the case: linear convolution can be extracted from cyclic convolution, but at some expense. For finite-duration sequences f₁(n) and f₂(n) of length M and L, respectively, their convolution is also finite in duration; in fact, the duration is M + L − 1. Therefore, a DFT or DHT of size N ≥ M + L − 1 is required to represent the output sequence in the frequency domain without overlap. This implies that the N-point circular convolution of f₁(n) and f₂(n) must be equivalent to the linear convolution of f₁(n) with f₂(n). By increasing the length of both sequences to N points (i.e., by appending zeros) and then circularly convolving the resulting sequences, the end result is as if the two sequences were linearly convolved. Clearly, with zero padding, the DHT can be used to perform linear filtering. It should be clear that aliasing results in the time domain if N < M + L − 1.

When N zero values are appended to a time sequence of N data samples, the 2N-point DHT reduces to that of the N-point DHT at the even index values. The odd values of the 2N-point sequence represent the interpolated DHT values between the original N-point DHT values. The more zeros padded to the original N-point sequence, the more interpolation takes place on the sequence. In the limit,
infinite zero padding may be viewed as taking the discrete-time Hartley transform of an N-point windowed data sequence. A prevalent misconception is that zero padding improves the resolution of the sequence or that additional information is obtained. Zero padding does not increase the resolution of the transform made from a given finite sequence, but simply provides an interpolated transform with a smoother appearance. The advantage of zero padding is that signal components with center frequencies that lie between the N frequency bins of an unpadded DHT can now be discerned. Thus, the accuracy of estimating the frequency of spectral peaks is also enhanced with zero padding.

When comparing the number of real operations performed by Equation 4.96 (with ω replaced by kΩ_ω) and Equation 4.116 or 4.117, the DHT always offers a computational advantage of two as compared to the DFT method; in many (if not most) applications, currents in electrical engineering calculations exhibit symmetry, which results in a computational advantage of four favoring the Hartley method. In the case where z(t) or i(t) in Equation 4.109 contains even symmetry, the four-term product of Equation 4.115 reduces to Z(kΩ_ω)I(kΩ_ω) or Vₐ(kΩ_ω). If z(t) or i(t) is odd, then Equation 4.115 degenerates to Z(kΩ_ω)I((N − k)Ω_ω) or V_b(kΩ_ω), and to Z((N − k)Ω_ω)I(kΩ_ω), respectively. That is, only one real multiplication, vs. the FFT's four, is needed. The above symmetry conditions are more often the rule than the exception. Other symmetries exist, as discussed by Bracewell for the Hartley transform.9

As a brief example of the method, consider the periodic load current shown in Figure 4.7 having the description

f(t) = Σ_{m=0}^{∞} fₘ(t)  (4.118)

where

f₀(t) = f(t)[u(t) − u(t − T)] = e^{1−t}[u(t) − u(t − 1)]  (4.119)

fₘ(t) = f₀(t − mT)  (4.120)

and T = 1. The transfer function, H(s), for the RC network in Figure 4.7 is clearly 1/(s + 1). (Note the system is initially relaxed, i.e., zero initial conditions.)

FIGURE 4.7 Injected load current into a simple RC network.

If one denotes yₘ(t) as the zero-state response due to fₘ(t), then from the time shift property of a LTI system, yₘ(t) = y₀(t − mT). From the principle of superposition, y(t) = Σₘ yₘ(t). Thus, the crux of the problem is to find y₀(t), the response due to the single pulse, f₀(t), in Figure 4.7. The convolution of f₀(t) and the impulse response, h(t), by the convolution integral is straightforward. In fact, the response, y₀(t), depicted in Figure 4.8, is readily calculated as

y₀(t) = t e^{1−t} for 0 ≤ t ≤ 1, and e^{1−t} for t > 1,  (4.121)

or alternatively by

y₀(t) = L⁻¹{Y₀(s)} = L⁻¹{(e − e^{−s})/(s + 1)²},  (4.122)

where Y₀(s) = F₀(s)H(s) and F₀(s) = L{f₀(t)}.
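The single-pulse response of Equation 4.121 and the superposed response of Figure 4.8 can be replayed numerically. The sketch below is hypothetical (sampled time-domain convolution in NumPy rather than the FHT appendix program), assuming f₀(t) = e^{1−t} on [0, 1) and h(t) = e^{−t}u(t).

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
f0 = np.where(t < 1.0, np.exp(1.0 - t), 0.0)     # single input pulse
h = np.exp(-t)                                   # impulse response of 1/(s + 1)

y0 = np.convolve(f0, h)[: len(t)] * dt           # single-pulse response
# closed form, Equation 4.121
exact = np.where(t <= 1.0, t * np.exp(1.0 - t), np.exp(1.0 - t))
assert np.max(np.abs(y0 - exact)) < 1e-2

# y(t) = sum_m y0(t - m): superpose ten shifted pulse responses (Figure 4.8)
n1 = round(1.0 / dt)                             # samples per period, T = 1
y = sum(np.concatenate([np.zeros(m * n1), y0[: len(t) - m * n1]])
        for m in range(10))
assert 1.7 < y.max() < 1.85                      # consistent with Figure 4.8
```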
The calculation of the steady-state system response to the input f(t) in Figure 4.7 is more interesting. It is well known that for periodic y(t),

y(t) = L⁻¹{Y₀(s)/(1 − e^{−sT})} = L⁻¹{Y(s)}  (4.123)

when Y₀(s) represents the LT of any one period of y(t) (i.e., y₀(t)). In general, the partial fraction expansion of Y(s) is a nontrivial computation. How then does one solve for the response y(t)? Utilizing the assumed periodicity of the DHT (FHT), one can perform conventional convolution via circular convolution if provisions are made for aliasing (i.e., zero padding). This method of solution can be summarized by (assuming one of the convolving functions is even)

y(t) = f(t) ⊛ h(t) = DHT{DHT{f(t)} · DHT{h(t)}},  (4.124)

where y(t) is shown in Figure 4.8. An FHT software program to compute the DHT efficiently can be found in the Appendix of this section. Additional details concerning circular convolution of aperiodic inputs are discussed below.

FIGURE 4.8 Output response (top) y₀(t) to the input pulse f₀(t) and (bottom) y(t) to the input f(t) = Σ_{m=0}^{9} fₘ(t) = Σ_{m=0}^{9} f₀(t − m).

4.7.2 An Illustrative Example

In this section, an illustrative example is presented from subtransmission and distribution engineering to illustrate the calculation of nonsinusoidal waveform propagation in an electric power system. Figure 4.9 displays the distribution network model and the injected nonlinear load current into the network. The electrical load at bus 8 causes a nonsinusoidal current to
FIGURE 4.9 Load current injected at bus 8 of an example 8-bus distribution system.
propagate throughout the system and impact other loads in an unknown fashion. The fast decay in this current results in high-frequency signals in the network. An important consideration in this method is the selection of the sampling interval T and its effect on the maximum frequency component, Ω_{ω,max} = π/T, represented in the simulation. Because power systems are essentially band-limited, owing to system components designed to operate at or near the power frequency (e.g., distribution transformers), the components of the nonlinear load current at frequencies above Ω_{ω,max} become negligible. That is, no matter how close the current approaches an impulse (e.g., a lightning strike), the
significant energy components above Ω_{ω,max} are multiplied in the transform domain by system impedance frequency components that are asymptotically approaching zero. This can be seen by observing the Fourier magnitudes, |I₈(kΩ_ω)| and |Z₁₈(kΩ_ω)|, in Figures 4.10 and 4.11 for selected values of N (and thus T). The Hartley transform of i(t), I₈(kΩ_ω), is shown in Figure 4.12. Load currents that decay rapidly are becoming less unusual with the advent of high-power semiconductor switches. Referring to the system in Figure 4.9, the transformer at the load bus is modeled as a conventional T equivalent. A lumped capacitance is used to model electrostatic coupling between the primary and secondary windings, and two lumped capacitances
FIGURE 4.10 Fourier magnitude of the injected bus current, i(t), and system impulse response, z₁₈(t), for N = 256.

FIGURE 4.11 Fourier magnitude of the injected bus current, i(t), and system impulse response, z₁₈(t), for N = 2048.
TABLE 4.9 Calculated Eigenvalues for the Example 8-Bus Power System
are used to model interwinding capacitance. Bus 1 is the substation bus, and the negative-sequence impedance equivalent tie to the remainder of the network is shown as a shunt R-L series branch. The circuits shown between busses are all three-phase balanced, fixed series R-L branches, and frequency independent (i.e., R ≠ R(ω)). The latter assumption need not be made, because frequency dependence may be included if required. The importance of frequency dependence should not be underestimated, particularly for cases in which significant energy components of the injection-current spectrum lie above the 17th harmonic of 60 Hz, or approximately 1 kHz. Distributed-parameter models can be readily represented as lumped parameters placed at the terminals of long lines. These refinements are quite important in actual applications, but they are omitted from this abbreviated example. If the injection current at bus 8 were "in phase" with the line-to-neutral voltage at that bus, the nonlinear device at bus 8 would be a source. Similarly, other phase values would result in different generation or load levels. Each bus voltage was calculated using the Hartley transform simulation algorithm. These results were verified using an Euler predictor–trapezoidal corrector integration algorithm and time-domain convolution implemented by Equation 4.99. In order to choose an adequate time step, T, for calculating the "theoretical solution" by the predictor–corrector method, it was necessary to capture all system modes. The eigenvalues calculated by the International Mathematics and Statistical Library (IMSL) subroutine EVLRG are shown in Table 4.9. Routine EVLRG computes the eigenvalues of a real matrix by first balancing the matrix; second, orthogonal similarity transformations are used to reduce the balanced matrix to a real upper Hessenberg matrix; third, the shifted QR algorithm is used to compute the eigenvalues of the Hessenberg matrix.
This method is generally accepted as being most reliable. In this example, the transfer impedance between the substation bus (bus 1) and bus 8 is of interest. Figures 4.13 and 4.14 display the DFT of z₁₈(t), Z₁₈(kΩ_ω). Of course, two graphs are required to illustrate this transfer impedance because the DFT is
FIGURE 4.12 Hartley transform of the injected bus current, i(t).
λᵢ        ℜ{λᵢ}       ℑ{λᵢ}
λ₁        −757,718    0
λ₂,₃      −13,548     ±52,901
λ₄        −11,988     0
λ₅,₆      −8,976      ±19,575
λ₇,₈      −8,690      ±50,131
λ₉,₁₀     −5,033      ±38,050
λ₁₁,₁₂    −4,245      ±43,600
λ₁₃       −1,523      0
λ₁₄,₁₅    −214        ±4,877
λ₁₆       −40         0
λ₁₇       −3          0
FIGURE 4.13 Fourier magnitude of the transfer impedance, Z₁₈(kΩ_ω).

FIGURE 4.14 Fourier phase of the transfer impedance, Z₁₈(kΩ_ω).
The algorithm implemented by Equation 4.124 assumed that the input is time limited and the system impulse response is band limited. That is, the periodic input is truncated to an integer number of its fundamental periods, while the system impulse response is of infinite duration. For stable systems, the system impulse response z(t) must decrease to zero or to negligible values for large |t|. In reality, the system impulse response cannot be both time limited and band limited; therefore, one band-limits in the frequency domain such that negligible signal energy exists for t ≥ T₀. The convolution of an aperiodic excitation with the system impulse response can then be regarded as a periodic convolution of functions having an equal period. Through suitable modifications to the method presented in Equation 4.124, one can use circular convolution to compute an aperiodic convolution when each function is zero everywhere outside some single time window of interest.
FIGURE 4.15 Hartley transform of the transfer impedance, Z18(kΩ) (p.u.).
a complex transformation. Figure 4.15 shows the DHT of z18(t), Z18(kΩ). One figure illustrates this real transform. The resulting bus voltages due to the current injection at bus 8 are depicted in Figures 4.16 and 4.17.
4.7.3 Solution Method for Transient or Aperiodic Excitations
Convolution of two finite-duration waveforms is straightforward. One simply samples the two functions every T seconds and assumes that both sampled functions are periodic with period N. If the period is chosen as discussed earlier, there is no overlap in the resulting convolution. As long as N is chosen correctly, discrete convolution results in a periodic function in which each period approximates the continuous convolution result of Equation 4.102.
FIGURE 4.16 Resulting bus voltages due to the current injection at bus 8.
Similarly for h(t): simply replace x with h and L with M in Equation 4.125. If one allows these zero-augmented functions to be periodic with period N, then the intervals of padded zeros prevent the two functions from overlapping even though the convolution is a circular one. These periodic functions are formed by the superposition of the nonperiodic function shifted by all multiples of the fundamental period T0, where T0 = NT. That is,
f_p(t) = \sum_{k=-\infty}^{\infty} f(t + kT_0).    (4.126)
Thus, while the result is a periodic function (i.e., due to the assumed periodicity of the DHT/FHT), each period is an exact replica of the desired aperiodic convolution. The relationship between the DHT and HT for finite-duration waveforms is different when the input i(t) is time limited. Because i(t) is time limited, its Hartley transform cannot be band limited; therefore, sampling this function leads to aliasing in the frequency domain. It is necessary to choose the sampling interval T to be sufficiently small that aliasing is reduced to an insignificant level. If the number of samples of the time-limited waveform is chosen as N, then it is not necessary to window in the time domain. For this set of waveforms, the only error introduced is aliasing. Errors introduced by aliasing can be reduced by choosing T sufficiently small. This allows the DHT sample values to agree reasonably well with samples of the HT.
FIGURE 4.17 Resulting bus voltages due to the current injection at bus 8, continued.
Let the functions x(t) and h(t) be convolved, where both functions are finite in length. Let the larger sequence, x(t), contain L discrete points and the smaller contain M discrete points. Then the resulting convolution of these functions can be obtained by circularly or cyclically convolving suitable zero-augmented functions. That is,

X_pad(n) = { x(n + n_0),    if 1 ≤ n ≤ L
           { 0,             if L + 1 ≤ n ≤ N
           { X_pad(n + N),  otherwise        (4.125)

where
  n_0 is the first point in the function window of interest
  N is the smallest power of two greater than or equal to M + L − 1
4.8 Table of Hartley Transforms

Tables 4.10 through 4.12 contain the Hartley transforms of signals commonly encountered in engineering applications. When scanning the table entries, the Hartley transform entries seem to have more sophisticated expressions; this is usually the case. More exotic Hartley transforms may be generated in one of three ways. First, one can apply the elementary properties provided in Section 4.4 to the entries of Tables 4.10 and 4.11; second, one can apply Equation 4.23 to the FT entries of more comprehensive table listings such as those found in Refs. [12–14]; or third, one can use a DHT or FHT algorithm to evaluate numerically the Hartley transform of a discrete-time signal generated using a high-level computing language (e.g., FORTRAN, C, C++, etc.). A sample FHT algorithm, coded in the C programming language, is included in the Appendix. Note that in the eighth entry of Table 4.11, a_n is a complex number representing the FS expansion of the arbitrary periodic function. The value a_n is also equal to (1/T)F_T(n/T), where F_T(f) is the FT of f(t) over a single period, evaluated at f = n/T. Also, in that same entry, note that γ_n = R{a_n} − I{a_n}.
TABLE 4.10 Hartley Transforms of Energy Signals

Rectangular pulse:  f(t) = u(t + T/2) − u(t − T/2)
  F(f) = T sin(πTf)/(πTf) = T sinc(Tf)
  H(f) = F(f), because f(t) is even

Exponential:  f(t) = b e^{−at} u(t)
  F(f) = b/(a + j2πf)
  H(f) = b(a + 2πf)/[a² + (2πf)²]

Triangular:  f(t) = 1 − 2|t|/T, |t| < T/2
  F(f) = (T/2) sinc²(Tf/2) = (1 − cos πfT)/(Tπ²f²)
  H(f) = F(f), because f(t) is even

Gaussian:  f(t) = e^{−a²t²}
  F(f) = (√π/a) e^{−π²f²/a²}
  H(f) = F(f), because f(t) is even

Double exponential:  f(t) = e^{−a|t|}
  F(f) = 2a/(a² + 4π²f²)
  H(f) = F(f), because f(t) is even

Damped sine:  f(t) = e^{−at} sin(ω0 t) u(t)
  F(f) = ω0/[(a + j2πf)² + ω0²]
  H(f) = ω0(a² + ω0² − 4π²f² + 4πfa)/{[a² + ω0² − 4π²f²]² + (4πfa)²}

Damped cosine:  f(t) = e^{−at} cos(ω0 t) u(t)
  F(f) = (a + j2πf)/[(a + j2πf)² + ω0²]
  H(f) = [(a − 2πf)(a² + ω0² − 4π²f²) + 4πfa(a + 2πf)]/{[a² + ω0² − 4π²f²]² + (4πfa)²}

One-sided exponential:  f(t) = (1/(b − a))(e^{−at} − e^{−bt}) u(t)
  F(f) = 1/[(a + j2πf)(b + j2πf)]
  H(f) = [ab − (2πf)² + 2πf(a + b)]/{[ab − (2πf)²]² + [2πf(a + b)]²}

Cosine pulse:  f(t) = cos(ω0 t)[u(t + T/2) − u(t − T/2)]
  F(f) = (T/2){sin[πT(f − f0)]/[πT(f − f0)] + sin[πT(f + f0)]/[πT(f + f0)]}
  H(f) = F(f), because f(t) is even
TABLE 4.11 Hartley Transforms of Power Signals

Impulse:  f(t) = Kδ(t);  F(f) = K;  H(f) = K
Constant:  f(t) = K;  F(f) = Kδ(f);  H(f) = Kδ(f)
Unit step:  f(t) = u(t);  F(f) = (1/2)δ(f) + 1/(j2πf);  H(f) = (1/2)δ(f) + 1/(2πf)
Signum function:  f(t) = sgn t = t/|t|;  F(f) = 1/(jπf);  H(f) = 1/(πf)
Cosine wave:  f(t) = cos ω0t;  F(f) = (1/2)[δ(f − f0) + δ(f + f0)];  H(f) = F(f), because f(t) is even
Sine wave:  f(t) = sin ω0t;  F(f) = −(j/2)[δ(f − f0) − δ(f + f0)];  H(f) = (1/2)[δ(f − f0) − δ(f + f0)]
Impulse train:  f(t) = Σ_{n=−∞}^{∞} δ(t − nT);  F(f) = (1/T) Σ_{n=−∞}^{∞} δ(f − n/T);  H(f) = F(f), because f(t) is even
Periodic wave:  f(t) = Σ_{n=−∞}^{∞} a_n e^{jn2πf0t};  F(f) = Σ_{n=−∞}^{∞} a_n δ(f − n/T);  H(f) = Σ_{n=−∞}^{∞} γ_n δ(f − n/T)
Complex sinusoid:  f(t) = A e^{jω0t};  F(f) = Aδ(f − f0);  H(f) = (A/2)[(1 + j)δ(f − f0) + (1 − j)δ(f + f0)]
Unit ramp:  f(t) = t u(t);  F(f) = (j/4π)δ′(f) − 1/(4π²f²);  H(f) = −(1/4π)δ′(f) − 1/(4π²f²)
TABLE 4.12 Hartley Transforms of Various Engineering Signals

f(t) = δ(t);  F(s) = 1;  F(ν) = 1

f(t) = δ(t − a);  F(s) = e^{−as};  F(ν) = cos aν + sin aν

f(t) = u(t);  F(s) = 1/s;  F(ν) = 1/ν + πδ(ν)

f(t) = u(t − a);  F(s) = e^{−as}/s;  F(ν) = (1/ν)(cos aν − sin aν) + πδ(ν)

f(t) = cos at;  F(ν) = π[δ(ν − a) + δ(ν + a)]

f(t) = sin at;  F(ν) = π[δ(ν − a) − δ(ν + a)]

f(t) = cos at u(t);  F(s) = s/(s² + a²);  F(ν) = ν/(ν² − a²) + (π/2)[δ(ν − a) + δ(ν + a)]

f(t) = sin at u(t);  F(s) = a/(s² + a²);  F(ν) = a/(a² − ν²) + (π/2)[δ(ν − a) − δ(ν + a)]

f(t) = e^{−at} u(t), a > 0;  F(s) = 1/(s + a);  F(ν) = (a + ν)/(a² + ν²)

f(t) = e^{−a|t|}, a > 0;  F(ν) = 2a/(a² + ν²)

f(t) = e^{−t²/2σ²};  F(ν) = σ√(2π) e^{−σ²ν²/2}

f(t) = Σ_{n=−∞}^{∞} δ(t − nT);  F(ν) = ω0 Σ_{n=−∞}^{∞} δ(ν − nω0),  ω0 = 2π/T

f(t) = 1, 0 ≤ t ≤ a;  F(s) = (1 − e^{−as})/s;  F(ν) = (1/ν)(sin aν − cos aν + 1)

f(t) = t, 0 ≤ t ≤ a;  F(s) = [1 − (1 + as)e^{−as}]/s²;  F(ν) = (1/ν²)[(1 − aν)cos aν + (1 + aν)sin aν − 1]

f(t) = (sin at)/t;  F(ν) = π, |ν| < a;  0, |ν| > a

f(t) = a − |t|, |t| ≤ a; 0 otherwise;  F(ν) = 4 sin²(aν/2)/ν²

f(t) = [t^{n−1}/(n − 1)!] e^{−at} u(t), n a positive integer;  F(s) = 1/(s + a)^n
  F(ν) = [Σ_{k=0}^{n} a_k (n choose k) a^{n−k} ν^k]/(a² + ν²)^n,
  a_k = 1 for k = 4m or k = 4m + 1;  a_k = −1 for k = 4m + 2 or k = 4m + 3 (m integer)

f(t) = (1/a)(1 − e^{−at}) u(t), a > 0;  F(s) = 1/[s(s + a)];  F(ν) = (a − ν)/[ν(a² + ν²)] + (π/a)δ(ν)

f(t) = (1/(b − a))(e^{−at} − e^{−bt}) u(t), a, b > 0;  F(s) = 1/[(s + a)(s + b)]
  F(ν) = [ab + (a + b)ν − ν²]/[(a² + ν²)(b² + ν²)]

f(t) = (1/(a − b))(a e^{−at} − b e^{−bt}) u(t), a, b > 0;  F(s) = s/[(s + a)(s + b)]
  F(ν) = ν[ν² + (a + b)ν − ab]/[(a² + ν²)(b² + ν²)]

f(t) = (1/ab)[1 − (1/(b − a))(b e^{−at} − a e^{−bt})] u(t), a, b > 0;  F(s) = 1/[s(s + a)(s + b)]
  F(ν) = [ab − (a + b)ν − ν²]/[ν(a² + ν²)(b² + ν²)] + (π/ab)δ(ν)

f(t) = e^{−at} sin bt u(t), a > 0;  F(s) = b/[(s + a)² + b²]
  F(ν) = [b(a² + b²) + 2abν − bν²]/[(a² + b² + ν²)² − 4b²ν²]

f(t) = e^{−at} cos bt u(t), a > 0;  F(s) = (s + a)/[(s + a)² + b²]
  F(ν) = [a(a² + b²) + (a² − b²)ν + aν² + ν³]/[(a² + b² + ν²)² − 4b²ν²]

f(t) = e^{−at} cosh bt u(t), a > |b|;  F(s) = (s + a)/[(s + a)² − b²]
  F(ν) = [a(a² − b²) + (a² + b²)ν + aν² + ν³]/[(a² + b² + ν²)² − 4a²b²]

f(t) = e^{−at} sinh bt u(t), a > |b|;  F(s) = b/[(s + a)² − b²]
  F(ν) = [b(a² − b²) + 2abν − bν²]/[(a² + b² + ν²)² − 4a²b²]

f(t) = (e^{−ζωn t}/√(1 − ζ²)) sin(ωn√(1 − ζ²) t) u(t), 0 < ζ < 1, ωn > 0;  F(s) = ωn/(s² + 2ζωn s + ωn²)
  F(ν) = ωn(ωn² + 2ζωn ν − ν²)/[(ωn² − ν²)² + (2ζωn ν)²]
TABLE 4.12 (continued) Hartley Transforms of Various Engineering Signals

f(t) = sin(at + θ) u(t);  F(s) = (s sin θ + a cos θ)/(s² + a²)
  F(ν) = (a cos θ − ν sin θ)/(a² − ν²) + (π/2)[(sin θ + cos θ)δ(ν − a) + (sin θ − cos θ)δ(ν + a)]

f(t) = (1/a²)(1 − cos at) u(t);  F(s) = 1/[s(s² + a²)]
  F(ν) = 1/[ν(a² − ν²)] + (π/a²)δ(ν) − (π/2a²)[δ(ν − a) + δ(ν + a)]

f(t) = t e^{−at} u(t), a > 0;  F(s) = 1/(s + a)²
  F(ν) = (a² + 2aν − ν²)/(a² + ν²)²

f(t) = (1/a²)(1 − e^{−at} − at e^{−at}) u(t), a > 0;  F(s) = 1/[s(s + a)²]
  F(ν) = (a² − 2aν − ν²)/[ν(a² + ν²)²] + (π/a²)δ(ν)

f(t) = (1/(a² + b²))[1 − (√(a² + b²)/b) e^{−at} sin(bt + φ)] u(t), φ = tan^{−1}(b/a), a > 0
  F(s) = 1/{s[(s + a)² + b²]}
  F(ν) = (a² + b² − 2aν − ν²)/{ν[(a² + b² + ν²)² − 4b²ν²]} + [π/(a² + b²)]δ(ν)

f(t) = [1 − (e^{−ζωn t}/√(1 − ζ²)) sin(ωn√(1 − ζ²) t + φ)] u(t), φ = cos^{−1} ζ, 0 < ζ < 1
  F(s) = ωn²/[s(s² + 2ζωn s + ωn²)]
  F(ν) = ωn²(ωn² − 2ζωn ν − ν²)/{ν[(ωn² − ν²)² + (2ζωn ν)²]} + πδ(ν)

Appendix: A Sample FHT Program

/**************************************************
 * Program FHT.C
 *
 * This FHT algorithm utilizes an efficient permutation algorithm
 * developed by David M.W. Evans. Additional details may be found
 * in: IEEE Transactions on Acoustics, Speech, and Signal Processing,
 * vol. ASSP-35, no. 8, pp. 1120-1125, August 1987.
 *
 * This FHT algorithm, authored by Lakshmikantha S. Prabhu, is
 * optimized for the SPARC RISC platform. Additional details may
 * be found in his M.S.E.E. thesis referenced below.
 *
 * L.S. Prabhu, "A Complexity-Based Timing Analysis of Fast
 * Real Transform Algorithms," Master's Thesis, University of
 * Arkansas, Fayetteville, AR, 72701-1201, 1993.
 **************************************************/
/* This program assumes a maximum array length of 2^M = N, where
   M = 9 and N = 512.
   See Line 52 if the array length is increased. */
#include <stdio.h>
#include <math.h>
#define M 3
#define N 8

float *myFht();

main()
{
    /* Read the integer values 1, ..., N into the vector X[N]. */
    int i;
    float X[N];

    for (i = 0; i < N; i++)
        X[i] = i + 1;
    for (i = 0; i < N; i++)
        printf("%f\n", X[i]);
    myFht(X, N, M);
    printf("\n");
    for (i = 0; i < N; i++)
        printf("%d: %f\n", i, X[i] / N);
    /* It is assumed that the user divides by the integer N. */
}

float *myFht(x, n, m)
float *x;
int n, m;
{
    int i, j, k, kk, l, l0, l1, l2, l3, l4, l5, m1, n1, n2, NN, s;
    int diff = 0, diff2, gamma, gamma2 = 2, n2_2, n2_4, n_2, n_4, n_8, n_16;
    int itemp, ntemp, phi, theta_by_2;
    float ee, temp1, temp2, xtemp1, xtemp2;
    float h_sec_b, x0, x1, x2, x3, x4, x5, xtemp;
    double cc1, cc2, ss1, ss2;
    double sine[257];

    /*************************************************/
    /* Digit reverse counter.                        */
    /*************************************************/
    int powers_of_2[16], seed[256];
    int firstj, log2_n, log2_seed_size;
    int group_no, nn, offset;

    log2_n = m >> 1;
    nn = 1 << (log2_n - 1);
    if ((m % 2) == 1)
        log2_n = log2_n + 1;
    seed[0] = 0;
    seed[1] = 1;
    for (log2_seed_size = 2; log2_seed_size <= log2_n; log2_seed_size++) {
        for (i = 0; i < (1 << (log2_seed_size - 1)); i++) {
            seed[i] = seed[i] << 1;
            seed[i + (1 << (log2_seed_size - 1))] = seed[i] + 1;
        }
    }
    for (offset = 1; offset < nn; offset++) {
        firstj = nn * seed[offset];
        i = offset;
        j = firstj;
        xtemp = x[i]; x[i] = x[j]; x[j] = xtemp;
        for (group_no = 1; group_no < seed[offset]; group_no++) {
            i = i + nn;
            j = firstj + seed[group_no];
            xtemp = x[i]; x[i] = x[j]; x[j] = xtemp;
        }
    }
    j = 0;
    n1 = n - 1;
    n_16 = n >> 4;
    n_8 = n >> 3;
    n_4 = n >> 2;
    n_2 = n >> 1;
    /*************************************************************/
    /* Start the transform computation with 2-point butterflies. */
    /*************************************************************/
    for (i = 0; i < n; i += 2) {
        s = i + 1;
        xtemp = x[i];
        x[i] += x[s];
        x[s] = xtemp - x[s];
    }
    /*************************************************************/
    /* Now, the 4-point butterflies.                             */
    /*************************************************************/
    for (i = 0; i < N; i += 4) {
        xtemp = x[i];
        x[i] += x[i + 2];
        x[i + 2] = xtemp - x[i + 2];
        xtemp = x[i + 1];
        x[i + 1] += x[i + 3];
        x[i + 3] = xtemp - x[i + 3];
    }
    /*************************************************/
    /* Sine table initialization.                    */
    /*************************************************/
    NN = n_4;
    sine[0] = 0;
    sine[n_16] = 0.382683432;
    sine[n_8] = 0.707106781;
    sine[3 * n_16] = 0.923879533;
    sine[n_4] = 1.000000000;
    h_sec_b = 0.509795579;
    diff = n_16;
    theta_by_2 = n_4 >> 3;
    j = 0;
    while (theta_by_2 >= 1) {
        for (i = 0; i <= n_4; i += diff) {
            sine[j + theta_by_2] = h_sec_b * (sine[j] + sine[j + diff]);
            j = j + diff;
        }
        j = 0;
        diff = diff >> 1;
        theta_by_2 = theta_by_2 >> 1;
        h_sec_b = 1 / sqrt(2 + 1 / h_sec_b);
    }
    /*************************************************/
    /* Other butterflies.                            */
    /*************************************************/
    for (i = 3; i <= m; i++) {
        diff = 1;
        gamma = 0;
        ntemp = 0;
        phi = (1 << (m - i)) >> 1;
        ss1 = sine[phi];
        cc1 = sine[n_4 - phi];
        n2 = 1 << (i - 1);
        n2_2 = n2 >> 1;
        n2_4 = n2 >> 2;
        gamma2 = n2_4;
        diff2 = gamma2 + gamma2 - 1;
        itemp = n2_4;
        k = 0;
        /*****************************************************************/
        /* Initial section of stages 3, 4, ... for which sines & cosines */
        /* are not required.                                             */
        /*****************************************************************/
        for (k = 0; k < ((1 << (m - i)) >> 1); k++) {
            l0 = gamma;
            l1 = l0 + n2_2;
            l3 = gamma2;
            l4 = gamma2 + n2_2;
            l5 = l1 + itemp;
            x0 = x[l0];
            x1 = x[l1];
            x3 = x[l3];
            x5 = x[l5];
            x[l0] = x0 + x1;
            x[l1] = x0 - x1;
            x[l3] = x3 + x5;
            x[l4] = x3 - x5;
            gamma = gamma + n2;
            gamma2 = gamma2 + n2;
        }
        gamma = diff;
        gamma2 = diff2;
        /*************************************************/
        /* Next sections of stages 3, 4, ...             */
        /*************************************************/
        for (j = 1; j < (1 << (i - 3)); j++) {
            for (k = 0; k < ((1 << (m - i)) >> 1); k++) {
                l0 = gamma;
                l1 = l0 + n2_2;
                l3 = gamma2;
                l4 = l3 + n2_2;
                x0 = x[l0];
                x1 = x[l1];
                x3 = x[l3];
                x4 = x[l4];
                x[l0] = x0 + x1 * cc1 + x4 * ss1;
                x[l1] = x0 - x1 * cc1 - x4 * ss1;
                x[l3] = x3 - x4 * cc1 + x1 * ss1;
                x[l4] = x3 + x4 * cc1 - x1 * ss1;
                gamma = gamma + n2;
                gamma2 = gamma2 + n2;
            }
            itemp = 0;
            phi = phi + ((1 << (m - i)) >> 1);
            ntemp = (phi < n_4) ? 0 : n_4;
            ss1 = sine[phi - ntemp];
            cc1 = sine[n_4 - phi + ntemp];
            diff++;
            diff2--;
            gamma = diff;
            gamma2 = diff2;
        }
    }
    return x;
}
Acknowledgments The author would like to thank Mrs. Robert William Hartley and Dr. Sheldon Hochheiser, Senior Research Associate, AT&T Archives, for their assistance in accumulating the biographical information on R.V.L. Hartley. The assistance of R.N. Bracewell, G.T. Heydt, and Z. Wang is gratefully acknowledged.
References 1. R. V. L. Hartley, A more symmetrical Fourier analysis applied to transmission problems, Proc. IRE, 30, 144–150, March 1942. 2. R. V. L. Hartley, Transmission of information, Bell Sys. Tech. J., 7, 535–563, July 1928. 3. Z. Wang, Harmonic analysis with a real frequency function—I. Aperiodic case, Appl. Math. Comput., 9, 53–73, 1981.
4. Z. Wang, Harmonic analysis with a real frequency function—II. Periodic and bounded case, Appl. Math. Comput., 9, 153–163, 1981. 5. Z. Wang, Harmonic analysis with a real frequency function— III. Data sequence, Appl. Math. Comput., 9, 245–255, 1981. 6. Z. Wang, Fast algorithms for the discrete W transform and for the discrete Fourier transform, IEEE Trans. Acoust. Speech Signal Process., ASSP-32, 803–816, 1984. 7. R. N. Bracewell, Discrete Hartley transform, J. Opt. Soc. Am., 73, 1832–1835, December 1983. 8. R. N. Bracewell, The fast Hartley transform, Proc. IEEE, 72, 1010–1018, 1984. 9. R. N. Bracewell, The Hartley Transform, Oxford University Press, New York, 1986. 10. A. D. Poularikas and S. Seeley, Signals and Systems, 2nd edn., Krieger, Malabar, FL, 1994. 11. K. J. Olejniczak and G. T. Heydt, eds., Special section on the Hartley transform, Proc. IEEE, 82, 372–447, 1994. 12. G. A. Campbell and R. M. Foster, Fourier Integrals for Practical Applications, Van Nostrand, Princeton, NJ, 1948. 13. A. Erdélyi, Tables of Integral Transforms, Vol. 1, McGrawHill, New York, 1954. 14. W. Magnus and F. Oberhettinger, Formulas and Theorems of the Special Functions of Mathematical Physics, pp. 116–120, Chelsea, New York, 1949.
5 Laplace Transforms
Alexander D. Poularikas
University of Alabama in Huntsville

Samuel Seely†
5.1 Introduction................................................................................................................................... 5-1 5.2 Laplace Transform of Some Typical Functions .................................................................... 5-2 5.3 Properties of the Laplace Transform ....................................................................................... 5-3 5.4 The Inverse Laplace Transform .............................................................................................. 5-10 5.5 Solution of Ordinary Linear Equations with Constant Coefficients .............................. 5-13 5.6 The Inversion Integral............................................................................................................... 5-17 5.7 Applications to Partial Differential Equations..................................................................... 5-20 5.8 The Bilateral or Two-Sided Laplace Transform.................................................................. 5-27 References ................................................................................................................................................ 5-43
5.1 Introduction*

The Laplace transform has been introduced into the mathematical literature by a variety of procedures. Among these are: (a) in its relation to the Heaviside operational calculus, (b) as an extension of the Fourier integral, (c) by the selection of a particular form for the kernel in the general integral transform, (d) by a direct definition of the Laplace transform, and (e) as a mathematical procedure that involves multiplying the function f(t) by e^{−st} dt and integrating over the limits 0 to ∞. We will adopt this latter procedure.

Not all functions f(t), where t is any variable, are Laplace transformable. For a function f(t) to be Laplace transformable, it must satisfy the Dirichlet conditions—a set of sufficient but not necessary conditions. These are:

1. f(t) must be piecewise continuous; that is, it must be single valued but can have a finite number of finite isolated discontinuities for t > 0.
2. f(t) must be of exponential order; that is, f(t) must remain less than Me^{a_0 t} as t approaches ∞, where M is a positive constant and a_0 is a real positive number.

For example, such functions as tan βt, cot βt, and e^{t²} are not Laplace transformable. Given a function f(t) that satisfies the Dirichlet conditions, then

F(s) = ∫_0^∞ f(t) e^{−st} dt,  written  L{f(t)},   (5.1)

is called the Laplace transformation of f(t). Here s can be either a real variable or a complex quantity:

s = σ + jω.

Observe the shorthand notation L{f(t)} to denote the Laplace transformation of f(t). Observe also that only ordinary integration is involved in this integral.

To amplify the meaning of condition (2), we consider piecewise continuous functions, defined for all positive values of the variable t, for which

lim_{t→∞} f(t) e^{−ct} = 0,  c = real constant.

Functions of this type are known as functions of exponential order. Functions occurring in the solution for the time response of stable linear systems are of exponential order zero. Now we can recall that the integral ∫_0^∞ f(t) e^{−st} dt converges if

∫_0^∞ |f(t) e^{−st}| dt < ∞,  s = σ + jω.

If our function is of exponential order, we can write this integral as

∫_0^∞ |f(t)| e^{ct} e^{−(σ−c)t} dt.

This shows that for σ in the range σ > c (c is the abscissa of convergence) the integral converges; that is,

∫_0^∞ |f(t) e^{−st}| dt < ∞,  Re(s) > c.

* All the contour integrations in the complex plane are counterclockwise.
FIGURE 5.1 Path of integration for an exponential order function. (The figure shows the S-plane with the abscissa of convergence c, the region of absolute convergence to its right, and the path of integration from σ − j∞ to σ + j∞ with σ > c.)

The restriction in this equation, namely, Re(s) > c, indicates that we must choose the path of integration in the complex plane as shown in Figure 5.1.

5.2 Laplace Transform of Some Typical Functions

We illustrate the procedure in finding the Laplace transform of a given function f(t). In all cases it is assumed that the function f(t) satisfies the conditions of Laplace transformability.

Example 5.1

Find the Laplace transform of the unit step function f(t) = u(t), where u(t) = 1, t > 0; u(t) = 0, t < 0.

SOLUTION

By Equation 5.1 we write

L{u(t)} = ∫_0^∞ u(t) e^{−st} dt = ∫_0^∞ e^{−st} dt = [−e^{−st}/s]_0^∞ = 1/s.   (5.2)

The region of convergence is found from the expression ∫_0^∞ |e^{−st}| dt = ∫_0^∞ e^{−σt} dt < ∞, which is the entire right half-plane, σ > 0.

Example 5.2

Find the Laplace transform of the function f(t) = 2√(t/π).

SOLUTION

F(s) = (2/√π) ∫_0^∞ t^{1/2} e^{−st} dt.   (5.3)

To carry out the integration, define the quantity x = t^{1/2}; then dx = (1/2) t^{−1/2} dt, from which dt = 2t^{1/2} dx = 2x dx. Then

F(s) = (4/√π) ∫_0^∞ x² e^{−sx²} dx.

But the integral

∫_0^∞ x² e^{−sx²} dx = √π/(4 s^{3/2}).

Thus, finally,

F(s) = 1/s^{3/2}.   (5.4)

Example 5.3

Find the Laplace transform of f(t) = erfc(k/(2√t)), where the error function, erf t, and the complementary error function, erfc t, are defined by

erf t = (2/√π) ∫_0^t e^{−u²} du,   erfc t = (2/√π) ∫_t^∞ e^{−u²} du.

SOLUTION

Consider the integral

I = (2/√π) ∫_0^∞ e^{−st} [∫_{k/(2√t)}^∞ e^{−u²} du] dt.   (5.5)

Change the order of integration, noting that u = λ/√t, t = λ²/u², where λ = k/2:

I = (2/√π) ∫_0^∞ e^{−u²} [∫_{λ²/u²}^∞ e^{−st} dt] du = (2/(s√π)) ∫_0^∞ exp{−u² − λ²s/u²} du.

The value of this integral is known:

I = (2/(s√π)) · (√π/2) e^{−2λ√s},

which leads to

L{erfc(k/(2√t))} = (1/s) exp{−k√s}.   (5.6)
Example 5.4

Find the Laplace transform of the function f(t) = sinh at.

SOLUTION

Express the function sinh at in its exponential form:

sinh at = (e^{at} − e^{−at})/2.

The Laplace transform becomes

L{sinh at} = (1/2) ∫_0^∞ [e^{−(s−a)t} − e^{−(s+a)t}] dt = a/(s² − a²).   (5.7)

A moderate listing of functions f(t) and their Laplace transforms F(s) = L{f(t)} is given in Table A.5.1.

5.3 Properties of the Laplace Transform

We now develop a number of useful properties of the Laplace transform; these follow directly from Equation 5.1. Important in developing certain properties is the definition of f(t) at t = 0, a quantity written f(0+) to denote the limit of f(t) as t approaches zero, assumed from the positive direction. This designation is consistent with the choice of function response for t > 0. This means that f(0+) denotes the initial condition. Correspondingly, f^{(n)}(0+) denotes the value of the nth derivative at time t = 0+, and f^{(−n)}(0+) denotes the nth time integral at time t = 0+. This means that the direct Laplace transform can be written

F(s) = lim_{R→∞, a→0+} ∫_a^R f(t) e^{−st} dt,  R > 0, a > 0.   (5.8)

We proceed with a number of theorems.

THEOREM 5.1 Linearity

The Laplace transform of the linear sum of two Laplace transformable functions f(t) + g(t), with respective abscissas of convergence sf and sg, with sg > sf, is

L{f(t) + g(t)} = F(s) + G(s),  Re(s) > sg.

Proof. From Equation 5.8 we write

L{f(t) + g(t)} = ∫_0^∞ [f(t) + g(t)] e^{−st} dt = ∫_0^∞ f(t) e^{−st} dt + ∫_0^∞ g(t) e^{−st} dt.

Thus,

L{f(t) + g(t)} = F(s) + G(s).   (5.9)

As a direct extension of this result, for K1 and K2 constants,

L{K1 f(t) + K2 g(t)} = K1 F(s) + K2 G(s).   (5.10)

THEOREM 5.2 Differentiation

Let the function f(t) be piecewise continuous with sectionally continuous derivatives df(t)/dt in every interval 0 ≤ t ≤ T. Also let f(t) be of exponential order e^{ct} as t → ∞. Then when Re(s) > c, the transform of df(t)/dt exists and

L{df(t)/dt} = s L{f(t)} − f(0+) = sF(s) − f(0+).   (5.11)

Proof. Begin with Equation 5.8 and write

L{df(t)/dt} = lim_{T→∞} ∫_0^T e^{−st} (df(t)/dt) dt.

Write the integral as the sum of integrals in each interval in which the integrand is continuous. Thus, we write

∫_0^T e^{−st} f^{(1)}(t) dt = ∫_0^{t1} [·] + ∫_{t1}^{t2} [·] + ··· + ∫_{tn}^T [·].

Each of these integrals is integrated by parts by writing

u = e^{−st},  du = −s e^{−st} dt;  dv = (df/dt) dt,  v = f,

with the result

∫_0^T e^{−st} f^{(1)}(t) dt = [e^{−st} f(t)]_0^{t1} + [e^{−st} f(t)]_{t1}^{t2} + ··· + [e^{−st} f(t)]_{tn}^T + s ∫_0^T e^{−st} f(t) dt.

But f(t) is continuous, so that f(t1 − 0) = f(t1 + 0), and so forth; hence

∫_0^T e^{−st} f^{(1)}(t) dt = −f(0+) + e^{−sT} f(T) + s ∫_0^T e^{−st} f(t) dt.
5-4
Transforms and Applications Handbook
However, with limt!1 f(t)est ¼ 0 (otherwise the transform would not exist), then the theorem is established.
Proof Because f(t) is Laplace transformable, its integral is written 9 12 t 8 t 3 ð = ð < ð f (j)dj ¼ 4 f (j)dj5e + ; : 1
THEOREM 5.3 Differentiation
st
dt:
1
0
This is integrated by parts by writing Let the function f(t) be piecewise continuous, have a continuous derivative f (n1)(t) of order n 1 and a sectionally continuous derivative f (n)(t) in every finite interval 0 t T. Also, let f(t) and all its derivatives through f (n 1)(t) be of exponential order ect as t ! 1. Then the transform of f (n)(t) exists when Re(s) > c and it has the following form: +{ f (n) (t)} ¼ sn F(s)
sn 1 f (0þ) sn n f (n
1)
f (j)dj
1
dv ¼ e
st
dt
du ¼ f (j)dj ¼ f (t)dt 1 e s
v¼
st
:
Then
sn 2 f (1) f (0þ)
(0þ):
ðt
u¼
(5:12)
Proof The proof follows as a direct extension of the proof of Theorem 5.2.
9 2 8 t 31 t 1 ð = < ð st ð e 1 + f (j)dj ¼ 4 f (j)dj5 þ f (t)e ; : s s 1
¼
1 ð
1 s
1
f (t)e
st
dt þ
1 s
0
Example 5.5
dt
0
0
ð0
st
f (j)dj
1
from which Find +{tm} where m is any positive integer.
9 8t = 1 c and any a > c ða s
F(s)ds ¼
ða 1 ð
A companion result concerns division by t. For Re(s) > c and any a > c,

∫_s^a F(x)dx = ∫_s^a [∫_0^∞ f(t)e^{−xt}dt] dx = ∫_0^∞ f(t)[∫_s^a e^{−xt}dx] dt = ∫_0^∞ (f(t)/t)(e^{−st} − e^{−at}) dt.

Now if f(t)/t has a limit as t → 0, then the latter function is piecewise continuous and of exponential order. Therefore, the last integral is uniformly convergent with respect to a. Thus, as a tends to infinity,

∫_s^∞ F(x)dx = ℒ{f(t)/t}.

THEOREM 5.9 Time Delay; Real Translation

The substitution of t − λ for the variable t in the transform ℒ{f(t)} corresponds to the multiplication of the function F(s) by e^{−λs}; that is,

ℒ{f(t−λ)u(t−λ)} = e^{−λs}F(s).   (5.18)

Proof: Refer to Figure 5.2, which shows a function f(t)u(t) and the same function delayed by the time t = λ, where λ is a positive constant. We write directly

ℒ{f(t−λ)u(t−λ)} = ∫_λ^∞ f(t−λ)u(t−λ)e^{−st} dt,

because u(t−λ) = 0 for 0 ≤ t < λ. Now introduce a new variable τ = t − λ. This converts the equation to the form

ℒ{f(t−λ)u(t−λ)} = ∫_0^∞ f(τ)u(τ)e^{−s(τ+λ)} dτ = e^{−sλ}∫_0^∞ f(τ)u(τ)e^{−sτ} dτ = e^{−sλ}F(s).

We would similarly find that

ℒ{f(t+λ)u(t+λ)} = e^{sλ}F(s).   (5.19)

FIGURE 5.2 A function f(t) at the time t = 0 and delayed by the time t = λ.

THEOREM 5.10 Complex Translation

The substitution of s + a for s, where a is real or complex, in the function F(s) corresponds to the Laplace transform of the product e^{−at}f(t).

Proof: We write

∫_0^∞ e^{−at}f(t)e^{−st} dt = ∫_0^∞ f(t)e^{−(s+a)t} dt   for Re(s) > c − Re(a),

which is

F(s+a) = ℒ{e^{−at}f(t)}.   (5.20)

In a similar way we find

F(s−a) = ℒ{e^{at}f(t)}.   (5.21)

Example 5.8

Find the Laplace transform of the pulse function shown in Figure 5.3.

SOLUTION
Because the pulse function can be decomposed into step functions, f(t) = 2[u(t) − u(t−1.5)], as shown in Figure 5.3, its Laplace transform is given by

ℒ{2[u(t) − u(t−1.5)]} = 2(1/s − e^{−1.5s}/s) = (2/s)(1 − e^{−1.5s}),

where the translation property has been used.

FIGURE 5.3 Pulse function and its equivalent representation as the difference of the steps 2u(t) and 2u(t−1.5).

Laplace Transforms
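The pulse transform of Example 5.8 can be verified with the same kind of crude quadrature used earlier; this sketch (ours, with a hand-rolled trapezoidal helper) compares the numerically transformed pulse with (2/s)(1 − e^{−1.5s}) at a few values of s.

```python
import math

def laplace_numeric(f, s, T=40.0, n=100000):
    # Trapezoidal approximation of the integral of f(t) e^{-st} on [0, T].
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

pulse = lambda t: 2.0 if t < 1.5 else 0.0   # 2[u(t) - u(t - 1.5)]
for s in (0.5, 1.0, 3.0):
    exact = (2.0 / s) * (1.0 - math.exp(-1.5 * s))
    assert abs(laplace_numeric(pulse, s) - exact) < 1e-3
```

The looser tolerance here reflects the jump discontinuity at t = 1.5, which the trapezoidal rule resolves only to first order.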
THEOREM 5.11 Convolution

The multiplication of the transforms of two sectionally continuous functions f1(t) (= F1(s)) and f2(t) (= F2(s)) corresponds to the Laplace transform of the convolution of f1(t) and f2(t):

F1(s)F2(s) = ℒ{f1(t) * f2(t)},   (5.22)

where the asterisk * is the shorthand designation for convolution.

Proof: By definition, the convolution of two functions f1(t) and f2(t) is

f1(t) * f2(t) = ∫_0^∞ f1(t−τ)f2(τ)dτ = ∫_0^∞ f1(τ)f2(t−τ)dτ.

Then

ℒ{f1(t) * f2(t)} = ∫_0^∞ [∫_0^∞ f1(t−τ)f2(τ)dτ] e^{−st} dt = ∫_0^∞ f2(τ)dτ ∫_0^∞ f1(t−τ)e^{−st} dt.

Now effect a change of variable, writing t − τ = ξ and therefore dt = dξ; then

ℒ{f1(t) * f2(t)} = ∫_0^∞ f2(τ)dτ ∫_{−τ}^∞ f1(ξ)e^{−s(ξ+τ)} dξ.

But for positive time functions f1(ξ) = 0 for ξ < 0, which permits changing the lower limit of the second integral to zero, and so

ℒ{f1(t) * f2(t)} = ∫_0^∞ f2(τ)e^{−sτ}dτ ∫_0^∞ f1(ξ)e^{−sξ}dξ = F1(s)F2(s).

Example 5.9

Given f1(t) = t and f2(t) = e^{at}, deduce the Laplace transform of the convolution t * e^{at} by the use of Theorem 5.11.

SOLUTION
Begin with the convolution

t * e^{at} = ∫_0^t (t−τ)e^{aτ}dτ = [t e^{aτ}/a − τe^{aτ}/a + e^{aτ}/a²]_0^t = (e^{at} − at − 1)/a².

Then

ℒ{t * e^{at}} = (1/a²)[1/(s−a) − 1/s] − (1/a)(1/s²) = 1/(s²(s−a)).

By Theorem 5.11 we have

F1(s) = ℒ{t} = 1/s²,   F2(s) = ℒ{e^{at}} = 1/(s−a),

and

ℒ{t * e^{at}} = F1(s)F2(s) = (1/s²)·1/(s−a) = 1/(s²(s−a)),

in agreement with the direct calculation.

THEOREM 5.12

The multiplication of the transforms of three sectionally continuous functions f1(t), f2(t), and f3(t) corresponds to the Laplace transform of the convolution of the three functions:

ℒ{f1(t) * f2(t) * f3(t)} = F1(s)F2(s)F3(s).   (5.24)

Proof: This is an extension of Theorem 5.11. The result is obvious if we write

F1(s)F2(s)F3(s) = ℒ{f1(t) * ℒ^{−1}{F2(s)F3(s)}}.
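Theorem 5.11 and Example 5.9 can be exercised numerically: convolve f1(t) = t with f2(t) = e^{at} by quadrature and compare with the closed form (e^{at} − at − 1)/a², whose transform is 1/(s²(s−a)). The helper below is our own sketch, not a routine from the text.

```python
import math

def convolve_at(f1, f2, t, n=20000):
    # Trapezoidal approximation of (f1*f2)(t) = integral of f1(t-tau) f2(tau)
    # for tau in [0, t].
    h = t / n
    total = 0.5 * (f1(t) * f2(0.0) + f1(0.0) * f2(t))
    for k in range(1, n):
        tau = k * h
        total += f1(t - tau) * f2(tau)
    return total * h

a = 0.7
for t in (0.5, 1.0, 2.0):
    closed = (math.exp(a * t) - a * t - 1.0) / a ** 2
    numeric = convolve_at(lambda x: x, lambda x: math.exp(a * x), t)
    assert abs(numeric - closed) < 1e-6
```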
THEOREM 5.13

The Laplace transform of the product of two sectionally continuous functions f1(t) and f2(t) is given by the complex convolution of their transforms,

ℒ{f1(t)f2(t)} = (1/2πj)∫_{σ2−j∞}^{σ2+j∞} F2(z)F1(s−z)dz,   (5.25)

where σ1 and σ2 are the abscissas of convergence of f1(t) and f2(t).

Proof: Begin by considering the following line integral in the z-plane, the inversion integral for f2(t) taken along the path C2 on which Re(z) = σ2:

f2(t) = (1/2πj)∫_{C2} F2(z)e^{zt} dz.

This means that the contour intersects the x-axis at x1 > σ2 (see Figure 5.4). Then we have

∫_0^∞ f1(t)f2(t)e^{−st} dt = (1/2πj)∫_0^∞ f1(t)dt ∫_{C2} F2(z)e^{(z−s)t} dz.

Assume that the integral of F2(z) is convergent over the path of integration. This equation is now written in the form

∫_0^∞ f1(t)f2(t)e^{−st} dt = (1/2πj)∫_{σ2−j∞}^{σ2+j∞} F2(z)dz ∫_0^∞ f1(t)e^{−(s−z)t} dt
= (1/2πj)∫_{σ2−j∞}^{σ2+j∞} F2(z)F1(s−z)dz = ℒ{f1(t)f2(t)}.   (5.26)

The Laplace transform of f1(t), the inner integral on the right, converges in the range Re(s−z) > σ1, where σ1 is the abscissa of convergence of f1(t). In addition, Re(z) = σ2 for the z-plane integration involved in Equation 5.25. Thus, the abscissa of convergence of f1(t)f2(t) is specified by

σ > σ1 + σ2.   (5.27)

This situation is portrayed graphically in Figure 5.4 for the case when both σ1 and σ2 are positive. As far as the integration in the complex plane is concerned, the semicircle can be closed either to the left or to the right, so long as F1(s) and F2(s) go to zero as s → ∞. Based on the foregoing, we observe the following:

(a) Poles of F1(s−z) are contained in the region Re(s−z) < σ1.
(b) Poles of F2(z) are contained in the region Re(z) < σ2.
(c) From (a) and Equation 5.27, Re(z) > Re(s) − σ1 > σ2.
(d) Poles of F1(s−z) lie to the right of the path of integration.
(e) Poles of F2(z) lie to the left of the path of integration.
(f) Poles of F1(s−z) are functions of s, whereas poles of F2(z) are fixed in relation to s.

FIGURE 5.4 The contour C2 and the allowed range of s.

Example 5.11

Find the Laplace transform of the function f(t) = f1(t)f2(t) = e^{−t}e^{−2t}u(t).

SOLUTION
From Theorem 5.13 and the absolute convergence region for each function, we have

F1(s) = 1/(s+1), σ1 = −1;   F2(s) = 1/(s+2), σ2 = −2.

Further, f(t) = exp[−(2+1)t]u(t) implies that σf = σ1 + σ2 = −3. We now write

F2(z)F1(s−z) = 1/[(z+2)(s−z+1)] = [1/(3+s)][1/(z+2) + 1/(s−z+1)].

To carry out the integration dictated by Equation 5.26, we use the contour shown in Figure 5.5. If we select contour C1 and use the residue theorem, we obtain

F(s) = (1/2πj)∮_{C1} F2(z)F1(s−z)dz = Res[F2(z)F1(s−z)]|_{z=−2} = 1/(s−z+1)|_{z=−2} = 1/(s+3).

The inverse of this transform is exp(−3t). If we had selected contour C2, the residue theorem gives

F(s) = (1/2πj)∮_{C2} F2(z)F1(s−z)dz = −Res[F2(z)F1(s−z)]|_{z=1+s} = −(−1/(s+3)) = 1/(s+3),

the minus sign arising because C2 is traversed clockwise. The inverse transform of this is also exp(−3t), as is to be expected.

FIGURE 5.5 The contour for Example 5.11.

THEOREM 5.14 Initial Value Theorem

Let f(t) and f^(1)(t) be Laplace transformable functions; then, for the case when lim sF(s) as s → ∞ exists,

lim_{s→∞} sF(s) = lim_{t→0+} f(t).   (5.28)

Proof: Begin with Equation 5.13 and consider

lim_{s→∞} ∫_0^∞ (df/dt)e^{−st} dt = lim_{s→∞}[sF(s) − f(0+)].

Because f(0+) is independent of s, and because the integral vanishes for s → ∞,

lim_{s→∞}[sF(s)] − f(0+) = 0.

Furthermore, f(0+) = lim_{t→0+} f(t), so that

lim_{s→∞} sF(s) = lim_{t→0+} f(t).

If f(t) has a discontinuity at the origin, this expression specifies the value f(0+) just after the discontinuity. If f(t) contains an impulse term, then the left-hand side does not exist, and the initial value property does not apply.

THEOREM 5.15 Final Value Theorem

Let f(t) and f^(1)(t) be Laplace transformable functions; then, for t → ∞,

lim_{t→∞} f(t) = lim_{s→0} sF(s).   (5.29)

Proof: Begin with Equation 5.13 and let s → 0. Thus,

lim_{s→0} ∫_0^∞ (df/dt)e^{−st} dt = lim_{s→0}[sF(s)] − f(0+).

Consider the quantity on the left. Because s and t are independent and because e^{−st} → 1 as s → 0, the integral on the left becomes, in the limit,

∫_0^∞ (df/dt)dt = lim_{t→∞} f(t) − f(0+).

Combine the latter two equations to get

lim_{t→∞} f(t) − f(0+) = lim_{s→0}[sF(s)] − f(0+).

It follows from this that the final value of f(t) is given by

lim_{t→∞} f(t) = lim_{s→0} sF(s).

This result applies if F(s) possesses at most a simple pole at the origin, but it does not apply if F(s) has imaginary axis poles, poles in the right half plane, or higher-order poles at the origin.
Transforms and Applications Handbook

Example 5.12

Apply the final value theorem to the following two functions:

F1(s) = (s+a)/[(s+a)² + b²],   F2(s) = s/(s² + b²).

SOLUTION
For the first function,

lim_{s→0} sF1(s) = lim_{s→0} s(s+a)/[(s+a)² + b²] = 0.

For the second function,

sF2(s) = s²/(s² + b²).

However, this function has singularities on the imaginary axis at s = ±jb, and the final value theorem does not apply.

The important properties of the Laplace transform are contained in Table A.5.2.
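The two cases of Example 5.12 can be seen numerically. The short sketch below (ours) checks that sF1(s) → 0 as s → 0 agrees with the time-domain limit of f1(t) = e^{−at}cos(bt), the inverse of F1(s) by Table A.5.1, while f2(t) = cos(bt) never settles — exactly why the theorem's pole condition excludes F2(s).

```python
import math

a, b = 0.8, 2.0
sF1 = lambda s: s * (s + a) / ((s + a) ** 2 + b ** 2)

limit_s = sF1(1e-9)                                 # lim_{s->0} s F1(s)
f1_late = math.exp(-a * 30.0) * math.cos(b * 30.0)  # f1(t) at a large time

assert abs(limit_s) < 1e-8
assert abs(f1_late) < 1e-8

# For F2(s) = s/(s^2 + b^2), s F2(s) -> 0 as s -> 0 too, yet f2(t) = cos(b t)
# keeps oscillating: the imaginary-axis poles invalidate the theorem.
assert abs(math.cos(b * 30.0)) > 1e-3
```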
5.4 The Inverse Laplace Transform

We employ the symbol ℒ^{−1}{F(s)}, corresponding to the direct Laplace transform defined in Equation 5.1, to denote a function f(t) whose Laplace transform is F(s). Thus, we have the Laplace pair

F(s) = ℒ{f(t)},   f(t) = ℒ^{−1}{F(s)}.   (5.30)

This correspondence between F(s) and f(t) is called the inverse Laplace transformation. Reference to Table A.5.1 shows that F(s) is a rational function in s if f(t) is a polynomial or a sum of exponentials, and the product of a polynomial and an exponential also yields a rational F(s); if the square root of t appears in f(t), we do not get a rational function in s. Note also that a continuous function F(s) may not have a continuous inverse transform.

Observe that the F(s) functions have been uniquely determined for the given f(t) functions by Equation 5.1. A logical question is whether a given time function in Table A.5.1 is the only t-function that will give the corresponding F(s). Clearly, Table A.5.1 is more useful if there is a unique f(t) for each F(s). This is an important consideration because the solution of practical problems usually provides a known F(s) from which f(t) must be found. This uniqueness condition can be established using the inversion integral: there is a one-to-one correspondence between the direct and the inverse transform, so if a given problem yields a function F(s), the corresponding f(t) from Table A.5.1 is the unique result. In the event that the available tables do not include a given F(s), we seek to resolve the given F(s) into forms that are listed in Table A.5.1. This resolution is often accomplished in terms of a partial fraction expansion. A few examples will show the use of the partial fraction form in deducing the f(t) for a given F(s).

Example 5.13

Find the inverse Laplace transform of the function

F(s) = (s−3)/(s² + 5s + 6).   (5.31)

SOLUTION
Observe that the denominator can be factored into the form (s+2)(s+3). Thus, F(s) can be written in partial fraction form as

F(s) = (s−3)/[(s+2)(s+3)] = A/(s+2) + B/(s+3),   (5.32)

where A and B are constants that must be determined. To evaluate A, multiply both sides of Equation 5.32 by (s+2) and then set s = −2; the term B(s+2)/(s+3)|_{s=−2} is identically zero, and this gives

A = F(s)(s+2)|_{s=−2} = (s−3)/(s+3)|_{s=−2} = −5.

In the same manner, to find the value of B we multiply both sides of Equation 5.32 by (s+3) and get

B = F(s)(s+3)|_{s=−3} = (s−3)/(s+2)|_{s=−3} = 6.

The partial fraction form of Equation 5.32 is

F(s) = −5/(s+2) + 6/(s+3).

The inverse transform is given by

f(t) = ℒ^{−1}{F(s)} = −5ℒ^{−1}{1/(s+2)} + 6ℒ^{−1}{1/(s+3)} = −5e^{−2t} + 6e^{−3t},

where entry 8 in Table A.5.1 is used.

Example 5.14

Find the inverse Laplace transform of the function

F(s) = (s+1)/{[(s+2)² + 1](s+3)}.

SOLUTION
This function is written in the form

(s+1)/{[(s+2)² + 1](s+3)} = A/(s+3) + (Bs + C)/[(s+2)² + 1].

The value of A is deduced by multiplying both sides of this equation by (s+3) and then setting s = −3. This gives

A = (s+3)F(s)|_{s=−3} = (−3+1)/[(−3+2)² + 1] = −1.
To evaluate B and C, combine the two fractions and equate the coefficients of the powers of s in the numerators. This yields

{−[(s+2)² + 1] + (s+3)(Bs + C)}/{[(s+2)² + 1](s+3)} = (s+1)/{[(s+2)² + 1](s+3)},

from which it follows that

−(s² + 4s + 5) + Bs² + (C + 3B)s + 3C = s + 1.

Combine like-powered terms to write

(−1 + B)s² + (−4 + C + 3B)s + (−5 + 3C) = s + 1.

Therefore,

−1 + B = 0,   −4 + C + 3B = 1,   −5 + 3C = 1.

From these equations we obtain B = 1, C = 2. The function F(s) is written in the equivalent form

F(s) = −1/(s+3) + (s+2)/[(s+2)² + 1].

Now using Table A.5.1, the result is

f(t) = −e^{−3t} + e^{−2t} cos t,   t > 0.

In many cases, F(s) is the quotient of two polynomials with real coefficients. If the numerator polynomial is of the same or higher degree than the denominator polynomial, first divide the numerator polynomial by the denominator polynomial; the division is carried forward until the numerator polynomial of the remainder is one degree less than the denominator. This results in a polynomial in s plus a proper fraction. The proper fraction can be expanded into a partial fraction expansion. The result of such an expansion is an expression of the form

F′(s) = B0 + B1 s + ... + A1/(s−s1) + A2/(s−s2) + ... + Ak/(s−sk)
        + Ap1/(s−sp) + Ap2/(s−sp)² + ... + Apr/(s−sp)^r.   (5.33)

This expression has been written in a form to show three types of terms: polynomial, simple partial fractions including all terms with distinct roots, and partial fractions appropriate to multiple roots. To find the constants A1, A2, ..., the polynomial terms are removed, leaving the proper fraction; the function F(s) is then written in the equivalent form

F(s) = A1/(s−s1) + A2/(s−s2) + ... + Ak/(s−sk) + Ap1/(s−sp) + ... + Apr/(s−sp)^r.   (5.34)

To find the constants Ak, which are the residues of the function F(s) at the simple poles sk, it is only necessary to note that as s → sk the term Ak/(s−sk) will become large compared with all other terms. In the limit,

Ak = lim_{s→sk} (s−sk)F(s).   (5.35)

Upon taking the inverse transform for each simple pole, the result will be a simple exponential of the form

ℒ^{−1}{Ak/(s−sk)} = Ak e^{sk t}.   (5.36)

Note also that because F(s) contains only real coefficients, if sk is a complex pole with residue Ak, there will also be a conjugate pole sk* with residue Ak*. For such complex poles,

ℒ^{−1}{Ak/(s−sk) + Ak*/(s−sk*)} = Ak e^{sk t} + Ak* e^{sk* t}.

These can be combined in the following way. With sk = σk + jωk and Ak = ak + jbk,

response = (ak + jbk)e^{(σk+jωk)t} + (ak − jbk)e^{(σk−jωk)t}
= e^{σk t}[(ak + jbk)(cos ωk t + j sin ωk t) + (ak − jbk)(cos ωk t − j sin ωk t)]
= 2e^{σk t}(ak cos ωk t − bk sin ωk t) = 2Ak e^{σk t} cos(ωk t + θk),   (5.37)

where θk = tan^{−1}(bk/ak) and Ak = ak/cos θk.

When the proper fraction contains a multiple pole of order r, the coefficients Ap1, Ap2, ..., Apr that are involved in the terms

Ap1/(s−sp) + Ap2/(s−sp)² + ... + Apr/(s−sp)^r

must be evaluated. A simple application of Equation 5.35 is not adequate. Now the procedure is to multiply both sides of Equation 5.34 by (s−sp)^r, which gives

(s−sp)^r F(s) = (s−sp)^r [A1/(s−s1) + A2/(s−s2) + ... + Ak/(s−sk)]
               + Ap1(s−sp)^{r−1} + ... + Ap(r−1)(s−sp) + Apr.   (5.38)

In the limit as s → sp, all terms on the right vanish with the exception of Apr. Suppose now that this equation is differentiated once with respect to s. The constant Apr will vanish in the differentiation, but Ap(r−1) will be determined by setting s = sp. This procedure is continued to find each of the coefficients Apk. Specifically, the procedure is specified by

Apk = [1/(r−k)!] d^{r−k}/ds^{r−k} [F(s)(s−sp)^r]|_{s=sp},   k = 1, 2, ..., r.   (5.39)
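Equations 5.35 and 5.39 lend themselves to a numerical spot check. The sketch below is ours and uses a made-up function, F(s) = 1/((s+1)²(s+2)), not one of the chapter's examples: the double pole at s = −1 has coefficients Ap2 = 1 and Ap1 = −1, and the simple pole at s = −2 has residue 1.

```python
def F(s):
    return 1.0 / ((s + 1) ** 2 * (s + 2))

def g(s):
    # (s+1)^2 F(s), written in simplified form so it is finite at s = -1.
    return 1.0 / (s + 2)

h = 1e-5
Ap2 = g(-1.0)                                  # Equation 5.39 with k = r = 2
Ap1 = (g(-1.0 + h) - g(-1.0 - h)) / (2 * h)    # k = 1: derivative at the pole
A = (1e-7) * F(-2.0 + 1e-7)                    # Equation 5.35: (s+2)F(s) near s = -2

assert abs(Ap2 - 1.0) < 1e-9
assert abs(Ap1 - (-1.0)) < 1e-6
assert abs(A - 1.0) < 1e-5
```

The resulting expansion is 1/((s+1)²(s+2)) = −1/(s+1) + 1/(s+1)² + 1/(s+2), i.e., f(t) = −e^{−t} + te^{−t} + e^{−2t}.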
Example 5.15

Find the inverse transform of the following function:

F(s) = (s³ + 2s² + 3s + 1)/[s²(s+1)].

SOLUTION
This is not a proper fraction. The numerator polynomial is divided by the denominator polynomial by simple long division. The result is

F(s) = 1 + (s² + 3s + 1)/[s²(s+1)].

The proper fraction is expanded into partial fraction form:

Fp(s) = (s² + 3s + 1)/[s²(s+1)] = A11/s + A12/s² + A2/(s+1).

The value of A2 is deduced using Equation 5.35:

A2 = [(s+1)Fp(s)]|_{s=−1} = (s² + 3s + 1)/s²|_{s=−1} = −1.

To find A11 and A12 we proceed as specified in Equation 5.39:

A12 = [s²Fp(s)]|_{s=0} = (s² + 3s + 1)/(s+1)|_{s=0} = 1,
A11 = (1/1!)(d/ds)[s²Fp(s)]|_{s=0} = [(2s+3)/(s+1) − (s² + 3s + 1)/(s+1)²]|_{s=0} = 3 − 1 = 2.

Therefore,

F(s) = 1 + 2/s + 1/s² − 1/(s+1).

From Table A.5.1 the inverse transform is

f(t) = δ(t) + 2 + t − e^{−t},   for t ≥ 0.

If the function F(s) exists in proper fractional form as the quotient of two polynomials, we can employ the Heaviside expansion theorem in the determination of f(t) from F(s). This theorem is an efficient method for finding the residues of F(s). Let

F(s) = P(s)/Q(s) = A1/(s−s1) + A2/(s−s2) + ... + Ak/(s−sk),

where P(s) and Q(s) are polynomials with no common factors and with the degree of P(s) less than the degree of Q(s). Suppose that the factors of Q(s) are distinct. Then, as in Equation 5.35, we find

Ak = lim_{s→sk} (s−sk)P(s)/Q(s).

Also, the limit of P(s) is P(sk). Now, because Q(sk) = 0,

lim_{s→sk} (s−sk)/Q(s) = lim_{s→sk} 1/Q^(1)(s) = 1/Q^(1)(sk),

and then

Ak = P(sk)/Q^(1)(sk).

Thus,

F(s) = P(s)/Q(s) = Σ_{n=1}^{k} [P(sn)/Q^(1)(sn)] · 1/(s−sn).   (5.40)

From this, the inverse transformation becomes

f(t) = ℒ^{−1}{P(s)/Q(s)} = Σ_{n=1}^{k} [P(sn)/Q^(1)(sn)] e^{sn t}.

This is the Heaviside expansion theorem. It can be written in formal form.

THEOREM 5.16: Heaviside Expansion Theorem

If F(s) is the quotient P(s)/Q(s) of two polynomials in s such that Q(s) has the higher degree and contains the nonrepeated factor s − sk, then the term in f(t) corresponding to this factor can be written [P(sk)/Q^(1)(sk)] e^{sk t}.
Example 5.16

Repeat Example 5.13 employing the Heaviside expansion theorem.

SOLUTION
We write Equation 5.31 in the form

F(s) = P(s)/Q(s) = (s−3)/(s² + 5s + 6) = (s−3)/[(s+2)(s+3)].

The derivative of the denominator is Q^(1)(s) = 2s + 5, from which, for the roots of Q(s),

Q^(1)(−2) = 1,   Q^(1)(−3) = −1.

Hence, P(−2) = −5, P(−3) = −6, and the final value for f(t) is

f(t) = −5e^{−2t} + 6e^{−3t}.

Example 5.17

Find the inverse Laplace transform of the following function using the Heaviside expansion theorem:

ℒ^{−1}{(2s+3)/(s² + 4s + 7)}.

SOLUTION
The roots of the denominator are complex:

s² + 4s + 7 = (s + 2 + j√3)(s + 2 − j√3).

The derivative of the denominator is Q^(1)(s) = 2s + 4. We deduce the values of P(s)/Q^(1)(s) for each root:

For s1 = −2 − j√3:   Q^(1)(s1) = −j2√3,   P(s1) = −1 − j2√3.
For s2 = −2 + j√3:   Q^(1)(s2) = +j2√3,   P(s2) = −1 + j2√3.

Then

f(t) = [(−1 − j2√3)/(−j2√3)]e^{(−2−j√3)t} + [(−1 + j2√3)/(j2√3)]e^{(−2+j√3)t}
= e^{−2t}[(e^{j√3 t} + e^{−j√3 t}) + (j/(2√3))(e^{j√3 t} − e^{−j√3 t})]
= e^{−2t}[2 cos(√3 t) − (1/√3) sin(√3 t)].

5.5 Solution of Ordinary Linear Equations with Constant Coefficients

The Laplace transform is used to solve homogeneous and nonhomogeneous ordinary differential equations or systems of such equations. To illustrate the procedure, we consider a number of examples.

Example 5.18

Find the solution of the differential equation dy/dt + ay = x(t) subject to the prescribed initial condition y(0+).

SOLUTION
Laplace transform this differential equation. This is accomplished by multiplying each term by e^{−st}dt and integrating from 0 to ∞. The result of this operation is

sY(s) − y(0+) + aY(s) = X(s),

from which

Y(s) = X(s)/(s+a) + y(0+)/(s+a).

If the input x(t) is the unit step function u(t), then X(s) = 1/s and the final expression for Y(s) is

Y(s) = 1/[s(s+a)] + y(0+)/(s+a).

Upon taking the inverse transform of this expression,

y(t) = ℒ^{−1}{Y(s)} = ℒ^{−1}{(1/a)[1/s − 1/(s+a)] + y(0+)/(s+a)},

with the result

y(t) = (1/a)(1 − e^{−at}) + y(0+)e^{−at}.
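That this y(t) actually satisfies the differential equation of Example 5.18 with x(t) = u(t) can be verified pointwise; the sketch below (ours) uses a central-difference derivative and arbitrary sample values for a and y(0+).

```python
import math

a, y0 = 1.5, 0.25
def y(t):
    # Solution from Example 5.18 for x(t) = u(t):
    return (1.0 - math.exp(-a * t)) / a + y0 * math.exp(-a * t)

h = 1e-6
for t in (0.1, 0.5, 2.0):
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dydt + a * y(t) - 1.0) < 1e-6   # dy/dt + a y = 1 for t > 0

assert abs(y(0.0) - y0) < 1e-12                # initial condition y(0+) = y0
```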
Example 5.19

Find the general solution to the differential equation

d²y/dt² + 5 dy/dt + 4y = 10,

subject to zero initial conditions.

SOLUTION
Laplace transform this differential equation. The result is

s²Y(s) + 5sY(s) + 4Y(s) = 10/s.

Solving for Y(s), we get

Y(s) = 10/[s(s² + 5s + 4)] = 10/[s(s+1)(s+4)].

Expand this into partial-fraction form, thus:

Y(s) = A/(s+1) + B/(s+4) + C/s,

where

A = Y(s)(s+1)|_{s=−1} = 10/[s(s+4)]|_{s=−1} = −10/3,
B = Y(s)(s+4)|_{s=−4} = 10/[s(s+1)]|_{s=−4} = 10/12,
C = sY(s)|_{s=0} = 10/[(s+1)(s+4)]|_{s=0} = 10/4.

Then

Y(s) = 10[−1/(3(s+1)) + 1/(12(s+4)) + 1/(4s)],

and the inverse transform is

y(t) = 10[−(1/3)e^{−t} + (1/12)e^{−4t} + 1/4].

Example 5.20

Find the velocity of the system shown in Figure 5.6a when the applied force is f(t) = e^{−t}u(t). Assume zero initial conditions. Solve the same problem using convolution techniques. The input is the force and the output is the velocity.

SOLUTION
The controlling equation is, from Figure 5.6b,

dv/dt + 5v + 4∫_0^t v dt = e^{−t}u(t).

Laplace transform this equation and then solve for V(s). We obtain

V(s) = s/[(s+1)(s² + 5s + 4)] = s/[(s+1)²(s+4)].

Write this expression in the form

V(s) = A/(s+4) + B/(s+1) + C/(s+1)²,

where

A = s/(s+1)²|_{s=−4} = −4/9,
B = (1/1!)(d/ds)[s/(s+4)]|_{s=−1} = 4/(s+4)²|_{s=−1} = 4/9,
C = s/(s+4)|_{s=−1} = −1/3.

The inverse transform of V(s) is given by

v(t) = −(4/9)e^{−4t} + (4/9)e^{−t} − (1/3)te^{−t},   t ≥ 0.

To find v(t) by the use of the convolution integral, we first find h(t), the impulse response of the system. The quantity h(t) is specified by
dh/dt + 5h + 4∫_0^t h dt = δ(t),

where the system is assumed to be initially relaxed. The Laplace transform of this equation yields

H(s) = s/(s² + 5s + 4) = s/[(s+4)(s+1)] = (4/3)·1/(s+4) − (1/3)·1/(s+1).

The inverse transform of this expression is easily found to be

h(t) = (4/3)e^{−4t} − (1/3)e^{−t},   t ≥ 0.

The output of the system to the input e^{−t}u(t) is written

v(t) = ∫_0^t h(τ)f(t−τ)dτ = ∫_0^t [(4/3)e^{−4τ} − (1/3)e^{−τ}] e^{−(t−τ)} dτ
= e^{−t}[(4/3)∫_0^t e^{−3τ}dτ − (1/3)∫_0^t dτ]
= e^{−t}[(4/9)(1 − e^{−3t}) − t/3]
= (4/9)e^{−t} − (4/9)e^{−4t} − (1/3)te^{−t},   t ≥ 0.

This result is identical with that found using the Laplace transform technique.

FIGURE 5.6 The mechanical system (M = 1, D = 5, K = 4) and its network equivalent.
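The agreement between the two methods of Example 5.20 can also be confirmed by brute force: numerically convolve the impulse response h(t) with the input e^{−t}u(t) and compare with the transform-domain answer. The helper code below is our own sketch.

```python
import math

h_imp = lambda t: (4.0 / 3.0) * math.exp(-4.0 * t) - (1.0 / 3.0) * math.exp(-t)
f_in = lambda t: math.exp(-t)
v_exact = lambda t: (-(4.0 / 9.0) * math.exp(-4.0 * t)
                     + (4.0 / 9.0) * math.exp(-t)
                     - (1.0 / 3.0) * t * math.exp(-t))

def conv(t, n=20000):
    # Trapezoidal rule for v(t) = integral of h(tau) f(t - tau), tau in [0, t].
    step = t / n
    total = 0.5 * (h_imp(0.0) * f_in(t) + h_imp(t) * f_in(0.0))
    for k in range(1, n):
        tau = k * step
        total += h_imp(tau) * f_in(t - tau)
    return total * step

for t in (0.5, 1.0, 3.0):
    assert abs(conv(t) - v_exact(t)) < 1e-7
```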
Example 5.21

Find an expression for the voltage v2(t) for t > 0 in the circuit of Figure 5.7. The source v1(t), the current iL(0−) through L = 2 H, and the voltage vc(0−) across the capacitor C = 1 F at the switching instant are all assumed to be known.

SOLUTION
After the switch is closed, the circuit is described by the loop equations

3i1 − i2 + 2 d(i1 − i2)/dt = v1(t),
−i1 + 3i2 + 2 d(i2 − i1)/dt + ∫ i2 dt = 0,
v2(t) = 2i2(t).

All terms in these equations are Laplace transformed. The result is the set of equations

(3 + 2s)I1(s) − (1 + 2s)I2(s) = V1(s) + 2[i1(0+) − i2(0+)],
−(1 + 2s)I1(s) + (3 + 2s + 1/s)I2(s) = −2[i1(0+) − i2(0+)] − q2(0+)/s,
V2(s) = 2I2(s).

The current through the inductor is

iL(t) = i1(t) − i2(t),

so that at the instant t = 0+, iL(0+) = i1(0+) − i2(0+). Also, because

vc(t) = (1/C)∫_{−∞}^t i2(τ)dτ = (1/C)∫_0^t i2(τ)dτ + (1/C)∫_{−∞}^0 i2(τ)dτ,

and the first term vanishes in the limit t → 0+, then

q2(0+)/C = q2(0+)/1 = vc(0+) = vc(0−).

The equation set is solved for I2(s) by Cramer's rule. The system determinant is

Δ(s) = (3 + 2s)(3 + 2s + 1/s) − (1 + 2s)² = (8s² + 10s + 3)/s,

and

I2(s) = [(3 + 2s)(−2iL(0+) − vc(0+)/s) + (1 + 2s)(V1(s) + 2iL(0+))]/Δ(s)
= [(2s² + s)V1(s) − 4s·iL(0+) − (2s + 3)vc(0+)]/(8s² + 10s + 3).

Further, V2(s) = 2I2(s), and upon taking the inverse transform,

v2(t) = 2ℒ^{−1}{I2(s)}.

If the circuit contains no stored energy at t = 0, then iL(0+) = vc(0+) = 0 and now

v2(t) = 2ℒ^{−1}{(2s² + s)V1(s)/(8s² + 10s + 3)}.

For the particular case when v1 = u(t), so that V1(s) = 1/s,

v2(t) = 2ℒ^{−1}{(2s + 1)/(8s² + 10s + 3)} = 2ℒ^{−1}{(2s + 1)/[8(s + 1/2)(s + 3/4)]}
= ℒ^{−1}{(1/2)/(s + 3/4)} = (1/2)e^{−3t/4},   t ≥ 0.

FIGURE 5.7 The circuit for Example 5.21.

The validity of this result is readily confirmed because at the instant t = 0+ the inductor behaves as an open circuit and the capacitor behaves as a short circuit. Thus, at this instant, the circuit appears as two equal resistors in a simple series circuit and the voltage is shared equally.
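The last simplification in Example 5.21 — that 2(2s² + s)V1(s)/(8s² + 10s + 3) with V1(s) = 1/s collapses to (1/2)/(s + 3/4) — rests on the factorization 8s² + 10s + 3 = (2s + 1)(4s + 3) and cancellation of the common factor 2s + 1. A point check (ours):

```python
# 2 (2s^2 + s) / (s (8s^2 + 10s + 3)) should equal 0.5/(s + 0.75),
# the transform of (1/2) e^{-3t/4}.
for s in (0.3, 1.0, 5.0):
    lhs = 2.0 * (2.0 * s**2 + s) / (s * (8.0 * s**2 + 10.0 * s + 3.0))
    rhs = 0.5 / (s + 0.75)
    assert abs(lhs - rhs) < 1e-12
```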
Example 5.22

The input to the RL circuit shown in Figure 5.8a is the recurrent series of impulse functions shown in Figure 5.8b. Find the output current.

SOLUTION
The differential equation that characterizes the system is

di(t)/dt + i(t) = v(t).

For zero initial current through the inductor, the Laplace transform of the equation is

(s + 1)I(s) = V(s).

Now, from the fact that ℒ{δ(t)} = 1 and the shifting property of Laplace transforms, we can write the explicit form for V(s), the train of impulses of strength 2 at t = 0, 2, 4, ... and strength 1 at t = 1, 3, 5, ...:

V(s) = 2 + e^{−s} + 2e^{−2s} + e^{−3s} + 2e^{−4s} + ...
= (2 + e^{−s})(1 + e^{−2s} + e^{−4s} + ...)
= (2 + e^{−s})/(1 − e^{−2s}).

Thus, we must evaluate i(t) from

I(s) = (2 + e^{−s})/[(1 − e^{−2s})(s + 1)] = 2/[(1 − e^{−2s})(s + 1)] + e^{−s}/[(1 − e^{−2s})(s + 1)].

Expand these expressions into

I(s) = [2/(s + 1)][1 + e^{−2s} + e^{−4s} + e^{−6s} + ...] + [1/(s + 1)][e^{−s} + e^{−3s} + e^{−5s} + e^{−7s} + ...].

The inverse transform of these expressions yields

i(t) = 2e^{−t}u(t) + 2e^{−(t−2)}u(t−2) + 2e^{−(t−4)}u(t−4) + ...
     + e^{−(t−1)}u(t−1) + e^{−(t−3)}u(t−3) + e^{−(t−5)}u(t−5) + ...

FIGURE 5.8 (a) The RL circuit (R = 1 Ω, L = 1 H); (b) the input pulse train.

FIGURE 5.9 The response of the RL circuit to the pulse train.

The result has been sketched in Figure 5.9.
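The series for i(t) can be evaluated directly by summing the exponentials that have "turned on" by time t; the sketch below (ours) checks two points against hand-expanded terms.

```python
import math

def i_of_t(t):
    # i(t) = sum of 2 e^{-(t-2k)} u(t-2k)  +  sum of e^{-(t-(2k+1))} u(t-(2k+1))
    total = 0.0
    k = 0
    while 2 * k <= t:                      # impulses of strength 2 at t = 0, 2, 4, ...
        total += 2.0 * math.exp(-(t - 2 * k))
        k += 1
    k = 0
    while 2 * k + 1 <= t:                  # impulses of strength 1 at t = 1, 3, 5, ...
        total += math.exp(-(t - (2 * k + 1)))
        k += 1
    return total

assert abs(i_of_t(0.5) - 2.0 * math.exp(-0.5)) < 1e-12
expected = 2.0 * math.exp(-2.5) + math.exp(-1.5) + 2.0 * math.exp(-0.5)
assert abs(i_of_t(2.5) - expected) < 1e-12
```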
5.6 The Inversion Integral

The discussion in Section 5.4 related the inverse Laplace transform to the direct Laplace transform by the expressions

F(s) = ℒ{f(t)},   (5.41a)
f(t) = ℒ^{−1}{F(s)}.   (5.41b)

The subsequent discussion indicated that the f(t) so deduced is unique: there is no other f(t) that yields the specified F(s). We found that although f(t) represents a real function of the positive real variable t, the transform F(s) can assume a complex variable form. A closed mathematical form for the inverse Laplace transform is not essential for simple functions that satisfy the Dirichlet conditions; however, Table A.5.1 is not adequate for many functions when s is a complex variable, and an analytic form for the inversion process of Equation 5.41b is required.

To deduce the complex inversion integral, we begin with the Cauchy second integral theorem, which is written

∮_C F(z)/(s − z) dz = j2πF(s).

Then

j2πℒ^{−1}{F(s)} = lim_{v→∞} ∫_{σ−jv}^{σ+jv} F(z)ℒ^{−1}{1/(s − z)} dz = lim_{v→∞} ∫_{σ−jv}^{σ+jv} F(z)e^{zt} dz,

so that

f(t) = (1/2πj) lim_{v→∞} ∫_{σ−jv}^{σ+jv} e^{zt}F(z) dz = (1/2πj)∫_{σ−j∞}^{σ+j∞} e^{st}F(s) ds.   (5.42)

This equation applies equally well to both the one-sided and the two-sided transforms. It was pointed out in Section 5.1 that the path of integration in Equation 5.42 is restricted to values of σ for which the direct transform formula converges. In fact, for the two-sided Laplace transform, the region of convergence must be specified in order to determine the inverse transform uniquely: whether f(t) is zero for t > 0, zero for t < 0, or in neither category must be distinguished. For the one-sided transform, the region of convergence is given by σ > σa, where σa is the abscissa of absolute convergence.

The path of integration in Equation 5.42 is usually taken as shown in Figure 5.10 and consists of the straight line ABC, displaced to the right of the origin by σ and extending in the limit from −j∞ to +j∞, with connecting semicircles. The evaluation of the integral usually proceeds by using the Cauchy integral theorem, which specifies that

f(t) = (1/2πj) lim_{R→∞} ∮_{Γ1} F(s)e^{st} ds
= Σ [residues of F(s)e^{st} at the singularities to the left of ABC],   t > 0,

because the contribution to the integral around the circular path with R → ∞ is zero, leaving the desired integral along the path ABC. Similarly, closing the contour to the right,

f(t) = (1/2πj) lim_{R→∞} ∮_{Γ2} F(s)e^{st} ds
= −Σ [residues of F(s)e^{st} at the singularities to the right of ABC],   t < 0,

the minus sign arising because Γ2 is traversed clockwise.

FIGURE 5.10 The path of integration in the s-plane.

Example 5.23

Use the inversion integral to find f(t) for the function

F(s) = 1/(s² + w²).

Note that by entry 15 of Table A.5.1, this is sin(wt)/w.
SOLUTION
The inversion integral is written in a form that shows the poles of the integrand:

f(t) = (1/2πj)∮ e^{st}/[(s + jw)(s − jw)] ds.

The path chosen is Γ1 in Figure 5.10. Evaluate the residues:

Res[(s − jw) e^{st}/(s² + w²)]|_{s=jw} = e^{st}/(s + jw)|_{s=jw} = e^{jwt}/(2jw),
Res[(s + jw) e^{st}/(s² + w²)]|_{s=−jw} = e^{st}/(s − jw)|_{s=−jw} = −e^{−jwt}/(2jw).

Therefore,

f(t) = Σ Res = (e^{jwt} − e^{−jwt})/(2jw) = sin(wt)/w.
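Equation 5.42 can also be evaluated numerically for this F(s): truncate the vertical line at ±Y and apply a trapezoidal rule. The sketch below is ours; σ = 1 is an arbitrary choice to the right of the poles at ±jw, and the truncation/step sizes are tuned only to reach roughly three-decimal agreement with sin(wt)/w.

```python
import cmath
import math

w, sigma = 2.0, 1.0
F = lambda s: 1.0 / (s * s + w * w)

def bromwich(t, Y=500.0, n=250000):
    # f(t) ~ (1/2 pi) * integral over y in [-Y, Y] of F(sigma + jy) e^{(sigma + jy) t}
    h = 2.0 * Y / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        y = -Y + k * h
        s = complex(sigma, y)
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * F(s) * cmath.exp(s * t)
    return (total * h / (2.0 * math.pi)).real

for t in (0.5, 1.0):
    assert abs(bromwich(t) - math.sin(w * t) / w) < 1e-3
```

For transforms that decay only like 1/s this direct truncation converges too slowly to be practical, which is one motivation for the contour techniques that follow.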
Example 5.24

Evaluate ℒ^{−1}{1/√s}.

SOLUTION
The function F(s) = 1/√s is double-valued because of the square root operation. That is, if s is represented in polar form by re^{jθ}, then re^{j(θ+2π)} is a second acceptable representation, and √(re^{j(θ+2π)}) = −√(re^{jθ}), thus showing two different values for √s. But a double-valued function is not analytic and requires a special procedure in its solution. The procedure is to make the function analytic by restricting the angle of s to the range −π < θ < π and by excluding the point s = 0. This is done by constructing a branch cut along the negative real axis, as shown in Figure 5.11. The end of the branch cut, which is the origin in this case, is called a branch point. Because a branch cut can never be crossed, this essentially ensures that F(s) is single-valued.

FIGURE 5.11 The integration contour for ℒ^{−1}{1/√s}.

Now, however, the inversion integral becomes, for t > 0,

f(t) = (1/2πj)∫_{σ−j∞}^{σ+j∞} F(s)e^{st} ds
= −(1/2πj)[∫_BC + ∫_{Γ2} + ∫_{l−} + ∫_γ + ∫_{l+} + ∫_{Γ3} + ∫_FG] F(s)e^{st} ds,   (5.45)

because the closed contour of Figure 5.11 does not include any singularity, so the line integral along GAB balances the remaining contributions.

First we show that for t > 0 the integrals over the arcs BC and FG vanish as R → ∞. Note from Figure 5.11 that β = cos^{−1}(σ/R), so that the integral over the arc BC is bounded, because |e^{jθ}| = 1, by

|I| ≤ ∫_β^{π/2} e^{σt} R^{−1/2} R dθ = e^{σt}√R (π/2 − β) = e^{σt}√R sin^{−1}(σ/R).

But for small arguments sin^{−1}(σ/R) ≈ σ/R, and in the limit as R → ∞, I → 0. By a similar approach, we find that the integral over FG is zero; the integrals over the large arcs Γ2 and Γ3 also vanish as R → ∞, since there Re(s) < 0 and e^{st} decays for t > 0.

For evaluating the integral over γ, let s = re^{jθ} = r(cos θ + j sin θ); then

∫_γ F(s)e^{st} ds = ∫ e^{r(cos θ + j sin θ)t} (r^{−1/2}e^{−jθ/2}) jre^{jθ} dθ → 0 as r → 0,

the integrand being proportional to √r.

The remaining integrals in Equation 5.45, along the two sides of the branch cut, are

f(t) = −(1/2πj)[∫_{l+} + ∫_{l−}] F(s)e^{st} ds.   (5.46)

Along path l+, s = ue^{jπ} = −u, √s = j√u, and ds = −du, where u is real and positive; with u running from ∞ to 0 along this path,

∫_{l+} F(s)e^{st} ds = (1/j)∫_0^∞ u^{−1/2} e^{−ut} du.

Along path l−, s = ue^{−jπ} = −u, √s = −j√u (not +j√u), and ds = −du; with u now running from 0 to ∞,

∫_{l−} F(s)e^{st} ds = (1/j)∫_0^∞ u^{−1/2} e^{−ut} du.

Combine these results to find

f(t) = −(1/2πj)·(2/j)∫_0^∞ u^{−1/2} e^{−ut} du = (1/π)∫_0^∞ u^{−1/2} e^{−ut} du,   t > 0,

which is a standard-form integral with the value

f(t) = (1/π)√(π/t) = 1/√(πt).
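The final integral can be checked without any contour machinery. Substituting u = x² turns (1/π)∫_0^∞ u^{−1/2}e^{−ut}du into (2/π)∫_0^∞ e^{−x²t}dx, a Gaussian that plain quadrature handles easily; the sketch is ours.

```python
import math

def f_of_t(t, X=30.0, n=200000):
    # (2/pi) * integral of e^{-x^2 t} on [0, X] by the trapezoidal rule.
    h = X / n
    total = 0.5 * (1.0 + math.exp(-X * X * t))
    for k in range(1, n):
        x = k * h
        total += math.exp(-x * x * t)
    return (2.0 / math.pi) * total * h

for t in (0.25, 1.0, 4.0):
    assert abs(f_of_t(t) - 1.0 / math.sqrt(math.pi * t)) < 1e-7
```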
Example 5.25

Find the inverse Laplace transform of the function

F(s) = 1/[s(1 + e^{−s})].

SOLUTION
The integrand in the inversion integral, e^{st}/[s(1 + e^{−s})], possesses simple poles at s = 0 and s = jnπ, n = ±1, ±3, ±5, ... (odd values). These are illustrated in Figure 5.12. We see that the function e^{st}/[s(1 + e^{−s})] is analytic in the s-plane except at these simple poles. Hence, the integral is specified in terms of the residues at the various poles. We have, specifically,

Res[s e^{st}/(s(1 + e^{−s}))]|_{s=0} = 1/2,
Res[(s − jnπ) e^{st}/(s(1 + e^{−s}))]|_{s=jnπ} = 0/0,   n odd.   (5.47)

The problem we now face is that the second residue is indeterminate, the roots of the denominator d(s) = s(1 + e^{−s}) not being factorable. However, we know from complex function theory that

lim_{s→a} d(s)/(s − a) = (d/ds)d(s)|_{s=a},

because d(a) = 0. Combining this result with the residue definition,

Res[(s − a) n(s)/d(s)]|_{s=a} = n(s)/[(d/ds)d(s)]|_{s=a}.   (5.48)

By combining Equation 5.48 with Equation 5.47, with n(s) = e^{st}, (d/ds)d(s) = 1 + e^{−s} − se^{−s}, and e^{−jnπ} = −1 for odd n, we obtain

Res|_{s=jnπ} = e^{jnπt}/(jnπ),   n odd.

We obtain, by adding all of the residues,

f(t) = 1/2 + Σ_{n=−∞, n odd}^{∞} e^{jnπt}/(jnπ).

This can be rewritten as follows:

f(t) = 1/2 + [... + e^{−j3πt}/(−j3π) + e^{−jπt}/(−jπ) + e^{jπt}/(jπ) + e^{j3πt}/(j3π) + ...]
= 1/2 + Σ_{n=1, n odd}^{∞} (2j sin nπt)/(jnπ).

This assumes the form

f(t) = 1/2 + (2/π) Σ_{k=1}^{∞} sin((2k−1)πt)/(2k−1).   (5.49)

As a second approach to a solution to this problem, we will show the details in carrying out the contour integration. We choose the path shown in Figure 5.12, which includes semicircular hooks around each pole, the vertical connecting lines from hook to hook, and the semicircular path BCA at R → ∞. Thus, we examine

f(t) = (1/2πj)∮ e^{st}/[s(1 + e^{−s})] ds
= (1/2πj)[∫_BCA + ∫_{vertical connecting lines} + Σ∫_{hooks}] + Σ Res,   (5.50)

in which the three bracketed integrals are denoted I1, I2, and I3.

FIGURE 5.12 The pole distribution of the given function.

We consider the several integrals in this equation.
Integral I1. By setting s = Re^{jθ} and taking into consideration that cos θ < 0 for |θ| > π/2, the integrand vanishes on the large arc for t > 0, and I1 → 0 as R → ∞.

Integral I2. Along the y-axis, s = jy and

I2 = j∫_{−∞}^{∞} e^{jyt}/[jy(1 + e^{−jy})] dy.

Note that the integrand is an odd function, whence I2 = 0.

Integral I3. Consider a typical hook at s = jnπ. Because the limit of (s − jnπ)e^{st}/[s(1 + e^{−s})] as s → jnπ exists — it is the residue e^{jnπt}/(jnπ) found above — the integral around a vanishingly small semicircular hook contributes half of a full residue circuit. Thus, summing over all the poles, including s = 0,

(1/2πj)Σ∫_{hooks} e^{st}/[s(1 + e^{−s})] ds = (1/2)[1/2 + Σ_{n odd} e^{jnπt}/(jnπ)]
= (1/2)[1/2 + (2/π)Σ_{n=1, n odd}^{∞} (sin nπt)/n].

Finally, the residues enclosed within the contour are

Σ Res[e^{st}/(s(1 + e^{−s}))] = 1/2 + Σ_{n odd} e^{jnπt}/(jnπ) = 1/2 + (2/π)Σ_{n=1, n odd}^{∞} (sin nπt)/n,

which is seen to be twice the value around the hooks. Then, when all terms are included in Equation 5.50, the final result is

f(t) = 1/2 + (2/π)Σ_{n=1, n odd}^{∞} (sin nπt)/n = 1/2 + (2/π)Σ_{k=1}^{∞} sin((2k−1)πt)/(2k−1),

in agreement with Equation 5.49.

Suppose now that there are two transformable functions f1(t) and f2(t) that have the same transform,

ℒ{f1(t)} = ℒ{f2(t)} = F(s).

The difference between the two functions is written

φ(t) = f1(t) − f2(t),

where φ(t) is a transformable function. Thus,

ℒ{φ(t)} = F(s) − F(s) = 0.

Additionally,

φ(t) = ℒ_t^{−1}{0} = 0,   t > 0.

Therefore this requires that f1(t) = f2(t). The result shows that it is not possible to find two different functions by using two different values of σ in the inversion integral.
We now shall show that the direct and inverse transforms specified by Equation 5.30 and listed in Table A.5.1 constitute unique pairs. In this connection, we see that Equation 5.42 can be considered as proof of the following theorem:
THEOREM 5.17 Let F(s) be a function of a complex variable s that is analytic and of order O(s k) in the half-plane Re(s) c, where c and k are real constants, with k > 1. The inversion integral (Equation 5.42) written +t 1{F{(s)} along any line x ¼ s, with s c converges to the function f(t) that is independent of s, f (t) ¼ +t 1 {F(s)}
THEOREM 5.18 Only a single function f(t) that is sectionally continuous, of exponential order, and with a mean value at each point of discontinuity, corresponds to a given transform F(s).
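The series in Equation 5.49 can be spot-checked numerically. The sketch below (the function name and truncation level are our choices, not part of the text) sums the series; it should approach 1 on 0 < t < 1 and 0 on 1 < t < 2, the square wave that F(s) = 1/[s(1 + e^{−s})] represents.

```python
import math

def f_series(t, terms=2000):
    """Partial sum of f(t) = 1/2 + (2/pi) * sum_k sin((2k-1)*pi*t)/(2k-1)."""
    s = 0.5
    for k in range(1, terms + 1):
        n = 2 * k - 1  # odd harmonics only
        s += (2.0 / math.pi) * math.sin(n * math.pi * t) / n
    return s
```

At the midpoints of the intervals the partial sum is already very close to the square-wave values 1 and 0.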
5.7 Applications to Partial Differential Equations

The Laplace transformation can be very useful in the solution of partial differential equations. A basic class of partial differential equations applies to a wide range of problems; however, the form of the solution in a given case depends critically on the boundary conditions that apply in that particular case. In consequence, the steps in the solution often call on many different mathematical techniques. Generally, such problems lead to inverse transforms of more complicated functions of s than those met in most linear systems problems, and the inversion integral is often useful in their solution. The following examples demonstrate the approach to typical problems.

Example 5.26

Solve the typical heat conduction equation
∂²w/∂x² = ∂w/∂t, 0 < x < ∞, t ≥ 0,    (5.51)

subject to the conditions

C-1: w(x, 0) = f(x), t = 0
C-2: w(x, t) = 0 and ∂w/∂x = 0 at x = 0.

SOLUTION

Multiply both sides of Equation 5.51 by e^{−sx} dx and integrate from 0 to ∞. This defines

F(s, t) = ∫_0^∞ e^{−sx} w(x, t) dx.

Also,

∫_0^∞ (∂²w/∂x²) e^{−sx} dx = s²F(s, t) − s w(0+) − (∂w/∂x)(0+).

Equation 5.51 thus transforms, subject to C-2 and zero boundary conditions, to

dF/dt − s²F = 0.

The solution to this equation is

F = A e^{s²t}.

By an application of condition C-1, in transformed form, we have

F|_{t=0} = A = ∫_0^∞ f(λ) e^{−sλ} dλ.

The solution, subject to C-1, is then

F(s, t) = e^{s²t} ∫_0^∞ f(λ) e^{−sλ} dλ.

Now apply the inversion integral to recover w(x, t):

w(x, t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} [ ∫_0^∞ f(λ) e^{−sλ} dλ ] e^{s²t} e^{sx} ds = (1/2πj) ∫_0^∞ f(λ) dλ ∫_{σ−j∞}^{σ+j∞} e^{s²t − sλ + sx} ds.

Note that we can complete the square in the exponent,

s²t + s(x − λ) = [s√t + (x − λ)/(2√t)]² − (x − λ)²/(4t),

so that with s = jy the inner integral becomes a standard Gaussian integral; using

∫_{−∞}^{∞} e^{−u²} du = √π,

it evaluates to e^{−(x−λ)²/(4t)}/(2√(πt)). Thus, the final solution is

w(x, t) = (1/(2√(πt))) ∫_0^∞ f(λ) e^{−(x−λ)²/(4t)} dλ.
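As a quick numerical sanity check of this heat-kernel convolution (a sketch; the trapezoidal quadrature and the Gaussian test profile are our choices, and the integral is extended over the whole line for a profile defined for all λ), note that for f(λ) = e^{−λ²} the convolution has the closed form e^{−x²/(1+4t)}/√(1+4t):

```python
import math

def w(x, t, f, lo=-30.0, hi=30.0, n=4000):
    """Trapezoidal evaluation of (1/(2*sqrt(pi*t))) * integral f(l)*exp(-(x-l)^2/(4t)) dl."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        lam = lo + i * h
        wgt = 0.5 if i in (0, n) else 1.0
        total += wgt * f(lam) * math.exp(-(x - lam) ** 2 / (4.0 * t))
    return total * h / (2.0 * math.sqrt(math.pi * t))
```

For smooth, rapidly decaying integrands the trapezoidal rule is extremely accurate, so the check is stringent.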
Example 5.27

A semi-infinite medium, initially at temperature w = 0 throughout, has its face x = 0 maintained at temperature w0. Determine the temperature at any point of the medium at any subsequent time.

SOLUTION

The controlling equation for this problem is

∂²w/∂x² = (1/K) ∂w/∂t, K > 0,    (5.52)

with the boundary conditions:

a. w = w0 at x = 0, t > 0
b. w = 0 at t = 0, x > 0.

To proceed, multiply both sides of Equation 5.52 by e^{−st} dt and integrate from 0 to ∞. The transformed form of Equation 5.52 is

d²F/dx² − (s/K) F = 0.

The solution of this differential equation is

F = A e^{−x√(s/K)} + B e^{x√(s/K)}.

But F must be finite or zero for infinite x; therefore, B = 0 and

F(s, x) = A e^{−x√(s/K)}.

Apply boundary condition (a) in transformed form, namely

F(0, s) = ∫_0^∞ e^{−st} w0 dt = w0/s for x = 0.

Therefore,

A = w0/s,

and the solution in Laplace-transformed form is

F(s, x) = (w0/s) e^{−x√(s/K)}.    (5.53)

To find w(x, t) requires that we find the inverse transform of this expression. This requires evaluating the inversion integral

w(x, t) = (w0/2πj) ∫_{σ−j∞}^{σ+j∞} e^{−x√(s/K)} (e^{st}/s) ds.    (5.54)

This integral has a branch point at the origin (see Figure 5.13). To carry out the integration, we select a path such as that shown (see also Figure 5.11). The integral in Equation 5.54 is written

w(x, t) = (w0/2πj) [ ∫_{BC} + ∫_{Γ2} + ∫_{γ} + ∫_{ℓ−} + ∫_{ℓ+} + ∫_{Γ3} + ∫_{FG} ].

FIGURE 5.13 The path of integration.

As in Example 5.24,

∫_{Γ2} = ∫_{Γ3} = ∫_{BC} = ∫_{FG} = 0.

For the segments along the branch cut: for ℓ−, let s = re^{jπ}, and for ℓ+, let s = re^{−jπ}. Then, writing this sum Iℓ,

Iℓ = (1/2πj) ∫_0^∞ e^{−st} [ e^{jx√(s/K)} − e^{−jx√(s/K)} ] (ds/s) = (1/π) ∫_0^∞ e^{−st} sin(x√(s/K)) (ds/s).

Write

u = √(s/K), s = Ku², ds = 2Ku du.

Then we have

Iℓ = (2/π) ∫_0^∞ e^{−Ku²t} sin(ux) (du/u).

This is a known integral that can be written

Iℓ = (2/√π) ∫_0^{x/(2√(Kt))} e^{−u²} du.

Finally, consider the integral over the hook,

Iγ = (1/2πj) ∫_γ e^{st} e^{−x√(s/K)} (ds/s).

Let us write

s = re^{jθ}, ds = jre^{jθ} dθ, ds/s = j dθ;

then

Iγ = (j/2πj) ∫ e^{tre^{jθ}} e^{−x√(r/K) e^{jθ/2}} dθ.

For r → 0 the integrand tends to 1 and the angle traversed is 2π, so Iγ = j2π/(2πj) = 1. Hence, the sum of the integrals in Equation 5.54 becomes

w(x, t) = w0 [ 1 − (2/√π) ∫_0^{x/(2√(Kt))} e^{−u²} du ] = w0 [ 1 − erf( x/(2√(Kt)) ) ].
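Equation 5.55 is the complementary-error-function solution. A short check (a sketch; the parameter values are arbitrary) confirms the boundary values and verifies, to finite-difference accuracy, that it satisfies ∂w/∂t = K ∂²w/∂x²:

```python
import math

def w(x, t, w0=1.0, K=1.0):
    """Eq. (5.55): w(x,t) = w0 * erfc(x / (2*sqrt(K*t)))."""
    return w0 * math.erfc(x / (2.0 * math.sqrt(K * t)))

h = 1e-3
x0, t0 = 1.0, 0.5
w_t = (w(x0, t0 + h) - w(x0, t0 - h)) / (2 * h)          # time derivative
w_xx = (w(x0 + h, t0) - 2 * w(x0, t0) + w(x0 - h, t0)) / h**2  # space derivative
```

Here `math.erfc` is the standard-library complementary error function.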
Example 5.28

A finite medium of length l is at initial temperature w0. There is no heat flow across the boundary at x = 0, and the face at x = l is then kept at temperature w1 (see Figure 5.14). Determine the temperature w(x, t).

FIGURE 5.14 Details for Example 5.28.

SOLUTION

Here we have to solve

∂²w/∂x² = (1/k) ∂w/∂t, 0 ≤ x ≤ l, t > 0,

subject to the boundary conditions:

a. w = w0, t = 0, 0 ≤ x ≤ l
b. w = w1, t > 0, x = l
c. ∂w/∂x = 0, t > 0, x = 0.

Upon Laplace-transforming the controlling differential equation (the initial value w0 enters through the ∂w/∂t term), we obtain

d²F/dx² − (s/k)F = −w0/k.

The homogeneous solution is

A' e^{−x√(s/k)} + B' e^{x√(s/k)} = A cosh(x√(s/k)) + B sinh(x√(s/k)),

and a particular solution is w0/s. By condition c,

dF/dx = 0, x = 0, t > 0.

This imposes the requirement that B = 0, so that

F = w0/s + A cosh(x√(s/k)).

Now condition b is imposed. This requires that

w1/s = w0/s + A cosh(l√(s/k)), hence A = (w1 − w0)/[s cosh(l√(s/k))].

Thus, the final form of the Laplace-transformed equation that satisfies all conditions of the problem is

F = w0/s + ((w1 − w0)/s) cosh(x√(s/k))/cosh(l√(s/k)).

To find the expression for w(x, t), we must invert this expression. That is,

w(x, t) = w0 + ((w1 − w0)/2πj) ∫_{σ−j∞}^{σ+j∞} e^{st} [ cosh(x√(s/k)) / (s cosh(l√(s/k))) ] ds.    (5.56)

The integrand is a single-valued function of s, with poles at s = 0 and at s = −k(n − ½)²π²/l², n = 1, 2, .... We select the path of integration shown in Figure 5.15; the inversion integral over the path BCA (= Γ) is zero. By an application of the Cauchy integral theorem, the inversion integral reduces to the residues of the integrand at its poles. There results

Res|_{s=0} = 1,

Res|_{s=−k(n−½)²π²/l²} = e^{−k(n−½)²π²t/l²} cosh(j(n − ½)πx/l) / [ s (d/ds) cosh(l√(s/k)) ]|_{s=−k(n−½)²π²/l²} = [4(−1)ⁿ/((2n − 1)π)] e^{−k(n−½)²π²t/l²} cos((n − ½)πx/l).

FIGURE 5.15 The path of integration for Example 5.28.

Combining these residues with Equation 5.56, we write finally

w(x, t) = w0 + (w1 − w0)[1 + Σ_n Res] = w1 + (4(w1 − w0)/π) Σ_{n=1}^{∞} [(−1)ⁿ/(2n − 1)] e^{−k(n−½)²π²t/l²} cos((n − ½)πx/l).
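The series solution of Example 5.28 can be verified numerically (a sketch with illustrative parameter values; the truncation level is ours): at x = l every cosine vanishes, so the series returns w1 exactly; at very small t the interior is still at w0; and for large t the medium relaxes to w1.

```python
import math

def w_series(x, t, l=1.0, k=1.0, w0=0.0, w1=1.0, terms=200):
    """w(x,t) = w1 + (4(w1-w0)/pi) * sum (-1)^n/(2n-1) * exp(-k (n-1/2)^2 pi^2 t / l^2) * cos((n-1/2) pi x / l)."""
    s = 0.0
    for n in range(1, terms + 1):
        lam = (n - 0.5) * math.pi / l
        s += ((-1) ** n / (2 * n - 1)) * math.exp(-k * lam * lam * t) * math.cos(lam * x)
    return w1 + (4.0 * (w1 - w0) / math.pi) * s
```

The three limiting behaviors are exactly the boundary and initial conditions a–c of the problem.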
Example 5.29

A circular cylinder of radius a is initially at temperature zero. The surface is then maintained at temperature w0. Determine the temperature of the cylinder at any subsequent time t.

SOLUTION

The heat conduction equation in radial form is

∂²w/∂r² + (1/r)∂w/∂r = (1/k)∂w/∂t, 0 ≤ r < a, t > 0.    (5.57)

For this problem the system is subject to the boundary conditions

C-1. w = 0, t = 0, 0 ≤ r < a
C-2. w = w0, t > 0, r = a.

To proceed, we multiply each term in the partial differential equation by e^{−st} dt and integrate. We write

∫_0^∞ w e^{−st} dt = F(r, s).

Then Equation 5.57 transforms to

d²F/dr² + (1/r)dF/dr − (s/k)F = 0,

which we write in the form

d²F/dr² + (1/r)dF/dr − m²F = 0, m = √(s/k).

This is the Bessel equation of order 0 and has the solution

F = A I0(mr) + B N0(mr).

However, the requirement that F remain bounded at r = 0 imposes the condition B = 0, because N0(mr) is not finite at r = 0. Thus,

F = A I0(mr).

The boundary condition C-2 requires F(a, s) = w0/s when r = a; hence

A = w0/(s I0(ma)),

so that

F = (w0/s) I0(mr)/I0(ma).    (5.58)

To find the function w(r, t) requires that we invert this function. By an application of the inversion integral, we write

w(r, t) = (w0/2πj) ∫_{σ−j∞}^{σ+j∞} e^{λt} [ I0(ξr)/I0(ξa) ] (dλ/λ), ξ = √(λ/k).    (5.59)

Note that I0(ξr)/I0(ξa) is a single-valued function of λ. To evaluate this integral, we choose as the path for the integration that shown in Figure 5.16. The poles of the integrand are at λ = 0 and at the zeros of I0(ξa); since I0(jx) = J0(x), these occur where J0(j_n a) = 0, that is, at λ = −k j_1², −k j_2², .... The asymptotic approximations for I0(ξr) and I0(ξa) show that the integral over the path BCA tends to zero. The resultant value of the integral is written in terms of the residues at λ = 0 and at λ = −k j_n². These are

Res|_{λ=0} = 1,
Res|_{λ=−kj_n²} = e^{−kj_n²t} J0(j_n r) / [ λ (d/dλ) I0(ξa) ]|_{λ=−kj_n²}.

FIGURE 5.16 The path of integration for Example 5.29.

Further,

λ (d/dλ) I0(ξa) = (ξa/2) I0^{(1)}(ξa),

which at λ = −k j_n² equals (j_n a/2) J0^{(1)}(j_n a). Hence, finally,

w(r, t) = w0 [ 1 + (2/a) Σ_{n=1}^{∞} e^{−kj_n²t} J0(j_n r) / (j_n J0^{(1)}(j_n a)) ].
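Since J0^{(1)}(x) = −J1(x), Equation 5.60 can also be written with −J1 in the denominator, and it is then easy to evaluate numerically. The sketch below (our truncation and parameter choices; J0 and J1 are coded by their power series, and the first five zeros of J0 are the standard tabulated values) confirms that w = w0 on the surface r = a and that w → w0 everywhere for large t.

```python
import math

J0_ZEROS = [2.404825557695773, 5.520078110286311, 8.653727912911013,
            11.791534439014281, 14.930917708487787]  # first zeros of J0

def bessel_j(n, x, terms=40):
    """Power series for J_n(x); adequate for the moderate x used here."""
    s = 0.0
    for m in range(terms):
        s += (-1) ** m / (math.factorial(m) * math.factorial(m + n)) * (x / 2.0) ** (2 * m + n)
    return s

def w_cyl(r, t, a=1.0, k=1.0, w0=1.0):
    """Eq. (5.60): w0 * [1 - (2/a) * sum exp(-k jn^2 t) J0(jn r) / (jn J1(jn a))], jn = alpha_n / a."""
    s = 0.0
    for alpha in J0_ZEROS:
        jn = alpha / a
        s += math.exp(-k * jn * jn * t) * bessel_j(0, jn * r) / (jn * bessel_j(1, jn * a))
    return w0 * (1.0 - (2.0 / a) * s)
```

With only five terms the series is accurate for all but the smallest times.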
Example 5.30

A semi-infinite stretched string is fixed at each end (at x = 0, and with vanishing displacement as x → ∞). It is given an initial transverse displacement and then released. Determine the subsequent motion of the string.

SOLUTION

This requires solving the wave equation

a² ∂²w/∂x² = ∂²w/∂t², x > 0, t > 0,    (5.61)

subject to the conditions

C-1. w(x, 0) = f(x) and ∂w/∂t = 0 at t = 0 (the string is released from rest); w(0, t) = 0
C-2. lim_{x→∞} w(x, t) = 0.

To proceed, multiply both sides of Equation 5.61 by e^{−st} dt and integrate. The result is the Laplace-transformed equation

a² d²F/dx² = s²F − s f(x),    (5.62)

with

C-1. F(0, s) = 0
C-2. lim_{x→∞} F(x, s) = 0.

To solve Equation 5.62 we carry out a second Laplace transform, but this one with respect to x; that is, ℒ_x{F(x, s)} = N(z, s). Thus,

N(z, s) = ∫_0^∞ F(x, s) e^{−zx} dx.

Also write Φ(z) = ℒ_x{f(x)}. Apply this transformation to both members of Equation 5.62, subject to F(0, s) = 0. The result is

s²N(z, s) − sΦ(z) = a²[ z²N(z, s) − (∂F/∂x)(0, s) ].

We denote (∂F/∂x)(0, s) by C. Then the solution of this equation is

N(z, s) = C/(z² − s²/a²) − (s/a²) Φ(z)/(z² − s²/a²).

The inverse transformation with respect to z, employing convolution, is

F(x, s) = (aC/s) sinh(sx/a) − (1/a) ∫_0^x f(ξ) sinh((s/a)(x − ξ)) dξ.

To satisfy the condition lim_{x→∞} F(x, s) = 0 requires that the sinh terms be replaced by their exponential forms. Thus, as x → ∞ the factors

sinh(sx/a) → e^{sx/a}/2, sinh((s/a)(x − ξ)) → (e^{sx/a}/2) e^{−sξ/a},

so that

F(x, s) → e^{sx/a} [ aC/(2s) − (1/2a) ∫_0^x f(ξ) e^{−sξ/a} dξ ], x → ∞.

But for this function to be zero as x → ∞ requires that

aC/s = (1/a) ∫_0^∞ f(ξ) e^{−sξ/a} dξ.

Combine this result with F(x, s) to get

2aF(x, s) = ∫_x^∞ f(ξ) e^{−s(ξ−x)/a} dξ − ∫_0^∞ f(ξ) e^{−s(x+ξ)/a} dξ + ∫_0^x f(ξ) e^{−s(x−ξ)/a} dξ.

Each integral in this expression is integrated by parts. Here we write

u = f(ξ), du = f^{(1)}(ξ) dξ; dv = e^{−s(ξ−x)/a} dξ, v = −(a/s) e^{−s(ξ−x)/a},

and similarly for the other exponentials. Since f(0) = 0, the resulting integrations lead to

F(x, s) = (1/s) f(x) + (1/2s) ∫_x^∞ f^{(1)}(ξ) e^{−s(ξ−x)/a} dξ − (1/2s) ∫_0^∞ f^{(1)}(ξ) e^{−s(x+ξ)/a} dξ − (1/2s) ∫_0^x f^{(1)}(ξ) e^{−s(x−ξ)/a} dξ.

We note by entry 61, Table A.5.1, that

ℒ^{−1}{ (1/s) e^{−s(ξ−x)/a} } = 1 when at > ξ − x, = 0 when at < ξ − x.

Hence

ℒ^{−1}{ (1/2s) ∫_x^∞ f^{(1)}(ξ) e^{−s(ξ−x)/a} dξ } = (1/2) ∫_x^{x+at} f^{(1)}(ξ) dξ = ½ f(x + at) − ½ f(x).

Similarly, the second integral inverts to ½ f(at − x) when at > x and to 0 when at < x, and the final term becomes ½ f(x) − ½ f(x − at) when at < x and ½ f(x) when at > x. Collecting all terms, the ½ f(x) contributions cancel and

w(x, t) = ½ [ f(x + at) + f(x − at) ], at < x;
w(x, t) = ½ [ f(x + at) − f(at − x) ], at > x,

that is, the familiar traveling-wave combination with odd reflection at the fixed end.

Example 5.31

A stretched string of length l is fixed at each end, as shown in Figure 5.17. It is plucked at the midpoint and then released at t = 0. The displacement is b. Find the subsequent motion.

SOLUTION

This problem requires the solution of

c² ∂²y/∂x² = ∂²y/∂t², 0 ≤ x ≤ l, t > 0,

subject to the conditions:

1. y = 2bx/l, 0 ≤ x ≤ l/2, t = 0
2. y = (2b/l)(l − x), l/2 ≤ x ≤ l, t = 0
3. ∂y/∂t = 0, t = 0
4. y = 0, x = 0 and x = l, t > 0.

Multiply the equation by e^{−st} dt and integrate in t. With condition 3, and writing f(x) for the initial displacement,

d²Y/dx² − (s²/c²)Y = −(s/c²) f(x),    (5.65)

subject to Y(0, s) = Y(l, s) = 0. To solve this equation, we proceed as in Example 5.30; that is, we apply a transformation on x, namely ℒ_x{Y(x, s)} = N(z, s). This yields

s²N(z, s) − sΦ(z) = c²[ z²N(z, s) − (∂Y/∂x)(0, s) ], Φ(z) = ℒ_x{f(x)},

and the solution then proceeds exactly as in Example 5.30.
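The closed form obtained in Example 5.30 for at < x is the traveling-wave combination ½[f(x + at) + f(x − at)]; a finite-difference check (a sketch; the Gaussian initial shape is only illustrative) confirms that it satisfies a² ∂²w/∂x² = ∂²w/∂t²:

```python
import math

def shape(x):
    """Illustrative smooth initial displacement (an assumption for this check)."""
    return math.exp(-x * x)

def w(x, t, a=1.0):
    """Traveling-wave solution of Example 5.30 (region at < x)."""
    return 0.5 * (shape(x + a * t) + shape(x - a * t))

h = 1e-3
x0, t0 = 0.4, 0.7
w_tt = (w(x0, t0 + h) - 2 * w(x0, t0) + w(x0, t0 - h)) / h**2
w_xx = (w(x0 + h, t0) - 2 * w(x0, t0) + w(x0 - h, t0)) / h**2
```

At t = 0 the formula reduces to the initial shape, as condition C-1 requires.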
If the function f(t) is of exponential order e^{σ1 t} for t > 0, then the region of convergence for t > 0 is Re(s) > σ1. If the function f(t) for t < 0 is of exponential order e^{σ2 t}, then the region of convergence is Re(s) < σ2. Hence, the bilateral transform F(s) exists and is analytic in the vertical strip defined by σ1 < Re(s) < σ2, provided, of course, that σ1 < σ2. If σ1 > σ2, no region of convergence exists and the inversion process cannot be performed. This region of convergence is shown in Figure 5.19.

Example 5.32

Find the bilateral Laplace transform of the signals f(t) = e^{−at}u(t) and f(t) = −e^{−at}u(−t) and specify their regions of convergence.

SOLUTION

Using the basic definition of the transform (Equation 5.66), we obtain

F1(s) = ∫_{−∞}^{∞} e^{−at}u(t) e^{−st} dt = ∫_0^∞ e^{−(s+a)t} dt = 1/(s + a),

and its region of convergence is Re(s) > −a. Similarly,

F2(s) = ∫_{−∞}^{∞} [−e^{−at}u(−t)] e^{−st} dt = −∫_{−∞}^0 e^{−(s+a)t} dt = 1/(s + a),

whose region of convergence is Re(s) < −a. The two signals share the same algebraic transform and are distinguished only by their regions of convergence.

As an illustration of inverting a two-sided transform, consider

F(s) = 3/[(s − 4)(s + 1)(s + 2)], −2 < Re(s) < −1.

For t > 0 we close the contour to the left and we obtain

f(t) = [ 3e^{st}/((s − 4)(s + 1)) ]_{s=−2} = ½ e^{−2t}, t > 0.

For t < 0, the contour closes to the right and now

f(t) = −[ 3e^{st}/((s − 4)(s + 2)) ]_{s=−1} − [ 3e^{st}/((s + 1)(s + 2)) ]_{s=4} = (3/5) e^{−t} − (1/10) e^{4t}, t < 0.
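The regions of convergence in Example 5.32 can be illustrated numerically (a sketch; the truncation T, step count, and sample values are our choices): the right-sided integral settles to 1/(s + a) only when Re(s) > −a, and grows without bound outside that region.

```python
import math

def right_transform(s, a, T=60.0, n=60000):
    """Trapezoidal approximation of integral_0^T e^{-at} e^{-st} dt.
    As T grows this converges to 1/(s+a) only for s > -a."""
    h = T / n
    total = 0.5 * (1.0 + math.exp(-(s + a) * T))
    for i in range(1, n):
        total += math.exp(-(s + a) * i * h)
    return total * h
```

The left-sided signal −e^{−at}u(−t) is covered by the same routine after the substitution u = −t, which turns its integral into −∫_0^∞ e^{(s+a)u} du, convergent only for s < −a.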
1/(s(s + a)(s + b)(s + c)) ↔ 1/(abc) − e^{−at}/(a(b − a)(c − a)) − e^{−bt}/(b(a − b)(c − b)) − e^{−ct}/(c(a − c)(b − c))
1/(s²(s + a)(s + b)(s + c)) ↔ t/(abc) − (ab + ac + bc)/(abc)² + e^{−at}/(a²(b − a)(c − a)) + e^{−bt}/(b²(a − b)(c − b)) + e^{−ct}/(c²(a − c)(b − c))
1/(s³(s + a)(s + b)(s + c)) ↔ t²/(2abc) − (ab + ac + bc)t/(abc)² + [(ab + ac + bc)² − abc(a + b + c)]/(abc)³ − e^{−at}/(a³(b − a)(c − a)) − e^{−bt}/(b³(a − b)(c − b)) − e^{−ct}/(c³(a − c)(b − c))
1/(s² + a²) ↔ (1/a) sin at
s/(s² + a²) ↔ cos at
1/(s² − a²) ↔ (1/a) sinh at
s/(s² − a²) ↔ cosh at
1/(s(s² + a²)) ↔ (1/a²)(1 − cos at)
1/(s²(s² + a²)) ↔ (1/a³)(at − sin at)
1/(s² + a²)² ↔ (1/(2a³))(sin at − at cos at)
s/(s² + a²)² ↔ (t/(2a)) sin at
s²/(s² + a²)² ↔ (1/(2a))(sin at + at cos at)
(s² − a²)/(s² + a²)² ↔ t cos at
s/((s² + a²)(s² + b²)) ↔ (cos at − cos bt)/(b² − a²)   (a² ≠ b²)
1/((s + a)² + b²) ↔ (1/b) e^{−at} sin bt
(s + a)/((s + a)² + b²) ↔ e^{−at} cos bt
1/[((s + a)² + b²)ⁿ] ↔ −(e^{−at}/(4^{n−1} b^{2n})) Σ_{r=1}^{n} C(2n − r − 1, n − 1)(−2t)^{r−1} (d^r/dt^r)[cos bt]
s/[((s + a)² + b²)ⁿ] ↔ (e^{−at}/(4^{n−1} b^{2n})) { Σ_{r=1}^{n} C(2n − r − 1, n − 1)(−2t)^{r−1} (d^r/dt^r)[a cos bt + b sin bt] − 2b Σ_{r=1}^{n−1} C(2n − r − 2, n − 1)(−2t)^{r−1} (d^r/dt^r)[sin bt] }
3a²/(s³ + a³) ↔ e^{−at} − e^{at/2}[cos(√3 at/2) − √3 sin(√3 at/2)]
4a³/(s⁴ + 4a⁴) ↔ sin at cosh at − cos at sinh at
s/(s⁴ + 4a⁴) ↔ (1/(2a²)) sin at sinh at
s/(s⁴ − a⁴) ↔ (1/(2a²))(cosh at − cos at)
1/(s⁴ − a⁴) ↔ (1/(2a³))(sinh at − sin at)
8a³s²/(s² + a²)³ ↔ (1 + a²t²) sin at − at cos at
(1/s)((s − 1)/s)ⁿ ↔ Lₙ(t) = (e^t/n!)(dⁿ/dtⁿ)(tⁿ e^{−t})   [Lₙ(t) is the Laguerre polynomial of degree n]
TABLE A.5.1 (continued)  Laplace Transform Pairs: F(s) ↔ f(t)

52. 1/(s + a)ⁿ ↔ t^{n−1} e^{−at}/(n − 1)!   (n a positive integer)
53. 1/(s(s + a)²) ↔ (1/a²)[1 − e^{−at} − at e^{−at}]
54. 1/(s²(s + a)²) ↔ (1/a³)[at − 2 + at e^{−at} + 2e^{−at}]
55. 1/(s(s + a)³) ↔ (1/a³)[1 − (½a²t² + at + 1) e^{−at}]
56. 1/((s + a)(s + b)²) ↔ e^{−at}/(a − b)² + [(a − b)t − 1] e^{−bt}/(a − b)²
57. 1/(s(s + a)(s + b)²) ↔ 1/(ab²) − e^{−at}/(a(a − b)²) − t e^{−bt}/(b(a − b)) − (a − 2b) e^{−bt}/(b²(a − b)²)
58. 1/(s²(s + a)(s + b)²) ↔ t/(ab²) − (2a + b)/(a²b³) + e^{−at}/(a²(a − b)²) + t e^{−bt}/(b²(a − b)) + [2(a − b) − b] e^{−bt}/(b³(a − b)²)
59. 1/((s + a)(s + b)(s + c)²) ↔ e^{−ct}[ t/((c − a)(c − b)) + (2c − a − b)/((c − a)²(c − b)²) ] + e^{−at}/((b − a)(c − a)²) + e^{−bt}/((a − b)(c − b)²)
60. 1/((s + a)(s² + ω²)) ↔ e^{−at}/(a² + ω²) + sin(ωt − φ)/(ω√(a² + ω²)), φ = tan⁻¹(ω/a)
61. 1/(s(s + a)(s² + ω²)) ↔ 1/(aω²) − e^{−at}/(a(a² + ω²)) − [(1/ω) sin ωt + (a/ω²) cos ωt]/(a² + ω²)
62. 1/(s²(s + a)(s² + ω²)) ↔ t/(aω²) − 1/(a²ω²) + e^{−at}/(a²(a² + ω²)) + cos(ωt + φ)/(ω³√(a² + ω²)), φ = tan⁻¹(a/ω)
63. 1/[((s + a)² + ω²)²] ↔ (e^{−at}/(2ω³))(sin ωt − ωt cos ωt)
64. 1/(s² − a²) ↔ (1/a) sinh at
65. 1/(s²(s² − a²)) ↔ (1/a³) sinh at − t/a²
66. 1/(s³(s² − a²)) ↔ (1/a⁴)(cosh at − 1) − t²/(2a²)
67. 1/(s³ + a³) ↔ (1/(3a²)){ e^{−at} − e^{at/2}[cos(√3 at/2) − √3 sin(√3 at/2)] }
68. 1/(s⁴ + 4a⁴) ↔ (1/(4a³))(sin at cosh at − cos at sinh at)
69. 1/(s⁴ − a⁴) ↔ (1/(2a³))(sinh at − sin at)
70. 1/((s + a)² − ω²) ↔ (1/ω) e^{−at} sinh ωt
71. (s + a)/(s[(s + b)² + ω²]) ↔ a/(b² + ω²) − (1/ω)√[((a − b)² + ω²)/(b² + ω²)] e^{−bt} sin(ωt + φ), φ = tan⁻¹(ω/(a − b)) + tan⁻¹(ω/b)
72. (s + a)/(s²[(s + b)² + ω²]) ↔ (1 + at)/(b² + ω²) − 2ab/(b² + ω²)² + [√((a − b)² + ω²)/(ω(b² + ω²))] e^{−bt} sin(ωt + φ), φ = tan⁻¹(ω/(a − b)) + 2 tan⁻¹(ω/b)

(continued)
TABLE A.5.1 (continued)  Laplace Transform Pairs: F(s) ↔ f(t)

73. (s + a)/((s + c)[(s + b)² + ω²]) ↔ (a − c) e^{−ct}/((c − b)² + ω²) + (1/ω)√[((a − b)² + ω²)/((c − b)² + ω²)] e^{−bt} sin(ωt + φ), φ = tan⁻¹(ω/(a − b)) − tan⁻¹(ω/(c − b))
74. (s + a)/(s(s + c)[(s + b)² + ω²]) ↔ a/(c(b² + ω²)) + (c − a) e^{−ct}/(c[(b − c)² + ω²]) − (1/ω)√[((a − b)² + ω²)/((b² + ω²)((b − c)² + ω²))] e^{−bt} sin(ωt + φ), φ = tan⁻¹(ω/(a − b)) + tan⁻¹(ω/b) − tan⁻¹(ω/(c − b))
75. (s + a)/(s²(s + b)³) ↔ (a/b³)t + (b − 3a)/b⁴ + [ (a − b)t²/(2b²) + (2a − b)t/b³ + (3a − b)/b⁴ ] e^{−bt}
76. (s + a)/((s + c)(s + b)³) ↔ (a − c) e^{−ct}/(b − c)³ + [ (a − b)t²/(2(c − b)) + (c − a)t/(c − b)² + (a − c)/(c − b)³ ] e^{−bt}
77. s²/((s + a)(s + b)(s + c)) ↔ a² e^{−at}/((b − a)(c − a)) + b² e^{−bt}/((a − b)(c − b)) + c² e^{−ct}/((a − c)(b − c))
78. s²/((s + a)(s + b)²) ↔ a² e^{−at}/(b − a)² + [ b²t/(a − b) − b(2a − b)/(a − b)² ] e^{−bt}
79. s²/(s + a)³ ↔ (1 − 2at + ½a²t²) e^{−at}
80. s²/((s + a)(s² + ω²)) ↔ a² e^{−at}/(a² + ω²) + (ω/√(a² + ω²)) cos(ωt + φ), φ = tan⁻¹(a/ω)
81. s²/((s + a)²(s² + ω²)) ↔ e^{−at}[ a²t/(a² + ω²) − 2aω²/(a² + ω²)² ] − (ω/(a² + ω²)) sin(ωt − φ), φ = 2 tan⁻¹(ω/a)
82. s²/((s + a)(s + b)(s² + ω²)) ↔ a² e^{−at}/((b − a)(a² + ω²)) + b² e^{−bt}/((a − b)(b² + ω²)) − [ω/√((a² + ω²)(b² + ω²))] sin(ωt − φ), φ = tan⁻¹(ω/a) + tan⁻¹(ω/b)
83. s²/((s² + a²)(s² + ω²)) ↔ (a sin at − ω sin ωt)/(a² − ω²)
84. s²/(s² + ω²)² ↔ (1/(2ω))(sin ωt + ωt cos ωt)
85. s²/((s + a)[(s + b)² + ω²]) ↔ a² e^{−at}/((a − b)² + ω²) − (1/ω)√[((b² − ω²)² + 4b²ω²)/((a − b)² + ω²)] e^{−bt} sin(ωt − φ), φ = 2 tan⁻¹(ω/b) + tan⁻¹(ω/(a − b))
86. s²/((s + a)²[(s + b)² + ω²]) ↔ e^{−at}[ a²t/((a − b)² + ω²) − 2a((b − a)² + ω² + a(b − a))/((b − a)² + ω²)² ] − [√((b² − ω²)² + 4b²ω²)/(ω((a − b)² + ω²))] e^{−bt} sin(ωt − φ), φ = 2 tan⁻¹(ω/b) + 2 tan⁻¹(ω/(a − b))
87. (s² + a)/(s²(s + b)) ↔ (a/b)t − a/b² + (b² + a) e^{−bt}/b²
88. (s² + a)/(s³(s + b)) ↔ (a/(2b))t² − (a/b²)t + (b² + a)/b³ − (b² + a) e^{−bt}/b³
89. (s² + a)/(s(s + b)(s + c)) ↔ a/(bc) + (b² + a) e^{−bt}/(b(b − c)) + (c² + a) e^{−ct}/(c(c − b))

(continued)
TABLE A.5.1 (continued)  Laplace Transform Pairs: F(s) ↔ f(t)

90. (s² + a)/(s²(s + b)(s + c)) ↔ (a/(bc))t − a(b + c)/(b²c²) + (b² + a) e^{−bt}/(b²(c − b)) + (c² + a) e^{−ct}/(c²(b − c))
91. (s² + a)/((s + b)(s + c)(s + d)) ↔ (b² + a) e^{−bt}/((c − b)(d − b)) + (c² + a) e^{−ct}/((b − c)(d − c)) + (d² + a) e^{−dt}/((b − d)(c − d))
92. (s² + a)/(s(s + b)(s + c)(s + d)) ↔ a/(bcd) + (b² + a) e^{−bt}/(b(b − c)(d − b)) + (c² + a) e^{−ct}/(c(b − c)(c − d)) + (d² + a) e^{−dt}/(d(b − d)(d − c))
93. (s² + a)/(s²(s + b)(s + c)(s + d)) ↔ (a/(bcd))t − a(bc + cd + db)/(bcd)² + (b² + a) e^{−bt}/(b²(b − c)(b − d)) + (c² + a) e^{−ct}/(c²(c − b)(c − d)) + (d² + a) e^{−dt}/(d²(d − b)(d − c))
94. (s² + a)/(s² + ω²)² ↔ [(a + ω²)/(2ω³)] sin ωt − [(a − ω²)/(2ω²)] t cos ωt
95. (s² − ω²)/(s² + ω²)² ↔ t cos ωt
96. (s² + a)/(s(s² + ω²)²) ↔ a/ω⁴ − (a/ω⁴) cos ωt − [(a − ω²)/(2ω³)] t sin ωt
97. s(s + a)/((s + b)(s + c)²) ↔ b(b − a) e^{−bt}/(c − b)² + [ c(c − a)t/(b − c) + (c² − 2bc + ab)/(b − c)² ] e^{−ct}
98. s(s + a)/((s + b)(s + c)(s + d)²) ↔ b(b − a) e^{−bt}/((c − b)(d − b)²) + c(c − a) e^{−ct}/((b − c)(d − c)²) + d(d − a) t e^{−dt}/((b − d)(c − d)) + [a(bc − d²) + d(db + dc − 2bc)] e^{−dt}/((b − d)²(c − d)²)
99. (s² + a1s + a0)/(s²(s + b)) ↔ (a0/b)t + (a1b − a0)/b² + (b² − a1b + a0) e^{−bt}/b²
100. (s² + a1s + a0)/(s³(s + b)) ↔ (a0/(2b))t² + [(a1b − a0)/b²]t + (b² − a1b + a0)/b³ − (b² − a1b + a0) e^{−bt}/b³
101. (s² + a1s + a0)/(s(s + b)(s + c)) ↔ a0/(bc) + (b² − a1b + a0) e^{−bt}/(b(b − c)) + (c² − a1c + a0) e^{−ct}/(c(c − b))
102. (s² + a1s + a0)/(s²(s + b)(s + c)) ↔ (a0/(bc))t + [a1bc − a0(b + c)]/(b²c²) + (b² − a1b + a0) e^{−bt}/(b²(c − b)) + (c² − a1c + a0) e^{−ct}/(c²(b − c))
103. (s² + a1s + a0)/((s + b)(s + c)(s + d)) ↔ (b² − a1b + a0) e^{−bt}/((c − b)(d − b)) + (c² − a1c + a0) e^{−ct}/((b − c)(d − c)) + (d² − a1d + a0) e^{−dt}/((b − d)(c − d))
104. (s² + a1s + a0)/(s(s + b)(s + c)(s + d)) ↔ a0/(bcd) − (b² − a1b + a0) e^{−bt}/(b(c − b)(d − b)) − (c² − a1c + a0) e^{−ct}/(c(b − c)(d − c)) − (d² − a1d + a0) e^{−dt}/(d(b − d)(c − d))
105. (s² + a1s + a0)/(s(s + b)²) ↔ a0/b² − [(b² − a1b + a0)/b] t e^{−bt} + [(b² − a0)/b²] e^{−bt}
106. (s² + a1s + a0)/(s²(s + b)²) ↔ (a0/b²)t + (a1b − 2a0)/b³ + [(b² − a1b + a0)/b²] t e^{−bt} + [(2a0 − a1b)/b³] e^{−bt}
107. (s² + a1s + a0)/((s + b)(s + c)²) ↔ (b² − a1b + a0) e^{−bt}/(c − b)² + [(c² − a1c + a0)/(b − c)] t e^{−ct} + [(c² − 2bc + a1b − a0)/(b − c)²] e^{−ct}
108. s³/((s + b)(s + c)(s + d)²) ↔ b³ e^{−bt}/((b − c)(d − b)²) + c³ e^{−ct}/((c − b)(d − c)²) + d³ t e^{−dt}/((d − b)(c − d)) + d²[d² − 2d(b + c) + 3bc] e^{−dt}/((b − d)²(c − d)²)
109. s³/((s + b)(s + c)(s + d)(s + f)²) ↔ b³ e^{−bt}/((b − c)(d − b)(f − b)²) + c³ e^{−ct}/((c − b)(d − c)(f − c)²) + d³ e^{−dt}/((d − b)(c − d)(f − d)²) + f³ t e^{−ft}/((f − b)(c − f)(d − f)) + [ 3f²/((b − f)(c − f)(d − f)) + f³((b − f)(c − f) + (b − f)(d − f) + (c − f)(d − f))/((b − f)²(c − f)²(d − f)²) ] e^{−ft}

(continued)
TABLE A.5.1 (continued)  Laplace Transform Pairs: F(s) ↔ f(t)

110. s³/((s + b)²(s + c)²) ↔ e^{−bt}[ b²(3c − b)/(c − b)³ − b³t/(c − b)² ] + e^{−ct}[ c²(3b − c)/(b − c)³ − c³t/(b − c)² ]
111. s³/((s + d)(s + b)²(s + c)²) ↔ −d³ e^{−dt}/((b − d)²(c − d)²) + e^{−bt}[ b³t/((b − d)(c − b)²) + 3b²/((c − b)²(d − b)) + b³(c + 2d − 3b)/((d − b)²(c − b)³) ] + e^{−ct}[ c³t/((c − d)(b − c)²) + 3c²/((b − c)²(d − c)) + c³(b + 2d − 3c)/((d − c)²(b − c)³) ]
112. s³/((s + b)(s + c)(s² + ω²)) ↔ b³ e^{−bt}/((b − c)(b² + ω²)) + c³ e^{−ct}/((c − b)(c² + ω²)) − [ω²/√((b² + ω²)(c² + ω²))] cos(ωt − φ), φ = tan⁻¹(ω/b) + tan⁻¹(ω/c)
113. s³/((s + b)(s + c)(s + d)(s² + ω²)) ↔ b³ e^{−bt}/((b − c)(d − b)(b² + ω²)) + c³ e^{−ct}/((c − b)(d − c)(c² + ω²)) + d³ e^{−dt}/((d − b)(c − d)(d² + ω²)) − [ω²/√((b² + ω²)(c² + ω²)(d² + ω²))] cos(ωt − φ), φ = tan⁻¹(ω/b) + tan⁻¹(ω/c) + tan⁻¹(ω/d)
114. s³/((s + b)²(s² + ω²)) ↔ e^{−bt}[ b²(b² + 3ω²)/(b² + ω²)² − b³t/(b² + ω²) ] − (ω²/(b² + ω²)) cos(ωt − φ), φ = 2 tan⁻¹(ω/b)
115. s³/(s⁴ + 4ω⁴) ↔ cos ωt cosh ωt
116. s³/(s⁴ − ω⁴) ↔ ½(cosh ωt + cos ωt)
117. (s³ + a2s² + a1s + a0)/(s²(s + b)(s + c)) ↔ (a0/(bc))t + [a1bc − a0(b + c)]/(b²c²) + (−b³ + a2b² − a1b + a0) e^{−bt}/(b²(c − b)) + (−c³ + a2c² − a1c + a0) e^{−ct}/(c²(b − c))
118. (s³ + a2s² + a1s + a0)/(s(s + b)(s + c)(s + d)) ↔ a0/(bcd) − (−b³ + a2b² − a1b + a0) e^{−bt}/(b(c − b)(d − b)) − (−c³ + a2c² − a1c + a0) e^{−ct}/(c(b − c)(d − c)) − (−d³ + a2d² − a1d + a0) e^{−dt}/(d(b − d)(c − d))
119. (s³ + a2s² + a1s + a0)/(s²(s + b)(s + c)(s + d)) ↔ (a0/(bcd))t + a1/(bcd) − a0(bc + bd + cd)/(bcd)² + (−b³ + a2b² − a1b + a0) e^{−bt}/(b²(c − b)(d − b)) + (−c³ + a2c² − a1c + a0) e^{−ct}/(c²(b − c)(d − c)) + (−d³ + a2d² − a1d + a0) e^{−dt}/(d²(b − d)(c − d))
120. (s³ + a2s² + a1s + a0)/((s + b)(s + c)(s + d)(s + f)) ↔ (−b³ + a2b² − a1b + a0) e^{−bt}/((c − b)(d − b)(f − b)) + (−c³ + a2c² − a1c + a0) e^{−ct}/((b − c)(d − c)(f − c)) + (−d³ + a2d² − a1d + a0) e^{−dt}/((b − d)(c − d)(f − d)) + (−f³ + a2f² − a1f + a0) e^{−ft}/((b − f)(c − f)(d − f))
121. (s³ + a2s² + a1s + a0)/(s(s + b)(s + c)(s + d)(s + f)) ↔ a0/(bcdf) − (−b³ + a2b² − a1b + a0) e^{−bt}/(b(c − b)(d − b)(f − b)) − (−c³ + a2c² − a1c + a0) e^{−ct}/(c(b − c)(d − c)(f − c)) − (−d³ + a2d² − a1d + a0) e^{−dt}/(d(b − d)(c − d)(f − d)) − (−f³ + a2f² − a1f + a0) e^{−ft}/(f(b − f)(c − f)(d − f))

(continued)
TABLE A.5.1 (continued)  Laplace Transform Pairs: F(s) ↔ f(t)

122. (s³ + a2s² + a1s + a0)/((s + b)(s + c)(s + d)(s + f)(s + g)) ↔ (−b³ + a2b² − a1b + a0) e^{−bt}/((c − b)(d − b)(f − b)(g − b)) + (−c³ + a2c² − a1c + a0) e^{−ct}/((b − c)(d − c)(f − c)(g − c)) + (−d³ + a2d² − a1d + a0) e^{−dt}/((b − d)(c − d)(f − d)(g − d)) + (−f³ + a2f² − a1f + a0) e^{−ft}/((b − f)(c − f)(d − f)(g − f)) + (−g³ + a2g² − a1g + a0) e^{−gt}/((b − g)(c − g)(d − g)(f − g))
123. (s³ + a2s² + a1s + a0)/((s + b)(s + c)(s + d)²) ↔ (−b³ + a2b² − a1b + a0) e^{−bt}/((c − b)(d − b)²) + (−c³ + a2c² − a1c + a0) e^{−ct}/((b − c)(d − c)²) + (−d³ + a2d² − a1d + a0) t e^{−dt}/((b − d)(c − d)) + [ a0(2d − b − c) + a1(bc − d²) + a2 d(db + dc − 2bc) + d²(d² − 2db − 2dc + 3bc) ] e^{−dt}/((b − d)²(c − d)²)
124. (s³ + a2s² + a1s + a0)/(s(s + b)(s + c)(s + d)²) ↔ a0/(bcd²) − (−b³ + a2b² − a1b + a0) e^{−bt}/(b(c − b)(d − b)²) − (−c³ + a2c² − a1c + a0) e^{−ct}/(c(b − c)(d − c)²) − (−d³ + a2d² − a1d + a0) t e^{−dt}/(d(b − d)(c − d)) − (3d² − 2a2d + a1) e^{−dt}/(d(b − d)(c − d)) − (−d³ + a2d² − a1d + a0)[(b − d)(c − d) − d(b − d) − d(c − d)] e^{−dt}/(d²(b − d)²(c − d)²)
125. (s³ + a2s² + a1s + a0)/((s + b)(s + c)(s + d)(s + f)²) ↔ (−b³ + a2b² − a1b + a0) e^{−bt}/((c − b)(d − b)(f − b)²) + (−c³ + a2c² − a1c + a0) e^{−ct}/((b − c)(d − c)(f − c)²) + (−d³ + a2d² − a1d + a0) e^{−dt}/((b − d)(c − d)(f − d)²) + e^{−ft}[ (−f³ + a2f² − a1f + a0) t/((b − f)(c − f)(d − f)) + (3f² − 2a2f + a1)/((b − f)(c − f)(d − f)) − (−f³ + a2f² − a1f + a0)((b − f)(c − f) + (b − f)(d − f) + (c − f)(d − f))/((b − f)²(c − f)²(d − f)²) ]
126. s/(s − a)^{3/2} ↔ (1/√(πt)) e^{at}(1 + 2at)
127. √(s − a) − √(s − b) ↔ (1/(2√(πt³)))(e^{bt} − e^{at})
128. 1/(√s + a) ↔ 1/√(πt) − a e^{a²t} erfc(a√t)
129. √s/(s − a²) ↔ 1/√(πt) + a e^{a²t} erf(a√t)
130. √s/(s + a²) ↔ 1/√(πt) − (2a/√π) e^{−a²t} ∫_0^{a√t} e^{λ²} dλ
131. 1/(√s(s − a²)) ↔ (1/a) e^{a²t} erf(a√t)
132. 1/(√s(s + a²)) ↔ (2/(a√π)) e^{−a²t} ∫_0^{a√t} e^{λ²} dλ
133. (b² − a²)/((s − a²)(b + √s)) ↔ e^{a²t}[b − a erf(a√t)] − b e^{b²t} erfc(b√t)
134. 1/(√s(√s + a)) ↔ e^{a²t} erfc(a√t)
135. 1/((s + a)√(s + b)) ↔ (1/√(b − a)) e^{−at} erf(√(b − a) √t)
136. (b² − a²)/(√s(s − a²)(√s + b)) ↔ e^{a²t}[(b/a) erf(a√t) − 1] + e^{b²t} erfc(b√t)
137. (1 − s)ⁿ/s^{n+1/2} ↔ (n!/((2n)! √(πt))) H_{2n}(√t)   [Hₙ(x) is the Hermite polynomial, Hₙ(x) = (−1)ⁿ e^{x²}(dⁿ/dxⁿ) e^{−x²}]

(continued)
TABLE A.5.1 (continued)  Laplace Transform Pairs: F(s) ↔ f(t)

138. (1 − s)ⁿ/s^{n+3/2} ↔ (n!/(√π (2n + 1)!)) H_{2n+1}(√t)
139. √(s + 2a)/√s − 1 ↔ a e^{−at}[I1(at) + I0(at)]   [Iₙ(t) = j^{−n} Jₙ(jt), where Jₙ is the Bessel function of the first kind]
140. 1/(√(s + a)√(s + b)) ↔ e^{−(a+b)t/2} I0((a − b)t/2)
141. Γ(k)/((s + a)ᵏ(s + b)ᵏ) ↔ √π (t/(a − b))^{k−1/2} e^{−(a+b)t/2} I_{k−1/2}((a − b)t/2)   (k > 0)
142. 1/((s + a)^{1/2}(s + b)^{3/2}) ↔ t e^{−(a+b)t/2}[ I0((a − b)t/2) + I1((a − b)t/2) ]
143. (√(s + 2a) − √s)/(√(s + 2a) + √s) ↔ (1/t) e^{−at} I1(at)
144. (a − b)ᵏ/(√(s + a) + √(s + b))^{2k} ↔ (k/t) e^{−(a+b)t/2} I_k((a − b)t/2)   (k > 0)
145. (√(s + a) + √s)^{−2n}/(√s √(s + a)) ↔ (1/aⁿ) e^{−at/2} Iₙ(at/2)
146. 1/√(s² + a²) ↔ J0(at)
147. (√(s² + a²) − s)ⁿ/√(s² + a²) ↔ aⁿ Jₙ(at)   (n > −1)
148. 1/(s² + a²)ᵏ ↔ (√π/Γ(k)) (t/(2a))^{k−1/2} J_{k−1/2}(at)   (k > 0)
149. (√(s² + a²) − s)ᵏ ↔ (k aᵏ/t) J_k(at)   (k > 0)
150. (s − √(s² − a²))ⁿ/√(s² − a²) ↔ aⁿ Iₙ(at)   (n > −1)
151. 1/(s² − a²)ᵏ ↔ (√π/Γ(k)) (t/(2a))^{k−1/2} I_{k−1/2}(at)   (k > 0)
152. 1/(s√(s + 1)) ↔ erf(√t), where erf(y) ≝ (2/√π) ∫_0^y e^{−u²} du is the error function
153. 1/√(s² + a²) ↔ J0(at); Bessel function of the 1st kind, zero order
154. 1/(√(s² + a²) + s) ↔ J1(at)/(at); J1 is the Bessel function of the 1st kind, 1st order
155. 1/(√(s² + a²) + s)ᴺ ↔ N J_N(at)/(aᴺ t); N = 1, 2, 3, ..., J_N is the Bessel function of the 1st kind, Nth order
156. 1/(s(√(s² + a²) + s)ᴺ) ↔ (N/aᴺ) ∫_0^t J_N(au)/u du; N = 1, 2, 3, ...
157. 1/(√(s² + a²)(√(s² + a²) + s)) ↔ (1/a) J1(at)
158. 1/(√(s² + a²)(√(s² + a²) + s)ᴺ) ↔ (1/aᴺ) J_N(at); N = 1, 2, 3, ...
159. 1/√(s² − a²) ↔ I0(at); I0 is the modified Bessel function of the 1st kind, zero order
160. e^{−ks}/s ↔ S_k(t) = 0 when 0 < t < k; 1 when t > k
161. e^{−ks}/s² ↔ 0 when 0 < t < k; t − k when t > k
162. e^{−ks}/sᵐ ↔ 0 when 0 < t < k; (t − k)^{m−1}/Γ(m) when t > k   (m > 0)

(continued)
5-37
Laplace Transforms TABLE A.5.1 (continued)
Laplace Transform Pairs F(s) ks
163
1e s
164
1 þ coth 12 ks 1 ¼ ks 2s s(1 e )
165
1 sðeþks aÞ
166
1 tanh ks s
167
1 s(1 þ e
168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186
187
ks )
1 k=s e s 1 pffiffi e k=s s 1 k=s pffiffi e s 1 e k= s s3=2 1 k=s e s3=2 1 k=s e (m > 0) sm 1 k=s e (m > 0) sm pffi e k s (k > 0) pffi k s
1 pffiffi e s s
3=2
(k 0)
pffi k s
e
pffi k s
1 when 0 < t < k 0 when t > k
S(k, t) ¼ {n when (n 1)k < t < nk (n ¼ 1, 2, . . . ): 8 > < 0 when 0 < t < k
Sk (t) ¼
1 tanh ks s2 1 s sinh ks 1 s cosh ks 1 coth ks s k ps coth s 2 þ k2 2k 1 (s2 þ 1)(1 e ps )
1 e s
f(t)
(k 0) (k 0)
pffi ae k s pffiffi (k 0) s(a þ s) pffi e k s pffiffi pffiffi (k 0) sða þ sÞ pffiffiffiffiffiffiffiffiffi e k s(sþa) pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi s(s þ a)
> :
1 þ a þ a2 þ þ an
1
when nk < t < (n þ 1)k (n ¼ 1, 2, . . . ) 8 M(2k, t) ¼ ( 1)n 1 > < when 2k(n 1) < t < 2nk > : (n ¼ 1, 2, . . . ) 1 1 1 ( 1)n when (n 1)k < t < nk M(k, t) þ ¼ 2 2 2 H(2k, t) [H(2k, t) ¼ k þ (r k)( 1)n where t ¼ 2kn þ r; 0 r 2k; n ¼ 0, 1, 2, . . . ] f2S(2k, t þ k)
2 ¼ 2(n
1) when (2n
fM(2k, t þ 3k) þ 1 ¼ 1 þ ( 1)n f2S(2k, t) jsin kt j sin t
3)k < t < (2n
when (2n
3)k < t < (2n
1 ¼ 2n
1 when 2k(n
1) < t < 2kn
when (2n
2)p < t < (2n
1)p
0 when (2n pffiffiffiffi J0 2 kt
1)k (t > 0) 1)k (t > 0)
1)p < t < 2np
pffiffiffiffi 1 pffiffiffiffiffi cos 2 kt pt pffiffiffiffi 1 pffiffiffiffiffi cosh 2 kt pt pffiffiffiffi 1 pffiffiffiffiffiffi sin 2 kt pk pffiffiffiffi 1 pffiffiffiffiffiffi sinh 2 kt pk pffiffiffiffi t (m 1)=2 Jm 1 2 kt k pffiffiffiffi t (m 1)=2 Im 1 2 kt k 2 k k pffiffiffiffiffiffiffi exp 4t 2 pt 3 erfc pk ffi 2 t
1 pffiffiffiffiffi exp pt rffiffiffiffi t 2 exp p 2
eak ea t
k2 4t
k2 k k erfc pffiffi 4t 2 t pffiffi kffi þ erfc 2pk ffit erfc a t þ 2p t
pffiffi 2 kffi eak ea t erfc a t þ 2p t (
0 e
(1=2)at
I0
pffiffiffiffiffiffiffiffiffiffiffiffiffiffi 1 2 k2 2a t
when 0 < t < k when t > k
(continued)
TABLE A.5.1 (continued) Laplace Transform Pairs

188. F(s) = e^{-k√(s^2+a^2)}/√(s^2 + a^2);  f(t) = 0 for 0 < t < k; J_0(a√(t^2 - k^2)) for t > k
189. F(s) = e^{-k√(s^2-a^2)}/√(s^2 - a^2);  f(t) = 0 for 0 < t < k; I_0(a√(t^2 - k^2)) for t > k
190. F(s) = e^{k(s - √(s^2+a^2))}/√(s^2 + a^2) (k ≥ 0);  f(t) = J_0(a√(t^2 + 2kt))
191. F(s) = e^{-ks} - e^{-k√(s^2+a^2)};  f(t) = 0 for 0 < t < k; (ak/√(t^2 - k^2)) J_1(a√(t^2 - k^2)) for t > k
192. F(s) = e^{-k√(s^2-a^2)} - e^{-ks};  f(t) = 0 for 0 < t < k; (ak/√(t^2 - k^2)) I_1(a√(t^2 - k^2)) for t > k
193. F(s) = a^n e^{-k√(s^2+a^2)}/[√(s^2 + a^2)(√(s^2 + a^2) + s)^n] (n > -1);  f(t) = 0 for 0 < t < k; ((t - k)/(t + k))^{n/2} J_n(a√(t^2 - k^2)) for t > k
194. F(s) = (1/s) log s;  f(t) = Γ'(1) - log t  [Γ'(1) = -0.5772]
195. F(s) = (1/s^k) log s (k > 0);  f(t) = t^{k-1}[Γ'(k)/Γ(k)^2 - log t/Γ(k)]
196. F(s) = (log s)/(s - a) (a > 0);  f(t) = e^{at}[log a - Ei(-at)]
197. F(s) = (log s)/(s^2 + 1);  f(t) = cos t Si(t) - sin t Ci(t)
198. F(s) = (s log s)/(s^2 + 1);  f(t) = -sin t Si(t) - cos t Ci(t)
199. F(s) = (1/s) log(1 + ks) (k > 0);  f(t) = -Ei(-t/k)
200. F(s) = log((s - a)/(s - b));  f(t) = (1/t)(e^{bt} - e^{at})
201. F(s) = (1/s) log(1 + k^2 s^2);  f(t) = -2 Ci(t/k)
202. F(s) = (1/s) log(s^2 + a^2) (a > 0);  f(t) = 2 log a - 2 Ci(at)
203. F(s) = (1/s^2) log(s^2 + a^2) (a > 0);  f(t) = (2/a)[at log a + sin at - at Ci(at)]
204. F(s) = log((s^2 + a^2)/s^2);  f(t) = (2/t)(1 - cos at)
205. F(s) = log((s^2 - a^2)/s^2);  f(t) = (2/t)(1 - cosh at)
206. F(s) = arctan(k/s);  f(t) = (1/t) sin kt
207. F(s) = (1/s) arctan(k/s);  f(t) = Si(kt)
208. F(s) = e^{k^2 s^2} erfc(ks) (k > 0);  f(t) = (1/(k√π)) exp(-t^2/4k^2)
209. F(s) = (1/s) e^{k^2 s^2} erfc(ks) (k > 0);  f(t) = erf(t/2k)
210. F(s) = e^{ks} erfc(√(ks)) (k > 0);  f(t) = √k/(π√(t)(t + k))
211. F(s) = (1/√s) erfc(√(ks));  f(t) = 0 for 0 < t < k; (πt)^{-1/2} for t > k
212. F(s) = (1/√s) e^{ks} erfc(√(ks)) (k > 0);  f(t) = 1/√(π(t + k))
213. F(s) = erf(k/√s);  f(t) = (1/(πt)) sin(2k√t)
(continued)
TABLE A.5.1 (continued) Laplace Transform Pairs

214. F(s) = (1/√s) e^{k^2/s} erfc(k/√s);  f(t) = (1/√(πt)) e^{-2k√t}
215. F(s) = -e^{as} Ei(-as);  f(t) = 1/(t + a)  (a > 0)
216. F(s) = 1/a + s e^{as} Ei(-as);  f(t) = 1/(t + a)^2  (a > 0)
217. F(s) = [π/2 - Si(s)] cos s + Ci(s) sin s;  f(t) = 1/(t^2 + 1)
218. F(s) = K_0(ks);  f(t) = 0 for 0 < t < k; (t^2 - k^2)^{-1/2} for t > k  [K_n(t) is the Bessel function of the second kind of imaginary argument]
219. F(s) = K_0(k√s);  f(t) = (1/2t) exp(-k^2/4t)
220. F(s) = (1/s) e^{ks} K_1(ks);  f(t) = (1/k)√(t(t + 2k))
221. F(s) = (1/√s) K_1(k√s);  f(t) = (1/k) exp(-k^2/4t)
222. F(s) = (1/√s) e^{k/s} K_0(k/s);  f(t) = (2/√(πt)) K_0(2√(2kt))
223. F(s) = π e^{-ks} I_0(ks);  f(t) = [t(2k - t)]^{-1/2} for 0 < t < 2k; 0 for t > 2k
224. F(s) = e^{-ks} I_1(ks);  f(t) = (k - t)/(πk√(t(2k - t))) for 0 < t < 2k; 0 for t > 2k
225. F(s) = 1/(s sinh(as));  f(t) = 2 Σ_{k=0}^∞ u[t - (2k + 1)a]  (staircase rising by 2 at t = a, 3a, 5a, 7a, ...)
226. F(s) = 1/(s cosh s);  f(t) = 2 Σ_{k=0}^∞ (-1)^k u(t - 2k - 1)
227. F(s) = (1/s) tanh(as/2);  f(t) = u(t) + 2 Σ_{k=1}^∞ (-1)^k u(t - ak)  (square wave alternating between 1 and -1 with period 2a)
228. F(s) = (1/2s)[1 + coth(as/2)];  f(t) = Σ_{k=0}^∞ u(t - ak)  (stepped function)
(continued)
TABLE A.5.1 (continued) Laplace Transform Pairs

229. F(s) = m/s^2 - (ma/2s)[coth(as/2) - 1];  f(t) = mt - ma Σ_{k=1}^∞ u(t - ka)  (sawtooth function, slope m)
230. F(s) = (1/s^2) tanh(as/2);  f(t) = t + 2 Σ_{k=1}^∞ (-1)^k (t - ka) u(t - ka)  (triangular wave)
231. F(s) = 1/(s(1 + e^{-s}));  f(t) = Σ_{k=0}^∞ (-1)^k u(t - k)
232. F(s) = a/[(s^2 + a^2)(1 - e^{-πs/a})];  f(t) = Σ_{k=0}^∞ sin[a(t - kπ/a)] u(t - kπ/a)  (half-wave rectification of sine wave)
233. F(s) = [a/(s^2 + a^2)] coth(πs/2a);  f(t) = sin(at) u(t) + 2 Σ_{k=1}^∞ sin[a(t - kπ/a)] u(t - kπ/a)  (full-wave rectification of sine wave)
234. F(s) = e^{-as}/s;  f(t) = u(t - a)
(continued)
TABLE A.5.1 (continued) Laplace Transform Pairs

235. F(s) = (1/s)(e^{-as} - e^{-bs});  f(t) = u(t - a) - u(t - b)
236. F(s) = (m/s^2) e^{-as};  f(t) = m(t - a) u(t - a)  (slope m starting at t = a)
237. F(s) = [ma/s + m/s^2] e^{-as};  f(t) = [ma + m(t - a)] u(t - a), or, equivalently, mt u(t - a)
238. F(s) = (2/s^3) e^{-as};  f(t) = (t - a)^2 u(t - a)
239. F(s) = e^{-as}(2/s^3 + 2a/s^2 + a^2/s);  f(t) = t^2 u(t - a)
240. F(s) = m/s^2 - (m/s^2) e^{-as};  f(t) = mt u(t) - m(t - a) u(t - a)  (ramp of slope m leveling off at ma)
241. F(s) = m/s^2 - (2m/s^2) e^{-as} + (m/s^2) e^{-2as};  f(t) = mt - 2m(t - a) u(t - a) + m(t - 2a) u(t - 2a)  (triangular pulse: slope m up to t = a, slope -m down to t = 2a)
242. F(s) = m/s^2 - (ma/s + m/s^2) e^{-as};  f(t) = mt - [ma + m(t - a)] u(t - a)  (ramp of slope m cut off at t = a)
243. F(s) = (1 - e^{-s})^2/s^3;  f(t) = 0.5t^2 for 0 ≤ t < 1; 1 - 0.5(t - 2)^2 for 1 ≤ t < 2; 1 for 2 ≤ t
244. F(s) = (1 - e^{-s})^3/s^3;  f(t) = 0.5t^2 for 0 ≤ t < 1; 0.75 - (t - 1.5)^2 for 1 ≤ t < 2; 0.5(t - 3)^2 for 2 ≤ t < 3; 0 for 3 ≤ t
245. F(s) = [b + ((e^{ab} - 1)s - b) e^{-as}]/(s(s - b));  f(t) = (e^{bt} - 1) u(t) - (e^{b(t-a)} - 1) u(t - a) + K e^{b(t-a)} u(t - a), where K = e^{ba} - 1
TABLE A.5.2 Properties of Laplace Transforms
No.  F(s)  :  f(t)

 1. F(s) = ∫_0^∞ e^{-st} f(t) dt  :  f(t)
 2. AF(s) + BG(s)  :  Af(t) + Bg(t)
 3. sF(s) - f(+0)  :  f'(t)
 4. s^n F(s) - s^{n-1} f(+0) - s^{n-2} f^{(1)}(+0) - ··· - f^{(n-1)}(+0)  :  f^{(n)}(t)
 5. (1/s) F(s)  :  ∫_0^t f(τ) dτ
 6. (1/s^2) F(s)  :  ∫_0^t ∫_0^τ f(λ) dλ dτ
 7. F_1(s) F_2(s)  :  ∫_0^t f_1(t - τ) f_2(τ) dτ = f_1 * f_2
 8. -F'(s)  :  t f(t)
 9. (-1)^n F^{(n)}(s)  :  t^n f(t)
10. ∫_s^∞ F(x) dx  :  f(t)/t
11. F(s - a)  :  e^{at} f(t)
12. e^{-bs} F(s)  :  f(t - b), where f(t) = 0 for t < 0
13. F(cs)  :  (1/c) f(t/c)
14. F(cs - b)  :  (1/c) e^{bt/c} f(t/c)
15. [∫_0^a e^{-st} f(t) dt]/(1 - e^{-as})  :  f(t + a) = f(t), periodic signal
16. [∫_0^a e^{-st} f(t) dt]/(1 + e^{-as})  :  f(t + a) = -f(t)
17. F(s)/(1 - e^{-as})  :  f_1(t), the half-wave rectification of f(t) in No. 16
18. F(s) coth(as/2)  :  f_2(t), the full-wave rectification of f(t) in No. 16
19. p(s)/q(s), q(s) = (s - a_1)(s - a_2) ··· (s - a_m)  :  Σ_{n=1}^m [p(a_n)/q'(a_n)] e^{a_n t}
20. F(s) = p(s)/[q(s)(s - a)^r], φ(s) = p(s)/q(s)  :  (terms from the poles of 1/q(s), as in No. 19) + e^{at} Σ_{n=1}^r [φ^{(r-n)}(a)/((r - n)!(n - 1)!)] t^{n-1}
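Property 15 (transform of a periodic function), together with the square-wave pair of entry 227 above, can be spot-checked numerically. The following sketch is not from the handbook: it assumes a unit square wave of period 2a alternating between +1 and -1, approximates the Laplace integral by the trapezoidal rule, and compares it with (1/s) tanh(as/2).

```python
import math

def laplace_num(f, s, t_max=100.0, n=200000):
    """Crude trapezoidal approximation of the Laplace integral of f."""
    h = t_max / n
    acc = 0.5 * (f(0.0) + f(t_max) * math.exp(-s * t_max))
    for k in range(1, n):
        t = k * h
        acc += f(t) * math.exp(-s * t)
    return acc * h

a, s = 1.0, 0.7
square = lambda t: 1.0 if (t // a) % 2 == 0 else -1.0   # period 2a, values ±1
closed = math.tanh(a * s / 2) / s                        # entry 227
print(abs(laplace_num(square, s) - closed) < 1e-3)       # True
```

The residual is dominated by the jumps of the square wave; a finer grid or a jump-aware quadrature would reduce it further.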
Sources: Campbell, G.A. and Foster, R.M., Fourier Integrals for Practical Applications, Van Nostrand, Princeton, NJ, 1948; McLachlan, N.W. and Humbert, P., Formulaire pour le calcul symbolique, Gauthier-Villars, Paris, 1947; Erdélyi, A. and Magnus, W., Eds., Tables of Integral Transforms (Bateman Manuscript Project, California Institute of Technology, based on notes left by Harry Bateman), McGraw-Hill, New York, 1954.
Note: In these tables, only those entries containing the condition 0 < g, or k < g, where g is our t, are Laplace transforms. Several additional transforms, especially those involving other Bessel functions, can be found in the sources.
References
1. R.V. Churchill, Modern Operational Mathematics in Engineering, McGraw-Hill, New York, 1944.
2. J. Irving and N. Mullineux, Mathematics in Physics and Engineering, Academic Press, New York, 1959.
3. H.S. Carslaw and J.C. Jaeger, Operational Methods in Applied Mathematics, Dover Publications, New York, 1963.
4. W.R. LePage, Complex Variables and the Laplace Transform for Engineers, McGraw-Hill, New York, 1961.
5. R.E. Bolz and G.L. Tuve, Eds., CRC Handbook of Tables for Applied Engineering Science, 2nd edn., CRC Press, Boca Raton, FL, 1973.
6. A.D. Poularikas and S. Seeley, Signals and Systems, corrected 2nd edn., Krieger Publishing Co., Melbourne, FL, 1994.
7. G.A. Campbell and R.M. Foster, Fourier Integrals for Practical Applications, Van Nostrand, Princeton, NJ, 1948.
8. N.W. McLachlan and P. Humbert, Formulaire pour le calcul symbolique, Gauthier-Villars, Paris, 1947.
9. A. Erdélyi and W. Magnus, Eds., Tables of Integral Transforms (Bateman Manuscript Project, California Institute of Technology, based on notes left by Harry Bateman), McGraw-Hill, New York, 1954.
6
Z-Transform

Alexander D. Poularikas
University of Alabama in Huntsville

6.1 Introduction ................................................................ 6-1
    One-Sided Z-Transform · Two-Sided Z-Transform · Applications
Appendix: Tables ................................................................ 6-36
Bibliography .................................................................... 6-44
6.1 Introduction

The Z-transform is a powerful method for solving difference equations and, more generally, for representing discrete systems. Although applications of the Z-transform are relatively new, the essential features of this mathematical technique date back to the early 1730s, when De Moivre introduced the concept of a generating function, which is identical to that of the Z-transform. Recently, the development and extensive application of the Z-transform have been much enhanced by the use of digital computers.

6.1.1 One-Sided Z-Transform

6.1.1.1 The Z-Transform and Discrete Functions

Let f(t) be defined for t ≥ 0. The Z-transform of the sequence {f(nT)} is given by

Z{f(nT)} ≐ F(z) = Σ_{n=0}^∞ f(nT) z^{-n}   (6.1)

where T, the sampling time, is a positive number.* To find the values of z for which the series converges, we use the ratio test or the root test. The ratio test states that a series of complex numbers

Σ_{n=0}^∞ a_n   (6.2)

with limit

lim_{n→∞} |a_{n+1}/a_n| = A

converges absolutely if A < 1, diverges if A > 1, and may converge or diverge if A = 1.

The root test states that if

lim_{n→∞} |a_n|^{1/n} = A   (6.3)

then the series converges absolutely if A < 1, diverges if A > 1, and may converge or diverge if A = 1. More generally, the series converges absolutely if

lim sup_{n→∞} |a_n|^{1/n} < 1   (6.4)

where lim sup denotes the greatest of the limit points of |a_n|^{1/n}, and diverges if

lim sup_{n→∞} |a_n|^{1/n} > 1   (6.5)

If we apply the root test to Equation 6.1, we obtain the convergence condition

lim_{n→∞} |f(nT) z^{-n}|^{1/n} = lim_{n→∞} |f(nT)|^{1/n} |z^{-1}| < 1

or

|z| > lim_{n→∞} |f(nT)|^{1/n} = R   (6.6)

where R is known as the radius of convergence of the series. Therefore, the series converges absolutely for all points in the z-plane that lie outside the circle of radius R centered at the origin (with the possible exception of the point at infinity). This region is called the region of convergence (ROC).

* The symbol ≐ means equal by definition.

Example
The radius of convergence of f(nT) = e^{-anT} u(nT), a a positive number, is R = lim_{n→∞} |e^{-anT}|^{1/n} = e^{-aT}.
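The root-test computation above can be illustrated numerically. This is a minimal sketch (not from the handbook), assuming the sequence f(nT) = e^{-anT} u(nT) with a = 0.5, T = 1; the limit in Equation 6.6 is approximated by evaluating |f(nT)|^{1/n} at a single large n.

```python
import math

def root_test_radius(f, n=500):
    """Crude stand-in for R = lim |f(n)|^(1/n): evaluate at one large n."""
    return abs(f(n)) ** (1.0 / n)

a, T = 0.5, 1.0                       # illustrative parameters (assumed)
f = lambda n: math.exp(-a * n * T)    # f(nT) = e^{-anT} u(nT), n >= 0
print(root_test_radius(f), math.exp(-a * T))   # both equal e^{-aT} = 0.6065...
```

For this sequence the estimate is exact for every n, since |f(n)|^{1/n} = e^{-aT} independently of n; for slower sequences a larger n gives a better approximation of the lim sup.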
Example
The Z-transform of f(nT) = e^{-anT} u(nT) is

F(z) = Σ_{n=0}^∞ f(nT) z^{-n} = Σ_{n=0}^∞ (e^{-aT} z^{-1})^n = 1/(1 - e^{-aT} z^{-1})

If a = 0,

F(z) = Σ_{n=0}^∞ u(nT) z^{-n} = 1/(1 - z^{-1}) = z/(z - 1)

Example
The function f(nT) = a^{nT} cos(nTω) u(nT) has the Z-transform

F(z) = Σ_{n=0}^∞ a^{nT} [(e^{jnTω} + e^{-jnTω})/2] z^{-n}
     = (1/2) Σ_{n=0}^∞ (a^T e^{jTω} z^{-1})^n + (1/2) Σ_{n=0}^∞ (a^T e^{-jTω} z^{-1})^n
     = (1/2)·1/(1 - a^T e^{jTω} z^{-1}) + (1/2)·1/(1 - a^T e^{-jTω} z^{-1})
     = (1 - a^T z^{-1} cos Tω)/(1 - 2a^T z^{-1} cos Tω + a^{2T} z^{-2})

The ROC is given by the relations |a^T e^{jTω} z^{-1}| < 1 or |z| > |a^T|, and |a^T e^{-jTω} z^{-1}| < 1 or |z| > |a^T|. Therefore, the ROC is |z| > |a^T|.

Example 1
To find the Z-transform of y(nT) in

d^2 y(t)/dt^2 = x(t),  approximated by  [y(nT) - 2y(nT - T) + y(nT - 2T)]/T^2 = x(nT),

we proceed as follows:

Y(z) - 2[z^{-1} Y(z) + y(-T) z^0] + [z^{-2} Y(z) + y(-T) z^{-1} + y(-2T) z^0] = X(z) T^2

or

Y(z) = [2y(-T) - y(-T) z^{-1} - y(-2T) + X(z) T^2]/(1 - 2z^{-1} + z^{-2})

6.1.1.2 Properties of the Z-Transform

6.1.1.2.1 Linearity
If there exist transforms of sequences Z{c_i f_i(nT)} = c_i F_i(z), c_i complex constants, with radii of convergence R_i > 0 for i = 0, 1, 2, ..., ℓ (ℓ finite), then

Z{Σ_{i=0}^ℓ c_i f_i(nT)} = Σ_{i=0}^ℓ c_i F_i(z),  |z| > max R_i   (6.7)

6.1.1.2.2 Shifting Property

Z{f(nT - kT)} = z^{-k} F(z),  f(-nT) = 0, n = 1, 2, ...   (6.8)
Z{f(nT - kT)} = z^{-k} F(z) + Σ_{n=1}^k f(-nT) z^{-(k-n)}   (6.9)
Z{f(nT + kT)} = z^k F(z) - Σ_{n=0}^{k-1} f(nT) z^{k-n}   (6.10)
Z{f(nT + T)} = z[F(z) - f(0)]   (6.10a)

6.1.1.2.3 Time Scaling

Z{a^{nT} f(nT)} = F(a^{-T} z) = Σ_{n=0}^∞ f(nT)(a^{-T} z)^{-n}   (6.11)

Example
Z{sin(ωnT) u(nT)} = z sin ωT/(z^2 - 2z cos ωT + 1),  |z| > 1,
Z{e^{-nT} sin(ωnT) u(nT)} = e^T z sin ωT/(e^{2T} z^2 - 2e^T z cos ωT + 1),  |z| > e^{-T}

6.1.1.2.4 Periodic Sequence

Z{f(nT)} = [z^N/(z^N - 1)] Z{f_1(nT)},  f_1(nT) = first period   (6.12)

where N is the number of time units in a period, |z| > R, and R is the radius of convergence of F_1(z).

Proof
Z{f(nT)} = Z{f_1(nT)} + Z{f_1(nT - NT)} + Z{f_1(nT - 2NT)} + ···
= F_1(z) + z^{-N} F_1(z) + z^{-2N} F_1(z) + ···
= [1/(1 - z^{-N})] F_1(z) = [z^N/(z^N - 1)] F_1(z)

For a finite sequence of K terms,

F(z) = F_1(z)(1 - z^{-N(K+1)})/(1 - z^{-N})   (6.12a)
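The shifting property (Equation 6.8) can be spot-checked numerically. The sketch below is illustrative only: the causal sequence f(n) = (1/2)^n and the evaluation point z = 2 are assumptions chosen so the truncated sums converge quickly.

```python
# Numeric check of Z{f(nT - kT)} = z^{-k} F(z) for a causal sequence
# (f(-nT) = 0), using truncated partial sums of the defining series.
def ztrans(seq, z, N=200):
    return sum(seq(n) * z ** (-n) for n in range(N))

f = lambda n: 0.5 ** n if n >= 0 else 0.0
k, z = 3, 2.0                       # z = 2 lies inside the ROC |z| > 0.5
lhs = ztrans(lambda n: f(n - k), z) # transform of the shifted sequence
rhs = z ** (-k) * ztrans(f, z)      # z^{-k} F(z)
print(abs(lhs - rhs) < 1e-12)       # True
```

The agreement is exact up to the (negligible) truncation tail of the two partial sums.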
6.1.1.2.5 Multiplication by n and nT

Z{n f(nT)} = -z dF(z)/dz,  Z{nT f(nT)} = -Tz dF(z)/dz,  |z| > R   (6.13)

where R is the radius of convergence of F(z).

Proof
Z{nT f(nT)} = Σ_{n=0}^∞ nT f(nT) z^{-n} = -Tz Σ_{n=0}^∞ f(nT) (d/dz) z^{-n}
= -Tz (d/dz)[Σ_{n=0}^∞ f(nT) z^{-n}] = -Tz dF(z)/dz

Example
Z{u(n)} = z/(z - 1),  Z{n u(n)} = -z (d/dz)[z/(z - 1)] = z/(z - 1)^2,
Z{n^2 u(n)} = -z (d/dz)[z/(z - 1)^2] = z(z + 1)/(z - 1)^3

6.1.1.2.6 Convolution
If Z{f(nT)} = F(z), |z| > R_1 and Z{h(nT)} = H(z), |z| > R_2, then

Z{f(nT) * h(nT)} = Z{Σ_{m=0}^∞ f(mT) h(nT - mT)} = F(z)H(z),  |z| > max(R_1, R_2)   (6.14)

Proof
Z{f(nT) * h(nT)} = Σ_{n=0}^∞ [Σ_{m=0}^∞ f(mT) h(nT - mT)] z^{-n}
= Σ_{m=0}^∞ f(mT) Σ_{n=0}^∞ h(nT - mT) z^{-n}
= Σ_{m=0}^∞ f(mT) z^{-m} Σ_{r=0}^∞ h(rT) z^{-r}   (r = n - m)
= F(z)H(z)

The value of h(nT) for n < 0 is zero.

Additional relations of convolution are

Z{f(nT) * h(nT)} = F(z)H(z) = Z{h(nT) * f(nT)}   (6.14a)
Z{[f(nT) + h(nT)] * g(nT)} = Z{f(nT) * g(nT)} + Z{h(nT) * g(nT)} = F(z)G(z) + H(z)G(z)   (6.14b)
Z{f(nT) * h(nT) * g(nT)} = Z{f(nT) * [h(nT) * g(nT)]} = F(z)H(z)G(z)   (6.14c)

Example
The Z-transform of the output of the discrete system y(n) = (1/2)y(n - 1) + (1/2)x(n), when the input is the unit step function u(n), is given by Y(z) = H(z)U(z). The Z-transform of the difference equation with a delta function input δ(n) is

H(z) = (1/2) z^{-1} H(z) + 1/2,  or  H(z) = (1/2)/(1 - (1/2)z^{-1}) = (1/2)z/(z - 1/2)

Therefore, the output is given by

Y(z) = [(1/2)z/(z - 1/2)] · [z/(z - 1)]

Example
Find f(n) if

F(z) = z^2/((z - e^{-a})(z - e^{-b})),  a, b constants.

From this equation we obtain

f_1(n) = Z^{-1}{z/(z - e^{-a})} = e^{-an},  f_2(n) = Z^{-1}{z/(z - e^{-b})} = e^{-bn}

Therefore,

f(n) = f_1(n) * f_2(n) = Σ_{m=0}^n e^{-am} e^{-b(n-m)} = e^{-bn} Σ_{m=0}^n e^{-(a-b)m}
= e^{-bn} (1 - e^{-(a-b)(n+1)})/(1 - e^{-(a-b)})

6.1.1.2.7 Initial Value

f(0) = lim_{z→∞} F(z)   (6.15)
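The convolution property (Equation 6.14) can also be verified with truncated sums. The sequences f(n) = (1/2)^n and h(n) = 0.8^n and the evaluation point z = 2 below are illustrative assumptions, not from the text.

```python
# Numeric check of Z{f * h} = F(z)H(z) using truncated series.
def ztrans(seq, z):
    return sum(seq[n] * z ** (-n) for n in range(len(seq)))

N = 120
f = [0.5 ** n for n in range(N)]
h = [0.8 ** n for n in range(N)]
# causal convolution: (f*h)(n) = sum_m f(m) h(n-m)
conv = [sum(f[m] * h[n - m] for m in range(n + 1)) for n in range(N)]
z = 2.0                               # inside both ROCs (|z| > 0.8)
lhs = ztrans(conv, z)
rhs = ztrans(f, z) * ztrans(h, z)
print(abs(lhs - rhs) < 1e-6)          # True
```

The small residual comes only from cross terms beyond the truncation length, which decay geometrically.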
The above value is obtained from the definition of the Z-transform. If f(0) = 0, we obtain f(1) as the limit

f(1) = lim_{z→∞} zF(z)   (6.15a)

6.1.1.2.8 Final Value

lim_{n→∞} f(n) = lim_{z→1} (z - 1)F(z),  if f(∞) exists   (6.16)

Proof
Z{f(k + 1) - f(k)} = zF(z) - zf(0) - F(z) = (z - 1)F(z) - zf(0) = lim_{n→∞} Σ_{k=0}^n [f(k + 1) - f(k)] z^{-k}

By taking the limit as z → 1, the above equation becomes

lim_{z→1} (z - 1)F(z) - f(0) = lim_{n→∞} Σ_{k=0}^n [f(k + 1) - f(k)]
= lim_{n→∞} {f(1) - f(0) + f(2) - f(1) + ··· + f(n) - f(n - 1) + f(n + 1) - f(n)}
= lim_{n→∞} {-f(0) + f(n + 1)}

which is the required result.

Example
If F(z) = 1/[(1 - z^{-1})(1 - e^{-1} z^{-1})] with |z| > 1, then

f(0) = lim_{z→∞} F(z) = 1
lim_{n→∞} f(n) = lim_{z→1} (z - 1) · 1/[(1 - z^{-1})(1 - e^{-1} z^{-1})] = lim_{z→1} z/(1 - e^{-1} z^{-1}) = 1/(1 - e^{-1})

6.1.1.2.9 Multiplication by (nT)^k

Z{n^k T^k f(nT)} = -Tz (d/dz) Z{(nT)^{k-1} f(nT)},  k > 0 and an integer   (6.17)

As a corollary to this theorem, we can deduce

Z{n^{(k)} f(n)} = z^{-k} d^k F(z)/d(z^{-1})^k,  n^{(k)} = n(n - 1)(n - 2) ··· (n - k + 1)   (6.17a)

The following relations are also true:

Z{(-1)^k n^{(k)} f(n - k + 1)} = z d^k F(z)/dz^k   (6.17b)
Z{n(n + 1)(n + 2) ··· (n + k - 1) f(n)} = (-1)^k z^k d^k F(z)/dz^k   (6.17c)

Example
Z{n} = -z (d/dz)[z/(z - 1)] = z/(z - 1)^2,
Z{n^2} = -z (d/dz)[z/(z - 1)^2] = z(z + 1)/(z - 1)^3,
Z{n^3} = -z (d/dz)[z(z + 1)/(z - 1)^3] = z(z^2 + 4z + 1)/(z - 1)^4

6.1.1.2.10 Initial Value of f(nT)

Z{f(nT)} = f(0T) + f(T) z^{-1} + f(2T) z^{-2} + ··· = F(z),  |z| > R
f(0T) = lim_{z→∞} F(z)   (6.18)

6.1.1.2.11 Final Value for f(nT)

lim_{n→∞} f(nT) = lim_{z→1} (z - 1)F(z),  if f(∞T) exists   (6.19)

Example
For the function

F(z) = 1/[(1 - z^{-1})(1 - e^{-T} z^{-1})],  |z| > 1

we obtain

f(0T) = lim_{z→∞} F(z) = 1
lim_{n→∞} f(nT) = lim_{z→1} (z - 1) · 1/[(1 - z^{-1})(1 - e^{-T} z^{-1})] = 1/(1 - e^{-T})

6.1.1.2.12 Complex Conjugate Signal

F(z) = Σ_{n=0}^∞ f(nT) z^{-n},  |z| > R,  or  F(z*) = Σ_{n=0}^∞ f(nT)(z*)^{-n}

or

F*(z*) = Σ_{n=0}^∞ f*(nT) z^{-n} = Z{f*(nT)}
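The final-value computation in the example above can be sketched numerically (an illustration under assumed parameters, T = 1): the sequence corresponding to F(z) = 1/[(1 - z^{-1})(1 - e^{-T}z^{-1})] is the running sum f(nT) = Σ_{k=0}^{n} e^{-kT}, and (z - 1)F(z) is evaluated just outside z = 1.

```python
import math

# Final-value check (Eq. 6.19): lim f(nT) = lim_{z->1} (z-1)F(z).
T = 1.0
F = lambda z: 1.0 / ((1 - 1 / z) * (1 - math.exp(-T) / z))
z = 1.0 + 1e-8                      # approach z -> 1 from outside the unit circle
final_from_F = (z - 1) * F(z)
f_n = sum(math.exp(-k * T) for k in range(200))   # f(nT) for large n
print(abs(final_from_F - f_n) < 1e-6)             # True
```

Both quantities approach 1/(1 - e^{-T}) ≈ 1.582.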
Hence,

Z{f*(nT)} = F*(z*),  |z| > R   (6.20)

6.1.1.2.13 Transform of Product
If

Z{f(nT)} = F(z), |z| > R_f;  Z{h(nT)} = H(z), |z| > R_h

then

G(z) = Z{f(nT)h(nT)} = (1/2πj) ∮_C F(τ) H(z/τ) dτ/τ   (6.21)

where C is a simple contour encircling the origin counterclockwise with (see Figure 6.1)

R_f < |τ| < |z|/R_h   (6.21a)

Proof
The integration is performed in the positive sense along the circle, inside which lie all the singular points of the function F(τ) and outside which lie all the singular points of the function H(z/τ). From Equation 6.21, we write

G(z) = Σ_{n=0}^∞ f(nT)h(nT) z^{-n} = (1/2πj) ∮_C F(τ) Σ_{n=0}^∞ h(nT)(z/τ)^{-n} dτ/τ   (6.22)

which converges uniformly for some choice of contour C and values of z. From Equation 6.22, we must have

|z|/|τ| > R_h  or  |τ| < |z|/R_h   (6.23)

so that the sum in Equation 6.22 converges. Because |z| > R_f and τ takes the place of z, Equation 6.22 implies that

|τ| > R_f   (6.24)

and also

R_f < |τ| < |z|/R_h   (6.25)

and R_f R_h < |z|.

Figure 6.1 shows the ROC. The integral is solved with the aid of the residue theorem, which yields in this case

G(z) = Σ_{i=1}^K res_{τ=τ_i} [F(τ)H(z/τ)/τ]   (6.26)

where K is the number of different poles τ_i (i = 1, 2, ..., K) of the function F(τ)/τ. For the residue at the pole τ_i of multiplicity m of the function F(τ)/τ, we have

res_{τ=τ_i} [F(τ)H(z/τ)/τ] = (1/(m - 1)!) lim_{τ→τ_i} d^{m-1}/dτ^{m-1} [(τ - τ_i)^m F(τ)H(z/τ)/τ]   (6.27)

Hence, for a simple pole, m = 1, we obtain

res_{τ=τ_i} [F(τ)H(z/τ)/τ] = lim_{τ→τ_i} (τ - τ_i) F(τ)H(z/τ)/τ   (6.28)

FIGURE 6.1 The τ-plane for Equation 6.21: the contour C lies in the annulus R_f < |τ| < |z|/R_h, which is the common ROC of F(τ) and H(z/τ).

Example
See Figure 6.2 for a graphical representation of the complex integration.

Z{nT} = H(z) = Tz/(z - 1)^2, |z| > 1;  Z{e^{-nT}} = F(z) = z/(z - e^{-T}), |z| > e^{-T}

Hence,

Z{nT e^{-nT}} = (1/2πj) ∮_C [τ/(τ - e^{-T})] · [T(z/τ)/((z/τ) - 1)^2] dτ/τ
The contour must have a radius |τ| with e^{-T} < |τ| < |z|, and we have from Equation 6.28

Z{nT e^{-nT}} = res_{τ=e^{-T}} [(τ - e^{-T}) · zτT/((τ - e^{-T})(z - τ)^2)] = T z e^{-T}/(z - e^{-T})^2

From Equation 6.17,

Z{nT e^{-nT}} = -Tz (d/dz)[1/(1 - e^{-T} z^{-1})] = T z e^{-T}/(z - e^{-T})^2

and this verifies the complex integration approach.

FIGURE 6.2 The contour for the example: a circle of radius |τ| with e^{-T} < |τ| < |z| = 1.

6.1.1.2.14 Parseval's Theorem
If Z{f(nT)} = F(z), |z| > R_f and Z{h(nT)} = H(z), |z| > R_h with |z| = 1 > R_f R_h, then

Σ_{n=0}^∞ f(nT)h(nT) = (1/2πj) ∮_C F(z)H(z^{-1}) dz/z   (6.29)

where the contour is taken counterclockwise.

Proof
From Equation 6.21 set z = 1 and change the dummy variable τ to z.

Example
f(nT) = e^{-nT} u(nT) has the Z-transform

F(z) = 1/(1 - e^{-T} z^{-1}),  |z| > e^{-T}

From Equation 6.29, with C a unit circle (R_f = e^{-T} < 1),

Σ_{n=0}^∞ f(nT)f(nT) = (1/2πj) ∮_C [1/(1 - e^{-T} z^{-1})][1/(1 - e^{-T} z)] dz/z
= (1/2πj) ∮_C e^T dz/[(z - e^{-T})(e^T - z)]
= Σ residues = e^T/(e^T - e^{-T}) = 1/(1 - e^{-2T})

6.1.1.2.15 Correlation
Let the Z-transforms of the two sequences Z{f(nT)} = F(z) and Z{h(nT)} = H(z) exist for |z| = 1. Then the cross-correlation is given by

g(nT) ≐ f(nT) ⊗ h(nT) = Σ_{m=0}^∞ f(mT)h(mT - nT) = lim_{z→1+} Z{f(mT)h(mT - nT)}

But Z{h(mT - nT)} = z^{-n} H(z) and, therefore (see Equation 6.21),

g(nT) = lim_{z→1+} (1/2πj) ∮_C F(τ)(z/τ)^{-n} H(z/τ) dτ/τ = (1/2πj) ∮_C F(τ)H(1/τ) τ^{n-1} dτ   (6.30)

This relation is the inverse Z-transform of g(nT) and, hence,

Z{g(nT)} = Z{f(nT) ⊗ h(nT)} = F(z)H(1/z)  for |z| = 1   (6.31)

If f(nT) = h(nT) for n ≥ 0, the autocorrelation sequence is

g(nT) ≐ f(nT) ⊗ f(nT) = Σ_{m=0}^∞ f(mT)f(mT - nT) = (1/2πj) ∮_C F(τ)F(1/τ) τ^{n-1} dτ   (6.32)

and, hence,

G(z) = Z{g(nT)} = Z{f(nT) ⊗ f(nT)} = F(z)F(1/z)   (6.33)

If we set n = 0, we obtain Parseval's theorem in the same form as it was developed above.
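Parseval's theorem (Equation 6.29) for the example above can be checked numerically by approximating the contour integral on the unit circle with a trapezoidal sum (a sketch assuming T = 1; the grid size M is arbitrary).

```python
import cmath, math

# Parseval check for f(nT) = e^{-nT}u(nT): sum f^2 = 1/(1 - e^{-2T}).
T = 1.0
F = lambda z: 1.0 / (1 - math.exp(-T) / z)
M = 4096
acc = 0.0 + 0.0j
for k in range(M):
    z = cmath.exp(2j * math.pi * k / M)          # points on the unit circle
    acc += F(z) * F(1 / z) / z * (2j * math.pi * z / M)   # dz = jz dθ
integral = acc / (2j * math.pi)
direct = sum(math.exp(-2 * n * T) for n in range(100))
print(abs(integral.real - direct) < 1e-9)        # True
```

The trapezoidal rule is spectrally accurate here because the integrand is smooth and periodic on the circle.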
Example
The sequence f(nT) = e^{-nT}, n ≥ 0, has the Z-transform

Z{e^{-nT}} = z/(z - e^{-T}),  |z| > e^{-T}

The autocorrelation is given by Equation 6.32 in the form

G(z) = Z{f(nT) ⊗ f(nT)} = [z/(z - e^{-T})] · [e^T/(e^T - z)]

The function is regular in the region e^{-T} < |z| < e^T. Using the residue theorem, from Equation 6.30 we obtain (only the pole inside the unit circle)

g(nT) = (1/2πj) ∮_C [z e^T/((z - e^{-T})(e^T - z))] z^{n-1} dz = res_{z=e^{-T}} [z e^T z^{n-1}/(e^T - z)]
= e^{-Tn} e^{2T}/(e^{2T} - 1)

which is equal to the autocorrelation of f(nT) = e^{-nT} u(nT). Using the summation definition, we obtain

g(nT) = Σ_{m=0}^∞ e^{-mT} u(mT) e^{-T(m-n)} u(mT - nT) = e^{Tn} Σ_{m=n}^∞ e^{-2mT}
= e^{Tn} e^{-2nT}/(1 - e^{-2T}) = e^{-nT} e^{2T}/(e^{2T} - 1)

From the previous example we obtain (only the root inside the unit circle)

g(nT) = Σ_{i=1}^K res_{τ=τ_i} [F(τ)H(1/τ) τ^{n-1}]   (6.34)

where τ_i are all poles of the integrand inside the circle |τ| = 1. Similarly, from Equation 6.33,

g(nT) = Σ_{i=1}^K res_{τ=τ_i} [F(τ)F(1/τ) τ^{n-1}]   (6.35)

where τ_i are the poles included inside the unit circle.

6.1.1.2.16 Z-Transforms with Parameters

Z{∂f(nT, a)/∂a} = ∂F(z, a)/∂a   (6.36)
Z{∫_{a_0}^{a_1} f(nT, a) da} = ∫_{a_0}^{a_1} F(z, a) da,  finite integral   (6.37)
Z{lim_{a→a_0} f(nT, a)} = lim_{a→a_0} F(z, a)   (6.38)

Table A.6.1 contains the Z-transform properties for positive-time sequences.

6.1.1.3 Inverse Z-Transform
The inverse Z-transform provides the object function from its given transform. We use the symbolic notation

f(nT) = Z^{-1}{F(z)}   (6.39)

To find the inverse transform, we may proceed as follows:

1. Use tables.
2. Decompose the expression into simpler partial forms, which are included in the tables.
3. If the transform is decomposed into a product of partial transforms, the resulting object function is obtained as the convolution of the partial object functions.
4. Use the inversion integral.

6.1.1.3.1 Power Series Method
When F(z) is analytic for |z| > R (and at z = ∞), the value f(nT) is obtained as the coefficient of z^{-n} in the power series expansion (Taylor series of F(z) as a function of z^{-1}). For example, if F(z) is the ratio of two polynomials in z^{-1}, the coefficients f(0T), ..., f(nT) are obtained as follows:

F(z) = (p_0 + p_1 z^{-1} + p_2 z^{-2} + ··· + p_n z^{-n} + ···)/(q_0 + q_1 z^{-1} + q_2 z^{-2} + ··· + q_n z^{-n} + ···)
= f(0T) + f(T) z^{-1} + f(2T) z^{-2} + ···   (6.40)

where

p_0 = f(0T) q_0
p_1 = f(1T) q_0 + f(0T) q_1
...
p_n = f(nT) q_0 + f[(n - 1)T] q_1 + f[(n - 2)T] q_2 + ··· + f(0T) q_n

The same can be accomplished by synthetic division.

Example

F(z) = (1 + z^{-1})/(1 + 2z^{-1} + 3z^{-2}) = (z^2 + z)/(z^2 + 2z + 3)
= 1 - z^{-1} - z^{-2} + 5z^{-3} + ···,  |z| > √3   (6.41)
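The coefficient recursion of Equation 6.40 is easy to mechanize. Below is a minimal sketch (function name and interface are my own) that performs the long division for a rational F(z) given as polynomials in z^{-1}, applied to the example just shown.

```python
# Power-series (long-division) inversion of F(z) = P(z^{-1})/Q(z^{-1}):
# from Eq. (6.40), f(n) = (p_n - sum_{k=1..n} q_k f(n-k)) / q_0.
def inverse_z_series(p, q, n_terms):
    f = []
    for n in range(n_terms):
        pn = p[n] if n < len(p) else 0.0
        acc = sum(q[k] * f[n - k] for k in range(1, min(n, len(q) - 1) + 1))
        f.append((pn - acc) / q[0])
    return f

# F(z) = (1 + z^{-1}) / (1 + 2 z^{-1} + 3 z^{-2}), the example in Eq. (6.41)
print(inverse_z_series([1, 1], [1, 2, 3], 4))   # [1.0, -1.0, -1.0, 5.0]
```

The output reproduces the coefficients 1, -1, -1, 5 of the series in Equation 6.41.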
From Equation 6.41: 1 = f(0T)·1, or f(0T) = 1; 1 = f(1T)·1 + 1·2, or f(1T) = -1; 0 = f(2T)·1 + f(1T)·2 + f(0T)·3, or f(2T) = 2 - 3 = -1; 0 = f(3T)·1 + f(2T)·2 + f(1T)·3 + f(0T)·0, or f(3T) = 2 + 3 = 5; and so forth.

6.1.1.3.2 Partial Fraction Expansion
If F(z) is a rational function of z and analytic at infinity, it can be expressed as follows:

F(z) = F_1(z) + F_2(z) + F_3(z) + ···   (6.42)

and

f(nT) = Z^{-1}{F_1(z)} + Z^{-1}{F_2(z)} + Z^{-1}{F_3(z)} + ···   (6.43)

For an expansion of the form

F_1(z)/(z - p)^n = A_1/(z - p) + A_2/(z - p)^2 + ··· + A_n/(z - p)^n   (6.44)

the constants A_i are given by

A_n = (z - p)^n F(z)|_{z=p}
A_{n-1} = (d/dz)[(z - p)^n F(z)]|_{z=p}
...
A_{n-k} = (1/k!)(d^k/dz^k)[(z - p)^n F(z)]|_{z=p}
...
A_1 = (1/(n - 1)!)(d^{n-1}/dz^{n-1})[(z - p)^n F(z)]|_{z=p}   (6.45)

Example
Let

F(z) = (1 + 2z^{-1} + z^{-2})/(1 - (3/2)z^{-1} + (1/2)z^{-2}) = (z^2 + 2z + 1)/(z^2 - (3/2)z + 1/2),  |z| > 1

Then we obtain

F(z) = 1 + ((7/2)z + 1/2)/[(z - 1)(z - 1/2)] = 1 + A/(z - 1) + B/(z - 1/2)

and

A = ((7/2)z + 1/2)/(z - 1/2)|_{z=1} = 8,  B = ((7/2)z + 1/2)/(z - 1)|_{z=1/2} = -9/2

Hence,

F(z) = 1 + 8/(z - 1) - (9/2)/(z - 1/2) = 1 + 8z^{-1} z/(z - 1) - (9/2) z^{-1} z/(z - 1/2)

and, therefore, its inverse transform is f(nT) = δ(nT) + 8u(nT - T) - (9/2)(1/2)^{n-1} u(nT - T), with ROC |z| > 1.

Example

(a) If

F(z) = (z^2 + 1)/((z - 1)(z - 2)) = A + Bz/(z - 1) + Cz/(z - 2)

then we obtain

A = (0 + 1)/((0 - 1)(0 - 2)) = 1/2,
B = (1/z)(z^2 + 1)/(z - 2)|_{z=1} = -2,
C = (1/z)(z^2 + 1)/(z - 1)|_{z=2} = 5/2

and its inverse is f(nT) = (1/2)δ(nT) - 2u(nT) + (5/2)(2)^n u(nT).

(b) If

F(z) = (z + 1)/((z - 1)(z - 2)) = A/(z - 1) + B/(z - 2)

then we obtain

A = (z + 1)/(z - 2)|_{z=1} = -2,  B = (z + 1)/(z - 1)|_{z=2} = 3
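The partial-fraction result of case (b) can be cross-checked against the power-series method of Section 6.1.1.3.1. The sketch below compares the closed form f(nT) = -2u(nT - T) + 3·2^{n-1}u(nT - T) with the long-division coefficients of the same F(z); the helper function is my own illustration.

```python
# Cross-check for F(z) = (z+1)/((z-1)(z-2)) = (z^{-1}+z^{-2})/(1-3z^{-1}+2z^{-2}).
def series_coeffs(p, q, n_terms):
    # p, q: numerator/denominator coefficients in powers of z^{-1}
    f = []
    for n in range(n_terms):
        pn = p[n] if n < len(p) else 0.0
        acc = sum(q[k] * f[n - k] for k in range(1, min(n, len(q) - 1) + 1))
        f.append((pn - acc) / q[0])
    return f

series = series_coeffs([0, 1, 1], [1, -3, 2], 8)
closed = [0.0] + [-2 + 3 * 2 ** (n - 1) for n in range(1, 8)]
print(series == closed)    # True
```

Both give 0, 1, 4, 10, 22, ... confirming the expansion (note the ROC |z| > 2 is implicit in treating the sequence as causal).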
Hence,

F(z) = -2/(z - 1) + 3/(z - 2)

and

f(nT) = -2u(nT - T) + 3(2)^{n-1} u(nT - T)

with ROC |z| > 2.

Example
If F(z) = (z^2 + 1)/((z + 1)(z - 1)^2) = A/(z + 1) + B/(z - 1) + C/(z - 1)^2, with |z| > 1, then we find

A = (z^2 + 1)/(z - 1)^2|_{z=-1} = 1/2,  C = (z^2 + 1)/(z + 1)|_{z=1} = 1

To find B, we set any value of z (small for convenience) in the equality. Hence, with say z = 2, we obtain

(z^2 + 1)/((z + 1)(z - 1)^2)|_{z=2} = (1/2)(1/(z + 1))|_{z=2} + B(1/(z - 1))|_{z=2} + (1/(z - 1)^2)|_{z=2}

or B = 1/2. Therefore, F(z) = (1/2)·1/(z + 1) + (1/2)·1/(z - 1) + 1/(z - 1)^2 and its inverse transform is

f(nT) = (1/2)(-1)^{n-1} u(nT - T) + (1/2) u(nT - T) + (nT - T) u(nT - T)

with ROC |z| > 1.

Example
The function F(z) = z^3/(z - 1)^2 with |z| > 1 can be expanded as follows: F(z) = z + 2 + (3z - 2)/(z - 1)^2, or F(z) = z + 2 + A/(z - 1) + B/(z - 1)^2. Therefore, we obtain B = (3z - 2)|_{z=1} = 1.

Setting any value of z (e.g., z = 2) in the above equality, we obtain

2 + 2 + (3·2 - 2)/(2 - 1)^2 = 2 + 2 + A·(1/(2 - 1)) + 1/(2 - 1)^2,  or  A = 3

Hence, F(z) = z + 2 + 3/(z - 1) + 1/(z - 1)^2 and its inverse transform is

f(nT) = δ(nT + T) + 2δ(nT) + 3u(nT - T) + (nT - T) u(nT - T)

with ROC |z| > 1. Tables A.6.3 and A.6.4 are useful for finding the inverse transforms.

6.1.1.3.3 Inverse Transform by Integration
If F(z) is a regular function in the region |z| > R, then there exists a unique sequence {f(nT)} for which Z{f(nT)} = F(z), namely

f(nT) = (1/2πj) ∮_C F(z) z^{n-1} dz = Σ_{i=1}^k res_{z=z_i} F(z) z^{n-1},  n = 0, 1, 2, ...   (6.46)

The contour C encloses all the singularities of F(z), as shown in Figure 6.3, and it is taken in a counterclockwise direction.

FIGURE 6.3 The z-plane contour of integration C, enclosing the poles of F(z); the ROC lies outside C.

6.1.1.3.4 Simple Poles
If F(z) = H(z)/G(z), then the residue at the singularity z = a is given by

lim_{z→a} (z - a)F(z) z^{n-1} = lim_{z→a} (z - a)[H(z)/G(z)] z^{n-1}   (6.47)

6.1.1.3.5 Multiple Poles
The residue at the pole z_i of multiplicity m of the function F(z) z^{n-1} is given by

res_{z=z_i} F(z) z^{n-1} = (1/(m - 1)!) lim_{z→z_i} d^{m-1}/dz^{m-1} [(z - z_i)^m F(z) z^{n-1}]   (6.48)

6.1.1.3.6 Simple Poles Not Factorable
The residue at the singularity a_m is

F(z) z^{n-1}|_{z=a_m} = [H(z)/(dG(z)/dz)] z^{n-1}|_{z=a_m}   (6.49)
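The inversion integral of Equation 6.46 can also be evaluated numerically by sampling a circle inside the ROC. The sketch below (my own illustration, applied to the simple pair F(z) = z/(z - 0.5) ↔ f(n) = (1/2)^n rather than to the examples above) uses the substitution z = Re^{jθ}, dz = jz dθ.

```python
import cmath

# Numerical version of f(n) = (1/2πj) ∮ F(z) z^{n-1} dz on a circle of
# given radius enclosing all poles of F(z).
def inverse_z(F, n, radius=1.0, M=2048):
    acc = 0.0 + 0.0j
    for k in range(M):
        z = radius * cmath.exp(2j * cmath.pi * k / M)
        acc += F(z) * z ** n        # F(z) z^{n-1} dz = F(z) z^n · (j dθ)
    return (acc / M).real           # the 1/2πj and jz dθ factors cancel to a mean

F = lambda z: z / (z - 0.5)         # inverse is f(n) = 0.5**n
print([round(inverse_z(F, n), 6) for n in range(4)])   # [1.0, 0.5, 0.25, 0.125]
```

The only error is aliasing of the coefficients f(n + M), which is negligible for a decaying sequence and a large grid M.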
6.1.1.3.7 F(z) Is an Irrational Function of z
Let F(z) = [(z + 1)/z]^a, where a is a real noninteger. By Equation 6.46 we write

f(nT) = (1/2πj) ∮_C [(z + 1)/z]^a z^{n-1} dz

where the closed contour C is that shown in Figure 6.4. It can easily be shown that in the limit as r → 0 the integral around the small circle BCD is zero (set z = re^{jθ} and take the limit r → 0). The integral along EA is also zero. Because along AB z = xe^{jπ} and along DE z = xe^{-jπ}, which implies that x is positive, we obtain

f(nT) = (1/2πj) [∫_1^0 ((xe^{jπ} + 1)/(xe^{jπ}))^a x^{n-1} e^{jπn} dx + ∫_0^1 ((xe^{-jπ} + 1)/(xe^{-jπ}))^a x^{n-1} e^{-jπn} dx]
= (1/2πj) ∫_0^1 (1 - x)^a x^{n-1-a} [e^{jπ(n-a)} - e^{-jπ(n-a)}] dx
= (sin[(n - a)π]/π) ∫_0^1 x^{n-1-a} (1 - x)^a dx   (6.50)

But the beta function is given by

B(m, k) = Γ(m)Γ(k)/Γ(m + k) = ∫_0^1 x^{m-1} (1 - x)^{k-1} dx   (6.51)

and, hence,

f(nT) = (sin[(n - a)π]/π) · Γ(n - a)Γ(a + 1)/Γ(n + 1)   (6.52)

But

Γ(m)Γ(1 - m) = π/sin(πm)   (6.53)

and, therefore,

f(nT) = Γ(a + 1)/[Γ(n + 1)Γ(a - n + 1)]   (6.54)

The Taylor expansion of F(z) is given as follows:

F(z) = ((z + 1)/z)^a = (1 + z^{-1})^a = Σ_{n=0}^∞ (1/n!) [d^n (1 + z^{-1})^a/d(z^{-1})^n]|_{z^{-1}=0} z^{-n}
= Σ_{n=0}^∞ (1/n!) a(a - 1)(a - 2) ··· (a - n + 1) z^{-n}   (6.55)

But

Γ(a + 1) = a(a - 1)(a - 2) ··· (a - n + 1) Γ(a - n + 1),  Γ(n + 1) = n!   (6.56)

and, therefore, Equation 6.55 becomes

F(z) = Σ_{n=0}^∞ [Γ(a + 1)/(Γ(n + 1)Γ(a - n + 1))] z^{-n}   (6.57)

The above equation is a Z-transform expansion and, hence, f(nT) is as given in Equation 6.54.

FIGURE 6.4 The z-plane contour for the irrational transform: a branch cut along the negative real axis between the branch point at z = -1 and z = 0, with z = xe^{jπ} along AB and z = xe^{-jπ} along DE.

Example
To find the inverse of the transform

F(z) = (z - 1)/((z + 2)(z - 1/2)),  |z| > 2

we proceed with the following approaches:

1. By fraction expansion:

(z - 1)/((z + 2)(z - 1/2)) = A/(z + 2) + B/(z - 1/2),
A = (z - 1)/(z - 1/2)|_{z=-2} = 6/5,  B = (z - 1)/(z + 2)|_{z=1/2} = -1/5
Hence,

f(nT) = Z^{-1}{(6/5)·1/(z + 2) - (1/5)·1/(z - 1/2)} = (6/5)(-2)^{n-1} - (1/5)(1/2)^{n-1},  n ≥ 1

2. By integration:

f(nT) = res_{z=-2} [(z + 2)·(z - 1) z^{n-1}/((z + 2)(z - 1/2))] + res_{z=1/2} [(z - 1/2)·(z - 1) z^{n-1}/((z + 2)(z - 1/2))]
= (6/5)(-2)^{n-1} - (1/5)(1/2)^{n-1},  n ≥ 1

3. By power expansion:

(z - 1)/(z^2 + (3/2)z - 1) = z^{-1} - (5/2) z^{-2} + (19/4) z^{-3} + ···

The multiplier z^{-1} indicates a one-time-unit shift and, hence, f(n) = 1, -5/2, 19/4, ... for n = 1, 2, 3, ....

Figure 6.5 shows the relation between pole location and type of poles and the behavior of causal signals; m stands for pole multiplicity. Table A.6.5 gives the Z-transforms of a number of sequences.

Example
If F(z) = 5z/(z - 5)^2 has the ROC |z| > 5, then

1. By expansion:

F(z) = 5z/(z^2 - 10z + 25) = 5z^{-1} + 2·5^2 z^{-2} + 3·5^3 z^{-3} + ···

Hence, f(nT) = n5^n, n = 0, 1, 2, ..., which is sometimes difficult to recognize using the expansion method.

2. By fraction expansion:

F(z) = 5z/(z - 5)^2 = Az/(z - 5) + Bz^2/(z - 5)^2,  B = 5/z|_{z=5} = 1

Setting z = 6:  5·6/(6 - 5)^2 = A·6/(6 - 5) + 6^2/(6 - 5)^2,  or  A = -1

Hence,

F(z) = -z/(z - 5) + z^2/(z - 5)^2

and f(nT) = -(5)^n + (n + 1)5^n = n5^n, n ≥ 0.

3. By integration:

f(nT) = res_{z=5} [5z z^{n-1}/(z - 5)^2] = (1/(2 - 1)!) lim_{z→5} (d/dz)[5z^n] = 5nz^{n-1}|_{z=5} = n5^n,  n ≥ 0

6.1.2 Two-Sided Z-Transform

6.1.2.1 The Z-Transform
If a function f(t) is defined for -∞ < t < ∞, then the Z-transform of its discrete representation f(nT) is given by

Z_II{f(nT)} ≐ F(z) = Σ_{n=-∞}^∞ f(nT) z^{-n},  R_+ < |z| < R_-   (6.58)

where R_+ is the radius of convergence for the positive-time part of the sequence and R_- is the radius of convergence for the negative-time part of the sequence.

Example

F(z) = Z_II{e^{-|nT|}} = Σ_{n=-∞}^0 e^{nT} z^{-n} + Σ_{n=0}^∞ e^{-nT} z^{-n} - 1
= -1 + Σ_{n=0}^∞ e^{-nT} z^n + Σ_{n=0}^∞ e^{-nT} z^{-n}
= -1 + 1/(1 - e^{-T} z) + 1/(1 - e^{-T} z^{-1})

The first sum (negative time) converges if |e^{-T} z| < 1, or |z| < e^T. The second sum (positive time) converges if |e^{-T} z^{-1}| < 1, or e^{-T} < |z|. Hence, the ROC is R_+ = e^{-T} < |z| < R_- = e^T. The two poles of F(z) are z = e^T and z = e^{-T}.

Example
The Z-transforms of the functions u(nT) and -u(-nT - T) are:

Z_II{u(nT)} = Σ_{n=0}^∞ u(nT) z^{-n} = 1/(1 - z^{-1}) = z/(z - 1),  |z| > 1
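The two-sided pair derived in the first example above can be checked numerically inside its ROC. The sketch below assumes T = 1 and a real test point z = 1.3 (which satisfies e^{-1} < 1.3 < e); the truncation length is arbitrary.

```python
import math

# Check of Z_II{e^{-|nT|}} = -1 + 1/(1 - e^{-T}z) + 1/(1 - e^{-T}z^{-1})
# on the ROC e^{-T} < |z| < e^{T}, via a truncated two-sided sum.
T, z = 1.0, 1.3
num = sum(math.exp(-abs(n) * T) * z ** (-n) for n in range(-60, 61))
closed = -1 + 1 / (1 - math.exp(-T) * z) + 1 / (1 - math.exp(-T) / z)
print(abs(num - closed) < 1e-8)      # True
```

Outside the annulus e^{-T} < |z| < e^{T} one of the two one-sided sums diverges, which is exactly the ROC statement in the example.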
6-12
Transforms and Applications Handbook
Single real poles—Causal signals
f (n) z-Plane m=1 n
1
z-Plane
f (n)
m=1 n
1
f (n) z-Plane m=1 n
1
f (n) z-Plane m=1 n
1
f (n) z-Plane m=1 n
1
z-Plane
f (n)
m=1 1
FIGURE 6.5
n
6-13
Z-Transform
Double real poles—Causal signals
f (n) z-Plane m=2 n
1
f (n) z-Plane m=2 n
1
f (n) z-Plane m=2 n
1
f (n) z-Plane m=2 n
1
f (n) z-Plane m=2 n
1
f (n) z-Plane m=2 1
n
FIGURE 6.5 (continued) (continued)
6-14
Transforms and Applications Handbook
Complex-conjugate poles—Causal signals f (n)
z-Plane m=1 r ω 0 –ω0
rn
n
1
f (n)
z-Plane
rn= 1
m=1 r=1 ω0 –ω0 1
n
r=1
f (n)
z-Plane m=1
r ω0 –ω0
rn
n
1
f (n)
z-Plane m=2 r=1 ω0 –ω0 1
n
FIGURE 6.5 (continued)
Z_II{-u(-nT - T)} = -Σ_{n=-∞}^∞ u(-nT - T) z^{-n} = -Σ_{n=-∞}^{-1} z^{-n}
= 1 - Σ_{n=0}^∞ z^n = 1 - 1/(1 - z) = z/(z - 1),  |z| < 1

Although their Z-transforms are identical, their ROCs are different. Therefore, to find the inverse Z-transform, the ROC must also be given. Figure 6.6 shows signal characteristics and their corresponding ROCs.

Assuming that the algebraic expression for the Z-transform F(z) is a rational function and that f(nT) has finite amplitude, except possibly at infinities, the properties of the ROC are:

1. The ROC is a ring or disc in the z-plane centered at the origin, with 0 ≤ R_+ < |z| < R_- ≤ ∞.
2. The Fourier transform of f(nT) converges absolutely if and only if the ROC of the Z-transform of f(nT) includes the unit circle.
3. No poles exist in the ROC.
4. The ROC of a finite-duration sequence {f(nT)} is the entire z-plane, except possibly z = 0 or z = ∞.
5. If f(nT) is right-handed, 0 ≤ n < ∞, the ROC extends outward from the outermost pole of F(z) to infinity.
6. If f(nT) is left-handed, -∞ < n ≤ 0, the ROC extends inward from the innermost pole of F(z) to zero.
FIGURE 6.6 Signal types and their ROCs. Finite-duration signals — causal: entire z-plane except z = 0; anticausal: entire z-plane except z = ∞; two-sided: entire z-plane except z = 0 and z = ∞. Infinite-duration signals — causal: |z| > R_+; anticausal: |z| < R_-; two-sided: R_+ < |z| < R_-.
Hence, the inverse transform of F(z) = z^-1 · 1/(1 - (1/2)z^-1), |z| > 1/2, is f(n) = (1/2)^{n-1} u(n - 1): the factor z^-1 indicates a time shift, and the inverse transform of 1/(1 - (1/2)z^-1) is (1/2)^n.

7. An infinite-duration two-sided sequence {f(nT)} has a ring as its ROC, bounded on the interior and exterior by a pole. The ring contains no poles.
8. The ROC must be a connected region.

6.1.2.2 Properties

6.1.2.2.1 Linearity
6.1.2.2.2 Shifting

If

Z_II{f(nT)} = F(z), R+ < |z| < R-

then

Z_II{f(nT ∓ kT)} = z^{∓k} F(z)   (6.59)

Proof

Z_II{f(nT - kT)} = Σ_{n=-∞}^∞ f(nT - kT) z^-n = z^-k Σ_{m=-∞}^∞ f(mT) z^-m = z^-k F(z)

The last step results from setting m = n - k. Proceed similarly for the positive sign. The ROC of the shifted function is the same as that of the unshifted function, except at z = 0 for k > 0 and z = ∞ for k < 0.

Example

To find the transfer function of the system y(nT) - y(nT - T) + 2y(nT - 2T) = x(nT) + 4x(nT - T), we take the Z-transform of both sides of the equation. Hence, we find

Y(z) - z^-1 Y(z) + 2z^-2 Y(z) = X(z) + 4z^-1 X(z)

or

H(z) = Y(z)/X(z) = (1 + 4z^-1)/(1 - z^-1 + 2z^-2)

6.1.2.2.3 Scaling

The proof is similar to that for the one-sided Z-transform. If

Z_II{f(nT)} = F(z), R+ < |z| < R-

then

Z_II{a^{nT} f(nT)} = F(a^-T z), |a^T| R+ < |z| < |a^T| R-   (6.60)

Proof

Z_II{a^{nT} f(nT)} = Σ_{n=-∞}^∞ a^{nT} f(nT) z^-n = Σ_{n=-∞}^∞ f(nT)(a^-T z)^-n = F(a^-T z)

Because the ROC of F(z) is R+ < |z| < R-, the ROC of F(a^-T z) is R+ < |a^-T z| < R-, or |a^T| R+ < |z| < |a^T| R-.

Example

If the Z-transform of f(nT) = exp(-|nT|) is

F(z) = 1/(1 - e^-T z^-1) + 1/(1 - e^-T z) - 1, e^-T < |z| < e^T

then the Z-transform of g(nT) = a^{nT} f(nT) is

G(z) = 1/(1 - e^-T a^T z^-1) + 1/(1 - e^-T a^-T z) - 1, a^T e^-T < |z| < e^T a^T

6.1.2.2.4 Time Reversal

If

Z_II{f(nT)} = F(z), R+ < |z| < R-

then

Z_II{f(-nT)} = F(z^-1), 1/R- < |z| < 1/R+   (6.61)

Proof

Z_II{f(-nT)} = Σ_{n=-∞}^∞ f(-nT) z^-n = Σ_{m=-∞}^∞ f(mT)(z^-1)^-m = F(z^-1)

Example

Consider the Z-transform

F(z) = 1/(z - 1/2), |z| > 1/2

Because the pole lies inside the circle bounding the ROC, the function is causal. We next write the function in the form

F(z) = z^-1 · 1/(1 - (1/2)z^-1), |z| > 1/2
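The time-reversal property can be sketched numerically as follows (an illustrative check, not from the handbook): for f(n) = 0.5^n u(n) with F(z) = z/(z - 0.5), the reversed signal g(n) = f(-n) should satisfy G(z) = F(1/z) inside its ROC |z| < 2. The test point and truncation length are arbitrary.

```python
N = 400

def Z(seq, z):
    # Truncated two-sided Z-transform: sum of f(n) z^{-n} over the stored samples
    return sum(v * z ** (-n) for n, v in seq.items())

# f(n) = 0.5^n u(n): F(z) = z/(z - 0.5), ROC |z| > 0.5
f = {n: 0.5 ** n for n in range(N)}
# time-reversed g(n) = f(-n): ROC becomes |z| < 2
g = {-n: v for n, v in f.items()}

F = lambda z: z / (z - 0.5)

z = 1.2  # inside 1/R- = 0 < |z| < 1/R+ = 2
assert abs(Z(g, z) - F(1 / z)) < 1e-9
```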
Example

The Z-transform of f(n) = u(n) is z/(z - 1); hence, by the time-reversal property, the Z-transform of f(-n) = u(-n) is

Z_II{u(-n)} = F(z^-1) = z^-1/(z^-1 - 1) = 1/(1 - z), |z| < 1

Also, from the definition of the Z-transform, we write

Σ_{n=-∞}^{0} u(-n) z^-n = Σ_{n=0}^∞ z^n = 1/(1 - z), |z| < 1

6.1.2.2.5 Multiplication by nT

If

Z_II{f(nT)} = F(z), R+ < |z| < R-

then

Z_II{nT f(nT)} = -zT dF(z)/dz, R+ < |z| < R-   (6.62)

Proof

A Laurent series can be differentiated term by term in its ROC, and the resulting series has the same ROC. Therefore, we have

dF(z)/dz = d/dz Σ_{n=-∞}^∞ f(nT) z^-n = -Σ_{n=-∞}^∞ n f(nT) z^{-n-1} for R+ < |z| < R-

Multiplying both sides by -zT gives

-zT dF(z)/dz = Σ_{n=-∞}^∞ nT f(nT) z^-n = Z{nT f(nT)} for R+ < |z| < R-

Example

If F(z) = log(1 + az^-1), |z| > |a|, then

dF(z)/dz = -az^-2/(1 + az^-1) or -z dF(z)/dz = az^-1 · 1/(1 - (-a)z^-1), |z| > |a|

The z^-1 implies a time shift, and the inverse transform of the fraction is (-a)^n. Hence, the inverse transform of -z dF(z)/dz is a(-a)^{n-1} u(n - 1). From the differentiation property (with T = 1), we obtain n f(n) = a(-a)^{n-1} u(n - 1), or

f(n) = (-1)^{n-1} a^n / n, n >= 1

6.1.2.2.6 Convolution

If

Z_II{f1(nT)} = F1(z) and Z_II{f2(nT)} = F2(z)

then

F(z) = Z_II{f1(nT) * f2(nT)} = F1(z) F2(z)   (6.63)

The ROC of F(z) is, at least, the intersection of those for F1(z) and F2(z).

Proof

F(z) = Σ_{n=-∞}^∞ [Σ_{m=-∞}^∞ f1(mT) f2(nT - mT)] z^-n = Σ_{m=-∞}^∞ f1(mT) [Σ_{n=-∞}^∞ f2(nT - mT) z^-n] = [Σ_{m=-∞}^∞ f1(mT) z^-m] F2(z) = F1(z) F2(z)

where the shifting property was invoked.

Example

The Z-transform of the convolution of e^-n u(n) and u(n) is

Z_II{(e^-n u(n)) * u(n)} = Z{Σ_{m=0}^n e^-m u(n - m)} = Z{e^-n} Z{u(n)} = [z/(z - e^-1)][z/(z - 1)] = z^2/((z - 1)(z - e^-1))

Also, directly from the convolution definition, we find

Z{Σ_{m=0}^n e^-m u(n - m)} = Z{(1 - e^-(n+1))/(1 - e^-1)} = [1/(1 - e^-1)][z/(z - 1) - e^-1 z/(z - e^-1)] = z^2/((z - 1)(z - e^-1))
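The multiplication-by-n example above can be checked numerically (an illustrative sketch, not from the handbook): the claimed inverse f(n) = (-1)^{n-1} a^n / n should sum back to log(1 + a/z) for |z| > |a|. The values a = 0.3 and z = 2 are arbitrary test choices.

```python
import math

a = 0.3
z = 2.0  # |z| > |a|, well inside the ROC

# partial sum of f(n) z^{-n} with f(n) = (-1)^{n-1} a^n / n, n >= 1
series = sum((-1) ** (n - 1) * a ** n / n * z ** (-n) for n in range(1, 200))

assert abs(series - math.log(1 + a / z)) < 1e-12
```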
which verifies the convolution property. The ROC of e^-n u(n) is |z| > e^-1 and the ROC of u(n) is |z| > 1. The ROC of e^-n u(n) * u(n) is the intersection of these two ROCs and, hence, is |z| > 1.
Example

The convolution of f1(n) = {2, 1, -3} for n = 0, 1, 2 and f2(n) = {1, 1, 1, 1} for n = 0, 1, 2, 3 is

G(z) = F1(z)F2(z) = (2 + z^-1 - 3z^-2)(1 + z^-1 + z^-2 + z^-3) = 2 + 3z^-1 - 2z^-4 - 3z^-5

which indicates that the output is g(n) = {2, 3, 0, 0, -2, -3}; this can also be found by directly convolving f1(n) and f2(n).

Example

For the causal signal f(nT) = a^{nT} u(nT) with |a|^T < 1, we proceed to find the autocorrelation first. For n >= 0,

r_ff(nT) = Σ_{m=n}^∞ a^{mT} a^{(m-n)T} = a^{-nT} Σ_{m=n}^∞ a^{2mT} = a^{nT}/(1 - a^{2T})

and for n <= 0,

r_ff(nT) = Σ_{m=0}^∞ a^{mT} a^{(m-n)T} = a^{-nT} Σ_{m=0}^∞ a^{2mT} = a^{-nT}/(1 - a^{2T})

Hence,

R_ff(z) = 1/(1 - a^T(z + z^-1) + a^{2T}), ROC: |a|^T < |z| < 1/|a|^T

Because the ROC of R_ff(z) is a ring, r_ff(ℓT) is a two-sided signal.
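The finite-length convolution example above is easy to verify with NumPy (an illustrative check, not from the handbook): polynomial multiplication in z^-1 and time-domain convolution must give the same coefficients.

```python
import numpy as np

f1 = np.array([2, 1, -3])
f2 = np.array([1, 1, 1, 1])

g = np.convolve(f1, f2)   # time-domain convolution
p = np.polymul(f1, f2)    # product of the polynomials in z^{-1}

assert list(g) == [2, 3, 0, 0, -2, -3]
assert list(p) == list(g)
```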
In particular, for g(nT) = e^{jω0 nT} f(nT), the scaling property gives G(z) = F(e^{-jω0 T} z), so that on the unit circle G(ω) = F(ω - ω0).

For the product of two sequences, if Z_II{f(nT)} = F(z), R+f < |z| < R-f, and Z_II{h(nT)} = H(z), R+h < |z| < R-h, then

G(z) = Z_II{f(nT)h(nT)} = (1/2πj) ∮_C F(τ) H(z/τ) dτ/τ   (6.68)

Proof

The series in Equation 6.68 will converge to an analytic function G(z) for R+g < |z| < R-g. Using the root test (see Section 6.2), the series Σ_n f(nT) τ^-n converges absolutely for

R+f < |τ| < R-f   (6.75)

and the series Σ_n h(nT)(z/τ)^-n converges absolutely for

|z|/R-h < |τ| < |z|/R+h   (6.76)

When both conditions hold, the series in the integrand of Equation 6.68 converges uniformly to H(z/τ), and otherwise it diverges. Figure 6.7 shows the ROCs for F(τ) and H(z/τ). From Equations 6.75 and 6.76 we obtain the condition

R+f R+h < |z| < R-f R-h   (6.77)

When z satisfies Equation 6.77, the intersection of the domains identified by Equations 6.75 and 6.76 is

max(R+f, |z|/R-h) < |τ| < min(R-f, |z|/R+h)   (6.78)

The contour must be located inside this intersection. Hence, all of the poles of F(τ) lie inside the contour and all of the poles of H(z/τ) lie outside the contour.

Example

The Z-transform of u(nT) is

F(z) = 1/(1 - z^-1), |z| > 1 = R+f, R-f = ∞

and the Z-transform of h(nT) = exp(-|nT|) is

H(z) = (1 - e^-2T)/((1 - e^-T z^-1)(1 - e^-T z)), R+h = e^-T < |z| < e^T = R-h

But R-f = ∞ and, hence, Equation 6.77 gives e^-T < |z| < ∞. The contour must lie in the region max(1, |z|e^-T) < |τ| < min(∞, |z|e^T), as given by Equation 6.78. The pole-zero configuration and the contour are shown in Figure 6.8. Therefore, Equation 6.68 becomes

Z_II{u(nT)h(nT)} = G(z) = (1/2πj) ∮_C [1/(1 - 1/τ)] (1 - e^-2T)/[(1 - e^-T τ/z)(1 - e^-T z/τ)] dτ/τ

which has the inverse function g(nT) = e^-nT u(nT), as expected.

FIGURE 6.8 The τ-plane contour C for the example: the poles at |τ| = 1 and |τ| = |z|e^-T lie inside C, and the pole at |τ| = |z|e^T lies outside C.

6.1.2.2.11 Parseval's Theorem

If

Z_II{f(nT)} = F(z), R+f < |z| < R-f, and Z_II{h(nT)} = H(z), R+h < |z| < R-h   (6.81)

with

R+f R+h < |z| = 1 < R-f R-h   (6.82)

then we have

Σ_{n=-∞}^∞ f(nT)h(nT) = (1/2πj) ∮_C F(z)H(z^-1) dz/z   (6.83)

where the contour encircles the origin with

max(R+f, 1/R-h) < |z| < min(R-f, 1/R+h)   (6.84)

Proof

In Equations 6.68 and 6.69 set z = 1 and replace the dummy variable τ by z to obtain Equations 6.83 and 6.84. For complex signals, Parseval's relation 6.83 is modified as follows:

Σ_{n=-∞}^∞ f(nT)h*(nT) = (1/2πj) ∮_C F(z)H*(1/z*) dz/z   (6.85)

If f(nT) and h(nT) converge on the unit circle, we can use the unit circle as the contour. We then obtain

Σ_{n=-∞}^∞ f(nT)h*(nT) = (1/ωs) ∫_{-ωs/2}^{ωs/2} F(e^{jωT}) H*(e^{jωT}) dω, ωs = 2π/T   (6.86)

where we set z = e^{jωT}. If f(nT) = h(nT), then

Σ_{n=-∞}^∞ |f(nT)|^2 = (1/ωs) ∫_{-ωs/2}^{ωs/2} |F(e^{jωT})|^2 dω   (6.87)
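Parseval's relation 6.87 can be sketched numerically (an illustrative check, not from the handbook) for f(nT) = e^-nT u(nT): the time-domain energy, the closed form 1/(1 - e^-2T), and a numerical integral of |F(e^{jωT})|^2 over one period should all agree. T and the grid sizes are arbitrary choices.

```python
import math

T = 0.7
f = [math.exp(-n * T) for n in range(2000)]   # f(nT) = e^{-nT} u(nT), truncated

time_sum = sum(x * x for x in f)
closed = 1 / (1 - math.exp(-2 * T))

# frequency-domain side of Eq. 6.87: (1/2π) ∫ |F(e^{jθ})|² dθ over one period
M = 20000
acc = 0.0
for k in range(M):
    th = -math.pi + (k + 0.5) * 2 * math.pi / M
    F = 1 / (1 - math.exp(-T) * complex(math.cos(-th), math.sin(-th)))
    acc += abs(F) ** 2
freq_sum = acc / M

assert abs(time_sum - closed) < 1e-9
assert abs(freq_sum - closed) < 1e-6
```

The midpoint rule converges very quickly here because the integrand is smooth and periodic.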
Example

The Z-transform of f(nT) = exp(-nT)u(nT) is F(z) = 1/(1 - e^-T z^-1) for |z| > e^-T. From Equation 6.83 we obtain

Σ_{n=-∞}^∞ f^2(nT) = Σ_{n=0}^∞ f^2(nT) = (1/2πj) ∮_C [1/(1 - e^-T z^-1)][1/(1 - e^-T z)] dz/z
= Res[1/((z - e^-T)(1 - e^-T z))]_{z=e^-T} = 1/(1 - e^-2T)

which agrees with the direct computation

Σ_{n=0}^∞ e^-nT e^-nT = Σ_{n=0}^∞ e^-2nT = 1 + e^-2T + (e^-2T)^2 + ... = 1/(1 - e^-2T)

Example

If F(z) = z(z + 1)/(z^2 - 2z + 1) = (1 + z^-1)/(1 - 2z^-1 + z^-2) and the ROC is |z| > 1, then long division gives

F(z) = 1 + 3z^-1 + 5z^-2 + 7z^-3 + ...

and by continuing the division we recognize that f(n) = 2n + 1, n >= 0.

For a rational transform

F(z) = N(z)/D(z) = (b0 z^N + b1 z^{N-1} + ... + bM z^{N-M})/(z^N + a1 z^{N-1} + ... + aN), aN ≠ 0, M < N

the function

F(z)/z = (b0 z^{N-1} + b1 z^{N-2} + ... + bM z^{N-M-1})/(z^N + a1 z^{N-1} + ... + aN)   (6.91)

is always a proper function.

Example

(b) If F(z) = z(z + 3)/(z^2 - 3z + 2) with ROC 1 < |z| < 2, then, following exactly the same procedure,

F(z)/z = (z + 3)/((z - 1)(z - 2)) = -4/(z - 1) + 5/(z - 2) or F(z) = -4z/(z - 1) + 5z/(z - 2)

However, the pole at z = 2 belongs to the negative-time sequence and the pole at z = 1 belongs to the positive-time sequence. Hence,

f(nT) = -4(1)^n for n >= 0 and f(nT) = -5(2)^n for n <= -1
6.1.2.3.2 Partial Fraction Expansion

Distinct poles: If the poles p1, p2, ..., pN of a proper function F(z) are all different, then we expand it in the form

F(z)/z = A1/(z - p1) + A2/(z - p2) + ... + AN/(z - pN)   (6.92)

where the Ai are unknown constants to be determined. The inverse Z-transform of the kth term of Equation 6.92, after multiplying back by z, is given by

Z^-1{1/(1 - pk z^-1)} = (pk)^n u(nT) if ROC: |z| > |pk| (causal signal), or -(pk)^n u(-nT - T) if ROC: |z| < |pk| (anticausal signal)   (6.93)

If the signal is causal, the ROC is |z| > pmax, where pmax = max{|p1|, |p2|, ..., |pN|}. In this case, all terms in Equation 6.92 result in causal signal components.

Example

To determine the inverse Z-transform of F(z) = 1/(1 - 1.5z^-1 + 0.5z^-2) for (a) ROC: |z| > 1, (b) ROC: |z| < 0.5, and (c) ROC: 0.5 < |z| < 1, we proceed as follows:

F(z) = z^2/(z^2 - 1.5z + 0.5) = z^2/((z - 1)(z - 1/2))

and, expanding F(z)/z in partial fractions,

F(z) = 2z/(z - 1) - z/(z - 1/2)

(a) f(nT) = 2(1)^n - (1/2)^n, n >= 0, because both poles are outside the ROC |z| > 1 (inside the unit circle).
(b) f(nT) = -2(1)^n u(-nT - T) + (1/2)^n u(-nT - T), because both poles are outside the ROC (outside the circle |z| = 0.5).
(c) The pole at 1/2 provides the causal part and the pole at 1 provides the anticausal part. Hence,

f(nT) = -2(1)^n u(-nT - T) - (1/2)^n u(nT)
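Case (a) can be checked by computing the causal impulse response of 1/(1 - 1.5z^-1 + 0.5z^-2) directly from the difference equation h(n) = 1.5h(n - 1) - 0.5h(n - 2) + δ(n) and comparing it with the partial-fraction result 2(1)^n - (1/2)^n (an illustrative check, not from the handbook).

```python
# impulse response of H(z) = 1/(1 - 1.5 z^-1 + 0.5 z^-2), causal case
h = []
for n in range(30):
    val = 1.0 if n == 0 else 0.0      # the impulse input
    if n >= 1:
        val += 1.5 * h[n - 1]
    if n >= 2:
        val += -0.5 * h[n - 2]
    h.append(val)

closed = [2 * 1 ** n - 0.5 ** n for n in range(30)]   # case (a) answer
assert all(abs(x - y) < 1e-12 for x, y in zip(h, closed))
```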
Multiple poles: When F(z) has a multiple pole, the expansion must include a term for each power of the repeated factor. For example, with |z| > 1 we obtain

F(z) = z^3/((z - 1)(z - 1/2)^2) = A0 + A1 z/(z - 1) + A2 z/(z - 1/2) + A3 z^2/(z - 1/2)^2

If we set z = 0 in both sides, we find that A0 = 0. Next we find A3 by multiplying both sides by (z - 1/2)^2/z^2 and setting z = 1/2. Hence,

A3 = [z/(z - 1)]_{z=1/2} = (1/2)/((1/2) - 1) = -1

Similarly,

A1 = [z^2/(z - 1/2)^2]_{z=1} = 1/(1/2)^2 = 4

Equating coefficients of equal powers of z (or setting an arbitrary value of z, say z = -1, in both sides of the equation), we obtain A2 = -2. Therefore,

F(z) = 4z/(z - 1) - 2z/(z - 1/2) - z^2/(z - 1/2)^2

and from Table A.6.3 the inverse Z-transform is

f(nT) = 4(1)^n - 2(1/2)^n - (n + 1)(1/2)^n, n >= 0

Example

Now let us assume the same transform but with |z| < 1/2. This indicates that the signal is anticausal. Hence, from

F(z) = 4z/(z - 1) - 2z/(z - 1/2) - z^2/(z - 1/2)^2

and Table A.6.3, we obtain

f(nT) = -4(1)^n + 2(1/2)^n + (n + 1)(1/2)^n, n <= -1

Another form of expansion of a proper function (the degree of the numerator is one less than that of the denominator) is of the form

F(z) = A1 z/(z - pi) + A2 z/(z - pi)^2 + A3 z(z + pi)/(z - pi)^3 + ...   (6.95)

whose causal inverse terms are A1 (pi)^n, A2 n(pi)^{n-1}, A3 n^2 (pi)^{n-2}, ... (see Table A.6.4).
6.1.2.3.3 Integral Inversion Formula

THEOREM 6.1

If

F(z) = Σ_{m=-∞}^∞ f(mT) z^-m   (6.96)

converges to an analytic function in the annular domain R+ < |z| < R-, then

f(nT) = (1/2πj) ∮_C F(z) z^n dz/z   (6.97)

where C is any simple closed curve separating |z| = R+ from |z| = R-, traced in the counterclockwise direction.

Proof

Multiply Equation 6.96 by z^{n-1} and integrate around C. Then

(1/2πj) ∮_C F(z) z^n dz/z = Σ_{m=-∞}^∞ f(mT) (1/2πj) ∮_C z^{n-m} dz/z   (6.98)

With z = Re^{ju} on C,

(1/2πj) ∮_C z^{n-m} dz/z = (1/2π) ∫_0^{2π} R^{n-m} e^{ju(n-m)} du = 1 for m = n and 0 for m ≠ n

which proves the theorem. When the poles of F(z)z^{n-1}, say at z = 1 and z = a, are enclosed by C for n >= 0, the integral is evaluated as

f(nT) = Res[F(z)z^{n-1}, 1] + Res[F(z)z^{n-1}, a]

Example

Consider

F(z) = (1 - 0.8^2)/((1 - 0.8z)(1 - 0.8z^-1)), 0.8 < |z| < 1/0.8

For n >= 0 the contour C, taken in the ROC, encloses only the pole z = 0.8 of the function F(z)z^{n-1}. Therefore,

f(nT) = Res[F(z)z^{n-1}]_{z=0.8} = [(1 - 0.8^2) z^n/(1 - 0.8z)]_{z=0.8} = 0.8^n, n >= 0

For n < 0, only the pole z = 1/0.8 lies outside C. Hence,

f(nT) = -Res[F(z)z^{n-1}]_{z=1/0.8} = 0.8^-n, n < 0

so that f(nT) = 0.8^{|n|} for all n. We can also solve linear difference equations of the form

Σ_{k=0}^N ak y(n - k) = Σ_{k=0}^L bk f(n - k)   (6.104)

using the Z-transform approach.
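The inversion integral in the last example can be sketched numerically (an illustrative check, not from the handbook): evaluating Equation 6.97 on the unit circle as a Riemann sum recovers f(n) = 0.8^{|n|} for both positive and negative n. The grid size is an arbitrary choice.

```python
import cmath, math

def F(z):
    # F(z) = (1 - 0.8^2) / ((1 - 0.8 z)(1 - 0.8 z^-1)), 0.8 < |z| < 1/0.8
    return (1 - 0.64) / ((1 - 0.8 * z) * (1 - 0.8 / z))

def inverse(n, M=4096):
    # f(n) = (1/2πj) ∮ F(z) z^{n-1} dz over the unit circle, as a Riemann sum
    acc = 0.0 + 0.0j
    for k in range(M):
        z = cmath.exp(1j * 2 * math.pi * k / M)
        acc += F(z) * z ** n
    return (acc / M).real

for n in (-3, -1, 0, 2, 5):
    assert abs(inverse(n) - 0.8 ** abs(n)) < 1e-9
```

The unit circle lies inside the ROC, so the sum converges to the two-sided sequence 0.8^{|n|}.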
6.1.3.2 Analysis of Linear Discrete Systems

Example

To find the solution to y(n) = y(n - 1) + 2y(n - 2) with initial conditions y(0) = 1 and y(1) = 2, we proceed as follows. From the difference equation,

y(0) = y(-1) + 2y(-2) = 1, y(1) = y(0) + 2y(-1) = 2

Hence, y(-1) = 1/2 and y(-2) = 1/4. The Z-transform of the difference equation is given by

Y(z) = Σ_{ℓ=-1}^∞ y(ℓ) z^{-(ℓ+1)} + 2 Σ_{ℓ=-2}^∞ y(ℓ) z^{-(ℓ+2)}
= [y(-1) + z^-1 Y(z)] + 2[y(-2) + y(-1)z^-1 + z^-2 Y(z)]
= 1 + z^-1 + z^-1 Y(z) + 2z^-2 Y(z)

Hence,

Y(z) = (1 + z^-1)/(1 - z^-1 - 2z^-2) = (z^2 + z)/(z^2 - z - 2) = z(z + 1)/((z + 1)(z - 2)) = z/(z - 2)

and

y(n) = Z^-1{Y(z)} = 2^n, n >= 0

which satisfies both the recursion and the initial conditions.

6.1.3.2.1 Transfer Function

From Equation 6.104 we obtain the transfer function by ignoring initial conditions. The result is

H(z) = Y(z)/F(z) = (Σ_{k=0}^L bk z^-k)/(Σ_{k=0}^N ak z^-k) = transfer function   (6.105)

where H(z) is the transform of the impulse response of the discrete system.

Example

The solution of the difference equation y(n) - ay(n - 1) = u(n) with initial condition y(-1) = 2 and |a| < 1 proceeds as follows:

Y(z) - a[z^-1 Y(z) + y(-1)] = 1/(1 - z^-1)

or

Y(z) = 2a/(1 - az^-1) + 1/((1 - az^-1)(1 - z^-1)) = 2a/(1 - az^-1) + [1/(1 - a)]/(1 - z^-1) - [a/(1 - a)]/(1 - az^-1)

Hence, the inverse Z-transform gives

y(n) = 2a·a^n (zero input) + [(1 - a^{n+1})/(1 - a)] u(n) (zero state)
= [1/(1 - a)] u(n) (steady state) + [2 - 1/(1 - a)] a^{n+1} (transient), n >= 0

6.1.3.2.2 Stability

Using the convolution relation between the input and output of a discrete system, we obtain

|y(n)| = |Σ_{k=0}^n h(k) f(n - k)| <= M Σ_{k=0}^∞ |h(k)| < ∞   (6.106)

where M is the maximum value of |f(n)|. The above inequality specifies that a discrete system is stable if, for every finite input, the absolute sum of its impulse response is finite. From the properties of the Z-transform, the ROC of an impulse response satisfying Equation 6.106 includes the unit circle |z| = 1. Hence, all the poles of H(z) of a stable causal system lie inside the unit circle.

The modified Schur–Cohn criterion establishes whether the zeros of the denominator of the rational transfer function H(z) = N(z)/D(z) are inside or outside the unit circle. The first step is to form, from D(z) = d0 z^N + d1 z^{N-1} + ... + dN, the polynomial

Drp(z) = z^N D(z^-1) = dN z^N + d_{N-1} z^{N-1} + ... + d0

This Drp(z) is called the reciprocal polynomial associated with D(z). The roots of Drp(z) are the reciprocals of the roots of D(z), and |Drp(z)| = |D(z)| on the unit circle. Next, we divide Drp(z) by D(z), starting at the highest power, to obtain the quotient a0 = dN/d0 (the ratio of the leading coefficients) and a remainder D1rp(z) of degree N - 1 or less, so that

Drp(z)/D(z) = a0 + D1rp(z)/D(z)

The division is repeated with D1rp(z) and its reciprocal polynomial D1(z), and the sequence a0, a1, ..., a_{N-2} is generated according to the rule

Dkrp(z)/Dk(z) = ak + D(k+1)rp(z)/Dk(z) for k = 0, 1, 2, ..., N - 2

The zeros of D(z) are all inside the unit circle (stable system) if and only if the following three conditions are satisfied:

1. D(1) > 0
2. D(-1) > 0 for N even, D(-1) < 0 for N odd
3. |ak| < 1 for k = 0, 1, ..., N - 2

Check conditions (1) and (2) before proceeding to (3). If they are not satisfied, the system is unstable.

Example

D(z) = z^3 - 0.2z^2 + z - 0.2, Drp(z) = -0.2z^3 + z^2 - 0.2z + 1

a0 = -0.2, with remainder D1rp(z) = 0.96z^2 + 0.96

a1 = (0.96z^2 + 0.96)/(0.96z^2 + 0.96) = 1

Because |a1| = 1, condition (3) is not satisfied and the system is unstable.

The transfer function of a feedback system with forward (open-loop) gain D(z)G(z) and unity feedback gain is given by

H(z) = D(z)G(z)/(1 + D(z)G(z))

Assuming that all the individual systems are causal and have rational transfer functions, the open-loop gain D(z)G(z) can be written as

D(z)G(z) = A(z)/B(z)

where

A(z) = aL z^L + ... + a0, B(z) = z^M + b_{M-1} z^{M-1} + ... + b0, L <= M

Hence, the total transfer function becomes

H(z) = A(z)/(B(z) + A(z))

which indicates that the system will be stable if B(z) + A(z), or equivalently 1 + D(z)G(z), has all its zeros inside the unit circle.

6.1.3.2.3 Causality

A system is causal if h(n) = 0 for n < 0. From the properties of the Z-transform, H(z) is then regular in the ROC and at the point at infinity. For rational functions, the numerator polynomial must be of at most the same degree as the denominator polynomial. The Paley–Wiener theorem provides the necessary and sufficient conditions that a frequency response characteristic H(ω) must satisfy in order for the resulting filter to be causal.
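The Schur–Cohn recursion described above can be sketched in Python (an illustrative implementation under the stated conventions, not code from the handbook; the helper name and tolerance are my own choices). Coefficients are given in descending powers of z, and the routine reproduces the unstable verdict for the worked example.

```python
def schur_cohn_stable(d, tol=1e-12):
    # d: real coefficients of D(z) in descending powers of z, d[0] != 0
    N = len(d) - 1
    D1 = sum(d)                                              # D(1)
    Dm1 = sum(c * (-1) ** (N - k) for k, c in enumerate(d))  # D(-1)
    if D1 <= 0:
        return False                                   # condition 1 fails
    if (N % 2 == 0 and Dm1 <= 0) or (N % 2 == 1 and Dm1 >= 0):
        return False                                   # condition 2 fails
    P = d[::-1]       # Drp(z) = z^N D(1/z)
    Q = list(d)       # D(z)
    for _ in range(N - 1):                             # generate a_0 ... a_{N-2}
        a = P[0] / Q[0]                                # ratio of leading terms
        if abs(a) >= 1 - tol:
            return False                               # condition 3 fails
        R = [p - a * q for p, q in zip(P, Q)][1:]      # remainder: degree drops
        while len(R) > 1 and abs(R[0]) < tol:
            R = R[1:]                                  # strip vanished leading terms
        P, Q = R, R[::-1]                              # next D_krp and D_k
        if len(P) <= 1:
            break
    return True

assert schur_cohn_stable([1, -0.2, 1, -0.2]) is False  # the example: |a1| = 1
assert schur_cohn_stable([1, 0, 0.25]) is True         # z^2 + 0.25: roots ±0.5j
```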
6.1.3.2.4 Paley–Wiener Theorem

If h(n) has finite energy and h(n) = 0 for n < 0, then

∫_{-π}^{π} |ln|H(ω)|| dω < ∞

Conversely, if |H(ω)| is square integrable and the above integral is finite, then we can associate with |H(ω)| a phase response φ(ω) so that the resulting filter with frequency response

H(ω) = |H(ω)| e^{jφ(ω)}

is causal. The relationship between the real and imaginary parts of the transform of an absolutely summable, causal, and real sequence is given by the relation

Hi(ω) = -(1/2π) ∫_{-π}^{π} Hr(λ) cot((ω - λ)/2) dλ

which is known as the discrete Hilbert transform.

Summary of causality:

1. H(ω) cannot be zero except at a finite set of points.
2. |H(ω)| cannot be constant in any finite range of frequencies.
3. The transition from pass band to stop band cannot be infinitely sharp.
4. The real and imaginary parts of H(ω) are not independent; they are related by the discrete Hilbert transform.
5. |H(ω)| and φ(ω) cannot be chosen arbitrarily.

6.1.3.2.5 Frequency Characteristics

With input f(n) = e^{jωn}, the output is

y(n) = Σ_{k=0}^∞ h(k) e^{jω(n-k)} = e^{jωn} Σ_{k=0}^∞ h(k) e^{-jωk} = e^{jωn} H(e^{jω})   (6.107)

where

H(e^{jω}) = H(z)|_{z=e^{jω}} = Hr(e^{jω}) + jHi(e^{jω}) = A(ω)e^{jφ(ω)}   (6.108)

A(ω) = [Hr^2(e^{jω}) + Hi^2(e^{jω})]^{1/2} = amplitude response   (6.109)

φ(ω) = tan^-1[Hi(e^{jω})/Hr(e^{jω})] = phase response   (6.110)

τ(ω) = -dφ(ω)/dω = -Re[z d/dz ln H(z)]_{z=e^{jω}} = group delay characteristic   (6.111)

Because H(e^{jω}) = H(e^{j(ω+2πk)}), the frequency characteristics of discrete systems are periodic with period 2π.

6.1.3.2.6 Z-Transform and Discrete Fourier Transform (DFT)
If x(n) has a finite duration of length N or less, the sequence can be recovered from its N-point DFT, and hence its Z-transform is uniquely determined by its N-point DFT. We find

X(z) = Σ_{n=0}^{N-1} x(n) z^-n = Σ_{n=0}^{N-1} [(1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkn/N}] z^-n
= (1/N) Σ_{k=0}^{N-1} X(k) Σ_{n=0}^{N-1} (e^{j2πk/N} z^-1)^n
= [(1 - z^-N)/N] Σ_{k=0}^{N-1} X(k)/(1 - e^{j2πk/N} z^-1)   (6.112)

Set z = e^{jω} (evaluation on the unit circle) to find

X(e^{jω}) = X(ω) = [(1 - e^{-jωN})/N] Σ_{k=0}^{N-1} X(k)/(1 - e^{-j(ω - 2πk/N)})   (6.113)

X(ω) is the Fourier transform of the finite-duration sequence expressed in terms of its DFT.

6.1.3.3 Digital Filters

6.1.3.3.1 Infinite Impulse Response (IIR) Filters

A discrete, linear, and time-invariant system can be described by a higher-order difference equation of the form

y(n) - Σ_{k=1}^N ak y(n - k) = Σ_{k=0}^M bk x(n - k)   (6.114)

Taking the Z-transform of the above equation and solving for the ratio Y(z)/X(z), we obtain

H(z) = Y(z)/X(z) = (Σ_{k=0}^M bk z^-k)/(1 - Σ_{k=1}^N ak z^-k)   (6.115)

The block-diagram representation of Equation 6.114, in the form of the following pair of equations

v(n) = Σ_{k=0}^M bk x(n - k)   (6.116)

y(n) = Σ_{k=1}^N ak y(n - k) + v(n)   (6.117)

is shown in Figure 6.9. Each appropriate rearrangement of the block diagram represents a different computational algorithm for implementing the same system.

FIGURE 6.9 Direct form I realization of Equation 6.114, built from summing elements, pickoff points, delay elements (z^-1), and product (gain) elements b0, ..., bM and a1, ..., aN.
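Equation 6.112 is an exact interpolation identity, and it can be sketched numerically (an illustrative check, not from the handbook; the sequence and the test point are arbitrary choices):

```python
import cmath, math

x = [1.0, -2.0, 3.0, 0.5]   # finite-duration sequence, N = 4
N = len(x)

def dft(seq):
    return [sum(seq[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

X = dft(x)

def X_direct(z):
    # X(z) summed directly from the samples
    return sum(x[n] * z ** (-n) for n in range(N))

def X_from_dft(z):
    # Equation 6.112: X(z) = (1 - z^-N)/N * sum_k X(k)/(1 - e^{j2πk/N} z^-1)
    return (1 - z ** (-N)) / N * sum(
        X[k] / (1 - cmath.exp(2j * math.pi * k / N) / z) for k in range(N))

z = 1.3 - 0.4j   # any z that is not an N-th root of unity
assert abs(X_direct(z) - X_from_dft(z)) < 1e-9
```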
FIGURE 6.10 Realization of H(z) with the feedback section H2(z) preceding the feedforward section H1(z) (drawn for M = N).

Figure 6.9 can be viewed as an implementation of H(z) through the decomposition

H(z) = H2(z)H1(z) = [1/(1 - Σ_{k=1}^N ak z^-k)][Σ_{k=0}^M bk z^-k]   (6.118)

that is, through the pair of equations

V(z) = H1(z)X(z) = [Σ_{k=0}^M bk z^-k] X(z)   (6.119)

Y(z) = H2(z)V(z) = [1/(1 - Σ_{k=1}^N ak z^-k)] V(z)   (6.120)

If we rearrange Equation 6.118, we can create the following two equations:

W(z) = H2(z)X(z) = [1/(1 - Σ_{k=1}^N ak z^-k)] X(z)   (6.121)

Y(z) = H1(z)W(z) = [Σ_{k=0}^M bk z^-k] W(z)   (6.122)

These last two equations are presented graphically in Figure 6.10 (M = N). In the time domain, Figure 6.10 corresponds to the pair of equations

w(n) = Σ_{k=1}^N ak w(n - k) + x(n)   (6.123)

y(n) = Σ_{k=0}^M bk w(n - k)   (6.124)

Because the two internal delay branches of Figure 6.10 are identical, they can be combined into one branch, yielding Figure 6.11. Figure 6.9 represents the direct form I of the general Nth-order system, and Figure 6.11 is often referred to as the direct form II or canonical direct form implementation.

6.1.3.3.2 Finite Impulse Response (FIR) Filters

For causal FIR systems, the difference equation describing such a system is given by

y(n) = Σ_{k=0}^M bk x(n - k)   (6.125)
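The equivalence of the two structures can be sketched in plain Python (an illustrative implementation of Equations 6.116-6.117 and 6.123-6.124 under the sign convention used above, not code from the handbook; the coefficients reuse the earlier transfer-function example H(z) = (1 + 4z^-1)/(1 - z^-1 + 2z^-2)).

```python
def direct_form_I(b, a, x):
    # Eqs. 6.116-6.117: v(n) = sum b[k] x(n-k); y(n) = sum a[k] y(n-k) + v(n)
    y = []
    for n in range(len(x)):
        v = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        v += sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(v)
    return y

def direct_form_II(b, a, x):
    # Eqs. 6.123-6.124: w(n) = sum a[k] w(n-k) + x(n); y(n) = sum b[k] w(n-k)
    w, y = [], []
    for n in range(len(x)):
        wn = x[n] + sum(a[k] * w[n - k] for k in range(1, len(a)) if n - k >= 0)
        w.append(wn)
        y.append(sum(b[k] * w[n - k] for k in range(len(b)) if n - k >= 0))
    return y

b = [1.0, 4.0]         # feedforward taps: 1 + 4 z^-1
a = [1.0, 1.0, -2.0]   # feedback taps a1 = 1, a2 = -2 (a[0] is a placeholder)
x = [1.0] + [0.0] * 14 # unit impulse

y1 = direct_form_I(b, a, x)
y2 = direct_form_II(b, a, x)
assert all(abs(p - q) < 1e-9 for p, q in zip(y1, y2))
```

Both loops compute the same impulse response, but direct form II needs only one delay chain.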
FIGURE 6.11 Direct form II (canonical) realization of the Nth-order system.

Equation 6.125 is recognized as the discrete convolution of x(n) with the impulse response

h(n) = bn for n = 0, 1, ..., M, and h(n) = 0 otherwise   (6.126)

The direct form I and direct form II structures are shown in Figures 6.12 and 6.13. Because of the chain of delay elements across the top of the diagram, this structure is also referred to as a tapped delay line structure or a transversal filter structure.

FIGURE 6.12 Direct form FIR structure with tap gains h(0), h(1), ..., h(M). FIGURE 6.13 Transposed FIR structure with tap gains h(M), h(M - 1), ..., h(0).

6.1.3.4 Linear, Time-Invariant, Discrete-Time, Dynamical Systems

The mathematical models describing dynamical systems are almost always finite-order difference equations. If we know the initial conditions at t = t0, their behavior can be uniquely determined for t >= t0. To see how to develop a dynamical model, let us consider the example below.

Example

Let a discrete system with input v(n) and output y(n) be described by the difference equation

y(n) + 2y(n - 1) + y(n - 2) = v(n)   (6.127)

If y(n0 - 1) and y(n0 - 2) are the initial conditions, then y(n) can be found recursively from Equation 6.127 for n >= n0. Let us take the pair y(n - 1) and y(n - 2) as the state of the system at time n, and let us call the vector

x(n) = [x1(n); x2(n)] = [y(n - 2); y(n - 1)]   (6.128)

the state vector for the system. From the definition above, we obtain

x1(n + 1) = y(n + 1 - 2) = y(n - 1) = x2(n)   (6.129)

and

x2(n + 1) = y(n)   (6.130)

or

x2(n + 1) = -y(n - 2) - 2y(n - 1) + v(n) = -x1(n) - 2x2(n) + v(n)   (6.131)
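The state-variable construction above can be sketched numerically (an illustrative check, not from the handbook): simulating the state equations must reproduce the scalar recursion of Equation 6.127. The input sequence is an arbitrary test signal and initial conditions are taken as zero.

```python
# State-space form of y(n) + 2y(n-1) + y(n-2) = v(n):
# A = [[0, 1], [-1, -2]], B = [0, 1]^T, C = [-1, -2], D = 1
A = [[0.0, 1.0], [-1.0, -2.0]]
v = [1.0, 0.5, -0.25, 2.0, 0.0, 1.0, 3.0, -1.0]

# scalar recursion, zero initial conditions: hist = [y(n-2), y(n-1)]
y_ref, hist = [], [0.0, 0.0]
for vn in v:
    yn = vn - 2 * hist[1] - hist[0]
    y_ref.append(yn)
    hist = [hist[1], yn]

# state-space simulation: x(n+1) = A x(n) + B v(n), y(n) = C x(n) + D v(n)
x = [0.0, 0.0]
y_ss = []
for vn in v:
    y_ss.append(-x[0] - 2 * x[1] + vn)              # C x + D v
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1] + vn]      # A x + B v

assert all(abs(p - q) < 1e-12 for p, q in zip(y_ref, y_ss))
```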
Equations 6.129 and 6.131 can be written in the form

[x1(n + 1); x2(n + 1)] = [0 1; -1 -2][x1(n); x2(n)] + [0; 1] v(n)   (6.132)

or

x(n + 1) = A x(n) + B v(n)   (6.133)

But Equation 6.130 can be written in the form

y(n) = -x1(n) - 2x2(n) + v(n) = [-1 -2] x(n) + v(n)

or

y(n) = C x(n) + D v(n)   (6.134)

Hence, the system can be described by the vector-matrix difference equation 6.133 and the output equation 6.134 rather than by the second-order difference equation 6.127.

A time-invariant, linear, discrete dynamical system is described by the state equation

x(nT + T) = A x(nT) + B v(nT)   (6.135)

and the output equation is of the form

y(nT) = C x(nT) + D v(nT)   (6.136)

where

x(nT) = N-dimensional state (column) vector
v(nT) = M-dimensional input (column) vector
y(nT) = R-dimensional output (column) vector
A = N x N nonsingular matrix, B = N x M matrix, C = R x N matrix, D = R x M matrix

When the input is identically zero, Equation 6.135 reduces to

x(nT + T) = A x(nT)   (6.137)

so that

x(nT + 2T) = A x(nT + T) = A A x(nT) = A^2 x(nT)

and so on. In general we have

x(nT + kT) = A^k x(nT), k > 0   (6.138)

The state transition matrix from n1T to n2T (n2 > n1) is given by

φ(n2T, n1T) = A^{n2-n1}   (6.139)

This is a function only of the time difference n2T - n1T. Therefore, it is customary to name the matrix

φ(nT) = A^n   (6.140)

the state transition matrix, with the understanding that n = n2 - n1. It follows that the system states at two times, n2T and n1T, are related by

x(n2T) = φ(n2T, n1T) x(n1T)   (6.141)

when the input is zero. From Equation 6.139 we obtain the following relationships:

(a) φ(nT, nT) = I = identity matrix   (6.142)
(b) φ(n2T, n1T) = φ^-1(n1T, n2T)   (6.143)
(c) φ(n3T, n2T)φ(n2T, n1T) = φ(n3T, n1T)   (6.144)

If the input is not identically zero and x(nT) is known, then the progress (later states) of the system can be found recursively from Equation 6.135. Proceeding with the recursion, we obtain

x(nT + 2T) = A x(nT + T) + B v(nT + T) = A A x(nT) + A B v(nT) + B v(nT + T)
= φ(nT + 2T, nT)x(nT) + φ(nT + 2T, nT + T)B v(nT) + B v(nT + T)

In general, for k > 0 we have the solution

x(nT + kT) = φ(nT + kT, nT)x(nT) + Σ_{i=n}^{n+k-1} φ(nT + kT, iT + T)B v(iT)   (6.145)

From Equation 6.141, when the input is zero, we obtain the relation

x(n2T) = φ(n2T - n1T)x(n1T) = A^{n2-n1} x(n1T)   (6.146)

According to Equation 6.145, the solution of the dynamical system when the input is not zero is given by

x(nT + kT) = φ(nT + kT - nT)x(nT) + Σ_{i=n}^{n+k-1} φ[(n + k - i - 1)T]B v(iT)   (6.147)

or

x(nT + kT) = φ(kT)x(nT) + Σ_{i=n}^{n+k-1} φ[(n + k - i - 1)T]B v(iT)   (6.148)
F(z) ¼
f (nT )z
n
The above equation is identical to Equation 6.148 with n ¼ 0. The behavior of the system with zero input depends on the location of the poles of
Because
zx(0) ¼ A X(z) þ B V(z)
A) 1 zx(0) þ (zI
A) 1 B V(z)
(6:151)
The state of the system x(nT) and its output y(nT) can be found for n 0 by taking the inverse Z-transform of Equations 6.150 and 6.151 For a zero input, Equation 6.150 becomes A) 1 zx(0)
X(z) ¼ (zI
(6:152)
so that x(nT ) ¼ Z
1
(zI
A) 1 z x(0)
(6:153)
(6:154)
Comparing Equations 6.153 and 6.154 we observe that w(nT ) ¼ An ¼ Z or equivalently,
1
(zI
A) 1 z
n
F(z) ¼ Z{A } ¼ (zI
n0
1
A) z
(6:155)
(6:156)
The Z-transform provides straightforward method for calculating the state transition matrix. Next combine Equations 6.150 and 6.156 to find X(z) ¼ F(z)x(0) þ F(z)z 1 B V(z)
(6:157)
By applying the convolution theorem and the fact that Z
1
F(z)z
1
¼ w(nT
T )u(nT
T)
(6:158)
the inverse Z-transform of Equation 6.157 is given by
x(kT ) ¼ w(kT )x(0) þ
k 1 X i¼0
w½(k
i
1)T ÞB y(iT)
1
¼
adj(zI det(zI
A) A)
(6:161)
(6:159)
A)
(6:162)
D(z) is known as the characteristic polynomial for A (for the system) and its roots are known as the characteristic values or eigenvalues of A. If all roots are inside the unit circle, the system is stable. If even one root is outside the unit circle, the system is unstable.
Example Consider the system
0 2 x1 (nT ) y(nT ) þ 1 2 x2 (nT ) x1 (nT ) y(nT ) ¼ [0:22 2] þ y(nT ) x2 (nT )
x1 (nT þ T ) 0 ¼ x2 (nT þ T ) 0:22
If we let n1 ¼ 0 and n2 ¼ n, then Equation 6.146 becomes x(nT ) ¼ w(nT )x(0) ¼ An x(0)
A)
D(z) ¼ det(zI
(6:150)
From the output Equation 6.136, we see that Y(z) ¼ C X(z) þ D V(z)
(zI
where adj() denotes the regular adjoint in matrix theory, these poles can only occur at the roots of the polynomial
or X(z) ¼ (zI
(6:160)
n¼0
The elements of F(z) are the transforms of the corresponding elements of f (nT). Taking the Z-transform of both sides of the state equation (6.135), we find zX(z)
A) 1 z
F(z) ¼ (zI
(6:149)
For this system we have 0 0 2 , , B¼ A¼ 1 0:22 2
C ¼ [0:22 2], D ¼ [1]
The characteristic polynomial is ""
D(z) ¼ det(zI ¼ det ¼ z(z
A] ¼ det
"
z
2
0:22
z
2
z
0
0 #
z
0:44 ¼ z 2
2)
2z
#
"
0
2
0:22
2
0:44 ¼ (z
##
2:2) þ (z þ 0:2)
Hence, we obtain (see Equation 6.160)
F(z) ¼
" # z 2 2 z 2:2)(z þ 0:2) 0:22 z
(z 2
6 (z ¼6 4 (z
z(z 2) 2:2)(z þ 0:2) 0:22z 2:2)(z þ 0:2)
(z (z
3 2z 2:2)(z þ 0:2) 7 7 5 z2
2:2)(z þ 0:2)
Because D(z) has a root outside the unit circle at 2.2, the system is unstable. Taking the inverse transform, we find that

φ(nT) = [(1/12)(2.2)^n + (11/12)(-0.2)^n, (5/6)(2.2)^n - (5/6)(-0.2)^n; (11/120)(2.2)^n - (11/120)(-0.2)^n, (11/12)(2.2)^n + (1/12)(-0.2)^n], n >= 0

To check, set n = 0 to find φ(0) = I and n = 1 to find φ(T) = A. Let x(0) = 0 and let the input be the unit impulse v(nT) = δ(nT), so that V(z) = 1. Hence, according to Equation 6.157,

X(z) = Φ(z)z^-1 B V(z) = [2/((z - 2.2)(z + 0.2)); z/((z - 2.2)(z + 0.2))]

The inverse Z-transform gives

x(nT) = [(5/6)((2.2)^{n-1} - (-0.2)^{n-1}); (11/12)(2.2)^{n-1} + (1/12)(-0.2)^{n-1}], n > 0

and the output is given by y(nT) = C x(nT) + D v(nT), which equals 1 at n = 0 and (121/60)(2.2)^{n-1} - (1/60)(-0.2)^{n-1} for n > 0.
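The closed-form transition matrix above can be sketched against direct matrix powers with NumPy (an illustrative check, not from the handbook):

```python
import numpy as np

A = np.array([[0.0, 2.0], [0.22, 2.0]])

def phi(n):
    # closed-form φ(nT) from the example (eigenvalues 2.2 and -0.2)
    p, q = 2.2 ** n, (-0.2) ** n
    return np.array([[p / 12 + 11 * q / 12,        5 * p / 6 - 5 * q / 6],
                     [11 * p / 120 - 11 * q / 120, 11 * p / 12 + q / 12]])

for n in range(8):
    assert np.allclose(np.linalg.matrix_power(A, n), phi(n))
```

In particular phi(0) is the identity and phi(1) equals A, as required by Equations 6.142 and 6.140.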
6.1.3.5 Z-Transform and Random Processes

6.1.3.5.1 Power Spectral Densities

The Z-transform of the autocorrelation function Rxx(τ) = E{x(t + τ)x(t)}, sampled uniformly at times nT, is given by

Sxx(z) = Σ_{n=-∞}^∞ Rxx(nT) z^-n   (6.163)

The sampled power spectral density for x(nT) is defined to be

Sxx(e^{jωT}) = Sxx(z)|_{z=e^{jωT}} = Σ_{n=-∞}^∞ Rxx(nT) e^{-jωnT}   (6.164)

where the Fourier transform of Rxx(τ) is designated by Sxx(ω). However, from the sampling theorem we have

Sxx(e^{jωT}) = (1/T) Σ_{n=-∞}^∞ Sxx(ω - nωs), ωs = 2π/T   (6.165)

Because Sxx(ω) is real, nonnegative, and even, it follows from Equation 6.165 that Sxx(e^{jωT}) is also real, nonnegative, and even. If the envelope of Rxx(τ) decays exponentially for |τ| > 0, then the ROC of Sxx(z) includes the unit circle. If Rxx(τ) has undamped periodic components, the series in Equation 6.164 converges in the distribution sense and contains impulse functions. The average power in x(nT) is

E{x^2(nT)} = Rxx(0) = (1/2πj) ∮_C Sxx(z) dz/z   (6.166)

where C is a simple closed contour lying in the ROC and the integration is taken in the counterclockwise sense. If C is the unit circle, then

Rxx(0) = (1/ωs) ∫_{-ωs/2}^{ωs/2} Sxx(e^{jωT}) dω, ωs = 2π/T   (6.167)

with

Sxx(e^{jωT}) dω/ωs = average power in dω   (6.168)

Sxy(z) is called the cross power spectral density for two jointly wide-sense stationary processes x(t) and y(t). It is defined by the relation

Sxy(z) = Σ_{n=-∞}^∞ Rxy(nT) z^-n   (6.169)

The following symmetry relations hold:

Sxy(z) = Syx(z^-1), Sxx(z) = Sxx(z^-1)   (6.170)

Equivalently, on the unit circle we have

Sxx(e^{jωT}) = Sxx(e^{-jωT})   (6.171)

If Sxx(z) is a rational polynomial, it can be factored in the form

Sxx(z) = N(z)/D(z) = g^2 G(z)G(z^-1)   (6.172)

where

G(z) = Π_{k=1}^L (1 - ak z^-1)/Π_{k=1}^M (1 - bk z^-1)

with g^2 > 0, |ak| < 1, |bk| < 1, and ak and bk real.
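As a small numerical sketch of Equations 6.163 and 6.164 (illustrative only, not from the handbook): for the autocorrelation Rxx(nT) = a^{|n|}, the spectral density on the unit circle has the closed form (1 - a^2)/(1 - 2a cos(ωT) + a^2), which is real and nonnegative. The value a = 0.6 and the test frequencies are arbitrary choices.

```python
import math

a = 0.6  # assumed |a| < 1 so the ROC includes the unit circle

def S_direct(wT, terms=200):
    # truncated sum of a^{|n|} e^{-jwTn}; imaginary parts cancel, so use cosines
    return sum(a ** abs(n) * math.cos(wT * n) for n in range(-terms, terms + 1))

def S_closed(wT):
    return (1 - a ** 2) / (1 - 2 * a * math.cos(wT) + a ** 2)

for wT in (0.0, 0.5, 1.7, 3.0):
    assert abs(S_direct(wT) - S_closed(wT)) < 1e-9
    assert S_closed(wT) >= 0   # Sxx is real and nonnegative
```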
6.1.3.5.2 Linear Discrete-Time Filters

Let Rxx(nT), Ryy(nT), and Rxy(nT) be known, and let two systems have transfer functions H1(z) and H2(z), respectively. The outputs of these filters, when the inputs are x(nT) and y(nT) (see Figure 6.14), are

y(nT) = Σ_{k=-∞}^∞ h1(kT) x(nT - kT)   (6.173)

w(nT) = Σ_{k=-∞}^∞ h2(kT) y(nT - kT)   (6.174)

FIGURE 6.14 The two filters: x(nT) → H1(z) → y(nT) and y(nT) → H2(z) → w(nT).

Let n → n + m in Equation 6.173, multiply by y(nT), and take the ensemble average to find

Ryy(mT) = Σ_{k=-∞}^∞ h1(kT) E{x(mT + nT - kT) y(nT)} = Σ_{k=-∞}^∞ h1(kT) Rxy(mT - kT)   (6.175)

Hence, by taking the Z-transform we obtain

Syy(z) = H1(z)Sxy(z)   (6.176)

Similarly, from Equation 6.174 we obtain

Ryw(mT) = Σ_{k=-∞}^∞ h2(kT) Ryy(mT + kT)   (6.177)

and

Syw(z) = H2(z^-1)Syy(z)   (6.178)

From Equations 6.176 and 6.178, we obtain

Syw(z) = H1(z)H2(z^-1)Sxy(z)   (6.179)

Also, for x(nT) = y(nT) and h1(nT) = h2(nT) = h(nT), Equation 6.179 becomes

Syy(z) = H(z)H(z^-1)Sxx(z)   (6.180)

and

Syy(e^{jωT}) = H(e^{jωT})H(e^{-jωT})Sxx(e^{jωT}) = |H(e^{jωT})|^2 Sxx(e^{jωT})   (6.181)

6.1.3.5.3 Optimum Linear Filtering

Let y(nT) be an observed wide-sense stationary process and x(nT) a desired wide-sense stationary process; y(nT) could be the result of the desired signal x(nT) plus a noise signal. It is desired to find a system with transfer function H(z) such that the error e(nT) = x(nT) - x̂(nT) = x(nT) - Z^-1{Y(z)H(z)} is minimized in the mean-square sense. Referring to Figure 6.15 and to Equation 6.180, we can write

Saa(z) = Syy(z)/[H1(z)H1(z^-1)] = g^2   (6.182)

where a(nT) is taken to be white noise (an uncorrelated process). We, therefore, can write

Raa(mT) = g^2 δ(mT)   (6.183)

The signal a(nT) is known as the innovations process associated with y(nT). From Figure 6.15, we obtain

x̂(nT) = Σ_{k=-∞}^∞ g(kT) a(nT - kT)   (6.184)

FIGURE 6.15 Optimum filter structure: y(nT) → 1/H1(z) → a(nT) → G(z) → x̂(nT).

The mean square error is given by

E{e^2(nT)} = E{[x(nT) - Σ_{k=-∞}^∞ g(kT)a(nT - kT)]^2}
= E{x^2(nT)} - 2E{Σ_{k=-∞}^∞ g(kT)x(nT)a(nT - kT)} + E{[Σ_{k=-∞}^∞ g(kT)a(nT - kT)]^2}
= Rxx(0) - 2 Σ_{k=-∞}^∞ g(kT)Rxa(kT) + g^2 Σ_{k=-∞}^∞ g^2(kT)
= Rxx(0) - (1/g^2) Σ_{k=-∞}^∞ R^2xa(kT) + Σ_{k=-∞}^∞ [g g(kT) - (1/g)Rxa(kT)]^2
6-34
Transforms and Applications Handbook
To minimize the error we must set the quantity in the brackets equal to zero. Hence, g(nT) ¼
1 Rxa (nT) g2
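The unit-circle form of Equation 6.180, Syy = |H|² Sxx, can be verified exactly for a white input, since then Ryy(mT) = σ² Σ_k h(kT) h(kT + mT) in closed form. This is a sketch, not from the handbook; the FIR impulse response h and the input power σ² are assumed example values:

```python
import cmath

h = [1.0, -0.5, 0.25]        # assumed example FIR impulse response
sigma2 = 2.0                 # white-noise input power: Rxx(n) = sigma2 * delta(n)

def Ryy(m):
    # Output autocorrelation for a white input: sigma2 * sum_k h[k] h[k+|m|]
    m = abs(m)
    return sigma2 * sum(h[k] * h[k + m] for k in range(len(h) - m))

def dtft(seq_fn, nmax, w):
    return sum(seq_fn(n) * cmath.exp(-1j * w * n) for n in range(-nmax, nmax + 1))

w = 1.3
H = sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))
Syy = dtft(Ryy, len(h), w)   # left side of Eq. (6.180) evaluated on the unit circle
print(Syy.real, sigma2 * abs(H) ** 2)
```

Both numbers agree to machine precision, confirming Syy(e^{jω}) = |H(e^{jω})|² σ² for this case.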
The Laplace transform of a sampled function

    fs(t) = f(t) combT(t)

converges for Re s > σc, where σc is the abscissa of convergence (6.191). Since z = e^{sT} with s = σ + jω,

    |z| = e^{σT}  { < 1 for σ < 0;  = 1 for σ = 0;  > 1 for σ > 0 }    (6.200)
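A short numerical check of the mapping z = e^{sT} (the particular points and the value of T below are assumed for illustration):

```python
import cmath, math

T = 0.1
radii = {}
for sigma in (-2.0, 0.0, 3.0):            # Re s < 0, = 0, > 0
    s = complex(sigma, 5.0)               # the imaginary part is arbitrary
    radii[sigma] = abs(cmath.exp(s * T))  # |z| for z = e^{sT}
print(radii)
```

The magnitudes come out below, on, and above the unit circle, and |z| = e^{σT} regardless of ω, as Equation 6.200 states.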
Therefore, we have the following correspondence between the s- and z-planes:
1. Points in the left half of the s-plane are mapped inside the unit circle in the z-plane.
2. Points on the jω-axis are mapped onto the unit circle.
3. Points in the right half of the s-plane are mapped outside the unit circle.
4. Lines parallel to the jω-axis are mapped into circles with radius |z| = e^{σT}.
5. Lines parallel to the σ-axis are mapped into rays of the form arg z = ωT radians from z = 0.
6. The origin of the s-plane corresponds to z = 1.
7. The σ-axis corresponds to the positive u = Re z axis.
8. As ω varies between −ωs/2 and ωs/2, arg z = ωT varies between −π and π radians.

Let f(t) and g(t) be causal functions with Laplace transforms F(s) and G(s) that converge absolutely for Re s > σf and Re s > σg, respectively; then

    L{f(t) g(t)} = (1/2πj) ∫_{c−j∞}^{c+j∞} F(p) G(s − p) dp    (6.201)

The contour is parallel to the imaginary axis in the complex p-plane with

    σ = Re s > σf + σg  and  σf < c < σ − σg    (6.202)
With this choice the poles of G(s − p) lie to the right of the integration path. For causal f(t), its sampled form is given by

    fs(t) = f(t) combT(t) = Σ_{n=0}^{∞} f(nT) δ(t − nT)    (6.203)

If

    g(t) = combT(t) = Σ_{n=0}^{∞} δ(t − nT)    (6.204)

then its Laplace transform is

    G(s) = L{g(t)} = Σ_{n=0}^{∞} e^{−nTs} = 1/(1 − e^{−Ts}),  Re s > 0    (6.205)

Because σg = 0, Equation 6.201 becomes

    Fs(s) = (1/2πj) ∫_{c−j∞}^{c+j∞} F(p)/(1 − e^{−(s−p)T}) dp,  σ > σf,  σf < c < σ    (6.206)

[FIGURE 6.16  The integration contour in the complex p-plane: the vertical path C1 along Re p = c (from A to B) is closed by the circular arc C2 of radius R through D and E, which encloses the poles of F(p).]

The distance p in Figure 6.16 is given by

    p = c + R e^{jθ},  π/2 ≤ θ ≤ 3π/2    (6.207)

If the function F(p) is analytic for some |p| greater than a finite number R0 and has a zero at infinity, then in the limit as R → ∞ the integral along the path BDA is identically zero and the integral along the path AEB averages to Fs(s). The contour C1 + C2 encloses all the poles of F(p). Because of these assumptions, F(p) must have a Laurent series expansion of the form

    F(p) = a_{−1}/p + a_{−2}/p² + ··· = a_{−1}/p + Q(p)/p²,  |p| > R0    (6.208)

Q(p) is analytic in this domain and

    |Q(p)| < M < ∞,  |p| > R0    (6.209)

Therefore, from Equation 6.208,

    a_{−1} = lim_{p→∞} p F(p)    (6.210)

From the initial value theorem,

    a_{−1} = f(0+)    (6.211)

Applying Cauchy's residue theorem to Equation 6.206, we obtain

    Fs(s) = Σ_k Res[ F(p)/(1 − e^{pT} e^{−sT}) ]_{p=pk} − lim_{R→∞} (1/2πj) ∫_{C2} F(p)/(1 − e^{pT} e^{−sT}) dp    (6.212)

where {pk} are the poles of F(p) and σ = Re{s} > σf. Introducing Equations 6.208 and 6.211 into the above equation, it can be shown (see Jury, 1973) that

    Fs(s) = Σ_k Res[ F(p)/(1 − e^{pT} e^{−sT}) ]_{p=pk} − f(0+)/2    (6.213)
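The residue formula can be exercised numerically for f(t) = e^{−at} u(t), F(s) = 1/(s + a), whose single pole at p = −a yields the residue 1/(1 − e^{−aT} e^{−sT}). The direct sampled Laplace sum, with the n = 0 sample taken as f(0+)/2, should match. This is an illustrative sketch; a, T, and the evaluation point s are assumed values:

```python
import cmath

a, T = 0.8, 0.2
s = complex(1.0, 0.4)                # evaluation point with Re s > -a
q = cmath.exp(-(s + a) * T)

# Residue side of Eq. (6.213): Res[F(p)/(1 - e^{pT} e^{-sT})] at p = -a,
# minus f(0+)/2 with f(0+) = 1 for f(t) = e^{-at} u(t).
residue_side = 1 / (1 - q) - 0.5

# Direct sampled Laplace transform with the midpoint convention f(0) = f(0+)/2
N = 2000
direct = 0.5 + sum(q ** n for n in range(1, N))
print(abs(residue_side - direct))
```

The difference is at the level of floating-point round-off, illustrating why the sample at t = 0 must carry the weight f(0+)/2.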
By letting z = e^{sT}, the above equation becomes

    F(z) = Fs(s)|_{s=(1/T) ln z} = −f(0+)/2 + Σ_k Res[ F(p)/(1 − e^{pT} z^{−1}) ]_{p=pk},  |z| > e^{σf T}    (6.214)

It is conventional in calculating with the Z-transform of causal signals to assign the value of f(0+) to f(0). With this convention the formula for calculating F(z) from F(s) reduces to

    F(z) = Σ_k Res[ F(p)/(1 − e^{pT} z^{−1}) ]_{p=pk},  |z| > e^{σf T}    (6.215)

Example
The Laplace transform of f(t) = t u(t) is 1/s². The integrand |t e^{−st} e^{−jωt}| < ∞ for σ > 0 implies that the ROC is Re{s} > 0. Because F(p) has a double pole at p = 0 (and f(0+) = 0), Equation 6.214 becomes

    F(z) = Res[ (1/p²) · 1/(1 − e^{pT} z^{−1}) ]_{p=0} = d/dp[ p² · (1/p²) · 1/(1 − e^{pT} z^{−1}) ]_{p=0} = T z^{−1}/(1 − z^{−1})²

Example
The Laplace transform of f(t) = e^{−at} u(t) is 1/(s + a). The ROC is Re s > −a, and from Equation 6.214 we obtain

    F(z) = Res[ 1/((p + a)(1 − e^{pT} z^{−1})) ]_{p=−a} − 1/2 = 1/(1 − e^{−aT} z^{−1}) − 1/2

which corresponds to the time sequence e^{−anT} u(nT) − (1/2) δ(n). If we had proceeded to find the Z-transform from f(nT) = exp(−anT) u(nT), we would have found F(z) = 1/(1 − e^{−aT} z^{−1}). Hence, to make a causal signal f(t) consistent with F(s) and the inversion formula, f(0) should be assigned the value f(0+)/2.

6.1.3.7 Relationship to the Fourier Transform
The sampled signal can be represented by

    fs(t) = Σ_{n=−∞}^{∞} f(nT) δ(t − nT)    (6.216)

with corresponding Laplace and Fourier transforms

    Fs(s) = Σ_{n=−∞}^{∞} f(nT) e^{−snT}    (6.217)

    Fs(ω) = Σ_{n=−∞}^{∞} f(nT) e^{−jωnT}    (6.218)

The inverse transform is

    f(nT) = (1/ωs) ∫_{−ωs/2}^{ωs/2} Fs(ω) e^{jωnT} dω

If we set z = e^{sT} in the definition of the Z-transform, we see that

    Fs(s) = F(z)|_{z=e^{sT}}    (6.219)

If the ROC for F(z) includes the unit circle, |z| = 1, then

    Fs(ω) = F(z)|_{z=e^{jωT}}    (6.220)

Because Fs(s) is periodic with period ωs = 2π/T, we need only consider the strip −ωs/2 < ω ≤ ωs/2, which uniquely determines Fs(s) for all s. The transformation z = exp(sT) maps this strip uniquely onto the complex z-plane, so that F(z) contains all the information in Fs(s) without the redundancy.

Appendix: Tables

TABLE A.6.1  Z-Transform Properties for Positive-Time Sequences

1. Linearity
    Z{c_i f_i(nT)} = c_i F_i(z),  |z| > R_i,  c_i are constants
    Z{Σ_{i=0}^{ℓ} c_i f_i(nT)} = Σ_{i=0}^{ℓ} c_i F_i(z),  |z| > max R_i

2. Shifting property
    Z{f(nT − kT)} = z^{−k} F(z),  if f(−nT) = 0 for n = 1, 2, . . .
    Z{f(nT − kT)} = z^{−k} F(z) + Σ_{n=1}^{k} f(−nT) z^{−(k−n)}
    Z{f(nT + kT)} = z^{k} F(z) − Σ_{n=0}^{k−1} f(nT) z^{k−n}
    Z{f(nT + T)} = z[F(z) − f(0)]
TABLE A.6.1 (continued)  Z-Transform Properties for Positive-Time Sequences

3. Time scaling
    Z{a^{nT} f(nT)} = F(a^{−T} z) = Σ_{n=0}^{∞} f(nT)(a^{−T} z)^{−n},  |z| > a^{T} R

4. Periodic sequence
    Z{f(nT)} = [z^{N}/(z^{N} − 1)] F^{(1)}(z),  |z| > R
    N = number of time units in a period
    R = radius of convergence of F^{(1)}(z)
    F^{(1)}(z) = Z-transform of the first period

5. Multiplication by n and nT
    Z{n f(nT)} = −z dF(z)/dz,  |z| > R
    Z{nT f(nT)} = −zT dF(z)/dz,  |z| > R
    R = radius of convergence of F(z)

6. Convolution
    Z{f(nT)} = F(z), |z| > R1;  Z{h(nT)} = H(z), |z| > R2
    Z{f(nT) * h(nT)} = F(z) H(z),  |z| > max(R1, R2)

7. Initial value
    f(0T) = lim_{z→∞} F(z),  |z| > R,  if F(∞) exists

8. Final value
    lim_{n→∞} f(nT) = lim_{z→1} (z − 1) F(z),  if f(∞T) exists

9. Multiplication by (nT)^k
    Z{n^k T^k f(nT)} = −Tz (d/dz) Z{(nT)^{k−1} f(nT)},  k > 0 and is an integer

10. Complex conjugate signals
    Z{f(nT)} = F(z), |z| > R;  Z{f*(nT)} = F*(z*),  |z| > R

11. Transform of product
    Z{f(nT)} = F(z), |z| > Rf;  Z{h(nT)} = H(z), |z| > Rh
    Z{f(nT) h(nT)} = (1/2πj) ∮_C F(t) H(z/t) dt/t,  |z| > Rf Rh,  Rf < |t| < |z|/Rh
    (integration in the counterclockwise direction)

12. Parseval's theorem
    Z{f(nT)} = F(z), |z| > Rf;  Z{h(nT)} = H(z), |z| > Rh
    Σ_{n=0}^{∞} f(nT) h(nT) = (1/2πj) ∮_C F(z) H(z^{−1}) dz/z,  |z| = 1 > Rf Rh
    (counterclockwise integration)

13. Correlation
    f(nT) ⊗ h(nT) = Σ_{m=0}^{∞} f(mT) h(mT − nT) = (1/2πj) ∮_C F(t) H(1/t) t^{n−1} dt
    Both f(nT) and h(nT) must exist for |z| > 1. The integration is taken in the counterclockwise direction.

14. Transform with parameters
    Z{∂f(nT, a)/∂a} = ∂F(z, a)/∂a
    Z{lim_{a→a0} f(nT, a)} = lim_{a→a0} F(z, a)
    Z{∫_{a0}^{a1} f(nT, a) da} = ∫_{a0}^{a1} F(z, a) da  (finite interval)
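Property 5 (multiplication by n) is easy to spot-check numerically: for f(n) = a^n with F(z) = z/(z − a), the property predicts Z{n a^n} = −z dF/dz = az/(z − a)². The values of a and z below are assumed for illustration:

```python
# Numerical check of Table A.6.1, Property 5: Z{n a^n} = a z / (z - a)^2
a, z = 0.5, 2.0                      # |z| > |a|, well inside the ROC
N = 200                              # truncation; (a/z)^N is negligible
lhs = sum(n * a**n * z**-n for n in range(N))
rhs = a * z / (z - a) ** 2
print(lhs, rhs)
```

The truncated series and the closed form agree to machine precision.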
TABLE A.6.2  Z-Transform Properties for Positive- and Negative-Time Sequences

1. Linearity
    Z_II{Σ_{i=0}^{ℓ} c_i f_i(nT)} = Σ_{i=0}^{ℓ} c_i F_i(z),  max R_{+i} < |z| < min R_{−i}

2. Shifting property
    Z_II{f(nT − kT)} = z^{−k} F(z),  R_+ < |z| < R_−

3. Scaling
    Z_II{f(nT)} = F(z),  R_+ < |z| < R_−
    Z_II{a^{nT} f(nT)} = F(a^{−T} z),  |a^T| R_+ < |z| < |a^T| R_−

4. Time reversal
    Z_II{f(nT)} = F(z),  R_+ < |z| < R_−
    Z_II{f(−nT)} = F(z^{−1}),  1/R_− < |z| < 1/R_+

5. Multiplication by nT
    Z_II{f(nT)} = F(z),  R_+ < |z| < R_−
    Z_II{nT f(nT)} = −zT dF(z)/dz,  R_+ < |z| < R_−

6. Convolution
    Z_II{f1(nT) * f2(nT)} = F1(z) F2(z),  max(R_{+f1}, R_{+f2}) < |z| < min(R_{−f1}, R_{−f2})

7. Correlation
    R_{f1f2}(z) = Z_II{f1(nT) ⊗ f2(nT)} = F1(z) F2(z^{−1}),
    in the intersection of the ROCs of F1(z) and F2(z^{−1}),  max(R_{+f1}, R_{+f2}) < |z| < min(R_{−f1}, R_{−f2})

8. Multiplication by e^{−anT}
    Z_II{f(nT)} = F(z),  R_+ < |z| < R_−
    Z_II{e^{−anT} f(nT)} = F(e^{aT} z),  |e^{−aT}| R_+ < |z| < |e^{−aT}| R_−

9. Frequency translation
    G(ω) = Z_II{e^{jω0 nT} f(nT)}|_{z=e^{jωT}} = F(e^{j(ω−ω0)T}) = F(ω − ω0)
    The ROC of F(z) must include the unit circle.

10. Product
    Z_II{f(nT)} = F(z),  R_{+f} < |z| < R_{−f};  Z_II{h(nT)} = H(z),  R_{+h} < |z| < R_{−h}
    Z_II{f(nT) h(nT)} = G(z) = (1/2πj) ∮_C F(t) H(z/t) dt/t,
    R_{+f} R_{+h} < |z| < R_{−f} R_{−h},  max(R_{+f}, |z|/R_{−h}) < |t| < min(R_{−f}, |z|/R_{+h})
    (counterclockwise integration)

11. Parseval's theorem
    Z_II{f(nT)} = F(z),  R_{+f} < |z| < R_{−f};  Z_II{h(nT)} = H(z),  R_{+h} < |z| < R_{−h}
    Σ_{n=−∞}^{∞} f(nT) h(nT) = (1/2πj) ∮_C F(z) H(z^{−1}) dz/z,  R_{+f} R_{+h} < |z| = 1 < R_{−f} R_{−h},
    max(R_{+f}, 1/R_{−h}) < |z| < min(R_{−f}, 1/R_{+h})
    (counterclockwise integration)

12. Complex conjugate signals
    Z_II{f(nT)} = F(z),  R_{+f} < |z| < R_{−f};  Z_II{f*(nT)} = F*(z*),  R_{+f} < |z| < R_{−f}
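Property 4 (time reversal) holds exactly for any finite two-sided sequence, which makes it convenient to verify directly. The sequence and evaluation point below are assumed for illustration:

```python
# Check of Table A.6.2, Property 4: Z_II{f(-nT)} = F(z^{-1})
f = {-2: 1.0, -1: -2.0, 0: 3.0, 1: 0.5, 2: -1.5}   # arbitrary finite two-sided sequence
z = complex(0.8, 0.3)

F = lambda w: sum(v * w ** -n for n, v in f.items())
lhs = sum(f[-n] * z ** -n for n in f)              # transform of the reversed sequence
rhs = F(1 / z)
print(abs(lhs - rhs))
```

For a finite sequence both sides are finite sums, so the identity holds at every z ≠ 0.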
TABLE A.6.3  Inverse Transforms of the Partial Fractions of F(z)

Partial fraction term → inverse when F(z) converges absolutely for some |z| > |a| → inverse when F(z) converges absolutely for some |z| < |a|:

    z/(z − a)        →  a^k, k ≥ 0                                →  −a^k, k ≤ −1
    z²/(z − a)²      →  (k + 1) a^k, k ≥ 0                         →  −(k + 1) a^k, k ≤ −1
    z³/(z − a)³      →  (1/2)(k + 1)(k + 2) a^k, k ≥ 0             →  −(1/2)(k + 1)(k + 2) a^k, k ≤ −1
    . . .
    z^n/(z − a)^n    →  [1/(n − 1)!](k + 1)(k + 2)···(k + n − 1) a^k, k ≥ 0
                                                                   →  −[1/(n − 1)!](k + 1)(k + 2)···(k + n − 1) a^k, k ≤ −1

TABLE A.6.4  Inverse Transforms of the Partial Fractions of Fi(z)ᵃ

Elementary transform term Fi(z) → (I) inverse when Fi(z) converges for |z| > Rc → (II) inverse when Fi(z) converges for |z| < Rc:

    1.  1/(z − a)                  →  a^{k−1}, k ≥ 1   →  −a^{k−1}, k ≤ 0
    2.  z/(z − a)²                 →  k a^{k−1}, k ≥ 1  →  −k a^{k−1}, k ≤ 0
    3.  z(z + a)/(z − a)³          →  k² a^{k−1}, k ≥ 1 →  −k² a^{k−1}, k ≤ 0
    4.  z(z² + 4az + a²)/(z − a)⁴  →  k³ a^{k−1}, k ≥ 1 →  −k³ a^{k−1}, k ≤ 0

ᵃ The function must be a proper function.
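The partial-fraction entries of Table A.6.3 can be exercised on a concrete rational transform. For F(z) = z²/((z − a)(z − b)) with |z| > max(|a|, |b|), writing F(z)/z in partial fractions and multiplying back by z gives terms of the form z/(z − a), so f(k) = (a^{k+1} − b^{k+1})/(a − b). This worked example is not from the handbook; a and b are assumed values:

```python
# Inverting F(z) = z^2 / ((z - a)(z - b)) via the z/(z - a) <-> a^k entry
a, b = 0.5, -0.25

def f(k):
    return (a ** (k + 1) - b ** (k + 1)) / (a - b)

# Independent check: the power series of F(z) at a point inside the ROC
z = 4.0
F_exact = z * z / ((z - a) * (z - b))
F_series = sum(f(k) * z ** -k for k in range(80))
print(F_exact, F_series)
```

Note f(0) = 1, consistent with the initial-value property, since F(z) → 1 as z → ∞.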
TABLE A.6.5  Z-Transform Pairs

    F(z) = Z[f(n)] = Σ_{n=0}^{∞} f(n) z^{−n},  |z| > R

Number / discrete time-function f(n), n ≥ 0 / Z-transform:

    1.  u(n) = 1 for n ≥ 0, 0 otherwise  →  z/(z − 1)
    2.  e^{−an}                           →  z/(z − e^{−a})
    3.  n                                 →  z/(z − 1)²
    4.  n²                                →  z(z + 1)/(z − 1)³
    5.  n³                                →  z(z² + 4z + 1)/(z − 1)⁴
    6.  n⁴                                →  z(z³ + 11z² + 11z + 1)/(z − 1)⁵
    7.  n⁵                                →  z(z⁴ + 26z³ + 66z² + 26z + 1)/(z − 1)⁶

(continued)
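Any entry of the table can be spot-checked by summing the defining series at a point inside the region of convergence; here pair 4 is used (the value of z is assumed for illustration):

```python
# Spot-check of pair 4: n^2 <-> z(z + 1)/(z - 1)^3, valid for |z| > 1
z = 3.0
series = sum(n * n * z ** -n for n in range(400))
closed = z * (z + 1) / (z - 1) ** 3
print(series, closed)
```

At z = 3 both evaluate to 1.5, as expected from Σ n² x^n = x(1 + x)/(1 − x)³ with x = 1/3.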
TABLE A.6.5 (continued)  Z-Transform Pairs

    8.   n^k                                              →  (−1)^k D^k [z/(z − 1)],  D = z d/dz
    9.   u(n − k)                                         →  1/[z^{k−1}(z − 1)]
    10.  e^{−an} f(n)                                      →  F(e^{a} z)
    11.  n^{(2)} = n(n − 1)                                →  2! z/(z − 1)³
    12.  n^{(3)} = n(n − 1)(n − 2)                         →  3! z/(z − 1)⁴
    13.  n^{(k)} = n(n − 1)(n − 2)···(n − k + 1)           →  k! z/(z − 1)^{k+1}
    14.  n^{[k]} f(n),  n^{[k]} = n(n + 1)(n + 2)···(n + k − 1)
                                                           →  (−1)^k z^k d^k F(z)/dz^k
    15.  n(n − 1)···(n − k + 1) f_{n−k+1}                  →  (−1)^k z F^{(k)}(z),  F^{(k)}(z) = d^k F(z)/dz^k
    16.  (n − 1) f_{n−1}                                   →  −F^{(1)}(z)
    17.  (n − 1)(n − 2)···(n − k) f_{n−k}                  →  (−1)^k F^{(k)}(z)
    18.  n f(n)                                            →  −z F^{(1)}(z)
    19.  n² f(n)                                           →  z² F^{(2)}(z) + z F^{(1)}(z)
    20.  n³ f(n)                                           →  −z³ F^{(3)}(z) − 3z² F^{(2)}(z) − z F^{(1)}(z)
    21.  c^n/n!                                            →  e^{c/z}
    22.  (ln c)^n/n!                                       →  c^{1/z}
    23.  [k!/((k − n)! n!)] c^n a^{k−n},  n ≤ k            →  (az + c)^k / z^k
    24.  [(n + k)!/(n! k!)] c^n                            →  z^{k+1}/(z − c)^{k+1}
    25.  c^n/n!  (n = 1, 3, 5, 7, . . .)                   →  sinh(c/z)
    26.  c^n/n!  (n = 0, 2, 4, 6, . . .)                   →  cosh(c/z)
    27.  sin(an)                                           →  z sin a/(z² − 2z cos a + 1)
    28.  cos(an)                                           →  z(z − cos a)/(z² − 2z cos a + 1)
    29.  sin(an + c)                                       →  [z² sin c + z sin(a − c)]/(z² − 2z cos a + 1)
    30.  cosh(an)                                          →  z(z − cosh a)/(z² − 2z cosh a + 1)
    31.  sinh(an)                                          →  z sinh a/(z² − 2z cosh a + 1)
    32.  1/n,  n > 0                                       →  ln[z/(z − 1)]
    33.  (1 − e^{−an})/n  (f(0) = a)                       →  a + ln[(z − e^{−a})/(z − 1)],  a > 0
    34.  sin(an)/n  (f(0) = a)                             →  a + tan^{−1}[sin a/(z − cos a)],  a > 0
    35.  cos(an)/n,  n > 0                                 →  ln[z/√(z² − 2z cos a + 1)]
    36.  (n + 1)(n + 2)···(n + k − 1)/(k − 1)!             →  [z/(z − 1)]^k,  k = 2, 3, . . .
    37.  Σ_{m=1}^{n} 1/m                                   →  [z/(z − 1)] ln[z/(z − 1)]

(continued)
Z-Transform TABLE A.6.5 (continued)
Z-Transform Pairs
Discrete Time-Function f(n), n 0
Number 38
39
40
41 42 43
44 45 46
47
n 1 X 1 m! m¼0
Z-Transform 1 P f (n)z F (z) ¼ Z ½ f (n) ¼
( 1)(n p)=2 , for n p and n 2n n 2 p ! nþp 2 !
p ¼ even
¼ 0, for n < p or n p ¼ odd 9 8
= < a bn=k , n ¼ mk, (m ¼ 0, 1, 2, . . . ) n=k ; : ¼0 n 6¼ mk
an d n 2 an Pn (x) ¼ n (x 1)n 2 n! dx anTn(x) ¼ an cos(n cos 1 x) 1 r Ln (x) X n ( x) ¼ r r! n! r¼0 [n=2] Hn (x) X ( 1)n k xn 2k ¼ k(n 2k)!2k n! k¼0 m d n m n Pn (x), m ¼ integer a Pn (x) ¼ a (1 x2 )m=2 dx m Lm d Ln (x) n (x) ¼ , m ¼ integer n! dx n! 0 1 1 F (z) G0 (z) z , where F (z) and G(z) Z n F (z) G(z)
Jp(z 1) k
a z þb zk z pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi z 2 2xaz þ a2 z(z ax) z 2 2xaz þ a2 z e x=(z 1) z 1 e
x=z 1=2z 2
(2m)! z mþ1 (1 x2 )m=2 am 2m m! (z 2 2xaz þ a2 )mþ1=2
( 1)m z e (z 1)mþ1 ln
48
(m
49
sin (an) n!
ecos a=z sin
cos (an) n! n P fk gn k
ecos a=z
52 53
k¼0 n P k¼0 n P
kfk gn
k2 fk gn
55
(n þ k)(k)
57
(n
58 59 60 61
"
e
1=z
m 1 X 1 k!z k k¼0
#
sin a z
sin a cos z
F (1) (z)G(z), F (1) (z) ¼
k
k
a þ ( a)n 2a2 an bn a b
56
1)!z
m
F (z)G(z)
k¼0 n
54
x=(z 1)
F (z) G(z)
1 m(m þ 1)(m þ 2) . . . (m þ n)
51
jzj > R
e1=z z 1
are rational polynomials in z of the same order
50
n
n¼0
k)(k)
(n k)(m) a(n k) e m! 1 p sin n n 2 cos a(2n 1) , n>0 2n 1 n g n 1 þ (g 1)2 1 g (1 g)2
dF (z) dz
F (2) (z)G(z) 1 z2 a2 z 2 a2 z (z a)(z b) z k!z k (z 1)kþ1 z k!z k (z 1)kþ1 z1k ema (z ea )mþ1 p 1 þ tan 1 2 z pffiffiffi 1 z þ 2 z cos a þ 1 pffiffiffi ln pffiffiffi 4 z z 2 z cos a þ 1 z (z g)(z 1)2 (continued)
TABLE A.6.5 (continued)
Transforms and Applications Handbook Z-Transform Pairs
Discrete Time-Function f(n), n 0
Number 62
g þ a0 n 1 þ a0 1 g þ nþ 1 g 1 g (g 1)2 an cos pn
64
e
an
cos an
65
e
an
sinh (an þ c)
67 68 69 70 71
74 75 76 77
78 79 80 81
1
(n þ 1)ean
tan
b
1
b , c2 ¼ a0 þ a
a
1
tan
2nea(n þ 1) þ ea(n
2)
(z 1 ð
1)3
ea )kþ1 p 1 F (p)dp þ lim
n!0
z
z
1
c) 2a
f (n) n
F (p)dp
(z
1)(z
1 ea
z(z þ a0 ) g) (z a)2 þ b2
b a
b
1
(n
u ¼ tan
,
z (z g)2 (z z(z 1)k
z1 Ð
f0 ¼ 0 f1 ¼ 0 1 þ a0 (1 g) (1 a)2 þ b2 (g þ a0 )gn þ (g 1) (g a)2 þ b2 1=2 [a2 þ b2 ]n=2 (a0 þ a)2 þ b2 þ 1=2 1=2 , b (a 1)2 þ b2 (a g)2 þ b2 c1 ¼
z(z þ a0 ) g)(z 1)2 z zþa z(z e a cos a) z 2 2ze a cos a þ e 2a z 2 sinh c þ ze a sinh (a z2 2ze a cosh a þ e z (z g) (z a)2 þ b2 (z
f (n) n fnþ2 , nþ1
l ¼ tan
73
gn (a2 þ b2 )n=2 sin (nu þ c) þ 1=2 a)2 þ b2 b (a g)2 þ b2 b u ¼ tan 1 ab 1 c ¼ tan a g ngn 1 3gn 1 n(n 1) 4n 6 3 4 þ 2 3 þ 4 2 (1 g) (1 g) (1 g) (g 1) (g 1) k X (n þ k y)(k) a(n y) y k e ( 1) y k! y¼0
sin (nu þ c þ l)
a
g
1)
z z
2
cos an , n>0 n (n þ k)! fnþk , fn ¼ 0, for 0 n < k n! f (n) , h>0 nþh p nan cos n 2 n 1 þ cos pn na 2
z ln pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi z 2 þ 2z cos a þ 1 dk ( 1)k z 2k k ½F (z) dz 1 Ð z h p (1þh) F (p)dp
p 1 þ cos pn an sin n 4 2
p n 1 þ cos pn a cos n 2 2 Pn (x) n! Pn(m) (x) , m > 0, Pnm ¼ 0, (n þ m)!
a2 z 2 þ a4 2a2 z 2 z 4 a4
pffiffiffiffiffiffiffiffiffiffiffiffiffi 1 exz J0 1 x2 z 1
( 1)n
z
2a2 z 2 2 (z þ a2 )2 2a2 z 2 (z 2 a2 )2
z4
for n < m
n
n¼0
(g
c ¼ c1 þ c2 ,
72
a0 þ 1 (1 g)2
63
66
Z-Transform 1 P f (n)z F (z) ¼ Z ½ f (n) ¼
( 1)m exz Jm 1
pffiffiffiffiffiffiffiffiffiffiffiffiffi 1 x2 z 1
jzj > R
Z-Transform TABLE A.6.5 (continued)
Z-Transform Pairs
Discrete Time-Function f(n), n 0
Number 82
1 , (n þ a)b
83
an
84 85
cn , (n ¼ 1, 2, 3, 4, . . . ) n cn , n ¼ 2, 4, 6, 8, . . . n n2cn
87
n3cn
88
nkcn
(n 2)=4 p X n=2 an cos n 2i þ 1 2 i¼0
89 90 91 92
(n
2)(n 3) . . . (n k þ 1) n a (k 1)! 1)(k 2) . . . (k n þ 1) n!
k(k
94
nan sin bn
97
98 99 100 101
b4 )i
1)(n
nan cos bn
96
(a4
nk f(n), k > 0 and integer
93
95
2 4i
k
nan (n þ 1)(n þ 2) ( a)n (n þ 1)(2n þ 1) an sin an nþ1 an cos (p=2)n sin a(n þ 1) nþ1 1 (2n)! 1 2 ( a)n n 1 p 2 an cos n n 2 2
102
Bn (x) n!
103
: Wn(x) ¼ Chebyshev polynomials of the second kind
104 105
Bn (x) are Bernoulli polynomials
np sin , m ¼ 1, 2, . . . m Qn(x) ¼ sin (n cos
1
x)
ln z
ln(z
jzj > R
ln z
1 2
c)
ln (z 2
c2 )
cz(z þ c) (z c)3 cz(z 2 þ 4cz þ c2 ) (z c)4 dF (z=c) , F (Z) ¼ Z nk 1 dz z2 z4 þ 2a2 z 2 þ b4 d z F 1 (z), F 1 (Z) ¼ Z nk 1 f (n) dz 1 (z a)k
1 k 1þ z (z=a)3 þ z=a cos b 2(z=a)2 2 (z=a)2 2(z=a) cos b þ 1
(z=a)3 sin b (z=a) sin b 2 (z=a)2 2(z=a) cos b þ 1 z(a 2z)
a 2 ln 1 z 2 a z a pffiffiffiffiffiffiffi z
pffiffiffiffiffiffiffi a 2 z=a tan 1 a=z ln 1 þ a z z cos a a sin a tan 1 a z a cos a z sin a z 2 2az cos a þ a2 ln þ z2 2a z z2 þ 2az sin a þ a2 ln 4a z2 2az sin a þ a2 cos h(z
1=2
)
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi z=(z a)
z pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi z 2 a2
ex=z zðe1=z 1Þ z2 2 z 2xz þ 1 z sin p=m 1þz z2 2z cos p=m þ 1 1 z z z2 2xz þ 1
Source: Jury, E.I. Theory and Application of the Z-Transform Method, John Wiley & Sons, Inc., New York, 1964. With permission. It may be noted that fn is the same as f(n).
a
n
n¼0
Fðz 1 , a, bÞ, where F(1, b, a) ¼ z(b, a) ¼ generalized Rieman–Zeta function 2z4 z4 a4
a > 0, Re b > 0
1 þ cos pn p þ cos n 2 2
86
Z-Transform 1 P f (n)z F (z) ¼ Z ½ f (n) ¼
m m
Bibliography

H. Freeman, Discrete-Time Systems, John Wiley & Sons, New York, 1965.
R. A. Gabel and R. A. Roberts, Signals and Linear Systems, John Wiley & Sons, New York, 1980.
E. I. Jury, Theory and Application of the Z-Transform Method, Krieger Publishing Co., Melbourne, FL, 1973.
A. D. Poularikas and S. Seeley, Signals and Systems, reprinted 2nd edn., Krieger Publishing Co., Melbourne, FL, 1994.
S. A. Tretter, Introduction to Discrete-Time Signal Processing, John Wiley & Sons, New York, 1976.
R. Vich, Z-Transform Theory and Applications, D. Reidel Publishing Co., Boston, MA, 1987.
7
Hilbert Transforms

7.1  Introduction ... 7-2
7.2  Basic Definitions ... 7-2
7.3  Analytic Functions Aspect of Hilbert Transformations ... 7-3
7.4  Spectral Description of the Hilbert Transformation: One-Sided Spectrum of the Analytic Signal ... 7-5
     Derivation of Hilbert Transforms Using Hartley Transforms
7.5  Examples of Derivation of Hilbert Transforms ... 7-7
7.6  Definition of the Hilbert Transformation by Using a Distribution ... 7-9
7.7  Hilbert Transforms of Periodic Signals ... 7-10
     First Method . Second Method . Third Method: Cotangent Hilbert Transformations
7.8  Tables Listing Selected Hilbert Pairs and Properties of Hilbert Transformations ... 7-14
7.9  Linearity, Iteration, Autoconvolution, and Energy Equality ... 7-14
     Iteration . Autoconvolution and Energy Equality
7.10 Differentiation of Hilbert Pairs ... 7-20
7.11 Differentiation and Multiplication by t: Hilbert Transforms of Hermite Polynomials and Functions ... 7-21
7.12 Integration of Analytic Signals ... 7-23
7.13 Multiplication of Signals with Nonoverlapping Spectra ... 7-27
7.14 Multiplication of Analytic Signals ... 7-28
7.15 Hilbert Transforms of Bessel Functions of the First Kind ... 7-28
7.16 Instantaneous Amplitude, Complex Phase, and Complex Frequency of Analytic Signals ... 7-32
     Instantaneous Complex Phase and Complex Frequency
7.17 Hilbert Transforms in Modulation Theory ... 7-35
     Concept of the Modulation Function of a Harmonic Carrier . Generalized Single Side-Band Modulations . CSSB: Compatible Single Side-Band Modulation . Spectrum of the CSSB Signal . CSSB Modulation for Angle Detectors
7.18 Hilbert Transforms in the Theory of Linear Systems: Kramers–Kronig Relations ... 7-41
     Causality . Physical Realizability of Transfer Functions . Minimum Phase Property . Amplitude-Phase Relations in DLTI Systems . Minimum Phase Property in DLTI Systems . Kramers–Kronig Relations in Linear Macroscopic Continuous Media . Concept of Signal Delay in Hilbertian Sense
7.19 Hilbert Transforms in the Theory of Sampling ... 7-46
     Band-Pass Filtering of the Low-Pass Sampled Signal . Sampling of Band-Pass Signals
7.20 Definition of Electrical Power in Terms of Hilbert Transforms and Analytic Signals ... 7-49
     Harmonic Waveforms of Voltage and Current . Notion of Complex Power . Generalization of the Notion of Power . Generalization of the Notion of Power for Signals with Finite Average Power
7.21 Discrete Hilbert Transformation ... 7-54
     Properties of the DFT and DHT Illustrated with Examples . Complex Analytic Discrete Sequence . Bilinear Transformation and the Cotangent Form of Hilbert Transformations
7.22 Hilbert Transformers (Filters) ... 7-61
     Phase-Splitter Hilbert Transformers . Analog All-Pass Filters . A Simple Method of Design of Hilbert Phase Splitters . Delay, Phase Distortions, and Equalization . Hilbert Transformers with Tapped Delay-Line Filters . Band-Pass Hilbert Transformers . Generation of Hilbert Transforms Using SSB Filtering . Digital Hilbert Transformers . Methods of Design . FIR Hilbert Transformers . Digital Phase Splitters . IIR Hilbert Transformers . Differentiating Hilbert Transformers
7.23 Multidimensional Hilbert Transformations ... 7-79
     Evenness and Oddness of N-Dimensional Signals . n-D Hilbert Transformations . 2-D Hilbert Transformations . Partial Hilbert Transformations . Spectral Description of n-D Hilbert Transformations . n-D Hilbert Transforms of Separable Functions . Properties of 2-D Hilbert Transformations . Stark's Extension of Bedrosian's Theorem . Appendix (Section 7.23) . Two-Dimensional Hilbert Transformers
7.24 Multidimensional Complex Signals ... 7-88
     Short Historical Review . Definition of the Multidimensional Complex Signal . Conjugate 2-D Complex Signals . Local (or "Instantaneous") Amplitudes, Phases, and Complex Frequencies . Relations between Real and Complex Notation . 2-D Modulation Theory . Appendix: A Method of Labeling Orthants
7.25 Quaternionic 2-D Signals ... 7-94
     Quaternion Numbers and Quaternion-Valued Functions . Quaternionic Spectral Analysis . Hermitian Symmetry of the 2-D Fourier Spectrum
7.26 The Monogenic 2-D Signal ... 7-96
     Spherical Coordinates Representation of the MS
7.27 Wigner Distributions of 2-D Analytic, Quaternionic, and Monogenic Signals ... 7-98
7.28 The Clifford Analytic Signal ... 7-98
7.29 Hilbert Transforms and Analytic Signals in Wavelets ... 7-99
References ... 7-99

Stefan L. Hahn
Warsaw University of Technology
7.1 Introduction

The Hilbert transformations are of widespread interest because they are applied in the theoretical description of many devices and systems and directly implemented in the form of Hilbert analog or digital filters (transformers). Let us quote some important applications of Hilbert transformations:

1. The complex notation of harmonic signals in the form of Euler's equation exp(jωt) = cos(ωt) + j sin(ωt) has been used in electrical engineering since the 1890s and nowadays is commonly applied in the theoretical description of various, not only electrical, systems. This complex notation had been introduced before Hilbert derived his transformations. However, sin(ωt) is the Hilbert transform of cos(ωt), and the complex signal exp(jωt) is a precursor of a wide class of complex signals called analytic signals.
2. The concept of the analytic signal [11] of the form ψ(t) = u(t) + jv(t), where v(t) is the Hilbert transform of u(t), extends the complex notation to a wide class of signals for which the Fourier transform exists. The notion of the analytic signal is widely used in the theory of signals, circuits, and systems. A device called the Hilbert transformer (or filter), which produces at the output the Hilbert transform of the input signal, finds many applications, especially in modern digital signal processing.
3. The real and imaginary parts of the transmittance of a linear and causal two-port system form a pair of Hilbert transforms. This property finds many applications.
4. Recently, two-dimensional (2-D) and multidimensional Hilbert transformations have been applied to define 2-D and multidimensional complex signals, opening the door for applications in multidimensional signal processing [13].

7.2 Basic Definitions

The Hilbert transformation of a 1-D real signal (function) u(t) is defined by the integral

    y(t) = (1/π) P ∫_{−∞}^{∞} u(η)/(t − η) dη = −(1/π) P ∫_{−∞}^{∞} u(η)/(η − t) dη    (7.1)

and the inverse Hilbert transformation is

    u(t) = −(1/π) P ∫_{−∞}^{∞} y(η)/(t − η) dη = (1/π) P ∫_{−∞}^{∞} y(η)/(η − t) dη    (7.2)

where P stands for the principal value of the integral. For convenience, two conventions of the sequence of variables in the denominator are given; both have been used in studies. The left-hand side formula is used in this chapter. The following terminology is applied: the algorithm, that is, the right-hand side of Equation 7.1 or 7.2, is called the "transformation," and the specific result for a given function, that is, the left-hand side of Equation 7.1 or 7.2, is called the "transform." The above definitions of the Hilbert transformations are conveniently written in the convolution notations

    y(t) = u(t) * [1/(πt)]    (7.3)

    u(t) = −y(t) * [1/(πt)]    (7.4)

The integrals in definition (7.1) are improper because the integrand goes to infinity for η = t. Therefore, the integral is defined as the Cauchy principal value (sign P) of the form

    y(t) = lim_{ε→0, A→∞} (1/π) [ ∫_{−A}^{t−ε} u(η)/(t − η) dη + ∫_{t+ε}^{A} u(η)/(t − η) dη ]    (7.5)

Using numerical integration in the sense of the Cauchy principal value with uniform sampling of the integrand, the origin η = 0 should be positioned exactly at the center of the sampling interval. The limit ε → 0 is substituted by a given value of the sampling interval and the limit A → ∞ by a given value of A. The accuracy of the numerical integration increases with smaller sampling intervals and larger values of A.

The Hilbert transformation was originally derived by Hilbert in the frame of the theory of analytic functions. The theory of Hilbert transformations is closely related to the Fourier transformation of signals of the form

    U(ω) = ∫_{−∞}^{∞} u(t) e^{−jωt} dt;  ω = 2πf    (7.6)

The complex function U(ω) is called the Fourier spectrum or Fourier image of the signal u(t), and the variable f = ω/2π the Fourier frequency. The inverse Fourier transformation is

    u(t) = ∫_{−∞}^{∞} U(ω) e^{jωt} df    (7.7)

The pair of transforms (Equations 7.6 and 7.7) may be denoted

    u(t) ←F→ U(ω)    (7.8)

called a Fourier pair. Similarly, the Hilbert transformations (Equations 7.1 and 7.2) may be denoted

    u(t) ←H→ y(t)    (7.9)

forming a Hilbert pair of functions. Contrary to other transformations, the Hilbert transformation does not change the domain. For example, a function of a time variable t (or of any other variable x) is transformed to a function of the same variable, while the Fourier transformation changes a function of time into a function of frequency. The Fourier transform (see also Chapter 2) of the kernel of the Hilbert transformation, that is, Q(t) = 1/(πt) (see Equations 7.3 and 7.4), is

    Q(t) = 1/(πt) ←F→ −j sgn(ω)    (7.10)

with the signum function (distribution) defined as follows:

    sgn(ω) = { 1, ω > 0;  0, ω = 0;  −1, ω < 0 }    (7.11)

The multiplication-to-convolution theorem of the Fourier analysis yields the following spectrum of the Hilbert transform:

    y(t) ←F→ V(ω) = −j sgn(ω) U(ω)    (7.12)

that is, the spectrum of the signal u(t) should be multiplied by the operator −j sgn(ω). This relation enables the calculation of the Hilbert transform using the inverse Fourier transform of the spectrum defined by Equation 7.12, that is, using the following algorithm:

    u(t) →F→ U(ω) → V(ω) = −j sgn(ω) U(ω) →F⁻¹→ y(t)    (7.13)

where the symbols F and F⁻¹ denote the Fourier and inverse Fourier transformations, respectively. In practice, the algorithms of the DFT (Discrete Fourier Transform) or FFT (Fast Fourier Transform) can be applied (Section 7.21).
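The spectral algorithm of Equation 7.13 translates directly into a discrete computation: transform, multiply the positive-frequency bins by −j and the negative-frequency bins by +j, and transform back. The sketch below (an assumed illustration, using a plain O(N²) DFT rather than an FFT for self-containment) recovers sin from cos:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def hilbert(x):
    """Discrete form of the algorithm in Eq. (7.13): V = -j sgn(omega) U."""
    N = len(x)
    X = dft(x)
    for k in range(N):
        if 0 < k < N // 2:
            X[k] *= -1j          # positive frequencies
        elif k > N // 2:
            X[k] *= 1j           # negative frequencies
        else:
            X[k] = 0             # DC and Nyquist bins carry no Hilbert image
    return [v.real for v in idft(X)]

N = 64
u = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]   # cos at bin 3
v = hilbert(u)                                              # should be sin at bin 3
err = max(abs(v[n] - math.sin(2 * math.pi * 3 * n / N)) for n in range(N))
print(err)
```

The maximum deviation from sin is at round-off level, illustrating that sin(ωt) is the Hilbert transform of cos(ωt).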
7.3 Analytic Functions Aspect of Hilbert Transformations

The complex signal whose imaginary part is the Hilbert transform of its real part is called the analytic signal. The simplest example is the harmonic complex signal given by Euler's formula ψ(t) = exp(jωt) = cos(ωt) + j sin(ωt). A more general form of the analytic signal was defined in 1946 by Gabor.11 The term "analytic" is used in the meaning of a complex function ψ(z) of a complex variable z = t + jτ, which is defined as follows:39 Consider a plane with rectangular coordinates (t, τ) (called the C plane or C "space") and take a domain D in this plane. If we define a rule assigning to each point of D a complex number ψ, we have defined a complex function ψ(z), z ∈ D. This function may be regarded as a complex function of two real variables:

ψ(z) = ψ(t, τ) = u(t, τ) + jv(t, τ)   (7.14)
in the domain D ⊂ R² (R² is the Euclidean plane or "space"). The complete derivative of the function ψ(z) has the form

dψ = (∂ψ/∂z) dz + (∂ψ/∂z*) dz*   (7.15)
7-4
Transforms and Applications Handbook
where z* = t − jτ is the complex conjugate and the partial derivatives are

∂ψ/∂z = ½(∂ψ/∂t − j ∂ψ/∂τ);  ∂ψ/∂z* = ½(∂ψ/∂t + j ∂ψ/∂τ)   (7.16)

The function ψ(z) = u(t, τ) + jv(t, τ) is called an analytic function in the domain D if and only if u(t, τ) and v(t, τ) are continuously differentiable. It can be shown that this requirement is satisfied if ∂ψ/∂z* = 0. This complex equation may be replaced by two real equations

∂u/∂t = ∂v/∂τ;  ∂u/∂τ = −∂v/∂t   (7.17)

called the Cauchy–Riemann equations. These equations should be satisfied if the function ψ(z) is analytic in the domain z ∈ D. For example, the complex function

ψ(z) = 1/(a − jz) = u(t, τ) + jv(t, τ)   (7.18)

is analytic because

u(t, τ) = (a + τ)/[(a + τ)² + t²];  v(t, τ) = t/[(a + τ)² + t²]   (7.19)

and the differentiation

∂u(t, τ)/∂t = ∂v(t, τ)/∂τ = −2t(a + τ)/[(a + τ)² + t²]²   (7.20)

verifies the Cauchy–Riemann equations. It was shown by Cauchy that if z0 is a point inside a closed contour C ∈ D such that ψ(z) is analytic inside and on C, then (see also Appendix A)

ψ(z0) = (1/2πj) ∮_C ψ(z)/(z − z0) dz   (7.21)

or, with the change of variable y = z − z0,

ψ(z0) = (1/2πj) ∮_C ψ(y + z0)/y dy   (7.22)

This is a contour integral in the (t, jτ) plane. Let us take the contour C in the form shown in Figure 7.1. It is a sum of Ct + Cε + CR, where Ct is a line parallel to the t axis shifted by ε, Cε is a half-circle of radius ε around z0 = t0 + jε, and CR is a half-circle of radius R.

FIGURE 7.1 The integration path defining the analytic signal (Equation 7.23).

The analytic signal is defined as a complex function of the real variable t given by the formula

ψ(t) = u(t, 0₊) + jv(t, 0₊)   (7.23)

obtained by inserting in Equation 7.14 τ = 0₊, where the subscript + indicates that the path Ct approaches the t axis from the upper side. Equation 7.23 is the result of contour integration along the path of Figure 7.1 using the limit ε → 0, R → ∞. We have

ψ(t0, 0₊) = (1/2πj) lim_{ε→0, R→∞} { P ∫_{Ct} ψ(z)/(z − z0) dz + ∫_{Cε} ψ(z)/(z − z0) dz + ∫_{CR} ψ(z)/(z − z0) dz }   (7.24)

where the symbol P denotes the Cauchy principal value, that is,

P ∫_{−R}^{R} = lim_{ε→0} [ ∫_{−R}^{t0−ε} + ∫_{t0+ε}^{R} ]   (7.25)

For analytic functions the integral along CR vanishes for R → ∞, and in the limit ε → 0 the integral along the small half-circle Cε equals 0.5 ψ(t0, 0₊), since within the very small circle around t0 the function ψ(z) ≈ ψ(t0, 0₊) is a constant and the integral ∫_{Cε} dz/(z − z0) = πj. In consequence, the real and imaginary parts of the analytic signal are given by the integrals (a Hilbert pair)

v(t) = (1/π) P ∫_{−∞}^{∞} u(η, 0)/(t − η) dη   (7.26)

u(t) = −(1/π) P ∫_{−∞}^{∞} v(η, 0)/(t − η) dη   (7.27)

where the subscripts t0 and 0₊ are deleted. The only difference between the above integrals and those defined by Equations 7.1 and 7.2 consists in notation (deleting the zeros in parentheses). Therefore, the real and imaginary parts of the analytic signal

ψ(t) = u(t) + jv(t)   (7.28)
7-5
Hilbert Transforms
form a Hilbert pair of functions. For example, inserting τ = 0 in Equation 7.19 yields the Hilbert pair

u(t) = a/(a² + t²) ⟷H v(t) = t/(a² + t²)   (7.29)

The signal u(t) is called the Cauchy signal and v(t) is its Hilbert transform. A real signal u(t) may be written in terms of analytic signals as

u(t) = [ψ(t) + ψ*(t)]/2   (7.30)

and its Hilbert transform is

v(t) = [ψ(t) − ψ*(t)]/2j   (7.31)

where ψ*(t) = u(t, 0) − jv(t, 0) is the conjugate analytic signal. For this signal the path C is in the lower half of the z plane. Notice that the above formulae present a generalization of Euler's formulae

cos(ωt) = (e^{jωt} + e^{−jωt})/2   (7.32)

sin(ωt) = (e^{jωt} − e^{−jωt})/2j   (7.33)

7.4 Spectral Description of the Hilbert Transformation: One-Sided Spectrum of the Analytic Signal

Any real signal u(t) may be decomposed into a sum

u(t) = ue(t) + uo(t)   (7.34)

where the even term is defined as

ue(t) = [u(t) + u(−t)]/2   (7.35)

and the odd term as

uo(t) = [u(t) − u(−t)]/2   (7.36)

The decomposition is relative, i.e., it changes with a shift of the origin of the coordinate, t′ = t − t0. In general, the Fourier image of u(t) defined by Equation 7.6 is a complex function

U(ω) = URe(ω) + jUIm(ω)   (7.37)

where the real part is given by the cosine transform

URe(ω) = ∫_{−∞}^{∞} ue(t) cos(ωt) dt   (7.38)

and the imaginary part by the sine transform

UIm(ω) = −∫_{−∞}^{∞} uo(t) sin(ωt) dt   (7.39)

The multiplication of the Fourier image by the operator −j sgn(ω) changes the real part of the spectrum into the imaginary one and vice versa (see Equation 7.12). The spectrum of the Hilbert transform is

V(ω) = VRe(ω) + jVIm(ω)   (7.40)

where

VRe(ω) = −j sgn(ω)[jUIm(ω)] = sgn(ω)UIm(ω)   (7.41)

and

VIm(ω) = −sgn(ω)URe(ω)   (7.42)

Therefore, the Hilbert transformation changes any even term into an odd term and any odd term into an even term. The Hilbert transforms of harmonic functions are

H[cos(ωt)] = sin(ωt)   (7.43)

H[sin(ωt)] = −cos(ωt)   (7.44)

H[e^{jωt}] = −j sgn(ω) e^{jωt} = sgn(ω) e^{j(ωt − 0.5π)}   (7.45)

Therefore, the Hilbert transformation changes any cosine term into a sine term and any sine term into a reversed-sign cosine term. Because sin(ωt) = cos(ωt − 0.5π) and −cos(ωt) = sin(ωt − 0.5π), the Hilbert transformation in the time domain corresponds to a phase lag of 0.5π (or 90°) of all harmonic terms of the Fourier image (spectrum). Using the complex notation of the Fourier transform, the multiplication of the spectral function U(ω) by the operator −j sgn(ω) produces a 90° phase lag at all positive frequencies and a 90° phase lead at all negative frequencies. A linear two-port network with the transfer function H(ω) = −j sgn(ω) is called an ideal Hilbert transformer or Hilbert filter. Such a filter cannot be exactly realized because of constraints imposed by causality (details in Section 7.22). The Fourier image of the analytic signal

ψ(t) = u(t) + jv(t)   (7.46)

is one-sided. We have

u(t) ⟷H v(t);  u(t) ⟷F U(ω);  v(t) ⟷F −j sgn(ω)U(ω)   (7.47)

Therefore,

ψ(t) ⟷F U(ω) + j[−j sgn(ω)U(ω)] = [1 + sgn(ω)]U(ω)   (7.48)
where

1 + sgn(ω) = { 2 for ω > 0;  1 for ω = 0;  0 for ω < 0 }   (7.49)

The Fourier image of the analytic signal is doubled at positive frequencies and canceled at negative frequencies with respect to U(ω). For the conjugate signal ψ*(t) = u(t) − jv(t), the Fourier image is doubled at negative frequencies and canceled at positive frequencies.
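The one-sided-spectrum property of Equation 7.48 is easy to check numerically: build ψ by multiplying the DFT of u by 1 + sgn(ω) and inspect the negative-frequency bins. A small sketch (assuming NumPy; the test signal and grid size are our choices):

```python
import numpy as np

# Analytic signal via Equation 7.48: Psi(omega) = [1 + sgn(omega)] U(omega).
N = 4096
t = np.arange(N) * (2 * np.pi / N)
u = np.cos(5 * t) + 0.5 * np.sin(12 * t)     # a real multitone test signal

U = np.fft.fft(u)
sgn = np.sign(np.fft.fftfreq(N))
psi = np.fft.ifft((1.0 + sgn) * U)           # analytic signal directly

Psi = np.fft.fft(psi)
neg = np.fft.fftfreq(N) < 0
residual = np.max(np.abs(Psi[neg])) / np.max(np.abs(Psi))
# residual is at rounding level: the spectrum is one-sided, the real
# part of psi reproduces u, and the imaginary part is H[u]
```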
Examples

1. Consider the analytic signal e^{jω0t} = cos(ω0t) + j sin(ω0t). We have

cos(ω0t) ⟷H sin(ω0t);  ω0 = 2πf0

cos(ω0t) ⟷F 0.5[δ(f + f0) + δ(f − f0)]

sin(ω0t) ⟷F 0.5j[δ(f + f0) − δ(f − f0)]

e^{jω0t} ⟷F δ(f − f0)

The spectra are shown in Figure 7.2.

FIGURE 7.2 The spectra of cos(ω0t), sin(ω0t), and of the analytic signal e^{jω0t}.

2. Consider the analytic signal ψ(t) = 1/(1 + t²) + j t/(1 + t²). We have

1/(1 + t²) ⟷H t/(1 + t²)

1/(1 + t²) ⟷F πe^{−|ω|};  t/(1 + t²) ⟷F −j sgn(ω) πe^{−|ω|}

ψ(t) ⟷F [1 + sgn(ω)] πe^{−|ω|}

The signals and spectra are shown in Figure 7.3.

FIGURE 7.3 The Cauchy pulse, its Hilbert transform, the corresponding spectra, and the spectrum of the analytic signal ψ(t) = 1/(1 − jt).

7.4.1 Derivation of Hilbert Transforms Using Hartley Transforms

Alternatively, the Hilbert transform may be derived using a special Fourier transformation known as the Hartley transformation (see also Chapter 4); it is given by the integral
U_Ha(ω) = ∫_{−∞}^{∞} u(t) cas(ωt) dt;  ω = 2πf   (7.50)

where cas(ωt) = cos(ωt) + sin(ωt), and the inverse Hartley transformation is

u(t) = ∫_{−∞}^{∞} U_Ha(ω) cas(ωt) df   (7.51)

The Hartley spectral function is denoted by the index Ha because in this chapter the index H denotes the Hilbert transform. Consider the Hartley pair

u(t) ⟷Ha U_Ha(ω)   (7.52)

The Hartley spectral function of the Hilbert transform is

V_Ha(ω) = sgn(ω) U_Ha(−ω)   (7.53)

Therefore, the Hilbert transform is given by the inverse Hartley transformation

v(t) = ∫_{−∞}^{∞} sgn(ω) U_Ha(−ω) cas(ωt) df   (7.54)

Example

Consider the one-sided square pulse Πa(t − a) (see Figure 7.4). The Hartley transform of this pulse is

U_Ha(ω) = 2a [ sin(2ωa)/(2ωa) + sin²(ωa)/(ωa) ]

FIGURE 7.4 One-sided square pulse.

The spectrum of the Hilbert transform given by Equation 7.53 is

V_Ha(ω) = 2a sgn(ω) [ sin(2ωa)/(2ωa) − sin²(ωa)/(ωa) ]

The inverse Hartley transformation of this spectrum is

v(t) = ∫_{−∞}^{∞} 2a sgn(ω) [ sin(2ωa)/(2ωa) − sin²(ωa)/(ωa) ] [cos(ωt) + sin(ωt)] df

Notice that the integrals of products of opposite symmetry equal zero, and the integration yields

v(t) = (1/π) ln | t/(t − 2a) |

(see Equation 7.61).

7.5 Examples of Derivation of Hilbert Transforms

1. The harmonic signal u(t) = cos(ωt); ω = 2πf, where f is a constant. The Hilbert transform of the periodic cosine signal using the defining integral (Equation 7.1) is

H[cos(ωt)] = v(t) = (1/π) P ∫_{−∞}^{∞} cos(ωη)/(t − η) dη   (7.55)

The change of variable y = η − t, dy = dη, yields

v(t) = −(1/π) P ∫_{−∞}^{∞} cos[ω(y + t)]/y dy = −(1/π) { cos(ωt) P ∫_{−∞}^{∞} cos(ωy)/y dy − sin(ωt) P ∫_{−∞}^{∞} sin(ωy)/y dy }   (7.56)

The integrals inside the brackets are

P ∫_{−∞}^{∞} cos(ωy)/y dy = 0;  P ∫_{−∞}^{∞} sin(ωy)/y dy = π   (7.57)

Therefore, v(t) = sin(ωt). The same derivation for the function u(t) = sin(ωt) yields v(t) = −cos(ωt).

2. The two-sided symmetric unipolar square pulse

u(t) = Πa(t) = { 1 for |t| < a;  0.5 for |t| = a;  0 for |t| > a }   (7.58)

The Hilbert transform of this pulse is

v(t) = H[Πa(t)] = (1/π) P ∫_{−∞}^{∞} Πa(η)/(t − η) dη = (1/π) lim_{ε→0} [ ∫_{−a}^{t−ε} dη/(t − η) + ∫_{t+ε}^{a} dη/(t − η) ]   (7.59)

The insertion of the limits of integration yields

v(t) = (1/π) ln | (t + a)/(t − a) |   (7.60)

The square pulse and its Hilbert transform are shown in Figure 7.5. Notice that the support of the square pulse is limited to the interval |t| ≤ a, while the support of the Hilbert transform is infinite. This statement applies to all Hilbert transforms of functions of limited support. Of course, the inverse Hilbert transformation of the logarithmic function (Equation 7.60) restores the square pulse of limited support. The change of variable t′ = t − a (time shift of the pulse) yields the Hilbert transform of the one-sided square pulse:

H[Πa(t − a)] = (1/π) ln | t/(t − 2a) |   (7.61)

3. The Hilbert transform of a constant function u(t) = u0 equals zero. This is easily seen from Equation 7.60 in the limit a → ∞. The mean value of a function is given by the integral

ū = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} u(t) dt   (7.62)

Therefore, the Hilbert transform of a function u(t) = u0 + u1(t) is

H[u0 + u1(t)] = H[u1(t)]   (7.63)

that is, in electrical terminology, the Hilbert transformation cancels the DC term u0.

4. Consider the Gaussian pulse and its Fourier image

e^{−πt²} ⟷F e^{−πf²};  ω = 2πf   (7.64)

Because for this signal the Hilbert transform defined by the integral (Equation 7.1) has no closed form, it is convenient to derive the Hilbert transform using the inverse Fourier transformation of the Fourier image (Equation 7.64). This inverse transform has the form

v(t) = ∫_{−∞}^{∞} [−j sgn(ω)] e^{−πf²} e^{jωt} df   (7.65)

Because the integrand is an odd function, this integral has the simplified form

v(t) = 2 ∫_{0}^{∞} e^{−πf²} sin(ωt) df   (7.66)
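Equation 7.66 can be evaluated by direct quadrature and cross-checked against a DFT realization of the −j sgn(ω) filter. A sketch assuming NumPy; the grid sizes, the truncation limits, and the evaluation point t0 = 0.8 are our choices:

```python
import numpy as np

# Hilbert transform of the Gaussian pulse exp(-pi t^2) at t0, two ways.
t0 = 0.8

# (i) quadrature of Equation 7.66: v(t0) = 2 Int_0^inf e^{-pi f^2} sin(2 pi f t0) df
df = 1e-4
f = np.arange(0.0, 8.0, df) + df / 2          # midpoint rule
v_quad = 2.0 * np.sum(np.exp(-np.pi * f**2) * np.sin(2 * np.pi * f * t0)) * df

# (ii) DFT Hilbert filter on a wide, dense grid
N = 2**16
L = 64.0
tt = (np.arange(N) - N // 2) * (L / N)
u = np.exp(-np.pi * tt**2)
V = -1j * np.sign(np.fft.fftfreq(N)) * np.fft.fft(np.fft.ifftshift(u))
v_fft = np.real(np.fft.fftshift(np.fft.ifft(V)))
i0 = np.argmin(np.abs(tt - t0))               # nearest grid point to t0
```

The two estimates agree to a few parts in a thousand; the residual difference comes from the finite grid and the slow 1/(πt) decay of the transform.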
vp(t) = Σ_{k=1}^{∞} Uk sin(kωt + φk)   (7.86)
Again, the constant term is eliminated (sgn(0) = 0).

Example 2

Consider the Fourier series of the periodic square wave given by the formula up(t) = sgn[cos(ωt)] (ω = 2πf a constant):

up(t) = (4/π) [ cos(ωt) − (1/3) cos(3ωt) + (1/5) cos(5ωt) − (1/7) cos(7ωt) + ⋯ ]   (7.87)

The Hilbert transform has the form

vp(t) = (4/π) [ sin(ωt) − (1/3) sin(3ωt) + (1/5) sin(5ωt) − (1/7) sin(7ωt) + ⋯ ]   (7.88)

Figure 7.8a and b shows the signals represented by the Fourier series (Equations 7.87 and 7.88) truncated at the fifth harmonic and at a much higher harmonic term. We observe the Gibbs peaks for the cosine series. Because in the limit the energy of the Gibbs peaks equals zero (a zero function), the Gibbs peaks disappear for the sine series.

FIGURE 7.8 (a) The waveforms given by the truncation of the Fourier series of a square wave at the 5th harmonic number and of the corresponding Hilbert transform. (b) Analogous waveforms given by truncation at a high harmonic number.

7.7.2 Second Method

The derivation of the Hilbert transform of a periodic signal directly in the time domain (or any other domain), using the basic integral definition of the Hilbert transformation given by Equation 7.1, has the form of an infinite sum of integrals over successive periods. Only one of these integrals includes the pole of the kernel. For example, the Hilbert transform of the periodic square wave (see Figure 7.9a) has the form

vp(t) = (1/π) { ⋯ + ∫_{−5b}^{−3b} dη/(t − η) − ∫_{−3b}^{−b} dη/(t − η) + lim_{ε→0} [ ∫_{−b}^{t−ε} dη/(t − η) + ∫_{t+ε}^{b} dη/(t − η) ] − ∫_{b}^{3b} dη/(t − η) + ∫_{3b}^{5b} dη/(t − η) − ⋯ }   (7.89)
FIGURE 7.9
Illustration to the derivation of the Hilbert transform of a square wave.
where b = T/4. The result of this integration has the form

vp(t) = (2/π) ln | Π_{m=1}^{∞} [2m − 1 − (−1)^m x] / Π_{m=1}^{∞} [2m − 1 + (−1)^m x] |   (7.90)

where x = 4t/T and m = 1, 2, 3, …. The first terms of the infinite products are

vp(t) = (2/π) ln | (1 + x)(3 − x)(5 + x)(7 − x)⋯ / [(1 − x)(3 + x)(5 − x)(7 + x)⋯] |   (7.91)

The infinite products in the above formulas are convergent. In the numerical evaluation of Equation 7.91 the products have to be truncated with the same number of terms in the numerator and in the denominator. For the odd square wave up(t) = sgn[sin(ωt)] (see Figure 7.9b), Equation 7.91 changes to

vp(t) = (2/π) ln | y(4 − y²)(16 − y²)(36 − y²)⋯ / [(1 − y²)(9 − y²)(25 − y²)(7 − y)⋯] |;  y = 2t/T   (7.92)

Notice that the denominator has been truncated so that a half-term of (49 − y²) = (7 − y)(7 + y) is deleted. This is needed to obtain a symmetrical truncation. Using a computer, the quotients in Equations 7.91 or 7.92 should be calculated one numerator term divided by one denominator term at a time; otherwise there is a danger of entering the overflow range of the computer (numbers too big). Let us recall that the harmonic functions have representations in the form of infinite products:

sin(z) = z Π_{k=1}^{∞} [ 1 − z²/(k²π²) ]   (7.93)

cos(z) = Π_{k=1}^{∞} [ 1 − 4z²/((2k − 1)²π²) ]   (7.94)
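The sine series (7.88) and the truncated product (7.90) should agree wherever both converge. A small numerical cross-check (assuming NumPy; the evaluation point and truncation lengths are our choices), computing the product term by term as the text recommends:

```python
import numpy as np

# Two representations of H[sgn(cos(omega*t))], both truncated: the sine
# series (7.88) and the infinite product (7.90) with x = 4t/T.
T = 1.0
t = 0.13 * T                       # an arbitrary point away from the poles
x = 4 * t / T
w = 2 * np.pi / T

# truncated sine series (7.88)
k = np.arange(0, 20000)
series = (4 / np.pi) * np.sum((-1.0) ** k / (2 * k + 1)
                              * np.sin((2 * k + 1) * w * t))

# truncated product (7.90), evaluated quotient by quotient
m = np.arange(1, 20000)
terms = (2 * m - 1 - (-1.0) ** m * x) / (2 * m - 1 + (-1.0) ** m * x)
product = (2 / np.pi) * np.log(np.abs(np.prod(terms)))
```

Each factor of `terms` is one numerator term divided by one denominator term, so the running product stays of order one and never overflows.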
7.7.3 Third Method: Cotangent Hilbert Transformations The cotangent form of the Hilbert transformation of periodic functions may be conveniently derived starting with the convolution equation (Equation 7.81). The Hilbert transform of a convolution of two functions equals the convolution of the Hilbert transform of one function (arbitrary choice) with the original of the other function (see Table 7.3). The Hilbert transform of the delta sampling sequence is
δp(t) = Σ_{k=−∞}^{∞} δ(t − kT) ⟷H Qp(t) = (1/T) Σ_{k=−∞}^{∞} cot[(π/T)(t − kT)]   (7.95)

This Hilbert pair is shown in Figure 7.10. The derivation is given at the end of this section.

FIGURE 7.10 The periodic sequence of delta pulses and its Hilbert transform.

The insertion of this Hilbert transform in the convolution equation (Equation 7.75) yields the following form of the Hilbert transform of periodic functions:

vp(t) = uT(t) * (1/T) Σ_{k=−∞}^{∞} cot[(π/T)(t − kT)]   (7.96)

where uT(t) is the generating function defined by Equation 7.80. Contrary to Fourier series, Equation 7.96 has a closed integral form and, for many generating functions, a closed analytic solution. If an analytic solution does not exist, a numerical evaluation of the convolution yields the desired Hilbert transform.

Example

Consider again the square wave sgn[cos(ωt)]. The generating function is

uT(t) = { sgn[cos(ωt)] for |t| ≤ 0.5T;  0 otherwise };  ω = 2π/T   (7.97)

This generating function equals −1 in the intervals (−T/2, −T/4) and (T/4, T/2) and equals 1 in the interval (−T/4, T/4). The insertion of the integration intervals (Cauchy principal value)

−∫_{−T/2}^{−T/4} + lim_{ε→0} [ ∫_{−T/4}^{t−ε} + ∫_{t+ε}^{T/4} ] − ∫_{T/4}^{T/2}   (7.98)

into the integral

∫ (1/T) cot[(π/T)(t − τ)] dτ = −(1/π) ln | sin[(π/T)(t − τ)] |   (7.99)

yields the following form of the Hilbert transform of the square wave:

vp(t) = (2/π) ln | sin[(π/T)(t + T/4)] / sin[(π/T)(t − T/4)] |   (7.100)

Using trigonometric relations, we get the Hilbert pair

sgn[cos(ωt)] ⟷H (2/π) ln | tan(ωt/2 + π/4) |   (7.101)

Similarly, it may be shown that

sgn[sin(ωt)] ⟷H (2/π) ln | tan(ωt/2) |   (7.102)

The Hilbert transform of the periodic delta sequence given by Equation 7.95 may be derived as follows. We start with the Hilbert pair

δ(t) ⟷H 1/(πt)   (7.103)

The support of the Hilbert transform 1/(πt) is infinite. Therefore, in the interval of one period, for example the interval from 0 to T, there is a summation of the successive tails of the functions Qn(t) = 1/[π(t − nT)]; i.e., the generating function of the Hilbert transform of the delta sampling sequence is

QT(t) = Σ_{n=−∞}^{∞} 1/[π(t − nT)] = (1/T) cot(πt/T)   (7.104)

that is, the infinite sum converges to the cotangent function. The repetition of this generating function yields the periodic Hilbert transform of the delta sampling sequence in the form

Qp(t) = Σ_{k=−∞}^{∞} QT(t − kT) = (1/T) Σ_{k=−∞}^{∞} cot[(π/T)(t − kT)]   (7.105)

This sequence may also be written in the convolution form

Qp(t) = (1/T) cot(πt/T) * Σ_{k=−∞}^{∞} δ(t − kT)   (7.106)
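The convolution form (7.96) can be evaluated numerically for the even square wave and compared with the closed form (7.101). A sketch assuming NumPy; the midpoint grid straddles the cotangent pole symmetrically, so the principal value is approximated by plain summation:

```python
import numpy as np

# v_p(t0) = Int over one period of uT(tau) * (1/T) cot(pi (t0 - tau)/T) d tau,
# compared with (2/pi) ln|tan(w t0 / 2 + pi/4)|  (Equation 7.101).
T = 1.0
w = 2 * np.pi / T
N = 200000
h = T / N
tau = -T / 2 + (np.arange(N) + 0.5) * h        # midpoint grid over one period
uT = np.sign(np.cos(w * tau))                  # generating function (7.97)

t0 = 0.1 * T                                   # evaluation point (not a node)
vp = np.sum(uT / np.tan(np.pi * (t0 - tau) / T)) * h / T

closed = (2 / np.pi) * np.log(abs(np.tan(w * t0 / 2 + np.pi / 4)))
```

Because the nodes sit half a step away from t0 on both sides, the huge positive and negative cotangent values near the pole cancel in pairs, which is exactly the principal-value prescription.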
The generating function QT(t) (Equation 7.104) may alternatively be derived using Fourier transforms. The well-known Fourier pair is

Σ_{k=−∞}^{∞} δ(t − kT) ⟷F (1/T) Σ_{k=−∞}^{∞} δ(f − k/T)   (7.107)

The multiplication of this Fourier image by the operator −j sgn(f) yields the Fourier image of the generating function QT(t):

QT(t) ⟷F −(j/T) Σ_{n=−∞}^{∞} sgn(f) δ(f − n/T)   (7.108)

The inverse Fourier transform of this spectrum yields

QT(t) = (j/T) Σ_{n=−∞}^{−1} e^{j2πnt/T} − (j/T) Σ_{n=1}^{∞} e^{j2πnt/T} = (2/T) Σ_{n=1}^{∞} sin(2πnt/T)   (7.109)

The insertion of the relation (in the distribution sense)

Σ_{n=1}^{∞} sin(nx) = (1/2) cot(x/2)   (7.110)

yields QT(t) given by the formula 7.104. Notice that the derivation of the periodic Hilbert transform Qp(t) involves two summations: the first yields the generating function QT(t) and the second gives the periodic repetition of this function (Figure 7.10).

7.8 Tables Listing Selected Hilbert Pairs and Properties of Hilbert Transformations

Table 7.1 presents the Hilbert transforms of some selected aperiodic signals and of the two basic periodic harmonic signals cos(ωt) and sin(ωt). The Hilbert transforms of selected other periodic signals are listed in Table 7.2. The knowledge of the Hilbert transforms listed in these tables and the application of the various properties of the Hilbert transformation listed in Table 7.3 enable an easy derivation of a large variety of Hilbert transforms. Applications of the properties listed in these tables are given in Sections 7.9 through 7.15, which also include selected derivations and applications of the properties of Hilbert transformations.

FIGURE 7.11 A trapezoidal pulse (see Table 7.1, #9).

7.9 Linearity, Iteration, Autoconvolution, and Energy Equality

The Hilbert transformation is linear and, if a complicated waveform can be decomposed into a sum of simpler waveforms, the summation of the Hilbert transforms of each term yields the desired transform. For example, the waveform of Figure 7.12a may be decomposed into a sum of two rectangular pulses. Therefore, the Hilbert transform of this waveform is (see Table 7.1)

v(t) = H[Πa(t) + Πb(t)] = Π̂a(t) + Π̂b(t) = (1/π) ln|(t + a)/(t − a)| + (1/π) ln|(t + b)/(t − b)| = (1/π) ln|(t + b)(t + a)/[(t − b)(t − a)]|   (7.111)

Let us derive in a similar way the Hilbert transform of the "ramp" pulse shown in Figure 7.12b. We decompose this pulse into a sum of a one-sided square pulse and a one-sided inverse triangle. The summation of Equation 7.61 and entry 8 of Table 7.1 yields

H[ramp] = H[Π_{b/2}(t − b/2)] − H[1(t) tri(t)] = (1/π) ln|t/(t − b)| − (1/π)[(1 − t/a) ln|t/(t − a)| + 1]   (7.112)

7.9.1 Iteration
Iteration of the Hilbert transformation two times yields the original signal with the reverse sign, and the iteration four times restores the original signal u(t). In the Fourier frequency domain the n-time iteration is translated to the n-time multiplication by the operator j sgn(v). We have (j sgn(v))2 ¼ 1, (j sgn(v))3 ¼ j sgn(v), and (j sgn(v))4 ¼ 1. In analog or digital signal processing, the Hilbert transform is produced approximately and with a delay. The n-time iteration is implemented using a series connection of Hilbert filters (see Section 7.22) and the time delay increases n-times.
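The iteration property is easy to confirm with a DFT realization of the ideal Hilbert filter (a sketch assuming NumPy; the test signal is our choice):

```python
import numpy as np

# Iteration: H^2[u] = -u and H^4[u] = u, using the -j*sgn(omega) filter.
N = 2048
t = np.arange(N) * (2 * np.pi / N)
u = np.cos(3 * t) + 0.25 * np.sin(11 * t)      # zero-mean test signal

H = lambda x: np.real(np.fft.ifft(-1j * np.sign(np.fft.fftfreq(N)) * np.fft.fft(x)))

v1 = H(u)          # Hilbert transform
v2 = H(v1)         # should equal -u
v3 = H(v2)         # should equal -v1
v4 = H(v3)         # should restore u
```

Note that the signal must have zero mean for the round trip to be exact: the filter response sgn(0) = 0 deletes any DC term, as Equation 7.63 predicts.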
TABLE 7.1 Selected Useful Hilbert Pairs

Number | Name | u(t) | v(t)
1 | Sine | sin(ωt) | −cos(ωt)
2 | Cosine | cos(ωt) | sin(ωt)
3 | Exponential harmonic | e^{jωt} | −j sgn(ω) e^{jωt}
4 | Square pulse | Πa(t)^a | (1/π) ln|(t + a)/(t − a)|
5 | Bipolar pulse | Πa(t) sgn(t) | −(1/π) ln|1 − (a/t)²|
6 | Double triangle | t Πa(t) sgn(t) | −(t/π) ln|1 − (a/t)²|
7 | Triangle | tri(t) = 1 − |t/a| for |t| ≤ a; 0 for |t| > a | (1/π) ln|(t + a)/(t − a)| + (t/(aπ)) ln|(t² − a²)/t²|
8 | One-sided triangle | 1(t) tri(t) | (1/π)[(1 − t/a) ln|t/(t − a)| + 1]
9 | Trapezoid pulse | waveform^b | (1/(π(b − a))) [ b ln|(t + b)/(t − b)| − a ln|(t + a)/(t − a)| + t ln|(t² − b²)/(t² − a²)| ]
10 | Cauchy pulse | a/(a² + t²), a > 0 | t/(a² + t²)
11 | Gaussian pulse | e^{−πt²} | 2 ∫_0^∞ e^{−πf²} sin(ωt) df;  ω = 2πf
12 | Parabolic pulse | [1 − (t/a)²] Πa(t) | (1/π)[1 − (t/a)²] ln|(t + a)/(t − a)| + 2t/(aπ)
13 | Symmetric exponential | e^{−a|t|} | 2 ∫_0^∞ [2a/(a² + ω²)] sin(ωt) df, or a closed form in terms of the exponential integral E(x) = ∫_x^∞ (e^{−τ}/τ) dτ
14 | Antisymmetric exponential | sgn(t) e^{−a|t|} | −2 ∫_0^∞ [2ω/(a² + ω²)] cos(ωt) df
15 | One-sided exponential | 1(t) e^{−at} | 2 ∫_0^∞ [a sin(ωt) − ω cos(ωt)]/(a² + ω²) df
16 | sinc pulse | sin(at)/(at) | sin²(at/2)/(at/2) = [1 − cos(at)]/(at)
17 | Video test pulse | cos²(πt/2a) for |t| ≤ a; 0 for |t| > a | 2 ∫_0^∞ [π² sin(ωa)/(ω(π² − a²ω²))] sin(ωt) df
18 | Constant | a | zero
Hyperbolic functions: approximation by summation of Cauchy signals (see Hilbert pairs 10 and 43)^c

19 | tanh(t) = 2 Σ_{h=0}^∞ t/[(h + 0.5)²π² + t²] | −2π Σ_{h=0}^∞ (h + 0.5)/[(h + 0.5)²π² + t²]
20 | The part of finite energy of tanh(t): tanh(t) − sgn(t) | −2π Σ_{h=0}^∞ (h + 0.5)/[(h + 0.5)²π² + t²] − (2/π) ln|t|
21 | coth(t) = 1/t + 2 Σ_{h=1}^∞ t/[(hπ)² + t²] | −πδ(t) − 2π Σ_{h=1}^∞ h/[(hπ)² + t²]
22 | sech(t) = 2π Σ_{h=0}^∞ (−1)^h (h + 0.5)/[(h + 0.5)²π² + t²] | 2 Σ_{h=0}^∞ (−1)^h t/[(h + 0.5)²π² + t²]
23 | cosech(t) = 1/t + 2 Σ_{h=1}^∞ (−1)^h t/[(hπ)² + t²] | −πδ(t) − 2π Σ_{h=1}^∞ (−1)^h h/[(hπ)² + t²]

Hyperbolic functions by inverse Fourier transformation (ω = 2πf)

24 | tanh(at/2), Re a > 0 | 2 ∫_0^∞ [2π/(a sinh(πω/a)) − 2/ω] cos(ωt) df
25 | coth(at/2) | −2 ∫_0^∞ [(2π/a) coth(πω/a) − 2/ω] cos(ωt) df
26 | sech(at/2) | 2 ∫_0^∞ (2π/a) sech(πω/a) sin(ωt) df
27 | cosech(at/2) | −2 ∫_0^∞ (2π/a) tanh(πω/a) cos(ωt) df
28 | sech²(at/2) | 2 ∫_0^∞ [4πω/(a² sinh(πω/a))] sin(ωt) df
Delta distribution, 1/(πt) distribution, and its derivatives

29 | δ(t) | 1/(πt)
30 | 1/(πt) | −δ(t)
31 | δ^{(1)}(t) | −1/(πt²)
32 | 1/(πt²) | δ^{(1)}(t)
33 | δ^{(2)}(t) | 2/(πt³)
34 | 1/(πt³) | −0.5 δ^{(2)}(t)
35 | δ^{(3)}(t) | −6/(πt⁴)
36 | 1/(πt⁴) | (1/6) δ^{(3)}(t)
37 | u(t)δ(t) | v(t) = (1/πt) u(0)

Equality of convolutions

38 | δ(t) = δ(t) * δ(t) | δ(t) = −(1/πt) * (1/πt)
39 | δ^{(1)}(t) = δ^{(1)}(t) * δ(t) | δ^{(1)}(t) = (1/πt²) * (1/πt)
40 | δ^{(2)}(t) = δ^{(1)}(t) * δ^{(1)}(t) | δ^{(2)}(t) = −(1/πt²) * (1/πt²)
41 | δ^{(3)}(t) = δ^{(1)}(t) * δ^{(2)}(t) | δ^{(3)}(t) = (6/πt⁴) * (1/πt) = (2/πt³) * (1/πt²)

Approximating functions to the above distributions

42 | ∫δ(a,t) dt = (1/π) tan⁻¹(t/a) | ∫Q(a,t) dt = (1/2π) ln(a² + t²)
43 | δ(a,t) = (1/π) a/(a² + t²) | Q(a,t) = (1/π) t/(a² + t²)
44 | δ^{(1)}(a,t) = −(1/π) 2at/(a² + t²)² | Q^{(1)}(a,t) = (1/π)(a² − t²)/(a² + t²)²
45 | δ^{(2)}(a,t) = (1/π)(6at² − 2a³)/(a² + t²)³ | Q^{(2)}(a,t) = (1/π)(2t³ − 6a²t)/(a² + t²)³
46 | δ^{(3)}(a,t) = (1/π)(24a³t − 24at³)/(a² + t²)⁴ | Q^{(3)}(a,t) = (1/π)(−6t⁴ + 36a²t² − 6a⁴)/(a² + t²)⁴

Trigonometric functions

47 | sin(at)/t | [1 − cos(at)]/t
48 | cos(at)/t | −πδ(t) + sin(at)/t
49 | sin(at)/t² | −πaδ(t) + [1 − cos(at)]/t²
50 | cos(at)/t² | πδ^{(1)}(t) − a/t + sin(at)/t²
51 | sin(at)/t³ | πaδ^{(1)}(t) − a²/(2t) + [1 − cos(at)]/t³
52 | cos(at)/t³ | −(π/2)δ^{(2)}(t) + (πa²/2)δ(t) − a/t² + sin(at)/t³

^a See Figure 7.5.
^b See Figure 7.11.
^c Notice the infinite energy of the functions tanh(t), coth(t), and cosech(t).
TABLE 7.2 Selected Useful Hilbert Pairs of Periodic Signals

Number | Name | up(t) | vp(t)
1 | Sampling sequence | Σ_{k=−∞}^∞ δ(t − kT) | (1/T) Σ_{k=−∞}^∞ cot[(π/T)(t − kT)]
2 | Even square wave | sgn[cos(ωt)], ω = 2π/T | (2/π) ln|tan(ωt/2 + π/4)|
3 | Odd square wave | sgn[sin(ωt)], ω = 2π/T | (2/π) ln|tan(ωt/2)|
4 | Squared cosine | cos²(ωt) | 0.5 sin(2ωt)
5 | Squared sine | sin²(ωt) | −0.5 sin(2ωt)
6 | Cube cosine | cos³(ωt) | (3/4) sin(ωt) + (1/4) sin(3ωt)
7 | Cube sine | sin³(ωt) | −(3/4) cos(ωt) + (1/4) cos(3ωt)
8 | | cos⁴(ωt) | (1/2) sin(2ωt) + (1/8) sin(4ωt)
9 | | sin⁴(ωt) | −(1/2) sin(2ωt) + (1/8) sin(4ωt)
10 | | e^{jωt} | −j sgn(ω) e^{jωt}
11 | Product | cos(at + φ) cos(bt + ψ), 0 < a < b; φ, ψ constants | cos(at + φ) sin(bt + ψ)
12 | Fourier series | U0 + Σ_{k=1}^n Uk cos(kωt + φk) | Σ_{k=1}^n Uk sin(kωt + φk)
13 | Any periodic function | uT(t) * Σ_{k=−∞}^∞ δ(t − kT)^a | uT(t) * (1/T) Σ_{k=−∞}^∞ cot[(π/T)(t − kT)]

^a uT(t) is the generating function (see Equation 7.96).
TABLE 7.3 Properties of the Hilbert Transformation

Number | Name | Original or inverse Hilbert transform, u(t) or H⁻¹[v] | Hilbert transform, v(t) or û(t) or H[u]
1 | Notations | u(t) or H⁻¹[v] | v(t) or û(t) or H[u]
2 | Time-domain definitions | u(t) = −(1/π) P ∫_{−∞}^{∞} v(η)/(t − η) dη = −(1/πt) * v(t) | v(t) = (1/π) P ∫_{−∞}^{∞} u(η)/(t − η) dη = (1/πt) * u(t)
3 | Change of symmetry | u(t) = u1e(t) + u2o(t) | v(t) = v1o(t) + v2e(t)
4 | Fourier spectra | u(t) ⟷F U(ω) = Ue(ω) + jUo(ω);  U(ω) = j sgn(ω)V(ω) | v(t) ⟷F V(ω) = Ve(ω) + jVo(ω);  V(ω) = −j sgn(ω)U(ω)
  For even functions the Hilbert transform is odd:  Ue(ω) = 2 ∫_0^∞ u1e(t) cos(ωt) dt;  vo(t) = 2 ∫_0^∞ Ue(ω) sin(ωt) df
  For odd functions the Hilbert transform is even:  Uo(ω) = −2 ∫_0^∞ u2o(t) sin(ωt) dt;  ve(t) = 2 ∫_0^∞ Uo(ω) cos(ωt) df
5 | Linearity | a u1(t) + b u2(t) | a v1(t) + b v2(t)
6 | Scaling and time reversal | u(at), a > 0;  u(−at) | v(at);  −v(−at)
7 | Time shift | u(t − a) | v(t − a)
8 | Scaling and time shift | u(bt − a) | v(bt − a)
9 | Iteration | H[u(t)] = v(t);  H[H[u]] = −u(t);  H[H[H[u]]] = −v(t);  H[H[H[H[u]]]] = u(t) | Fourier images: −j sgn(ω)U(ω);  [−j sgn(ω)]²U(ω);  [−j sgn(ω)]³U(ω);  [−j sgn(ω)]⁴U(ω)
10 | Time derivatives | u̇(t) = −(1/πt) * v̇(t), or u̇(t) = −[d/dt (1/πt)] * v(t) | First option: v̇(t) = (1/πt) * u̇(t);  second option: v̇(t) = [d/dt (1/πt)] * u(t)
11 | Convolution | u1(t) * u2(t) = −v1(t) * v2(t) | u1(t) * v2(t) = v1(t) * u2(t)
12 | Autoconvolution equality | ∫ u(τ)u(t − τ) dτ = −∫ v(τ)v(t − τ) dτ; for t = 0, energy equality
13 | Multiplication by t | t u(t) | t v(t) − (1/π) ∫_{−∞}^{∞} u(t) dt
14 | Multiplication of signals with nonoverlapping spectra | u1(t) (low-pass signal), u2(t) (high-pass signal); u1(t)u2(t) | u1(t) v2(t)
15 | Analytic signal | ψ(t) = u(t) + jH[u(t)] | H[ψ(t)] = −jψ(t)
16 | Product of analytic signals | ψ(t) = ψ1(t)ψ2(t) | H[ψ(t)] = ψ1(t)H[ψ2(t)] = H[ψ1(t)]ψ2(t)
17 | Nonlinear transformations | u(x) | v(x)
17a | x = c/(bt + a) | u1(t) = u(c/(bt + a)) | v1(t) = v(c/(bt + a)) − (1/π) P ∫_{−∞}^{∞} u(τ)/τ dτ
17b | x = a + b/t | u1(t) = u(a + b/t) | v1(t) = −v(a + b/t) + v(a)
  Notice that the nonlinear transformation may change a signal u(t) of finite energy into a signal u1(t) of infinite energy. P is the Cauchy principal value.
18 | Asymptotic value as t → ∞ for even functions of finite support: ue(t) = ue(−t) | lim_{t→∞} |vo(t)| = (1/π|t|) ∫_S ue(t) dt

Note: e, even; o, odd. ^a S is the support of ue(t).
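Property 14 (multiplication of signals with nonoverlapping spectra, often called the Bedrosian theorem) can be verified numerically. A sketch assuming NumPy; the tones and grid are our choices:

```python
import numpy as np

# Bedrosian-type check: for low-pass u1 and high-pass u2 with
# nonoverlapping spectra, H[u1*u2] = u1 * H[u2].
N = 4096
t = np.arange(N) * (2 * np.pi / N)
u1 = 1.0 + 0.5 * np.cos(3 * t)      # low-pass envelope (|k| <= 3)
u2 = np.cos(40 * t)                 # high-pass carrier (k = 40)

H = lambda x: np.real(np.fft.ifft(-1j * np.sign(np.fft.fftfreq(N)) * np.fft.fft(x)))

left = H(u1 * u2)                   # Hilbert transform of the product
right = u1 * H(u2)                  # envelope times Hilbert of the carrier
```

The product spectrum occupies bins 37, 40, and 43, all on the carrier side of zero, so the equality holds to rounding error; it fails when the spectra overlap.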
7.9.2 Autoconvolution and Energy Equality
The energy of a real signal u(t) ⟷F U(ω) is given by the integrals

Eu = ∫_{−∞}^{∞} u²(t) dt = ∫_{−∞}^{∞} |U(ω)|² df;  ω = 2πf   (7.113)

The above equality of the energy defined in the time domain and in the Fourier frequency domain is called Parseval's theorem. The squared magnitude of the Fourier image of the Hilbert transform v(t) = H[u(t)] ⟷F V(ω) = −j sgn(ω)U(ω) is

|V(ω)|² = |−j sgn(ω) U(ω)|² = |U(ω)|²   (7.114)

that is, the energy of the Hilbert transform is given by the integrals

Ev = ∫_{−∞}^{∞} v²(t) dt = ∫_{−∞}^{∞} |U(ω)|² df   (7.115)

Therefore, the energies Eu and Ev are equal. This property of a pair of Hilbert transforms may be used to check algorithms for the numerical evaluation of Hilbert transforms. A large discrepancy ΔE = Ev − Eu indicates a fault in the program; a small discrepancy may be used as a measure of the accuracy. Notice that the Hilbert transformation cancels the mean value of the signal, so the energy (or the power) of this term is rejected. The signals forming a Hilbert pair are orthogonal; that is, the mutual energy defined by the integral

∫_{−∞}^{∞} u(t) v(t) dt   (7.116)

FIGURE 7.12 (a) A pulse given by the summation of two square pulses Πa(t) + Πb(t). (b) The "ramp" pulse and its decomposition into two pulses.
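The energy equality and the orthogonality (7.116) can be checked for the Cauchy pair of Equation 7.29. A sketch assuming NumPy; the finite grid slightly truncates the slowly decaying tails of v, which produces exactly the kind of small discrepancy the text describes:

```python
import numpy as np

# Energy equality E_u = E_v and mutual energy = 0 for the Cauchy pair
# u = a/(a^2+t^2), v = t/(a^2+t^2); both energies equal pi/(2a).
a = 1.0
h = 1e-3
t = np.arange(-500.0, 500.0, h)       # wide grid: v decays only like 1/t
u = a / (a**2 + t**2)                 # Cauchy pulse
v = t / (a**2 + t**2)                 # its Hilbert transform

Eu = np.sum(u**2) * h                 # ~ pi/2
Ev = np.sum(v**2) * h                 # ~ pi/2 minus the truncated tails
cross = np.sum(u * v) * h             # mutual energy, Equation 7.116
```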
equals zero. The autoconvolution of the signal u(t) is defined by the integral

ruu(t) = u(t) * u(t) = ∫_{−∞}^{∞} u(τ)u(t − τ) dτ   (7.117)

The autoconvolution equality theorem for a Hilbert pair of signals has the form

ruu(t) = −rvv(t)   (7.118)

that is, the autoconvolutions of u(t) and v(t) have the same waveform and differ only by sign.

Proof. Let us apply the convolution-to-multiplication theorem of Fourier analysis to both sides of the equality (Equation 7.118). We get the Fourier pairs

ruu(t) = u(t) * u(t) ⟷F U²(ω)   (7.119)

rvv(t) = v(t) * v(t) ⟷F [−j sgn(ω)U(ω)]² = −U²(ω)   (7.120)

We have shown that the functions ruu(t) and −rvv(t) have the same waveforms because they have equal Fourier transforms.

Examples

1. It is really amazing to observe the result of calculation of the autoconvolutions of some Hilbert pairs. Consider the Hilbert pair δ(t) ⟷H 1/(πt). Because the autoconvolution of the delta pulse is δ(t) = δ(t) * δ(t) (see Section 7.6), the autoconvolution equality yields the surprising result

δ(t) = −(1/πt) * (1/πt)   (7.121)

that is, the autoconvolution of the function (distribution) 1/(πt) of infinite support yields the delta pulse of a point support. Figure 7.13 shows the result of a numerical approximate calculation of the autoconvolution (Equation 7.121).

FIGURE 7.13 The discrete delta pulse obtained by numerical computing of the autoconvolution −(1/πt) * (1/πt) (sample at t = 0: 50.64; floor samples ≈ 5 × 10⁻³).

2. Consider the square pulse and its Hilbert transform

Πa(t) ⟷H (1/π) ln|(t + a)/(t − a)|   (7.122)
The waveforms are shown in Figure 7.5. The autoconvolution of the square pulse is a tri(t) (triangle) pulse of doubled support (Figure 7.14a). Again, the autoconvolution of the logarithmic function of infinite support defined by Equation 7.122, which has infinite peaks at the points |t| = a, yields the triangle pulse of finite support. Indeed, we have

tri(t) = −(1/π²) { ln|(t + a)/(t − a)| * ln|(t + a)/(t − a)| }   (7.123)

Figure 7.14b shows the result of a numerical evaluation of the above autoconvolution.

FIGURE 7.14 (a) An example of the autoconvolution equality: (left) the square pulse and its autoconvolution; (right) the Hilbert transform of the square pulse and its autoconvolution. (b) The result of numerical computing of the autoconvolution of the Hilbert transform X̂(t) = (1/2π) ln|(t + 1)/(t − 1)|.

7.10 Differentiation of Hilbert Pairs

Consider a Hilbert pair u(t) ⟷H v(t). Differentiation of both sides gives a new Hilbert pair:

u̇(t) ⟷H v̇(t)   (7.124)

Therefore, differentiation is a useful tool for creating new Hilbert pairs. Obviously, the operation can be repeated to get the next Hilbert pairs:

dⁿu/dtⁿ ⟷H dⁿv/dtⁿ   (7.125)

Because the signal ψ(t) = u(t) + jv(t) is an analytic function, in principle all of its derivatives exist.39 Consider the convolution notation of the Hilbert transformations:

u(t) = −(1/πt) * v(t) ⟷H v(t) = (1/πt) * u(t)   (7.126)

The derivative of a convolution has two options: the convolution of the derivative of the first term with the second term, or the convolution of the first term with the derivative of the second term; i.e., the first option has the form

u̇(t) = −(1/πt) * v̇(t) ⟷H v̇(t) = (1/πt) * u̇(t)   (7.127)

and the second option is

u̇(t) = −[d/dt (1/πt)] * v(t) = [1/(πt²)] * v(t) ⟷H v̇(t) = [d/dt (1/πt)] * u(t) = −[1/(πt²)] * u(t)   (7.128)

Proof. The Hilbert integrals (Equations 7.1 and 7.2) are

v(t) = (1/π) P ∫_{−∞}^{∞} u(η)/(t − η) dη;  u(t) = −(1/π) P ∫_{−∞}^{∞} v(η)/(t − η) dη   (7.129)

The differentiation of these integrals with respect to t yields

v̇(t) = −(1/π) P ∫_{−∞}^{∞} u(η)/(t − η)² dη;  u̇(t) = (1/π) P ∫_{−∞}^{∞} v(η)/(t − η)² dη   (7.130)

These integrals have in the convolution notation the form of Equation 7.128. The change of variable y = η − t yields the following form of the Hilbert integrals:

v(t) = −(1/π) P ∫_{−∞}^{∞} u(y + t)/y dy;  u(t) = (1/π) P ∫_{−∞}^{∞} v(y + t)/y dy   (7.131)

and the differentiation yields

v̇(t) = −(1/π) P ∫_{−∞}^{∞} u̇(y + t)/y dy;  u̇(t) = (1/π) P ∫_{−∞}^{∞} v̇(y + t)/y dy   (7.132)

These integrals have in the convolution notation the form of Equation 7.127. Very illustrative is the same proof in terms of the frequency-domain representation:

v(t) = (1/πt) * u(t) ⟷F −j sgn(ω) U(ω)   (7.133)

Time-domain differentiation corresponds to the multiplication of the Fourier image by the differentiation operator jω. Therefore,

v̇(t) ⟷F jω [−j sgn(ω) U(ω)]   (7.134)

However, the operator jω may be arbitrarily assigned to the first or to the second factor of the product in parentheses. In the time domain, this arbitrary choice corresponds to the two options of the convolution.

Example 1

Consider the Hilbert pair

δ(t) ⟷H 1/(πt)   (7.135)

The derivatives are

δ̇(t) ⟷H d/dt (1/πt) = −1/(πt²)   (7.136)

The derivative δ̇(t) and, hence, the function d/dt (1/πt) are defined in the distribution sense (notation FP[−1/(πt²)], where FP denotes "finite part of").35 The energy of these signals is infinite.
Example 2

Consider the Hilbert pair

    u(t) = 1/(1 + t²) ⟷ v(t) = t/(1 + t²)    (7.137)

Let us differentiate both sides of this equation n times. In this way we find an infinite series of Hilbert transform pairs, as shown in Table 7.4. The derivations are simpler using the differentiation of the analytic signal

    ψ(t) = u(t) + jv(t) = 1/(1 − jt)    (7.138)

(see Table 7.1, entries 42–46) and determining the real and imaginary parts of the derivatives in the form of Hilbert pairs. The waveforms of the first four terms of the Hilbert pairs of Table 7.4 are shown in Figure 7.15a and b. The energy was normalized to unity by dividing the amplitudes by the square root of the energy. The Cauchy pulse may serve as a function approximating the delta pulse (see Equation 7.76). Therefore, the derivatives of the Cauchy–Hilbert pair may serve as the approximating functions defining the derivatives of the complex delta distribution. For example,

    δ̇(t) = lim_{a→0} −(1/π) 2at/(a² + t²)² ⟷ Q̇(t) = lim_{a→0} (1/π) (a² − t²)/(a² + t²)²    (7.139)

7.11 Differentiation and Multiplication by t: Hilbert Transforms of Hermite Polynomials and Functions

Consider the Gaussian Fourier pair:

    e^{−t²} ⟷ π^{0.5} e^{−π²f²}    (7.140)

The successive differentiation of the Gaussian pulse exp(−t²) generates the nth-order Hermite polynomial (see Table 7.5).
TABLE 7.4 Hilbert Transforms of the Derivatives of the Cauchy Signal u(t) = 1/(1 + t²)

n | Signal u⁽ⁿ⁾                  | Hilbert Transform v⁽ⁿ⁾         | Analytic Signal ψ⁽ⁿ⁾ | Energy = ∫₀^∞ |ψ⁽ⁿ⁾|² dt
0 | 1/(1 + t²)                   | t/(1 + t²)                     | 1/(1 − jt)           | π/2
1 | −2t/(1 + t²)²                | (1 − t²)/(1 + t²)²             | j/(1 − jt)²          | π/4
2 | 2(3t² − 1)/(1 + t²)³         | 2(t³ − 3t)/(1 + t²)³           | −2/(1 − jt)³         | 3π/4
3 | −6(4t³ − 4t)/(1 + t²)⁴       | −6(t⁴ − 6t² + 1)/(1 + t²)⁴     | −6j/(1 − jt)⁴        | 45π/8
4 | 24(5t⁴ − 10t² + 1)/(1 + t²)⁵ | 24(t⁵ − 10t³ + 5t)/(1 + t²)⁵   | 24/(1 − jt)⁵         | 315π/4
n | Re ψ⁽ⁿ⁾                      | Im ψ⁽ⁿ⁾                        | jⁿ n!/(1 − jt)ⁿ⁺¹    | see note

Note: Energy = ∫₀^∞ (n!)²/(1 + t²)ⁿ⁺¹ dt = (n!)² [1·3·5⋯(2n − 1)]/[2·4·6⋯(2n)] · π/2.
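A row of Table 7.4 can be checked numerically. The sketch below (an FFT-based discrete Hilbert transformer; the window and step are arbitrary choices, not from the text) applies H to u⁽¹⁾(t) = −2t/(1 + t²)² and compares the result with the tabulated v⁽¹⁾(t) = (1 − t²)/(1 + t²)² on a central window.

```python
import numpy as np

def fft_hilbert(x, dt):
    # Discrete Hilbert transform via the FFT; assumes the signal decays
    # to (near) zero within the sampled window.
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), d=dt)
    return np.real(np.fft.ifft(-1j * np.sign(f) * X))

# n = 1 row of Table 7.4: H[u'] should reproduce v' = d/dt H[u]
# for the Cauchy pair u = 1/(1+t^2), v = t/(1+t^2).
N, dt = 2 ** 14, 0.02
t = (np.arange(N) - N // 2) * dt
u_dot = -2 * t / (1 + t ** 2) ** 2
v_dot = (1 - t ** 2) / (1 + t ** 2) ** 2
err = np.max(np.abs(fft_hilbert(u_dot, dt) - v_dot)[np.abs(t) < 10])
print(err)
```

The small residual reflects the truncation of the slowly decaying tails, not the identity itself.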
FIGURE 7.15 (a) The waveforms of the Cauchy pulse u(t) = 1/(1 + t²) and of its derivatives. (b) The waveforms of the corresponding Hilbert transforms.
TABLE 7.5 Weighted Hermite Polynomials and Their Hilbert Transforms

n | Hₙu                     | Hilbert Transform H(Hₙu)                        | Energy E
0 | (1)u                    | 2√π ∫₀^∞ e^{−π²f²} sin(ωt) df                   | √(π/2)
1 | (2t)u                   | −2√π ∫₀^∞ ω e^{−π²f²} cos(ωt) df                | √(π/2)
2 | (4t² − 2)u              | −2√π ∫₀^∞ ω² e^{−π²f²} sin(ωt) df               | 3√(π/2)
3 | (8t³ − 12t)u            | 2√π ∫₀^∞ ω³ e^{−π²f²} cos(ωt) df                | 15√(π/2)
4 | (16t⁴ − 48t² + 12)u     | 2√π ∫₀^∞ ω⁴ e^{−π²f²} sin(ωt) df                | 105√(π/2)
5 | (32t⁵ − 160t³ + 120t)u  | −2√π ∫₀^∞ ω⁵ e^{−π²f²} cos(ωt) df               | 945√(π/2)
n | [2tHₙ₋₁(t) − 2(n − 1)Hₙ₋₂(t)]u | (−1)ⁿ 2√π ∫₀^∞ ωⁿ e^{−π²f²} sin(ωt + nπ/2) df | 1·3·5⋯(2n − 1)·√(π/2)

Notes: Notation: u = exp(−t²); ω = 2πf.
Energy = ∫_{−∞}^{∞} u²Hₙ² dt = ∫_{−∞}^{∞} [H(uHₙ)]² dt = 1·3·5⋯(2n − 1)·√(π/2).
The Hermite polynomials are defined by the formula (see also Chapter 1)

    Hₙ(t) = (−1)ⁿ e^{t²} (dⁿ/dtⁿ) e^{−t²};  n = 0, 1, 2, …    (7.141a)

(Roman H is used to denote the Hermite polynomial, in distinction from the italic H for the Hilbert transform). The Hermite polynomials are also defined by the recursion formula

    Hₙ(t) = 2tHₙ₋₁(t) − 2(n − 1)Hₙ₋₂(t);  n = 1, 2, …    (7.141b)

The first terms of the Hermite polynomials weighted by the generating function exp(−t²), and their Hilbert transforms, are listed in Table 7.5. The Hilbert transform of the first term was calculated using the frequency-domain method represented by the Hilbert pair (see Table 7.1, the Hilbert transform of the Gaussian pulse)

    e^{−t²} ⟷ 2π^{0.5} ∫₀^∞ e^{−π²f²} sin(ωt) df;  ω = 2πf    (7.142)

The next terms are obtained by calculating the successive time derivatives of both sides of this Hilbert pair. For example, the second term is

    2t e^{−t²} ⟷ −2π^{0.5} ∫₀^∞ ω e^{−π²f²} cos(ωt) df    (7.143)

The values of the energy of the successive terms are listed in the last column of Table 7.5. The waveforms are shown in Figure 7.16. Each Hilbert pair in Table 7.5 is a pair of orthogonal functions.
FIGURE 7.16 (a) The waveforms of the weighted Hermite polynomials u(t) = Hₙ(t)e^{−t²}. (b) The waveforms of the corresponding Hilbert transforms H[u].
However, the weighted Hermite polynomials do not form a set of orthogonal functions; that is, the integral of the product

    ∫_{−∞}^{∞} e^{−2t²} Hₙ(t)Hₘ(t) dt    (7.144)

differs from zero for n ≠ m. The Hermite polynomials can be orthogonalized by replacing the weighting function exp(−t²) by exp(−t²/2), because

    ∫_{−∞}^{∞} e^{−t²} Hₙ(t)Hₘ(t) dt = 0 for n ≠ m;  = 2ⁿn!π^{0.5} for n = m    (7.145)

Therefore, the functions denoted by the small italic h(t),

    hₙ(t) = (2ⁿn!)^{−0.5} π^{−0.25} e^{−t²/2} Hₙ(t);  n = 0, 1, …    (7.146)

form an orthonormal (energy equal to unity) set of functions called Hermite functions. Let us derive the Hilbert transforms of the Hermite functions. Combining Equations 7.141 and 7.145, we get the following recurrence:

    hₙ(t) = [2(n − 1)!/n!]^{0.5} t hₙ₋₁(t) − (n − 1)[(n − 2)!/n!]^{0.5} hₙ₋₂(t)    (7.147)

The Hilbert transforms H[hₙ(t)] may be derived using the multiplication-by-t theorem (see Table 7.3):

    tu(t) ⟷ tv(t) − (1/π) ∫_{−∞}^{∞} u(t) dt    (7.148)

Proof The formula 7.1 yields

    H[tu(t)] = (1/π) P ∫_{−∞}^{∞} ηu(η)/(t − η) dη    (7.149)

The insertion of the new variable y = η − t gives

    H[tu(t)] = −(1/π) P ∫_{−∞}^{∞} (y + t)u(y + t)/y dy
             = −t (1/π) P ∫_{−∞}^{∞} u(y + t)/y dy − (1/π) ∫_{−∞}^{∞} u(y + t) dy
             = t H[u(t)] − (1/π) ∫_{−∞}^{∞} u(t) dt    (7.150)

This is exactly the relation (Equation 7.148). The second term in this equation equals zero for odd functions u(t). The first term in the recurrence formula 7.147 has the form of the product t h(t), enabling the application of Equation 7.148. Therefore, the Hilbert transforms of the Hermite functions hₙ(t) have the form

    H[hₙ(t)] = vₙ(t) = [2(n − 1)!/n!]^{0.5} [t vₙ₋₁(t) − (1/π) ∫_{−∞}^{∞} hₙ₋₁(t) dt] − (n − 1)[(n − 2)!/n!]^{0.5} vₙ₋₂(t)    (7.151)

To derive the Hilbert transforms of the Hermite functions, we have to derive by any method the first term v₀(t) and then apply the above recurrence. Let us use the frequency-domain method. The function h₀(t) and its Fourier image are

    h₀(t) = π^{−0.25} exp(−t²/2) ⟷ (4π)^{0.25} exp(−2(πf)²)    (7.152)

By using Equation 7.66 we obtain

    H[h₀(t)] = v₀(t) = 2(4π)^{0.25} ∫₀^∞ e^{−2π²f²} sin(ωt) df    (7.153)

Introducing the abbreviated notation (ω = 2πf)

    b = π^{0.25},  g(t) = ∫₀^∞ e^{−2π²f²} sin(ωt) df    (7.154)

we get the form of Equation 7.153 used in Table 7.6, v₀(t) = 2√2 b g(t). The next terms v₁, v₂, … in this table are derived by using the recurrence (Equation 7.151). They are listed using two notations: the recurrent and the nonrecurrent. The waveforms of the first four terms of the Hermite functions hₙ(t) and their Hilbert transforms are shown in Figure 7.17a and b.

7.12 Integration of Analytic Signals

Consider the analytic signal defined by Equation 7.28 as a complex function of a real variable t in the form

    ψ(t) = u(t) + jv(t)    (7.155)

This function is integrable in the Riemann sense in the interval [a, b] if and only if the functions u(t) and v(t) are integrable; that is,

    F(t) = ∫ₐᵗ ψ(t) dt = ∫ₐᵗ u(t) dt + j ∫ₐᵗ v(t) dt;  a ≤ t ≤ b    (7.156)
TABLE 7.6 Hilbert Transforms of Orthonormal Hermite Functions (Energy = 1)

Recurrent notation:

n | Hermite Function hₙ(t)              | Hilbert Transform vₙ(t)                     | ∫_{−∞}^{∞} hₙ(t) dt
0 | h₀ = a                              | v₀ = 2√2 b g(t)                             | √2 b
1 | h₁ = √2 t h₀                        | v₁ = √2 t v₀ − 2b/π                         | 0
2 | h₂ = t h₁ − √(1/2) h₀               | v₂ = t v₁ − √(1/2) v₀                       | b
3 | h₃ = √(2/3)[t h₂ − h₁]              | v₃ = √(2/3)[t v₂ − v₁ − b/π]                | 0
4 | h₄ = √(1/2) t h₃ − √(3/4) h₂        | v₄ = √(1/2) t v₃ − √(3/4) v₂                | √(3/4) b
5 | h₅ = √(2/5) t h₄ − √(4/5) h₃        | v₅ = √(2/5)[t v₄ − √3 b/(2π)] − √(4/5) v₃   | 0
n | hₙ = √(2(n−1)!/n!) t hₙ₋₁ − (n−1)√((n−2)!/n!) hₙ₋₂ | vₙ = √(2(n−1)!/n!)[t vₙ₋₁ − (1/π)∫hₙ₋₁(t)dt] − (n−1)√((n−2)!/n!) vₙ₋₂ | …

Nonrecurrent notation:

n | hₙ(t)                                | vₙ(t)
0 | h₀ = a·1                             | v₀ = 2√2 b g(t)
1 | h₁ = √2 a t                          | v₁ = 2b[2t g(t) − 1/π]
2 | h₂ = (a/√8)(4t² − 2)                 | v₂ = 2b[(2t² − 1) g(t) − t/π]
3 | h₃ = (a/√48)(8t³ − 12t)              | v₃ = √(8/3) b[(2t³ − 3t) g(t) − (2t² − 1)/(2π)]
4 | h₄ = (a/√384)(16t⁴ − 48t² + 12)      | v₄ = √(4/3) b[(2t⁴ − 6t² + 1.5) g(t) − (t³ − 2t)/π]
5 | h₅ = (a/√3840)(32t⁵ − 160t³ + 120t)  | v₅ = √(8/15) b[(2t⁵ − 10t³ + 7.5t) g(t) − (t⁴ − 4t² + 1.75)/π]
n | hₙ(t) = (a/√(2ⁿn!)) Hₙ(t);  Hₙ(t) = 2tHₙ₋₁(t) − 2(n − 1)Hₙ₋₂(t)

Note: Notations: h₀(t), h₁(t), … ⇒ h₀, h₁, …; v₀(t), v₁(t), … ⇒ v₀, v₁, …;
g(t) = ∫₀^∞ e^{−2π²f²} sin(2πft) df;  a = π^{−0.25} e^{−t²/2};  b = π^{0.25}.
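The orthonormality claimed for the Hermite functions of Table 7.6 can be verified by quadrature. A sketch (the grid limits are chosen so that the Gaussian tails are negligible; this is an illustration, not part of the text):

```python
import numpy as np
from math import factorial, pi

# Orthonormality of the Hermite functions h_n (Eq. 7.146), built with
# the recurrence of Table 7.6.
t = np.linspace(-15.0, 15.0, 20001)
dt = t[1] - t[0]
h = [pi ** -0.25 * np.exp(-t ** 2 / 2)]            # h0 = a
h.append(np.sqrt(2.0) * t * h[0])                  # h1 = sqrt(2) t h0
for n in range(2, 6):
    c1 = np.sqrt(2.0 * factorial(n - 1) / factorial(n))
    c2 = (n - 1) * np.sqrt(factorial(n - 2) / factorial(n))
    h.append(c1 * t * h[n - 1] - c2 * h[n - 2])

gram = np.array([[np.sum(hi * hj) * dt for hj in h] for hi in h])
err = np.max(np.abs(gram - np.eye(6)))             # Gram matrix vs identity
print(err)
```

The Gram matrix of the first six functions reproduces the identity matrix to machine precision.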
FIGURE 7.17 (a) Waveforms of the Hermite functions h₀, …, h₅.
FIGURE 7.17 (continued) (b) Waveforms of the corresponding Hilbert transforms H[hₙ(t)].
Let us define

    F(t) = U(t) + jV(t)    (7.157)

The functions U(t) and V(t) form a Hilbert pair only if F(z) is an analytic function of a complex variable z = t + jτ. Therefore, let us give without proof the following theorem: If the function ψ(z) = u(t, τ) + jv(t, τ) is analytic in a simply connected domain D, then the function

    F(z) = ∫_{z₀}^{z} ψ(z) dz    (7.158)

is also analytic, and the derivative is F′(z) = ψ(z). The integral (Equation 7.158) is defined as a path integral in the plane (t, τ), and in the domain D the integral depends on z and z₀ but not on the particular path Γ connecting them (Figure 7.18).³⁹ If the function 7.155 is continuous in the interval [a, b], then the function defined by the integral

    F(t) = ∫ₐᵗ ψ(t) dt;  a ≤ t ≤ b    (7.159)

is called the primary function, or antiderivative, of ψ(t), has in the interval [a, b] a continuous derivative F′(t) = ψ(t), and the relation

    ∫ₐᵇ ψ(t) dt = F(t)|ₐᵇ = F(b) − F(a)    (7.160)

holds.

FIGURE 7.18 Paths of integration in the complex plane (t, jτ).

Example

The function e^{jt} has in the interval (−∞, ∞) the primary function e^{jt}/j + c, where c is any complex constant. We have

    ∫₀^{π/2} e^{jt} dt = e^{jt}/j |₀^{π/2} = (e^{jπ/2} − 1)/j = 1 + j
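The definite integral in the example can be confirmed by elementary numerical quadrature of the complex integrand (a throwaway sketch; the point count is arbitrary):

```python
import numpy as np

# Trapezoidal quadrature of e^{jt} over [0, pi/2]; the primary
# function e^{jt}/j gives the exact value 1 + j.
t = np.linspace(0.0, np.pi / 2, 100001)
vals = np.exp(1j * t)
integral = np.sum((vals[:-1] + vals[1:]) / 2.0) * (t[1] - t[0])
print(integral)   # ~ (1+1j)
```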
If the analytic function has a representation in the form of a power series

    ψ(z) = Σ_{n=0}^{∞} dₙ(z − z₀)ⁿ    (7.161)
its integral must have a power series of the form

    F(z) = a + Σ_{n=0}^{∞} dₙ(z − z₀)ⁿ⁺¹/(n + 1)    (7.162)

This means that the power series representation can be integrated term by term. Integration in the time domain can be converted, by using Fourier transforms, into integration in the frequency domain. For instance, the function u(t) can be integrated using the Fourier pairs

    u(t) ⟷ U(ω)    (7.163)

    ∫_{−∞}^{t} u(t) dt ⟷ [δ(f)/2] U(ω) + U(ω)/(jω);  ω = 2πf    (7.164)

The term [δ(f)/2]U(ω) is equal to (1/2)U(0), and the term 1/(jω) is the well-known integration operator. The same algorithm may be used to integrate the Hilbert transform v(t).

Example

Consider the analytic function of the complex variable z = t + jτ

    ψ(z) = (1/π) 1/(a − jz) = (1/π) 1/(a + τ − jt)    (7.165)

where a is a real constant (a > 0). We get

    ψ(z) = ψ(t, τ) = u(t, τ) + jv(t, τ)    (7.166)

where

    u(t, τ) = (1/π) (a + τ)/[(a + τ)² + t²]    (7.167)

and

    v(t, τ) = (1/π) t/[(a + τ)² + t²]    (7.168)

Let us integrate the function (Equation 7.165) along a path of constant τ from −a to t. Hence, we find

    F(z) = (1/π) ∫ dz/(a − jz) = (j/π) Ln(a − jz) = (j/π) Ln(a + τ − jt) − (j/π) Ln(a + τ + ja)    (7.169)

The insertion of the limits of integration and the change of coordinates from rectangular to polar yields

    F(t, τ) = (1/π)[tan⁻¹(t/(a + τ)) + tan⁻¹(a/(a + τ))] + (j/2π) Ln{[(a + τ)² + t²]/[(a + τ)² + a²]}    (7.170)

Because arg(a − jz) is determined only to within a constant multiple of 2π, the function (1/π) Ln(a − jz) is not single-valued (notation Ln instead of ln). To prevent any winding of the integration path around z = −ja, let us make a cut extending from the point z = −ja to infinity. Then F(z) is analytic in the remaining part of the z-plane and satisfies the Cauchy–Riemann equations (see also Appendix A).

Example

Consider a signal represented by the product

    u(t) = sgn(t)Π_a(t)    (7.171)

where Π_a(t) is defined by Equation 7.58 and sgn(t) is defined by Equation 7.11. We have the Fourier pair

    0.5 sgn(t)Π_a(t) ⟷ [1 − cos(ωa)]/(jω)    (7.172)

The above Fourier spectrum is easy to derive by decomposing u(t) into a right-sided square pulse and a sign-reversed left-sided square pulse and adding the spectra of these pulses. In a similar way we can derive the Hilbert transform by adding the two Hilbert transforms defined by Equation 7.61. The resulting Hilbert pair is

    0.5 sgn(t)Π_a(t) ⟷ (1/2π) ln|t²/(t² − a²)|    (7.173)

Let us integrate the signal u(t) by frequency-domain integration. We get the spectrum of the primary function using the operator 1/(jω):

    U_p(ω) = (1/jω) [1 − cos(ωa)]/(jω) = −[1 − cos(ωa)]/ω²    (7.174)

The primary function of u(t) is the inverse Fourier transform of Equation 7.174 and has the form of a reverse-signed triangle pulse. We have the Fourier pair

    (a/2) tri(t) ⟷ [1 − cos(ωa)]/ω²    (7.175)
The signal tri(t) is defined in Table 7.1, and its Hilbert transform is

    (a/2) tri(t) ⟷ (1/2π) [t ln|(t² − a²)/t²| + a ln|(t + a)/(t − a)|]    (7.176)

7.13 Multiplication of Signals with Nonoverlapping Spectra

Consider a signal of the form of the product

    u(t) = f(t)g(t)    (7.177)

where f(t) is a low-pass signal and g(t) is a high-pass signal. The Fourier spectra of these signals do not overlap; that is, if

    f(t) ⟷ F(ω)    (7.178)

    g(t) ⟷ G(ω)    (7.179)

then (ω = 2πf)

    |F(f)| = 0 for |f| > W    (7.180)

    |G(f)| = 0 for |f| < W    (7.181)

as shown in Figure 7.19. In terms of Fourier methods, the Hilbert transform of the product u(t) = f(t)g(t) may be derived using the multiplication–convolution theorem of the form (see also Chapter 2)

    f(t)g(t) ⟷ ∫_{−∞}^{∞} F(f − ν)G(ν) dν    (7.182)

The multiplication of the spectrum by −j sgn(f) (see Equation 7.12) yields the spectrum of the Hilbert transform

    H[f(t)g(t)] ⟷ −j sgn(f) ∫_{−∞}^{∞} F(f − ν)G(ν) dν    (7.183)

However, the product f(t)H[g(t)] and its Fourier transform are

    f(t)H[g(t)] ⟷ ∫_{−∞}^{∞} F(f − ν)[−j sgn(ν)G(ν)] dν    (7.184)

One can show⁴ that the right-hand sides of Equations 7.183 and 7.184 are identical. Therefore, the left-hand sides are identical too, and

    H[f(t)g(t)] = f(t)H[g(t)]    (7.185)

This equation presents Bedrosian's theorem: Only the high-pass signal in the product of low-pass and high-pass signals gets Hilbert transformed.⁴

FIGURE 7.19 Nonoverlapping Fourier spectra of two signals: |F(f)| limited to |f| < W and |G(f)| limited to |f| > W.

Example

Consider a signal in the form of the amplitude-modulated harmonic function

    u(t) = A(t) cos(Ωt + Φ);  Ω = 2πF    (7.186)

where

    A(t) ⟷ C_A(f)    (7.187)

and the magnitude C_A(f) is low-pass limited:

    |C_A(f)| = 0 for f ≥ F    (7.188)

By using Bedrosian's theorem, we get

    v(t) = H[u(t)] = A(t) sin(Ωt + Φ)    (7.189)

Therefore, the amplitude-modulated signal (Equation 7.186) is the real part of the analytic signal

    ψ(t) = A(t) e^{j(Ωt + Φ)}    (7.190)

and has a geometrical representation in the form of a phasor of instantaneous amplitude A(t) rotating with a constant angular velocity Ω. Bedrosian's theorem was extended by Nuttall and Bedrosian²⁵ to include "frequency-translated" analytic signals. The condition, which applies to vanishing spectra at negative frequencies, can be applied more generally to signals whose Fourier spectra satisfy the conditions

    F(ω) = F[ψ₁(t)] = 0, ω < −a;  G(ω) = F[ψ₂(t)] = 0, ω > −a    (7.191)

where a is an arbitrary positive constant. The extension of Bedrosian's theorem for multidimensional signals is given in Section 7.23.
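Bedrosian's theorem (Equation 7.185) is easy to confirm numerically. In the sketch below the Gaussian envelope is only effectively (not strictly) band-limited, which is an assumption; the FFT-based Hilbert transformer and all parameters are illustrative choices, not from the text.

```python
import numpy as np

def fft_hilbert(x, dt):
    # Discrete Hilbert transform via the FFT (assumes a decaying signal).
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), d=dt)
    return np.real(np.fft.ifft(-1j * np.sign(f) * X))

# Bedrosian's theorem: H[A(t) cos(Om t)] = A(t) sin(Om t)
# for a low-pass envelope A(t) and a high-frequency carrier.
N, dt = 2 ** 13, 0.01
t = (np.arange(N) - N // 2) * dt
Om = 20.0
A = np.exp(-t ** 2)
err = np.max(np.abs(fft_hilbert(A * np.cos(Om * t), dt) - A * np.sin(Om * t)))
print(err)
```

Because the spectral overlap of the Gaussian envelope with the negative-frequency half-line is vanishingly small here, the identity holds to numerical precision.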
7.14 Multiplication of Analytic Signals

The Hilbert transform of the analytic signal is given by the formula

    H[ψ(t)] = H[u(t) + jH[u(t)]] = H[u(t)] − ju(t) = −jψ(t)    (7.192)

where the iteration formula H[H[u(t)]] = −u(t) (see Table 7.3) has been applied. The Hilbert transform of the product of two analytic signals is given by the formula

    H[ψ₁(t)ψ₂(t)] = ψ₁(t)H[ψ₂(t)] = ψ₂(t)H[ψ₁(t)]    (7.193)

that is, the Hilbert transformation should be applied to only one term of the product (the first or the second).

Proof The product of two analytic functions is an analytic function.³⁹ Therefore, if

    ψ(t) = ψ₁(t)ψ₂(t)    (7.194)

where ψ₁(t) and ψ₂(t) are analytic signals, then using Equation 7.192 we get

    H[ψ(t)] = −jψ(t) = −jψ₁(t)ψ₂(t)    (7.195)

However, the operator −j may be assigned either to ψ₁(t) or to ψ₂(t). The application of Equation 7.192 yields two options:

    H[ψ(t)] = H[ψ₁(t)]ψ₂(t);  H[ψ(t)] = ψ₁(t)H[ψ₂(t)]    (7.196)

Let us apply Equations 7.192 and 7.193 to find the Hilbert transforms of the nth power of the analytic signal. We get

    H[ψ²(t)] = ψ(t)H[ψ(t)] = −jψ²(t)    (7.197)

    H[ψⁿ(t)] = ψⁿ⁻¹(t)H[ψ(t)] = −jψⁿ(t)    (7.198)

Example

Let us find the Hilbert transform of

    ψ²(t) = (1 − jt)⁻²    (7.199)

The application of Equation 7.192 gives

    H[ψ(t)] = −j(1 − jt)⁻¹    (7.200)

and Equation 7.197 yields

    H[ψ²(t)] = (1 − jt)⁻¹[−j(1 − jt)⁻¹] = −j(1 − jt)⁻²    (7.201)

Equation 7.192 has a generalized form given by the formula

    H[ψ(at)] = −j sgn(a) ψ(at)

where a is a real positive or negative constant. The negative sign of a may be interpreted as time reversal. For example, the Hilbert transform of exp(jωt) is

    H(e^{jωt}) = −j sgn(ω) e^{jωt}

where ω may be positive or negative.

7.15 Hilbert Transforms of Bessel Functions of the First Kind

The Bessel functions (see also Chapter 1) are solutions of the second-order Bessel differential equation

    z²ψ″(z) + zψ′(z) + (z² − λ²)ψ(z) = 0    (7.202)

where ψ(z) is a complex function of a complex variable z = t + jτ and λ is a complex constant. If λ = n, where n is an integer (0, 1, 2, …), and z = t, we get the solution in the form of Bessel functions of the first kind of order n, denoted Jₙ(t). They find numerous applications in signal and system theory; for example, they are used to calculate the Fourier spectra of frequency-modulated signals. The substitution in Equation 7.202 of a solution in the form of a series Jₙ(t) = Σ_{m=0}^{∞} aₘtᵐ gives the power series representation

    Jₙ(t) = Σ_{k=0}^{∞} (−1)ᵏ (t/2)ⁿ⁺²ᵏ/[k!(n + k)!];  −∞ < t < ∞    (7.203)

The computation of the Bessel functions by means of this power series is inconvenient: owing to the truncation of the series at some value of k, we get divergence for large values of t. It is possible to apply Equation 7.203 up to t < t₁ and calculate the values for t > t₁ using the asymptotic formula

    Jₙ(t) = √(2/(πt)) sin(t − πn/2 + π/4) + r(t)/(t√t)    (7.204)

The term r(t) is a bounded function for t → ∞. However, it is much easier to compute the Bessel functions and their Hilbert transforms using integral forms, as described below. Let us start with the periodic complex function exp(jt sin(φ)) and its Hilbert transform. We have a Hilbert pair

    e^{jt sin(φ)} ⟷ H[e^{jt sin(φ)}] = −j sgn[sin(φ)] e^{jt sin(φ)}    (7.205)

The Fourier series expansion of the left-hand side is

    e^{jt sin(φ)} = Σ_{n=−∞}^{∞} Jₙ(t) e^{jnφ}    (7.206)
The Bessel functions, i.e., the coefficients of this series, are given by the integral

    Jₙ(t) = (1/2π) ∫_{−π}^{π} e^{j(t sin(φ) − nφ)} dφ    (7.207)

The odd-ordered Bessel functions are odd functions of the argument t, while the even-ordered ones are even functions, and

    J₋ₙ(t) = (−1)ⁿ Jₙ(t)    (7.208)

In fact, the integral of the imaginary part of Equation 7.207 equals zero, and due to the evenness of the real part of the integrand, we have

    Jₙ(t) = (1/π) ∫₀^{π} cos[t sin(φ) − nφ] dφ    (7.209)

This formula enables very efficient calculation of the Bessel functions Jₙ(t) using numerical integration. The number of integration steps may be halved using two separate integrals:

    J₂ₙ(t) = (2/π) ∫₀^{π/2} cos[t sin(φ)] cos(2nφ) dφ    (7.210)

    J₂ₙ₊₁(t) = (2/π) ∫₀^{π/2} sin[t sin(φ)] sin[(2n + 1)φ] dφ    (7.211)

The real part of the Fourier series (Equation 7.206) is

    cos[t sin(φ)] = J₀(t) + 2 Σ_{n=1}^{∞} J₂ₙ(t) cos(2nφ)    (7.212)

and the imaginary part is

    sin[t sin(φ)] = 2 Σ_{n=1}^{∞} J₂ₙ₋₁(t) sin[(2n − 1)φ]    (7.213)

Inserting φ = π/2 gives the well-known formulae

    cos(t) = J₀(t) − 2J₂(t) + 2J₄(t) − ⋯    (7.214)

    sin(t) = 2J₁(t) − 2J₃(t) + ⋯    (7.215)

The following recursion formula is very useful:

    (2n/t)Jₙ(t) = Jₙ₋₁(t) + Jₙ₊₁(t)    (7.216)

The derivative of a Bessel function is also given by a recursion formula

    2J̇ₙ(t) = Jₙ₋₁(t) − Jₙ₊₁(t)    (7.217)

For example,

    J̇₀(t) = 0.5[J₋₁(t) − J₁(t)] = −J₁(t)    (7.218)

(we used Equation 7.208). The left-hand side of Equation 7.205 was expanded in the Fourier series (Equation 7.206). Similarly, due to the linearity of the Hilbert transformation, the right-hand side may be expanded in the Fourier series

    H[e^{jt sin(φ)}] = −j sgn[sin(φ)] e^{jt sin(φ)} = Σ_{n=−∞}^{∞} Ĵₙ(t) e^{jnφ}    (7.219)

where Ĵₙ(t) = H[Jₙ(t)] are the Hilbert transforms of the Bessel functions. For these functions we have the relation

    Ĵ₋ₙ(t) = (−1)ⁿ⁺¹ Ĵₙ(t)    (7.220)

because the Hilbert transforms of odd functions are even, and vice versa (compare with Equation 7.208). The functions Ĵₙ(t), i.e., the coefficients of the Fourier series (Equation 7.219), are given by the integral

    Ĵₙ(t) = (1/2π) ∫_{−π}^{π} H[e^{j(t sin(φ) − nφ)}] dφ    (7.221)

As in Equation 7.207, the integral of the imaginary part equals zero, and due to the evenness of the real part, we have

    Ĵₙ(t) = (1/π) ∫₀^{π} sin[t sin(φ) − nφ] dφ    (7.222)

Notice that the integrand is even because it is multiplied by sgn[sin(φ)] (see Equation 7.219). As before, using numerical integration, the Hilbert transforms of the Bessel functions can be easily computed. The first five Bessel functions and their Hilbert transforms, computed using Equations 7.209 and 7.222, are shown in Figure 7.20a and b.

Let us derive the Hilbert transforms of the Bessel functions Jₙ(t) using Fourier transforms. The Fourier transform of the function J₀(t) is

    J₀(t) ⟷ C₀(f) = 2/(1 − ω²)^{0.5} for |ω| < 1;  C₀(f) = 0 for |ω| > 1    (7.223)
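Equations 7.209 and 7.222 translate directly into short quadrature routines (trapezoidal rule; the step count is an arbitrary choice, and the function names below are illustrative):

```python
import numpy as np

def bessel_j(n, t, steps=2000):
    # Eq. 7.209: J_n(t) = (1/pi) * Int_0^pi cos(t sin(phi) - n phi) dphi
    phi = np.linspace(0.0, np.pi, steps + 1)
    y = np.cos(t * np.sin(phi) - n * phi)
    return np.sum((y[:-1] + y[1:]) / 2.0) * (phi[1] - phi[0]) / np.pi

def bessel_j_hat(n, t, steps=2000):
    # Eq. 7.222: the Hilbert transform of J_n by the same quadrature
    phi = np.linspace(0.0, np.pi, steps + 1)
    y = np.sin(t * np.sin(phi) - n * phi)
    return np.sum((y[:-1] + y[1:]) / 2.0) * (phi[1] - phi[0]) / np.pi

print(bessel_j(0, 1.0))        # ~0.765198 = J_0(1)
print(bessel_j_hat(0, 1.0))    # ~0.568657 = H[J_0](1), the Struve function H_0(1)
```

With a few thousand steps these smooth integrands are evaluated to well beyond plotting accuracy.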
FIGURE 7.20 (a) Waveforms of the first five Bessel functions Jₙ(t). (b) Waveforms of the corresponding Hilbert transforms Ĵₙ(t).
Proof Let us find the inverse transform of this spectrum:

    J₀(t) = (1/2π) ∫_{−1}^{1} [2/(1 − ω²)^{0.5}] cos(ωt) dω
          = [substituting ω = sin(φ), dω = cos(φ) dφ]
          = (1/π) ∫₀^{π} cos[t sin(φ)] dφ    (7.224)

(see Equation 7.209). The Fourier transforms of the higher-order Bessel functions can be calculated using the recursion formula (Equation 7.217) and frequency-domain differentiation. We have

    Jₙ₊₁(t) = Jₙ₋₁(t) − 2J̇ₙ(t)    (7.225)

obtaining the following Fourier pairs:

    J₀(t) ⟷ C₀(f) = 2/(1 − ω²)^{0.5}    (7.226)

    J₁(t) = −J̇₀(t) ⟷ C₁(f) = −jωC₀(f)    (7.227)

    J₂(t) = J₀(t) − 2J̇₁(t) ⟷ C₀(f) − 2jωC₁(f)    (7.228)

Successive application of the recursion gives the Fourier spectra of the Bessel functions Jₙ(t) tabulated in Table 7.7. We find that

    Jₙ(t) ⟷ Cₙ(f) = (−j)ⁿ Tₙ(ω) C₀(f)    (7.229)

where C₀(f) is defined by Equation 7.226 and Tₙ(ω) is a Chebyshev polynomial defined by the formula
    Tₙ(ω) = cos[n cos⁻¹(ω)];  n = 0, 1, 2, …    (7.230)

A recursion formula can be applied:

    Tₙ₊₁(ω) − 2ωTₙ(ω) + Tₙ₋₁(ω) = 0;  n = 1, 2, …    (7.231)

TABLE 7.7 Fourier and Hilbert Transforms of Bessel Functions of the First Kind

Bessel Function Jₙ(t) | Fourier Transform Cₙ(f)         | Hilbert Transform Ĵₙ(t) = H[Jₙ(t)]
J₀(t) | C₀ = 2/(1 − ω²)^{0.5}, |ω| < 1; 0, |ω| > 1      | (1/π) ∫₀^1 C₀(f) sin(ωt) dω
J₁(t) | C₁ = −jωC₀                                      | −(1/π) ∫₀^1 ωC₀(f) cos(ωt) dω
J₂(t) | C₂ = −(2ω² − 1)C₀                               | −(1/π) ∫₀^1 (2ω² − 1)C₀(f) sin(ωt) dω
J₃(t) | C₃ = j(4ω³ − 3ω)C₀                              | (1/π) ∫₀^1 (4ω³ − 3ω)C₀(f) cos(ωt) dω
J₄(t) | C₄ = (8ω⁴ − 8ω² + 1)C₀                          | (1/π) ∫₀^1 (8ω⁴ − 8ω² + 1)C₀(f) sin(ωt) dω
J₅(t) | C₅ = −j(16ω⁵ − 20ω³ + 5ω)C₀                     | −(1/π) ∫₀^1 (16ω⁵ − 20ω³ + 5ω)C₀(f) cos(ωt) dω
J₆(t) | C₆ = −(32ω⁶ − 48ω⁴ + 18ω² − 1)C₀                | −(1/π) ∫₀^1 (32ω⁶ − 48ω⁴ + 18ω² − 1)C₀(f) sin(ωt) dω
Jₙ(t) | Cₙ = (−j)ⁿTₙ(ω)C₀                               | (−1)^{n/2} (1/π) ∫₀^1 Tₙ(ω)C₀(f) sin(ωt) dω for n = 0, 2, 4, …
      |                                                 | (−1)^{(n+1)/2} (1/π) ∫₀^1 Tₙ(ω)C₀(f) cos(ωt) dω for n = 1, 3, 5, …

Note: Tₙ(ω) = cos[n cos⁻¹(ω)] is the Chebyshev polynomial.
Because we derived the analytical expressions for the Fourier images of the Bessel functions, the use of inverse Fourier transformations enables the evaluation of either the Bessel function Jₙ(t) or its Hilbert transform Ĵₙ(t). For example,

    J₀(t) = (1/π) ∫₀^1 [2/(1 − ω²)^{0.5}] cos(ωt) dω    (7.232)

and the Hilbert transform is

    Ĵ₀(t) = H[J₀(t)] = (1/π) ∫₀^1 [2/(1 − ω²)^{0.5}] sin(ωt) dω    (7.233)

Hence, we have an analytic signal

    ψ₀(t) = J₀(t) + jĴ₀(t)    (7.234)

Equations 7.232 and 7.233 may be regarded as alternative definitions of the functions J₀(t) and Ĵ₀(t). However, the computation by means of the integrals 7.209 and 7.222 (with n = 0) gives much better accuracy with a given number of integration steps.

The expressions for the Fourier images of the Bessel functions, and the Hilbert transforms derived using these images, are listed in Table 7.7. If needed, the Fourier spectra enable the derivation of the coefficients of the power series representations of Jₙ(t) and Ĵₙ(t). Starting with the power series for Jₙ(t) given by Equation 7.203, let us derive the power series for Ĵₙ(t). We start with the expression defining the Taylor series

    Ĵₙ(t) = Σ_{k=0}^{∞} Ĵₙ⁽ᵏ⁾(t = 0) tᵏ/k!    (7.235)

The derivatives Ĵₙ⁽ᵏ⁾(t = 0) can be obtained by differentiation of the integrands of the integrals listed in Table 7.7. By inserting t = 0, we obtain

    Ĵ₀(0) = (1/π) ∫₀^1 [2/(1 − ω²)^{0.5}] sin(0) dω = 0

    Ĵ₀⁽¹⁾(0) = (1/π) ∫₀^1 [2ω/(1 − ω²)^{0.5}] cos(0) dω = 2/π

    Ĵ₀⁽²⁾(0) = −(1/π) ∫₀^1 [2ω²/(1 − ω²)^{0.5}] sin(0) dω = 0

    Ĵ₀⁽³⁾(0) = −(1/π) ∫₀^1 [2ω³/(1 − ω²)^{0.5}] cos(0) dω = −4/(3π)    (7.236)
where (1), (2), … denote the order of the derivative. Continuing the differentiation and using Equation 7.235, we get the following power series:

    Ĵ₀(t) = (2/π) [t − (1/9)t³ + (1/225)t⁵ − (1/11025)t⁷ + ⋯ + (−1)^{(n−1)/2} tⁿ/(1·3·5⋯n)² + ⋯],  n odd    (7.237)

In the same way one can derive the power series of the higher-order Hilbert transforms of the Bessel functions.
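A sketch of the partial sums of the series 7.237; the coefficient of tⁿ (n odd) is taken as (−1)^{(n−1)/2}/(1·3·5⋯n)², consistent with the derivatives in Equation 7.236, and the result is compared against the value given by the integral form (Equation 7.222):

```python
from math import pi

# Partial sums of the power series (7.237) for H[J_0].
def j0_hat_series(t, terms=12):
    total, odd_fact = 0.0, 1.0        # odd_fact = 1*3*5*...*n
    for k in range(terms):
        n = 2 * k + 1
        odd_fact *= n
        total += (-1) ** k * t ** n / odd_fact ** 2
    return 2.0 / pi * total

print(j0_hat_series(1.0))   # ~0.568657, matching the integral form at t = 1
```

As the text warns, such truncated series are only useful for moderate t; for large t the integral forms are preferable.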
7.16 Instantaneous Amplitude, Complex Phase, and Complex Frequency of Analytic Signals

Signal theory needs precise definitions of various quantities, such as the instantaneous amplitude, instantaneous phase, and instantaneous frequency of a given signal, and many other quantities. Let us recall that no definition is inherently true or false. If we define something, we simply propose an agreement to use a specific name in the sense of the definition. When using this name, for instance "instantaneous frequency," we should never forget what we have defined. The history of signal theory contains examples of misunderstanding in which various authors applied the same name, instantaneous frequency, to different definitions and then tried to discuss which is true or false. Such a discussion is meaningless. Of course, one may discuss which definition has advantages or disadvantages from a specific point of view, or whether it is compatible with other definitions or with existing knowledge.

The notions of the instantaneous amplitude, instantaneous phase, and instantaneous frequency of the analytic signal ψ(t) = u(t) + jv(t) may be uniquely and conveniently defined by introducing the notion of a phasor rotating in the Cartesian (u, v) plane, as shown in Figure 7.21. The change of coordinates from rectangular (u, v) to polar (A, φ) gives

    u(t) = A(t) cos[φ(t)]    (7.238)

    v(t) = A(t) sin[φ(t)]    (7.239)

    ψ(t) = A(t) e^{jφ(t)}    (7.240)
FIGURE 7.21 A phasor in the Cartesian (u, v) plane representing the analytic signal ψ(t) = u(t) + jv(t) = A(t)e^{jφ(t)}.
FIGURE 7.22 The multibranch function φ(t) = Tan⁻¹[v(t)/u(t)]. As time elapses (arrows), there are jumps from one branch to the next.
We define the instantaneous amplitude of the analytic signal as equal to the length of the phasor (radius vector):

    A(t) = √(u²(t) + v²(t))    (7.241)

and we define the instantaneous phase of the analytic signal as equal to the instantaneous angle

    φ(t) = Tan⁻¹[v(t)/u(t)]    (7.242)

The notation with capital T indicates the multibranch character of the Tan⁻¹ function, as shown in Figure 7.22. As time elapses, the phasor rotates in the (u, v) plane, and its instantaneous angular speed defines the instantaneous angular frequency of the analytic signal, given by the time derivative

    φ̇(t) = Ω(t) = 2πF(t)    (7.243)
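Definitions 7.241 through 7.243 can be exercised on a sampled signal. The sketch below builds the analytic signal of an amplitude-modulated carrier by one-siding its FFT spectrum (an approximation that assumes the signal decays within the window; all parameters are illustrative), then extracts A(t), φ(t), and the angular frequency:

```python
import numpy as np

def fft_analytic(x, dt):
    # Analytic signal by one-siding the FFT spectrum (assumes the
    # signal decays to zero within the sampled window).
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), d=dt)
    X[f < 0] = 0.0
    X[f > 0] *= 2.0
    return np.fft.ifft(X)

# Instantaneous amplitude (7.241), phase (7.242), and angular
# frequency (7.243) of a Gaussian pulse on a 5 rad/s carrier.
N, dt = 2 ** 13, 0.01
t = (np.arange(N) - N // 2) * dt
u = np.exp(-0.1 * t ** 2) * np.cos(5.0 * t)
psi = fft_analytic(u, dt)
A = np.abs(psi)                          # instantaneous amplitude
phase = np.unwrap(np.angle(psi))         # multibranch Tan^-1 resolved by unwrapping
omega = np.gradient(phase, dt)           # instantaneous angular frequency

mid = np.abs(t) < 5
amp_err = np.max(np.abs(A[mid] - np.exp(-0.1 * t[mid] ** 2)))
freq_err = np.max(np.abs(omega[mid] - 5.0))
print(amp_err, freq_err)
```

In the central window the recovered amplitude matches the envelope and the angular frequency matches the carrier, as Bedrosian's theorem predicts for this nearly band-limited envelope.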
or

    Ω(t) = (d/dt) tan⁻¹[v(t)/u(t)] = [u(t)v̇(t) − v(t)u̇(t)]/[u²(t) + v²(t)]    (7.244)

Notice the anticlockwise direction of rotation for positive angular frequencies. The instantaneous frequency is defined by the formula

    F(t) = Ω(t)/(2π) = φ̇(t)/(2π)    (7.245)

Summarizing, using the notion of the analytic signal, we have defined the instantaneous amplitude, phase, and frequency. A number of different definitions of the notions of instantaneous amplitude, phase, and frequency have developed over the years. There are many pairs of functions A(t) and φ(t) which, inserted into Equation 7.238, reconstruct a given signal u(t), for example, functions defining a phasor in the phase plane [u(t), u̇(t)]. But only the analytic signal has the unique feature of having a one-sided Fourier spectrum. Let us recall that a real signal and its Hilbert transform are given in terms of analytic signals by Equations 7.30 and 7.31 (see Section 7.3). Figure 7.23 shows the geometrical representation of these formulae in the form of two phasors of length 0.5A(t) and opposite directions of rotation, positive for ψ(t) and negative for ψ*(t). Equation 7.244 defines the instantaneous frequency of a signal regardless of the bandwidth. It is sometimes believed that the notion of instantaneous frequency has a physical meaning only for narrow-band signals (high-frequency [HF] modulated signals). However, using adders, multipliers, dividers, Hilbert filters, and differentiators, it is possible to implement a frequency demodulator for wide-band signals (for example, speech signals) realizing the algorithm defined by Equation 7.244. Modern VLSI enables efficient implementation of such frequency demodulators at reasonable cost.

7.16.1 Instantaneous Complex Phase and Complex Frequency

Signal and systems theory widely uses the Laplace transformation of a real signal u(t) of the form

    U(s) = ∫₀^∞ u(t) e^{−st} dt    (7.246)

where s = a + jω (ω = 2πf) is a time-independent complex frequency (a and ω are real). The exponential kernel e^{−st} has the form of a harmonic wave with an exponentially decaying amplitude; that is, its instantaneous amplitude is

    A(t) = e^{−at}    (7.247)
The notion of the complex frequency has been generalized by this author in 1964 defining a complex instantaneous variable frequency using the notion of the analytic signal.12 It is convenient to define the instantaneous complex frequency as the time derivative of a complex phase. The instantaneous complex phase of the analytic signal c(t) is defined by the formula Fc (t) ¼ Ln[c(t)]
(7:248)
Capital L denotes the multibranch character of the logarithmic function of the complex function c(t). The insertion of the polar form of the analytic signal (see Equation 7.240) yields Fc (t) ¼ Ln[A(t)] þ jw(t)
jυ
jυ Ω
Ω
Ω (t)
(t)
Ψ(t)
–Ψ*(t) u
–Ω
(t )
Ψ*(t)
FIGURE 7.23 A pair of conjugate phasors representing the Equations 7.2.17 and 7.2.18.
(t)
Ψ(t)
Ψ*(t)
u
(7:249)
7-34
Transforms and Applications Handbook
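The definitions in Equations 7.243 through 7.245 translate directly into a numerical estimate of the instantaneous frequency. Below is a minimal sketch (not from the handbook) that forms the analytic signal with SciPy's `hilbert` and differentiates the unwrapped phase; the sampling rate and the 50 Hz test tone are arbitrary choices:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f0 = 50.0                          # test-tone frequency, Hz (assumed)
u = np.cos(2 * np.pi * f0 * t)     # real signal u(t)

psi = hilbert(u)                   # analytic signal u(t) + j*H[u](t)
phase = np.unwrap(np.angle(psi))   # instantaneous phase phi(t)
F = np.gradient(phase, t) / (2 * np.pi)   # F(t) = phi'(t)/(2*pi), Eq. 7.245

# exclude the interval edges, where the FFT-based Hilbert transform rings
F_mid = F[100:-100]
```

For this single tone the estimate is flat at 50 Hz away from the edges, illustrating that the analytic-signal definition recovers the expected frequency.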
The instantaneous complex frequency is defined by the derivative

s(t) = Φ̇_c(t) = Ȧ(t)/A(t) + jφ̇(t)    (7.250)

or

s(t) = α(t) + jω(t)    (7.251)

where

α(t) = Ȧ(t)/A(t)    (7.252)

is the instantaneous radial frequency (a measure of the radial velocity, representing the speed of change of the radius, or amplitude, of the phasor), and

ω(t) = φ̇(t)    (7.253)

is the instantaneous angular frequency. Equation 7.252 has the form of a first-order differential equation. The solution of this equation yields the following form of the instantaneous amplitude

A(t) = A₀ exp[∫₀ᵗ α(τ) dτ]    (7.254)

where A₀ is the value of the amplitude at the moment t = 0. Let us introduce the notation

β(t) = ∫₀ᵗ α(τ) dτ    (7.255)

Using this notation, the complex phase can be written as

Φ_c(t) = ln A₀ + β(t) + jφ(t)    (7.256)

or

Φ_c(t) = ln A₀ + jΦ₀ + ∫₀ᵗ s(τ) dτ    (7.257)

where Φ₀ is the integration constant, that is, the angular position of the phasor at t = 0. The introduction of the complex constant ψ₀ = A₀e^{jΦ₀} gives the following form of the analytic signal

ψ(t) = ψ₀ exp[∫₀ᵗ s(τ) dτ]    (7.258)

Examples

1. Consider the analytic signal given by Equation 7.76:

ψ_δ(t) = a/[π(a² + t²)] + j t/[π(a² + t²)]    (7.259)

with real part u(t) and imaginary part y(t). The polar form of this signal is

ψ_δ(t) = [1/(π√(a² + t²))] exp[j tan⁻¹(t/a)]    (7.260)

Therefore, the instantaneous complex phase is

Φ_c(t) = Ln[1/(π√(a² + t²))] + j tan⁻¹(t/a)    (7.261)

and the instantaneous complex frequency is

s(t) = α(t) + jω(t) = −t/(a² + t²) + j a/(a² + t²)    (7.262)

Because in the limit a → 0 the signal (Equation 7.259) approximates the complex delta distribution (see Equation 7.64), the instantaneous complex phase of this distribution is

Φ_cδ(t) = Ln[1/(π|t|)] + j 0.5π sgn(t)    (7.263)

and the complex frequency is

s_δ(t) = −1/t + jπδ(t)    (7.264)

2. Consider the analytic signal

ψ(t) = sin(at)/(at) + j sin²(0.5at)/(0.5at)    (7.265)

whose real part u(t) = sin(at)/(at) is the well-known interpolating function of sampling theory.
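The closed forms of Example 1 (Equation 7.262) can be cross-checked numerically: sampling ψ_δ(t) and differentiating its amplitude and phase reproduces α(t) and ω(t). A sketch (NumPy assumed; the value a = 0.5 and the grid are arbitrary choices):

```python
import numpy as np

a = 0.5
t = np.linspace(-5, 5, 20001)
u = a / (np.pi * (a**2 + t**2))          # real part of psi_delta, Eq. 7.259
y = t / (np.pi * (a**2 + t**2))          # imaginary part (its Hilbert transform)

A = np.hypot(u, y)                        # instantaneous amplitude
phi = np.arctan2(y, u)                    # instantaneous phase
alpha = np.gradient(A, t) / A             # radial frequency, Eq. 7.252
omega = np.gradient(np.unwrap(phi), t)    # angular frequency, Eq. 7.253

alpha_exact = -t / (a**2 + t**2)          # closed forms from Eq. 7.262
omega_exact = a / (a**2 + t**2)
```

The finite-difference estimates agree with the closed forms everywhere except at the grid edges.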
Equations 7.241 and 7.242 yield, using trigonometric relations, the polar form of this signal:

ψ(t) = [sin(0.5at)/(0.5at)] exp(j at/2)    (7.266)

Therefore, the instantaneous complex phase is

Φ_c(t) = Ln[sin(0.5at)/(0.5at)] + j at/2    (7.267)

and the instantaneous complex frequency is

s(t) = [(a/2) cot(0.5at) − 1/t] + j a/2    (7.268)

In conclusion, the interpolating function may be regarded as a signal of variable amplitude and constant angular frequency ω = a/2.

3. The classic complex notation of a frequency- or phase-modulated signal (Carson and Fry, 1937) has the form41

ψ(t) = A₀ e^{j[Ω₀t + Φ₀ + φ(t)]};  Ω₀ = 2πF₀    (7.269)

where φ(t) represents the angle modulation. The whole argument of the exponential function, Φ(t) = Ω₀t + Φ₀ + φ(t), defines the instantaneous phase, and its derivative defines the instantaneous frequency

F(t) = (1/2π) dΦ/dt = F₀ + (1/2π) dφ/dt    (7.270)

The signal (Equation 7.269) is represented by a phasor in the plane (cos[Φ(t)], sin[Φ(t)]), as shown in Figure 7.24. These definitions of the instantaneous phase and frequency differ from the definitions using the analytic signal, which is represented by a phasor in the (cos[Φ(t)], H{cos[Φ(t)]}) plane, because sin[Φ(t)] is not the Hilbert transform of cos[Φ(t)] and the signal (7.269) is not an analytic function. However, it may be nearly analytic if the carrier frequency is large. If the spectra of the functions cos[φ(t)] and sin[φ(t)] have a limited low-pass support with highest frequency |W| < |F₀|, then Bedrosian's theorem (see Section 7.13) may be applied, and

H{cos[Ω₀t + Φ₀ + φ(t)]} = cos[φ(t)] H[cos(Ω₀t + Φ₀)] − sin[φ(t)] H[sin(Ω₀t + Φ₀)] = sin[Ω₀t + Φ₀ + φ(t)]    (7.271)

In the case of harmonic modulation with φ(t) = β sin(ωt), where β is the modulation index, the spectra of the functions cos[φ(t)] and sin[φ(t)] are given by the Fourier series

cos[β sin(ωt)] = J₀(β) + 2 Σ_{n=1}^{∞} J_{2n}(β) cos(2nωt)    (7.272)

sin[β sin(ωt)] = 2 Σ_{n=1}^{∞} J_{2n−1}(β) sin[(2n−1)ωt]    (7.273)

and this is not a pair of Hilbert transforms (see Section 7.7). Although the number of terms of the series is infinite, the number of significant terms is limited, and for a good approximation Bedrosian's theorem may be applied for large values of F₀. Further comments are given in Reference 25.
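The Bessel-series expansions 7.272 and 7.273 are easy to verify numerically. A sketch using `scipy.special.jv`; the modulation index is an arbitrary test value, and the series is truncated where the neglected terms are negligible:

```python
import numpy as np
from scipy.special import jv

beta = 2.0                               # modulation index (arbitrary test value)
wt = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
n = np.arange(1, 30)[:, None]            # J_n(beta) decays rapidly once n >> beta

# Eq. 7.272: cos[beta*sin(wt)] = J0(beta) + 2*sum_n J_{2n}(beta) cos(2n wt)
cos_series = jv(0, beta) + 2 * np.sum(jv(2 * n, beta) * np.cos(2 * n * wt), axis=0)
# Eq. 7.273: sin[beta*sin(wt)] = 2*sum_n J_{2n-1}(beta) sin((2n-1) wt)
sin_series = 2 * np.sum(jv(2 * n - 1, beta) * np.sin((2 * n - 1) * wt), axis=0)
```

Both truncated series match the left-hand sides to machine precision.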
FIGURE 7.24 A phasor representing a frequency (or phase) modulated signal.

7.17 Hilbert Transforms in Modulation Theory

This section is devoted to the theory of analog modulation of a harmonic carrier u_c(t) = A₀ cos(2πF₀t + Φ₀), with emphasis on the role of Hilbert transformations, analytic signals, and complex frequencies. The theory of amplitude and angle modulation is mentioned briefly in favor of a more detailed description of the theory of single side-band (SSB) modulations. The latter are conveniently defined using Hilbert transforms. Many modulators are implemented using Hilbert filters, mostly digital filters, because nowadays modulated signals can be conveniently generated digitally and converted into analog signals.

7.17.1 Concept of the Modulation Function of a Harmonic Carrier

The complex notation of signals is widely used in modern modulation theory. The harmonic carrier is written in the form of the analytic signal

ψ_c(t) = A₀ e^{j(Ω₀t + Φ₀)}    (7.274)
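The analytic-carrier form 7.274 makes modulation a pure spectral shift: multiplying a modulation function by ψ_c(t) translates its baseband lines to the carrier frequency. A small numerical sketch (a full-carrier AM modulation function is used as an example; all parameter values are arbitrary):

```python
import numpy as np

fs = 1024.0
t = np.arange(0, 1.0, 1 / fs)
A0, F0, Phi0 = 1.0, 100.0, 0.3
psi_c = A0 * np.exp(1j * (2 * np.pi * F0 * t + Phi0))   # carrier, Eq. 7.274

m, f0 = 0.5, 5.0
g = 1 + m * np.cos(2 * np.pi * f0 * t)   # real AM modulation function
psi = g * psi_c                          # modulated signal as a product

spec = np.abs(np.fft.fft(psi)) / len(t)
freqs = np.fft.fftfreq(len(t), 1 / fs)
peaks = sorted(freqs[spec > 0.1])        # lines carrying significant energy
```

The baseband lines at 0 and ±5 Hz reappear at 95, 100, and 105 Hz, that is, around the carrier.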
Analog modulation is the operation of continuous change of one or more of the three parameters of the carrier: the amplitude A₀, the frequency F₀, or the phase Φ₀, resulting in amplitude, frequency, or phase modulation. The complex-modulated signal has a convenient representation in the form of a product3

ψ(t) = g(t)ψ_c(t) = A₀ g(t) e^{j(Ω₀t + Φ₀)}    (7.275)

The function g(t) is called the modulation function. It is a function of the modulating signal (the message) x(t); that is, g(t) = g[x(t)]. Any kind of modulation, for example, amplitude, frequency, or phase modulation, is represented by a specific real or complex modulation function. We shall investigate models of modulating signals for which the Fourier transform exists and is given by the Fourier pair

x(t) ↔ X(ω);  ω = 2πf    (7.276)

The frequency band containing the terms of the spectrum X(ω) is called the baseband. In general, the modulation function is a nonlinear function of the variable x, and the spectrum of the modulation function differs from X(ω); it is represented by the Fourier pair

g(t) ↔ G(ω)    (7.277)

The nonlinear transformations of the spectrum may have a complicated analytic representation, and usually only an approximate determination of the spectrum is possible. The approximations are easier to perform if the energy of the modulating signal is nonuniformly distributed and concentrated in the low-frequency part of the baseband, as is the energy of voice, music, or TV signals. Usually it is possible to find the terms of G(ω) for harmonic modulating signals. In special cases, if the modulation function is proportional to the message, that is, g(t) = mx(t) (m is a constant), we have

G(f) = mX(f)    (7.278)

The initial phase of the carrier Φ₀ is of importance only if we deal with two or more modulated carriers of the same frequency, for example, by summation or multiplication of modulated signals. It is convenient to write the modulated signal in the form

ψ(t) = A₀ g(t) e^{jΦ₀} e^{jΩ₀t}    (7.279)

and define a modified modulation function in the form of the product

g₁(t) = g(t) e^{jΦ₀}    (7.280)

The new Fourier spectrum is

g₁(t) ↔ G₁(ω) = G(ω) e^{jΦ₀}    (7.281)

We observe that the spectra in Equations 7.277 and 7.281 have the same magnitude and differ only in phase. Notice that the spectrum G₁(ω) is defined at zero carrier frequency, and the spectrum of the modulated signal is obtained by shifting this spectrum from zero to the carrier frequency by the Fourier shift operator e^{jΩ₀t}. This approach enables us to study the spectra of modulated signals at zero carrier frequency.

Examples of modulation functions: The modulation function for linear full-carrier AM has the form

g(t) = 1 + mx(t);  |mx(t)| < 1    (7.282)

The number 1 represents the carrier term. Therefore, the modulation function for balanced modulation (suppressed carrier) has the simple form

g(t) = mx(t)    (7.283)

and the spectra of the message and of the modulation function are, to within the scale factor m, the same. The message may be written in the form (see Equation 7.30)

x(t) = [ψ_x(t) + ψ_x*(t)]/2    (7.284)

This formula shows that the upper sideband of the AM signal is represented by the analytic signal ψ_x(t) of a one-sided spectrum at positive frequencies, and the lower sideband by the conjugate analytic signal ψ_x*(t) of a one-sided spectrum at negative frequencies. The sidebands have the geometric form of two conjugate phasors (see Figure 7.23). The instantaneous amplitude of the phasors is

A(t) = (m/2)|ψ_x(t)| = (m/2)√(x²(t) + x̂²(t))    (7.285)

where x̂(t) = H[x(t)], and the instantaneous angular frequency is

ω_x(t) = d/dt tan⁻¹[x̂(t)/x(t)]    (7.286)

Therefore, an SSB signal represents a signal with simultaneous amplitude and phase modulation. The multiplication of ψ_x(t) or ψ_x*(t) by the complex carrier (Fourier shift operator) e^{jΩ₀t} yields the high-frequency analytic signals. The upper sideband (Φ₀ = 0) is (with mA₀ = 2)

ψ_upper(t) = ψ_x(t) e^{jΩ₀t}    (7.287)

with the modulation function ψ_x(t), and the lower sideband is

ψ_lower(t) = ψ_x*(t) e^{jΩ₀t}    (7.288)

with the conjugate modulation function ψ_x*(t).
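The one-sided character of the sidebands in Equations 7.287 and 7.288 can be illustrated numerically: forming ψ_x with a Hilbert transformer and shifting it to the carrier leaves a single line above (upper sideband) or below (lower sideband) the carrier frequency. A sketch (message tone and carrier values are arbitrary assumptions):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1024.0
t = np.arange(0, 1.0, 1 / fs)
msg_f = 5.0
x = np.cos(2 * np.pi * msg_f * t)       # single-tone message (arbitrary choice)
psi_x = hilbert(x)                      # analytic message x + j*x_hat

F0 = 128.0
shift = np.exp(2j * np.pi * F0 * t)     # Fourier shift operator
upper = psi_x * shift                   # Eq. 7.287
lower = np.conj(psi_x) * shift          # Eq. 7.288

f = np.fft.fftfreq(len(t), 1 / fs)
Su = np.abs(np.fft.fft(upper)) / len(t)
Sl = np.abs(np.fft.fft(lower)) / len(t)
```

The upper-sideband line lands at F0 + 5 Hz, the lower at F0 − 5 Hz, and neither signal carries energy at negative frequencies.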
The above signals represent the complex form of SSB AM. The real notation of these signals is

u_SSB(t) = x(t) cos(Ω₀t) ∓ x̂(t) sin(Ω₀t)    (7.289)

with the minus sign for the upper sideband and the plus sign for the lower one. The products x(t)cos(Ω₀t) and x̂(t)sin(Ω₀t) represent double side-band (DSB) suppressed-carrier AM signals. Therefore, an SSB modulator may be implemented as shown in Figure 7.25.

FIGURE 7.25 Block diagram of a SSB modulator (phase method) implementing Equation 7.289.

The angle modulation is represented by the exponential modulation function of the form

g(t) = e^{jφ[x(t)]}    (7.290)

Therefore, the complex signal representation of the angle modulation has the form

ψ(t) = A₀ e^{j[Ω₀t + φ(t)]}    (7.291)

where φ is a function of the modulating signal x(t). In general, this complex signal may be only approximately analytic (see Section 7.16, Example 3). In the case of linear phase modulation, the modulation function has the form

g(t) = e^{jmx(t)}    (7.292)

and for linear frequency modulation

g(t) = exp[jm ∫ x(t) dt]    (7.293)

The Fourier spectrum of the modulation function is given by the integral

G(ω) = ∫_{−∞}^{∞} e^{jφ[x(t)]} e^{−jωt} dt    (7.294)

If for a specific function φ[x(t)] the closed form of this integral does not exist, numerical integration may be applied. In the simplest case of linear phase modulation with a harmonic modulating signal, the modulation function (Equation 7.292) has the form

g(t) = e^{jβ sin(ω₀t)}    (7.295)

where β is the modulation index (in radians). The Fourier series expansion of this complex periodic function has the form

g(t) = J₀(β) + 2 Σ_{n=1}^{∞} J_{2n}(β) cos(2nω₀t) + j2 Σ_{n=1}^{∞} J_{2n−1}(β) sin[(2n−1)ω₀t]    (7.296)

Using Euler's formulae (see Equations 7.32 and 7.33), this modulation function becomes

g(t) = J₀(β) + Σ_{n=1}^{∞} J_{2n}(β)[e^{j2nω₀t} + e^{−j2nω₀t}] + Σ_{n=1}^{∞} J_{2n−1}(β)[e^{j(2n−1)ω₀t} − e^{−j(2n−1)ω₀t}]    (7.297)

Because exponentials in the time domain are represented by delta functions in the frequency domain (e^{jnω₀t} ↔ δ(f − nf₀)), the spectrum of the modulation function (at zero carrier frequency) has the form shown in Figure 7.26 (β = 4).

7.17.2 Generalized Single Side-Band Modulations

The SSB AM signal defined by Equations 7.287 and 7.288 is an example of many other possible SSB modulations. Any kind of modulation of a harmonic carrier is called SSB modulation if the modulation function is an analytic signal of a one-sided spectrum at positive frequencies for the upper sideband and at negative frequencies for the lower sideband.
FIGURE 7.26 The spectrum of a phase-modulated signal translated to zero carrier frequency, i.e., of the modulation function; phase deviation β = 4 radians. (Ordinate: 0.5 J_n(β) δ(f/f₀ − n); abscissa: nf/f₀.)
Therefore, the modulation function should have the form

g(t) = g_x(t) + j ĝ_x(t) = A(t) e^{jφ(t)}    (7.298)

where g_x(t) ↔ ĝ_x(t) is a Hilbert pair. Let us use here the notion of the instantaneous complex phase defined by Equation 7.248, of the form

Φ_c(t) = ln A(t) + jφ(t)    (7.299)

The modulation function (Equation 7.298) can be written in the form

g(t) = e^{Φ_c(t)} = e^{ln[A(t)] + jφ(t)}    (7.300)

that is, the instantaneous amplitude is written in the exponential form

A(t) = e^{ln[A(t)]}    (7.301)

We now pose the question: under what conditions are g(t) and, simultaneously, Φ_c(t) analytic? That is, when is not only the relation (Equation 7.298) but also the relation

ln A(t) ↔ φ(t) = H{ln[A(t)]}    (7.302)

satisfied? The answer comes from the dual (time-domain) version of the Paley–Wiener criterion,28

∫_{−∞}^{∞} |Ln[A(t)]| / (1 + t²) dt < ∞    (7.303)

which should be satisfied. Let us remember that A(t) is defined as a nonnegative function of time. The Paley–Wiener criterion is equivalent to the requirement that A(t) should not approach zero faster than any exponential function. This is a property of every signal with finite bandwidth, that is, of any practical signal.

7.17.3 CSSB: Compatible Single Side-Band Modulation

The CSSB signal has the same instantaneous amplitude as the conventional DSB full-carrier AM signal, that is, of the form

A(t) = A₀[1 + mx(t)];  |mx(t)| < 1    (7.304)

and can be demodulated by a conventional linear diode demodulator (but not by a synchronous detector). The one-sided spectrum of the CSSB signal is achieved by a simultaneous specific phase modulation. The analytic modulation function should satisfy the requirement (Equation 7.302) and has the form

g(t) = [1 + mx(t)] exp(j H{ln[1 + mx(t)]})    (7.305)

Figure 7.27 shows a block diagram of a modulator producing a high-frequency CSSB signal implemented by the use of Equation 7.305. This modulation function guarantees the exact cancellation of the undesired sideband; in a digital implementation, the level of the undesired sideband depends only on the design. The bandwidth of the nonlinear logarithmic device, the Hilbert filter, and the phase modulator should be several times wider than the bandwidth of the input signal; in practice it should be three to four times larger than the baseband. The instantaneous amplitude A(t) should never fall to zero, because the logarithm of zero equals minus infinity. A tradeoff is needed between the smallest value of A and the phase deviation.
FIGURE 7.27 Diagram of the modulator producing the compatible single side-band AM signal.

7.17.4 Spectrum of the CSSB Signal

It may be a surprise that the bandwidth of the one-sided spectrum of the CSSB signal is limited. If the spectrum of the modulating signal exists in the interval −W < f < W, then the spectrum of the modulation function exists in the interval 0 < f < 2W. Seemingly, the bandwidths of the CSSB and DSB AM signals are equal. However, the spectra of many messages, such as speech or video signals, are nonuniform, with the significant terms concentrated in the lower part of the baseband. This enables us to transmit the CSSB signal in a smaller band, for example, from F₀ to F₀ + W instead of to F₀ + 2W, at the cost of some distortion caused by the truncation of insignificant terms of the spectrum. Let us investigate the spectra and distortions using a model of a wide-band modulating signal given in the form of the Fourier series

x(t) = Σ_{k=0}^{N} (−1)^k C_{2k+1} cos[(2k+1)ω₀t];  ω₀ = 2πf₀    (7.306)

For C_{2k+1} = 1/(2k+1) this modulating signal is a truncated Fourier series of a square wave. Its bandwidth equals W = (2N+1)f₀. The insertion of this signal in Equation 7.304 yields a periodic modulation function given by the Fourier series

g(t) = Σ_{k=0}^{4N+2} A_k e^{jkω₀t}    (7.307)

The truncation of this series at the term 4N+2 is not arbitrary, because it will be shown that the terms for k > 4N+2 vanish. Therefore, the bandwidth of g(t) equals exactly 2W. To give the evidence, let us insert x(t) given by Equation 7.306 in Equation 7.305. The square of the instantaneous amplitude of the so-defined modulation function is

A²(t) = [1 + m Σ_{k=0}^{N} (−1)^k C_{2k+1} cos[(2k+1)ω₀t]]²    (7.308)

The highest term of this Fourier series has the harmonic number 4N+2. Analogously, the square of the instantaneous amplitude of the modulation function (Equation 7.307) is

A²(t) = [Σ_{k=0}^{4N+2} A_k cos(kω₀t)]² + [Σ_{k=1}^{4N+2} A_k sin(kω₀t)]²    (7.309)

However, the functions 7.308 and 7.309 should be equal; therefore, they should have the same coefficients of the Fourier series. The comparison of these coefficients yields a set of 4N+3 equations. The solution of these equations yields the coefficients A₀, A₁, A₂, ..., A_{4N+2} as functions of the modulation index m and the amplitudes C_{2k+1} of the modulating signal (Equation 7.306).
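Both claims, the one-sided spectrum of the modulation function (Equation 7.305) and the exact truncation of its Fourier series at the harmonic 4N+2 (Equation 7.307), can be checked numerically with an FFT-based Hilbert transformer. A sketch, assuming the square-wave model of Equation 7.306 with N = 3 and m = 0.5 (arbitrary test values):

```python
import numpy as np
from scipy.signal import hilbert

Nsamp = 4096
theta = 2 * np.pi * np.arange(Nsamp) / Nsamp   # one period of omega0 * t
N, m = 3, 0.5

# Eq. 7.306 with C_{2k+1} = 1/(2k+1): truncated square-wave series, W = (2N+1) f0
k = np.arange(N + 1)[:, None]
x = np.sum((-1) ** k * np.cos((2 * k + 1) * theta) / (2 * k + 1), axis=0)

env = 1 + m * x                         # instantaneous amplitude, Eq. 7.304 (A0 = 1)
phase = np.imag(hilbert(np.log(env)))   # Hilbert transform of ln A, Eq. 7.305
g = env * np.exp(1j * phase)            # CSSB modulation function

G = np.abs(np.fft.fft(g)) / Nsamp       # magnitudes of the coefficients A_k
edge = 4 * N + 2                        # highest nonvanishing harmonic, Eq. 7.307
```

Negative-frequency bins and all harmonics beyond 4N+2 vanish to numerical precision, while the edge term A_{4N+2} itself is present.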
Examples

1. For the harmonic modulating signal x(t) = cos(ω₀t): N = 0, C₁ = 1, and C_{2k+1} = 0 for k > 0. The comparison of the squares of the instantaneous amplitudes yields three equations:

A₀² + A₁² + A₂² = 1 + m²C₁²/2    (7.310)

A₀A₁ + A₁A₂ = mC₁    (7.311)

A₀A₂ = (mC₁)²/4    (7.312)

The solution of these equations yields (C₁ = 1) the amplitude of the zero-frequency carrier

A₀ = 0.5 + 0.5√(1 − m²)    (7.313)

the amplitude of the first sideband

A₁ = m    (7.314)

and the amplitude of the second sideband

A₂ = 0.5 − 0.5√(1 − m²)    (7.315)

Figure 7.28 shows an example of the spectrum of the CSSB signal, and Figure 7.29 the dependence of the amplitudes on m.

FIGURE 7.28 Example of a spectrum of the CSSB AM signal with a cosine envelope (m = 0.5).

FIGURE 7.29 The dependence of the three terms of the spectrum on the modulation index m.

2. For the modulating signal x(t) = C₁cos(ω₀t) − C₃cos(3ω₀t): N = 1 and C_{2k+1} = 0 for k > 1. We get seven equations of the form

Σ_{k=0}^{6} A_k² = 1 + (m²/2)(C₁² + C₃²)    (7.316)

Σ_{k=0}^{5} A_kA_{k+1} = mC₁;  Σ_{k=0}^{4} A_kA_{k+2} = (m²/2)(0.5C₁² − C₁C₃)    (7.317)

Σ_{k=0}^{3} A_kA_{k+3} = −mC₃;  Σ_{k=0}^{2} A_kA_{k+4} = −(m²/2)C₁C₃    (7.318)

Σ_{k=0}^{1} A_kA_{k+5} = 0;  Σ_{k=0}^{0} A_kA_{k+6} = (m²/4)C₃²    (7.319)

The solutions of these equations yield the seven terms of the CSSB signal. In practice it is simpler to find these terms by applying any numerical method of determination of the coefficients of the Fourier series expansion of the modulation function (Equation 7.305). However, the above set of equations gives the evidence that the spectrum has a finite number of terms (see the example in Figure 7.30). The above equations may also be used to control the accuracy of numerical calculations. Notice that Equations 7.310 and 7.316 have the form of power-equality equations.

Let us quote three other modulation functions generating CSSB AM signals. The analytic modulation function of the form

g(t) = √(1 + mx(t)) · exp[j(1/2) H{ln[1 + mx(t)]}]    (7.320)

uses the square root of the instantaneous amplitude of an AM signal. Its spectrum is exactly one-sided. A squaring demodulator should be applied at the receiver. The phase deviation equals one-half of the phase deviation of the function (7.305). Some years ago Kahn implemented a CSSB modulator using the modulation function17

g(t) = [1 + mx(t)] exp{j tan⁻¹[m x̂(t)/(1 + mx(t))]}    (7.321)

Similarly, Villard (1948) implemented a modulator using another modulation function40

g(t) = mx(t) e^{jm x̂(t)}    (7.322)

The last two modulation functions are not exactly analytic, and their spectra are only approximately one-sided.

FIGURE 7.30 The spectrum of the CSSB AM signal with an envelope given by the Fourier series of a square wave truncated at the 15th harmonic number (upper panel: spectrum of the baseband signal; lower panel: spectrum of the corresponding CSSB signal).
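The closed-form amplitudes in Equations 7.313 through 7.315 can be substituted back into the power-equality equations 7.310 through 7.312 (with C₁ = 1); the residuals vanish for any modulation index. A small sketch:

```python
import numpy as np

def residuals(m):
    # closed-form solution, Eqs. 7.313-7.315 (C1 = 1)
    A0 = 0.5 + 0.5 * np.sqrt(1 - m**2)
    A1 = m
    A2 = 0.5 - 0.5 * np.sqrt(1 - m**2)
    # power-equality equations, Eqs. 7.310-7.312
    eq1 = A0**2 + A1**2 + A2**2 - (1 + m**2 / 2)
    eq2 = A0 * A1 + A1 * A2 - m
    eq3 = A0 * A2 - m**2 / 4
    return max(abs(eq1), abs(eq2), abs(eq3))

worst = max(residuals(m) for m in (0.1, 0.3, 0.5, 0.7, 0.9))
```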
7.17.5 CSSB Modulation for Angle Detectors

The modulation function of a SSB modulation compatible with a linear phase detector has the form

g(t) = e^{−βx̂(t) + jβx(t)}    (7.323)

and the modulation function of a SSB modulation compatible with a linear frequency demodulator has the form

g(t) = exp{−m_f H[∫ x(t) dt] + j m_f ∫ x(t) dt}    (7.324)

where β and m_f are the modulation indexes of phase or frequency modulation (in radians). The above modulation functions are analytic; therefore, their spectra are exactly one-sided, due to the simultaneous amplitude and angle modulation. Notice the exponential amplitude modulation function: for large modulation indexes the required dynamic range of the amplitude modulator is extremely large. An example is the modulating signal x(t) = sin(ω₀t). Here the instantaneous amplitude has the form A(t) = exp[β cos(ω₀t)] and is shown in Figure 7.31. Figure 7.32 shows the amplitudes of the one-sided spectrum in dependence on β.

FIGURE 7.31 Envelope e^{β cos(ωt)} of the single side-band FM signal compatible with a linear FM detector; β is the modulation index in radians (curves for β = 0.5, 1, 2, 3).
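For x(t) = sin(ω₀t), the modulation function of Equation 7.323 collapses to g(t) = exp[β e^{jω₀t}], whose Fourier coefficients are βⁿ/n!: a one-sided spectrum paired with the steep exponential envelope noted above. A numerical sketch (β = 2 is an arbitrary choice):

```python
import math
import numpy as np

N = 1024
theta = 2 * np.pi * np.arange(N) / N
beta = 2.0

x = np.sin(theta)
x_hat = -np.cos(theta)                     # Hilbert transform: H[sin] = -cos
g = np.exp(-beta * x_hat + 1j * beta * x)  # Eq. 7.323 = exp(beta * e^{j theta})

G = np.fft.fft(g) / N                      # Fourier coefficients of g
expected = np.array([beta**n / math.factorial(n) for n in range(8)])
```

The leading coefficients match βⁿ/n! and the negative-frequency bins are empty, confirming the exactly one-sided spectrum.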
7.18 Hilbert Transforms in the Theory of Linear Systems: Kramers–Kronig Relations The notions of impedance, admittance, and transfer function are commonly used to describe the properties of linear, timeinvariant (LTI) systems. If the signal at the input port of the LTI system varies in time as exp( jvt), the signal at the output is a sine wave of the same frequency with a different amplitude and phase.
0
f0
2 f0 3 f0 4 f0 5 f0 6 f0 7 f0 8 f0 9 f0
f
FIGURE 7.32 One-sided spectrum of the modulation function of the compatible with a linear detector FM signal.
In other words, the LTI system conserves the waveform of sine signals. A pure sine waveform is a mathematical entity; however, it is easy to generate physical quantities that vary in time practically as exp(jωt). Signal generators producing nearly ideal sine waves are widely used in many applications, including precise measurements of the behavior of circuits and systems. The transfer function of the LTI system is defined as the quotient of the output and input analytic signals

H(jω) = ψ₂(t)/ψ₁(t) = A₂e^{j(ωt+φ₂)} / [A₁e^{j(ωt+φ₁)}]    (7.325)

This transfer function describes the steady-state input-output relations. Theoretically, the input sine wave should be applied at time minus infinity; in practice, the steady state arrives once the transients die out. The transfer function is time independent, because the term exp(jωt) may be canceled from the numerator and denominator of Equation 7.325.

The frequency-domain description by means of the transfer function can be converted into the time-domain description using the Fourier transformation. The response of the LTI system to the delta pulse, i.e., the impulse response, is defined by the Fourier pair

h(t) = δ(t) * h(t) ↔ 1 · H(jω) = H(jω)    (7.326)

where δ(t) ↔ 1.

7.18.2 Causality

All physical systems are causal. Causality implies that any response of a system at the time t depends only on excitations at earlier times. For this reason, the impulse response of a causal system is one-sided; that is, h(t) = 0 for t < 0. But one-sided time signals have analytic spectra (see Section 7.4). Therefore, the spectrum of the impulse response given by Equation 7.326, and thus the transfer function of a causal system, is an analytic function of the complex frequency s = α + jω. The analytic transfer function

H(s) = A(α, ω) + jB(α, ω)    (7.327)

satisfies the Cauchy–Riemann equations (see Equation 7.17)

∂A/∂α = ∂B/∂ω;  ∂A/∂ω = −∂B/∂α    (7.328)

and the real and imaginary parts (α = 0) of the transfer function form a Hilbert pair:

A(ω) = −(1/π) P ∫_{−∞}^{∞} B(λ)/(λ − ω) dλ    (7.329)

B(ω) = (1/π) P ∫_{−∞}^{∞} A(λ)/(λ − ω) dλ    (7.330)

A one-sided impulse response can be regarded as a sum of noncausal even and odd parts (see Equations 7.35 and 7.36)

h(t) = h_e(t) + h_o(t)    (7.331)

Because h(t) is real, we have the following Fourier pairs:

h_e(t) = (1/2)[h(t) + h(−t)] ↔ A(ω)    (7.332)

h_o(t) = (1/2)[h(t) − h(−t)] ↔ jB(ω)    (7.333)

The causality of h(t) yields the relations

h_e(t) = sgn(t) h_o(t)    (7.334)

h_o(t) = sgn(t) h_e(t)    (7.335)

These products are the time-domain representation of the convolution integrals (Equations 7.329 and 7.330) (convolution-to-multiplication theorem).

7.18.3 Physical Realizability of Transfer Functions

The Hilbert relations between the real and imaginary parts of transfer functions are valid for physically realizable transfer functions. The terminology "physically realizable" may be misleading, because a transfer function given by a closed algebraic form is a mathematical representation of a model of a circuit built using ideal inductances, capacitances, and resistors or amplifiers. Such models are a theoretical, approximate description of physical systems. The physical realizability of a particular transfer function in the sense of circuit (or systems) theory is defined by means of causality. The general question of whether a particular amplitude characteristic can be realized by a causal system (filter) is answered by the Paley–Wiener criterion. Consider a specific magnitude of a transfer function |H(jω)| (an even function of ω). It can be realized by means of a causal filter if and only if the integral

∫_{−∞}^{∞} |ln|H(jω)|| / (1 + ω²) dω < ∞    (7.336)

is bounded.28 Then a phase function exists such that the impulse response h(t) is causal. The Paley–Wiener criterion is satisfied only if the support of |H(jω)| is unbounded; otherwise |H(jω)| would be equal to zero over finite intervals of frequency, resulting in infinite values of the logarithm (ln|H(jω)| = −∞).

7.18.4 Minimum Phase Property

Transfer functions satisfying the Paley–Wiener criterion have the general form

H(jω) = H_φ(jω) H_ap(jω)    (7.337)

where H_φ(jω) is called a minimum-phase transfer function and H_ap(jω) is an all-pass transfer function. The minimum-phase transfer function

H_φ(jω) = |H(jω)| e^{jφ(ω)} = A_φ(ω) + jB_φ(ω)    (7.338)

has the minimum phase lag φ(ω) for a given magnitude characteristic. The minimum-phase transfer function H_φ(s) has all its zeros lying in the left half-plane (i.e., α < 0) of the s-plane. The minimum-phase transfer function is analytic, and its real and imaginary parts form a Hilbert pair

A_φ(ω) ↔ B_φ(ω)    (7.339)
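The Hilbert pair 7.329/7.330 can be verified numerically for the simplest causal system, the one-pole low-pass H(jω) = 1/(1 + jω), whose real and imaginary parts are A(ω) = 1/(1+ω²) and B(ω) = −ω/(1+ω²). Folding the principal-value integral about the singularity makes the integrand smooth. A sketch (grid sizes are arbitrary accuracy choices):

```python
import numpy as np

w = 1.0                                  # evaluation frequency

def A(lam):
    return 1.0 / (1.0 + lam**2)          # real part of H(j*lam)

# P.V. integral folded about lam = w:
#   B(w) = (1/pi) * integral_0^inf [A(w+s) - A(w-s)] / s ds
h = 1e-3
s = (np.arange(100_000) + 0.5) * h       # midpoint rule, s in (0, 100)
B_num = np.sum((A(w + s) - A(w - s)) / s) * h / np.pi
B_exact = -w / (1.0 + w**2)              # closed form, equals -0.5 at w = 1
```

The numerical principal-value integral reproduces the imaginary part, as causality demands.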
An important feature of the minimum-phase transfer function is that the propagation function

γ(s) = ln[H(s)] = β(α, ω) + jφ(α, ω)    (7.340)

is analytic in the right half-plane. This is so because all zeros are in the left half-plane and, because we postulate stability, all poles are in the left half-plane too. Then the real and imaginary parts of the propagation function form a Hilbert pair:

φ(ω) = (1/π) P ∫_{−∞}^{∞} β(λ)/(λ − ω) dλ = (1/π) P ∫_{−∞}^{∞} ln|H(jλ)|/(λ − ω) dλ    (7.341)

β(ω) = −(1/π) P ∫_{−∞}^{∞} φ(λ)/(λ − ω) dλ    (7.342)

These relations can be converted to the form of the well-known Bode phase-integral theorem:

φ(ω₀) = (π/2)(dβ/du)₀ + (1/π) P ∫_{−∞}^{∞} [dβ/du − (dβ/du)₀] ln[coth|u/2|] du    (7.343)

where u = ln(ω/ω₀) is the normalized logarithmic frequency scale, dβ/du is the slope of the β-curve in the ln–ln scale, and the subscript 0 denotes the value at ω = ω₀. The Bode formula shows that for minimum-phase transfer functions the phase depends on the slope of the β-curve (β is the damping coefficient). The factor ln[coth|u/2|] is peaked at u = 0 (i.e., at ω = ω₀); hence the phase at a given ω₀ is mostly influenced by the slope dβ/du in the vicinity of ω₀.

The all-pass part of the nonminimum-phase transfer function defined by Equation 7.337 may be written in the form

H_ap(jω) = e^{jψ(ω)}    (7.344)

Therefore, the total phase function has two terms:

arg[H(jω)] = φ(ω) + ψ(ω)    (7.345)

where φ(ω) is the minimum-phase part and ψ(ω) is the nonminimum-phase part of the total phase.

7.18.5 Amplitude-Phase Relations in DLTI Systems

A discrete, linear, time-invariant (DLTI) system is characterized by the Z-pair (see also Chapter 6)

h(i) ↔ H(z);  z = e^{jψ}    (7.346)

The sequence h(i) (i = 0, 1, 2, ...) is the impulse response of the system to excitation by the Kronecker delta, and H(z) is the one-sided Z transform of the impulse response, called the transfer function (or frequency characteristic) of the DLTI system; it is a function of the dimensionless normalized frequency ψ = 2πf/f_s, where f is the actual frequency and f_s the sampling frequency. For causal systems the impulse response is one-sided (h(i) = 0 for i < 0). The transfer function H(e^{jψ}) is periodic with period equal to 2π. This periodic function may be expanded into a Fourier series

H(e^{jψ}) = Σ_{i=0}^{∞} h(i) e^{−jψi}    (7.347)

The Fourier coefficients h(i) are equal to the terms of the impulse response and are given by the Fourier integral:

h(i) = (1/2π) ∫_{−π}^{π} H(e^{jψ}) e^{jψi} dψ    (7.348)

In general, the transfer function is a complex quantity

H(e^{jψ}) = A(ψ) + jB(ψ)    (7.349)

Analogously to Equation 7.331, the causal impulse response h(i) can be regarded as a sum of two noncausal even and odd parts of the form

h(i) = h(0)δ(i) + h_e(i) + h_o(i)    (7.350)

where δ(i) is the Kronecker delta. The even part is defined by the equation

h_e(i) = 0.5[h(i) + h(−i)];  |i| > 0    (7.351)

and the odd part by the equation

h_o(i) = 0.5[h(i) − h(−i)]    (7.352)

Let us write the Fourier series (Equation 7.347) term by term. We get

H(e^{jψ}) = h(0) + Σ_{i=1}^{∞} h(i) cos(ψi) − j Σ_{i=1}^{∞} h(i) sin(ψi)    (7.353)

The comparison of Equations 7.349 and 7.353 shows that

A(ψ) = h(0) + Σ_{i=1}^{∞} h(i) cos(ψi) = h(0) + F⁻¹[h_e(i)]    (7.354)

and

B(ψ) = −Σ_{i=1}^{∞} h(i) sin(ψi) = F⁻¹[h_o(i)]    (7.355)
and we have a Hilbert pair

A(psi) <=H=> B(psi)    (7.356)

We used the relations H[h(0)] = 0 and H[cos(psi i)] = sin(psi i). Because A(psi) and B(psi) are periodic functions of psi, we may apply the cotangent form of the Hilbert transform (see Section 7.7):

B(psi) = (1/2 pi) P Integral[-pi, pi] A(Theta) cot[(Theta - psi)/2] d Theta    (7.357)

and

A(psi) = h(0) - (1/2 pi) P Integral[-pi, pi] B(Theta) cot[(Theta - psi)/2] d Theta    (7.358)
7.18.6 Minimum Phase Property in DLTI Systems

Analogous to Equations 7.337 and 7.338, the transfer function of a DLTI system may be written in the form

H(z) = Hw(z) H_ap(z)    (7.359)

where Hw(z) satisfies the constraints of a minimum-phase transfer function, that is, it has all its zeros inside the unit circle of the z-plane, and H_ap(z) is an all-pass function consisting of a cascade of factors of the form

H_ap(z) = [z^{-1} - z_i] / [1 - z_i* z^{-1}]    (7.360)

The all-pass function has a magnitude of one; hence, H(z) and Hw(z) have the same magnitude. Hw(z) differs from H(z) in that the zeros of H(z) lying outside the unit circle at points z = 1/z_i are reflected inside the unit circle to z = z_i*. Let us take the complex logarithm of Hw(e^{j psi}):

ln[Hw(e^{j psi})] = ln|Hw(e^{j psi})| + j arg[Hw(e^{j psi})]    (7.361)

and, analogous to Equations 7.341 and 7.342, we have a Hilbert pair

ln|Hw(e^{j psi})| = ln[h(0)] - (1/2 pi) P Integral[-pi, pi] arg[Hw(e^{j Theta})] cot[(Theta - psi)/2] d Theta    (7.362)

arg[Hw(e^{j psi})] = (1/2 pi) P Integral[-pi, pi] ln|Hw(e^{j Theta})| cot[(Theta - psi)/2] d Theta    (7.363)

It can be proved that the relations (Equations 7.362 and 7.363) remain valid for transfer functions with zeros on the unit circle. In general, a stable and causal system has all its poles inside the unit circle, while its zeros may lie outside it. However, starting from a nonminimum-phase transfer function, a minimum-phase function can be constructed by reflecting the zeros lying outside the unit circle inside it.

7.18.7 Kramers–Kronig Relations in Linear Macroscopic Continuous Media

The amplitude-phase relations of circuit theory are known in the macroscopic theory of continuous lossy media as the Kramers–Kronig relations.18,19 Almost all media display some degree of frequency dependence of their parameters, called dispersion. Let us take the example of a linear and isotropic electromagnetic medium. The simplest constitutive macroscopic relations describing this medium are32

D = e e0 E = (1 + xe) e0 E    (7.364)

B = m m0 H = (1 + xm) m0 H    (7.365)

P = xe e0 E    (7.366)

M = xm H    (7.367)

where E [V/m] is the electric field vector, H [A/m] is the magnetic field vector, D [C/m^2] is the electric displacement, B [Wb/m^2] is the magnetic induction, m0 = 4 pi x 10^-7 [H/m] is the permeability of free space, e0 = (1/36 pi) x 10^-9 [F/m] is the permittivity of free space, and e, m, xm, and xe are dimensionless constants. The vectors P and M are called the polarization and the magnetization of the medium. If we substitute for the electrostatic field vector E a field varying in time as exp(jwt), then the properties of the medium are described by the frequency-dependent complex susceptibility

x(jw) = x'(w) - j x''(w)    (7.368)

where x' is an even and x'' an odd function of w. The imaginary term x'' represents the conversion of electric energy into heat, that is, the losses of the medium. In fact, x(jw) plays the same role as the transfer function in circuit theory and is defined by the equation

x(jw) = Pm e^{j(wt + w)} / [e0 Em e^{jwt}] = (Pm / e0 Em) e^{jw}    (7.369)
Let us apply Fourier spectral methods to examine Equations 7.366 and 7.369. We consider a disturbance E(t) given by the Fourier pair
E(t) <=F=> X_E(jw)    (7.370)
The response P(t) is represented by the Fourier pair

P(t) <=F=> X_P(jw)    (7.371)

where

X_P(jw) = e0 x(jw) X_E(jw)    (7.372)

The multiplication-convolution theorem yields the time-domain solution

P(t) = e0 Integral[-inf, inf] h(tau) E(t - tau) d tau    (7.373)

where h(t), given by the Fourier pair

h(t) <=F=> x(jw)    (7.374)

is the "impulse response" of the medium, that is, the response to the excitation delta(t). For any physical medium, the impulse response is causal. This is possible only if x(jw) is analytic. Therefore, its real and imaginary parts form a Hilbert pair:

x''(w) = -(1/pi) P Integral[-inf, inf] x'(eta)/(eta - w) d eta    (7.375)

x'(w) = (1/pi) P Integral[-inf, inf] x''(eta)/(eta - w) d eta    (7.376)

These relations are known as the Kramers–Kronig relations and are a direct consequence of causality. They apply to many media; for example, in optics, the real and imaginary parts of the complex reflection coefficient form a Hilbert pair.

7.18.8 Concept of Signal Delay in Hilbertian Sense

Consider a signal and its Fourier transform

x(t) <=F=> X(jw)    (7.377)

Let us assume that the Fourier spectrum X(jw) may be written in the form of a product defined by Equation 7.337:

X(jw) = X1(jw) X2(jw)    (7.378)

where X1(jw) fulfills the constraints of a minimum-phase function and X2(jw) is an "all-pass" function of magnitude equal to one and phase function psi(w); that is, X2(jw) = e^{j psi(w)}. The application of the convolution-multiplication theorem yields the convolution

x(t) = x1(t) * x2(t)    (7.379)

where x1(t) <=F=> X1(jw) is defined as a minimum-phase signal satisfying relations (7.341 and 7.342); that is,

arg[X1(jw)] <=H=> ln|X1(jw)|    (7.380)

and the signal

x2(t) <=F=> X2(jw) = e^{j psi(w)}    (7.381)

is defined as the nonminimum-phase part of the signal x(t). Let us formulate the following definitions:

Definition 7.1 The minimum-phase signal x1(t) has zero delay in the Hilbert sense.

Definition 7.2 The delay of the signal relative to the moment t = 0 is defined by a specific property of the signal x2(t). Krylov and Ponomariev20 used the name "ambiguity function" for x2(t) and proposed to define the delay by the position of its maximum. Another possibility is to define the delay using the position of the center of gravity of x2(t).

Examples

1. If the function x2(t) = delta(t), the delay equals zero because

x(t) = x1(t) * delta(t) = x1(t)    (7.382)

2. If the function x2(t) = delta(t - t0), the delay equals t0 because

x(t) = x1(t) * delta(t - t0) = x1(t - t0)    (7.383)

3. Consider a phase-delayed harmonic signal and its Fourier image:

cos(w0 t - w0) <=F=> pi delta(w + w0) e^{j w0} + pi delta(w - w0) e^{-j w0}    (7.384)

or

cos(w0 t) * delta(t - w0/w0) <=F=> pi [delta(w + w0) + delta(w - w0)] e^{-j w0 sgn w}    (7.385)
Evidently the ambiguity function x2(t) is

x2(t) = delta(t - w0/w0) <=F=> e^{-j w0 sgn w}    (7.386)

and the time delay is, of course, t0 = w0/w0, as we could expect.

4. Consider the series connection of a first-order low-pass with the transfer function

X1(jw) = 1/(1 + jw tau)    (7.387)

and a first-order all-pass with the phase function of the form

arg[X2(jw)] = tan^{-1}[2 w tau / ((w tau)^2 - 1)]    (7.388)

The impulse response of the low-pass is

x1(t) = F^{-1}[1/(1 + jw tau)] = 1(t)(1/tau) e^{-t/tau}    (7.389)

and satisfies the definition of the minimum-phase signal. The impulse response of the all-pass plays here the role of the ambiguity function and has the form

x2(t) = F^{-1}[(1 - jw tau)/(1 + jw tau)] = 1(t)(2/tau) e^{-t/tau} - delta(t)

We observe that the maximum of x2(t) is at t = 0. However, we expect that the all-pass introduces some delay. In this case it would be advisable to define the delay using the center of gravity of the signal x2(t).
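The zero-reflection construction of Section 7.18.6 can be sketched numerically. This is a minimal example of my own (the coefficients are assumptions, not from the text): zeros of an FIR H(z) lying outside the unit circle are moved to their conjugate-reciprocal positions z -> 1/z*, which preserves the magnitude on the unit circle up to the constant gain |z_i|.

```python
import numpy as np

# Assumed FIR example: H(z) = 1 - 2.5 z^-1 + z^-2, zeros at z = 2 and z = 0.5
b = np.array([1.0, -2.5, 1.0])
zeros = np.roots(b)

# reflect outside zeros to 1/z*, scale to keep |H| unchanged
reflected = np.where(np.abs(zeros) > 1, 1.0 / np.conj(zeros), zeros)
gain = np.prod(np.abs(zeros[np.abs(zeros) > 1]))
b_min = np.real(gain * np.poly(reflected))     # minimum-phase coefficients

def freqz(coeffs, w):
    # H(e^{jw}) = sum_i coeffs[i] e^{-j w i}
    return sum(c * np.exp(-1j * w * i) for i, c in enumerate(coeffs))

w = np.linspace(0.0, np.pi, 64)
assert np.allclose(np.abs(freqz(b, w)), np.abs(freqz(b_min, w)))   # same magnitude
assert np.all(np.abs(np.roots(b_min)) <= 1.0 + 1e-12)              # minimum phase
```

The quotient of the two responses is the all-pass factor of Equation 7.360.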
7.19 Hilbert Transforms in the Theory of Sampling

The generation of a sequence of samples of a continuous signal (sampling) and the recovery of this signal from its samples (interpolation) are widely used procedures in modern signal processing and communications. Basic and advanced theory of sampling and interpolation is presented in many textbooks. This section presents the role of Hilbert transforms in the theory of sampling and interpolation. Figure 7.33 shows, for reference, the usual means by which the sequence of samples is produced. In general, the sampling pulses may be nonequidistant. However, this section presents the role of Hilbert transforms in the basic WKS (Whittaker, Kotelnikov, Shannon) theory of periodic sampling and interpolation. The periodic sequence of sampling pulses may be written in the form (see Equation 7.81)

p(t) = pT(t) * Sum[k=-inf, inf] delta(t - kT)    (7.390)

FIGURE 7.33 A method of generation of a sequence x^(t) of samples of the analog signal x(t).
where pT(t) defines the waveform of the sampling pulse (the generating function of the periodic sequence of pulses) and f = 1/T is the sampling frequency. From the point of view of presenting the role of Hilbert transforms in sampling and interpolation, it is sufficient to use the delta sampling sequence, inserting pT(t) = delta(t). The delta sampling sequence is given by the formula (remember that delta(t) * delta(t) = delta(t))

p(t) = Sum[k=-inf, inf] delta(t - kT)    (7.391)

For convenience, let us write here the Hilbert transform of this sampling sequence (see Section 7.7, Equation 7.95):

Sum[k=-inf, inf] delta(t - kT) <=H=> (1/T) Sum[k=-inf, inf] cot[(pi/T)(t - kT)]    (7.392)

The Fourier image of the delta sampling sequence is given by another periodic delta sequence:

Sum[k=-inf, inf] delta(t - kT) <=F=> (1/T) Sum[k=-inf, inf] delta(f - k/T)    (7.393)

The sampler produces as an output a sequence of samples given by the formula

xs(t) = Sum[k=-inf, inf] x(kT) delta(t - kT)    (7.394)

that is, a sequence of delta functions weighted by the samples of the signal x(t). Let us recall the basic WKS sampling theorem. Consider a signal x(t) and its Fourier image X(jf), w = 2 pi f.
If the Fourier image is low-pass band limited, that is, |X(jf)| = 0 for |f| > W, then x(t) is completely determined by the sequence of its samples taken at the moments tk spaced T = 1/(2W) apart. The sampling frequency fs = 2W is called the Nyquist rate. The multiplication-to-convolution theorem yields the spectrum of the sequence of samples

Xs(jf) = X(jf) * (1/T) Sum[k=-inf, inf] delta(f - k/T)    (7.395)

Figure 7.34 shows the spectrum of the original signal, the spectrum of the sequence sampled at the Nyquist rate fs = 2W, the spectrum of an oversampled signal with fs > 2W, and the spectrum of an undersampled signal with fs < 2W. Notice that the sequence of samples given by Equation 7.394 may be regarded as a model of a signal with pulse amplitude modulation (PAM). The original signal x(t) may be recovered by filtering this PAM signal using the ideal noncausal and physically unrealizable low-pass filter defined by the transfer function

Y(jf) = 1 for |f| < W;  0.5 for |f| = W;  0 for |f| > W    (7.396)

The noncausal impulse response of this filter is

h(t) = F^{-1}[Y(jf)] = 2W sin(2 pi W t)/(2 pi W t)    (7.397)
FIGURE 7.34 (a) A band-limited low-pass spectrum of a signal, (b) the corresponding spectrum of the sequence sampled at the Nyquist rate fs = 2W, (c) the spectrum with oversampling fs > 2W, and (d) the spectrum with undersampling fs < 2W, showing the aliasing of the sidebands.
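The aliasing of Figure 7.34d is easy to reproduce numerically. A minimal sketch with assumed numbers (fs = 100 Hz, tone at 70 Hz, my own example): a tone above fs/2 folds down to |f0 - fs|.

```python
import numpy as np

# Assumed example: fs < 2*f0, so the 70 Hz tone aliases to 30 Hz
fs, f0, n = 100.0, 70.0, 1000
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(x))
f = np.fft.rfftfreq(n, 1 / fs)
f_peak = f[np.argmax(spec)]
assert abs(f_peak - (fs - f0)) < f[1]   # the peak appears at 30 Hz, not at 70 Hz
```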
and is called the interpolatory function. The total response is a sum of the responses to succeeding samples, giving the well-known interpolatory expansion (fs = 2W):

x(t) = Sum[k=-inf, inf] x(k/2W) sin[2 pi W(t - k/2W)] / [2 pi W(t - k/2W)]    (7.398)

The summation exactly restores the original signal x(t). In the following text, the argument of the interpolatory function will be written using the notation

2a(t, k) = 2 pi W [t - k/(2W)]    (7.399)

giving the following form of the interpolation expansion:

x(t) = Sum[k=-inf, inf] x(k/2W) sin[2a(t, k)] / [2a(t, k)]    (7.400)

Notice that the sampling of the function x(t) = a (a constant) yields the formula

Sum[k=-inf, inf] sin[2a(t, k)] / [2a(t, k)] = 1    (7.401)

This equation may be used to calculate the accuracy of the interpolation due to any truncation of the summation. Whittaker's interpolatory function and its Hilbert transform form the Hilbert pair

sin[2a(t, k)] / [2a(t, k)] <=H=> sin^2[a(t, k)] / a(t, k)    (7.402)

Therefore, the interpolatory expansion of the Hilbert transform H[x(t)] = x^(t), due to the linearity property, is given by the formula

x^(t) = Sum[k=-inf, inf] x(k/2W) sin^2[a(t, k)] / a(t, k)    (7.403)

This formula may be applied to calculate the Hilbert transforms of low-pass signals using their samples. The transfer function of the low-pass Hilbert filter (transformer) is given by the Fourier transform of the impulse response on the right-hand side of Equation 7.402:

Y_H(jf) = F{sin^2[a(t, k)] / a(t, k)} = -j sgn(f) Y(jf)    (7.404)

with Y(jf) given by Equation 7.396. The sampling of the function x(t) = a yields in this case

Sum[k=-inf, inf] sin^2[a(t, k)] / a(t, k) = 0    (7.405)

in agreement with the fact that the Hilbert transform of a constant equals zero. The expansion of the analytic signal c(t) = x(t) + j x^(t) using interpolatory functions has the form

c(t) = Sum[k=-inf, inf] x(k/2W) { sin[2a(t, k)] / [2a(t, k)] + j sin^2[a(t, k)] / a(t, k) }    (7.406)

and, using trigonometric identities, we get the following form of the interpolatory expansion of the analytic signal:

c(t) = -j Sum[k=-inf, inf] x(k/2W) [e^{j2a(t, k)} - 1] / [2a(t, k)]    (7.407)
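Equation 7.403 can be exercised directly. A minimal sketch with assumed parameters (W = 1, a test tone at f = 0.4 < W, a finite truncation of the sum; my own example): the Hilbert transform of cos is recovered as sin from the Nyquist-rate samples alone.

```python
import numpy as np

# Assumed example: band W = 1, in-band tone f = 0.4, samples at T = 1/(2W)
W, f = 1.0, 0.4
k = np.arange(-4000, 4001)
tk = k / (2 * W)
xk = np.cos(2 * np.pi * f * tk)

def hilbert_from_samples(t):
    # x^(t) = sum_k x(k/2W) sin^2[a(t,k)]/a(t,k),  a(t,k) = pi*W*(t - k/(2W))
    a = np.pi * W * (t - tk)
    with np.errstate(divide="ignore", invalid="ignore"):
        kern = np.where(np.abs(a) < 1e-12, 0.0, np.sin(a) ** 2 / a)
    return np.sum(xk * kern)

t0 = 0.3
assert abs(hilbert_from_samples(t0) - np.sin(2 * np.pi * f * t0)) < 1e-2
```

The kernel decays only like 1/a, so a fairly long truncation is needed; the slow decay is the price of the sharp cutoff in Equation 7.404.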
7.19.1 Band-Pass Filtering of the Low-Pass Sampled Signal

Consider the ideal band-pass with a physically unrealizable transfer function in the form of a "spectral window," as shown in Figure 7.35. The impulse response of this filter is

h(t) = 2(f2 - f1) {sin[pi (f2 - f1) t] / [pi (f2 - f1) t]} cos[pi (f1 + f2) t]    (7.408)

The insertion of f1 = W and f2 = 3W yields

h(t) = 4W [sin(2 pi W t) / (2 pi W t)] cos(4 pi W t)    (7.409)

If the sequence of samples of the signal x(t) is applied to the input of this band-pass, the output signal z(t) is given by the interpolatory expansion of the form

z(t) = Sum[k=-inf, inf] x(k/(2W)) {sin[2a(t, k)] / [2a(t, k)]} cos[4a(t, k)]    (7.410)

FIGURE 7.35 The magnitude of the transfer function of an ideal band-pass.
where a(t, k) is given by Equation 7.399. We obtained a suppressed-carrier amplitude-modulated signal of the form

z(t) = x(t) cos(4 pi W t)    (7.411)

with a carrier frequency 2W. Therefore, an AM balanced modulator may be implemented using a sampler and a band-pass. Multiplication of the carrier frequency is possible using band-pass filters with f1 = 3W and f2 = 5W, or f1 = 5W and f2 = 7W, and so on. The conclusion is that, in principle, one may multiply the carrier frequency of AM signals while keeping undistorted sidebands (envelope). The comparison of Equations 7.400 and 7.411 enables us to write the signal z(t) in the form

z(t) = { Sum[k=-inf, inf] x(k/2W) sin[2a(t, k)] / [2a(t, k)] } cos(4 pi W t)    (7.412)

and, because cos(4 pi W t - k 2 pi) = cos(4 pi W t), in the form

z(t) = Sum[k=-inf, inf] x(k/2W) {sin[2a(t, k)] / [2a(t, k)]} cos[4a(t, k)]    (7.413)

Analogously, an SSB AM signal may be produced by band-pass filtering of the sequence of samples using a filter with f1 = 2W and f2 = 3W (upper sideband). The impulse response of this filter is

h(t) = 2W [sin(pi W t) / (pi W t)] cos(5 pi W t)    (7.414)

and the interpolatory expansion is

zSSB(t) = Sum[k=-inf, inf] x(k/2W) {sin[a(t, k)] / a(t, k)} cos[5a(t, k)]    (7.415)

This SSB signal may be written in the standard form given by Equation 7.289 (see Section 7.17):

zSSB(t) = x(t) cos(4 pi W t) - x^(t) sin(4 pi W t)    (7.416)

Let us derive the above form starting with Equation 7.415. Using the trigonometric identity cos(5a) = cos a cos(4a) - sin a sin(4a), Equation 7.415 becomes

zSSB(t) = Sum[k=-inf, inf] x(k/2W) { [sin[2a(t, k)] / (2a(t, k))] cos[4a(t, k)] - [sin^2[a(t, k)] / a(t, k)] sin[4a(t, k)] }    (7.417)

It may be shown, in the same manner as before, that Equations 7.416 and 7.417 have identical right-hand sides.

7.19.2 Sampling of Band-Pass Signals

Consider a band-pass signal f(t) with the spectrum limited to the band f1 < |f| < f2 = f1 + W (see Figure 7.36). In general, so-called second-order sampling should be applied to recover, using interpolation, the signal f(t). However, it may be shown that, alternatively, first-order sampling at the rate W may be applied, with simultaneous sampling of the signal f(t) and of its Hilbert transform H[f(t)] = f^(t). The following interpolation formula recovers the signal from the sequences of samples f(n/W) and f^(n/W):

f(t) = Sum[n=-inf, inf] [ f(n/W) s(t - n/W) - f^(n/W) s^(t - n/W) ]    (7.418)

where the interpolating functions are given by the impulse response of the band-pass

s(t) = [sin(pi W t) / (pi W t)] cos[2 pi (f1 + W/2) t]    (7.419)

and of a band-pass Hilbert filter (see Section 7.22)

s^(t) = [sin(pi W t) / (pi W t)] sin[2 pi (f1 + W/2) t]    (7.420)

FIGURE 7.36 The magnitude of the spectrum of a band-pass signal.
7.20 Definition of Electrical Power in Terms of Hilbert Transforms and Analytic Signals

The problem of efficient energy transmission from the source to the load is of importance in electrical systems. Usually the voltage and current waveforms may be regarded as sinusoidal. However, many loads are nonlinear and, therefore, nonsinusoidal cases should be investigated. In many applications the voltages and currents are nearly periodic, aperiodic, or even random. Therefore, some generalizations of the theories developed for periodic cases are needed.
Consider an electrical one-port (linear or nonlinear) as shown in Figure 7.37. The instantaneous power is defined by the equation

P(t) = u(t) i(t)    (7.421)

where u(t) is the instantaneous voltage across the load and i(t) is the instantaneous current in the load. We arbitrarily assign a positive sign to P if the energy P(t)dt is delivered from the source to the load and a negative sign for the opposite direction. The above formal definition of power involves all the limitations associated with the definitions of voltage, current, and the electrical one-port. Let us introduce the notion of the instantaneous quadrature power defined by the equation

Q(t) = u(t) i^(t) = -u^(t) i(t)    (7.422)

where u^ and i^ are the Hilbert transforms of the voltage and current waveforms.

FIGURE 7.37 An electrical one-port, where u(t) is the instantaneous voltage and i(t) the instantaneous current.

7.20.2 Harmonic Waveforms of Voltage and Current

Consider the classical case of a linear load with sine waveforms of u(t) and i(t). We have

u(t) = U cos(wt + wu)    (7.423)

i(t) = J cos(wt + wi)    (7.424)

The instantaneous power is

P(t) = UJ cos(wt + wu) cos(wt + wi)    (7.425)

The Fourier series expansion of P(t) is

P(t) = 0.5UJ cos(wi - wu) + 0.5UJ { cos[2(wt + wi)] cos(wi - wu) + sin[2(wt + wi)] sin(wi - wu) }    (7.426)

The instantaneous quadrature power is

Q(t) = UJ cos(wt + wu) sin(wt + wi)    (7.427)

The Fourier series expansion of Q(t) is

Q(t) = 0.5UJ sin(wi - wu) + 0.5UJ { sin[2(wt + wi)] cos(wi - wu) - cos[2(wt + wi)] sin(wi - wu) }    (7.428)

The mean value of P(t), defined by the equation

P_mean = (1/T) Integral[0, T] P(t) dt = 0.5UJ cos(wi - wu);    w = 2 pi / T    (7.429)

is called the active power, and it is a measure of the unilateral energy transfer from the source to the load. The mean value of the quadrature power Q(t), defined by the equation

Q_mean = (1/T) Integral[0, T] Q(t) dt = 0.5UJ sin(wi - wu)    (7.430)

is called the reactive power. The value of the reactive power depends on the energy that is delivered periodically back and forth between the source and the load with no net transfer. The waveform of the instantaneous power given by Equation 7.426 is shown in Figure 7.38 (for convenience, wu = 0). The energy transfer from the source to the load is given by the integral

E+ = (1/w) Integral[-pi/2, pi/2 - wi] UJ cos(wt) cos(wt + wi) d(wt) = (UJ/2w)[(pi - wi) cos wi + sin wi]    (7.431)

FIGURE 7.38 The waveform of the instantaneous power given by Equation 7.425.
and the energy transfer from the load to the source during the remaining part of the half-period is

E- = -(1/w) Integral[pi/2 - wi, pi/2] UJ cos(wt) cos(wt + wi) d(wt) = (UJ/2w)[sin wi - wi cos wi]    (7.432)

Therefore, the net energy transfer toward the load is

E = E+ - E- = (UJT/4) cos wi    (7.433)

The division of this energy by 0.5T gives the mean value of the power, equal to the active power. However, the division of E- by 0.5T yields

2E-/T = (UJ/2 pi)[sin wi - wi cos wi]    (7.434)

and this mean power differs from the reactive power defined by Equation 7.430. Therefore, the notions of active and reactive power differ considerably. The active power equals the time-independent (constant) term of the instantaneous power given by the Fourier series (Equation 7.426), while the reactive power equals the amplitude of the quadrature (or sine) term of Equation 7.426. Notice that in the Fourier series (Equation 7.428) the roles of both quantities are reversed. Let us recall that the quantity

S = 0.5UJ = U_RMS J_RMS    (7.435)

is called the apparent power, and the quantity

r = cos(wi - wu) = P_mean / S    (7.436)

is called the power factor. The power factor may be regarded as a normalized correlation coefficient of the voltage and current signals, while sin(wi - wu) = sqrt(1 - r^2) may be called the anticorrelation coefficient. The quantities S, P_mean, and Q_mean satisfy the relation

S^2 = P_mean^2 + Q_mean^2    (7.437)

7.20.3 Notion of Complex Power

Consider the analytic (complex) form of the voltage and current harmonic signals defined by Equations 7.423 and 7.424. We have cu(t) = U exp[j(wt + wu)] and ci(t) = J exp[j(wt + wi)]. The complex power is defined by the equation

S = (1/2) cu*(t) ci(t) = 0.5 UJ exp[j(wi - wu)]    (7.438)

In the following text, the symbol S will be used to denote the complex power. We have

S = P_mean + j Q_mean = |S| exp[j(wi - wu)]    (7.439)

The real part of S equals the active power and the imaginary part equals the reactive power. The modulus of the complex power equals the apparent power and the argument equals the phase angle wi - wu.
7.20.4 Generalization of the Notion of Power

The above-described well-known notions of apparent, active, and reactive power were in the past generalized by several authors for nonsinusoidal cases and later for signals with finite average power. The nonsinusoidal periodic waveforms of u(t) and i(t) may be described in the frequency domain by the Fourier series

u(t) = U0 + Sum[n=1, N] Un cos(n wt + wun)    (7.440)

i(t) = J0 + Sum[n=1, N] Jn cos(n wt + win)    (7.441)

where w is a constant equal to the fundamental angular frequency, w = 2 pi / T, and T is the period. Some or even all harmonics of the voltage waveform may not be present in the current waveform and vice versa. The active power may be defined using the same equation (Equation 7.429) as for sinusoidal waveforms. Inserting Equations 7.440 and 7.441 into Equation 7.429 yields

P_mean = U0 J0 + Sum 0.5 Un Jn cos(win - wun)    (7.442)

The summation involves the terms present in both waveforms. Analogously, the reactive power is defined using Equation 7.430:

Q_mean = Sum 0.5 Un Jn sin(win - wun)    (7.443)

This definition of the reactive power was proposed in 1927 by Budeanu6 and is nowadays commonly accepted. It has sometimes been criticized as "lacking physical meaning." Another definition of reactive power was introduced by Fryze,10 who proposed to resolve the current waveform into two components:

i(t) = ip(t) + iq(t)    (7.444)

The "in-phase" component is given by the relation

ip(t) = { [(1/T) Integral[0, T] i u dt] / [(1/T) Integral[0, T] u^2 dt] } u(t) = (P_mean / U_RMS^2) u(t)    (7.445)
where U_RMS is the root mean square (RMS) value of the voltage. The "quadrature" component is

iq = i - ip    (7.446)

and satisfies the orthogonality property

Integral[0, T] iq ip dt = 0    (7.447)

This orthogonality yields, for the RMS values,

I_RMS^2 = Ip,RMS^2 + Iq,RMS^2    (7.448)

The reactive power is defined by the product

Q = U_RMS Iq,RMS    (7.449)

The comparison of Budeanu's and Fryze's definitions of the reactive power shows how misleading it is to apply the same name, "reactive power," to notions having different definitions. Let us illustrate this statement with an example. A source of a cosine voltage is loaded with an ideal diode with the nonlinear characteristic (see Figure 7.39)

i = Gu if u > 0;    i = 0 if u < 0    (7.450)

The current has the waveform of a half-wave rectified cosine (see Figure 7.39b) and may be resolved into the in-phase and quadrature components. The Fourier series expansion of the current has the form (taking G = 1)

i(t) = (U/pi) [1 + (pi/2) cos(wt) + (2/3) cos(2wt) - (2/15) cos(4wt) + (2/35) cos(6wt) - ...]    (7.451)

The in-phase component is

ip(t) = (U/2) cos(wt)    (7.452)

and the Fourier series of the quadrature component (of full-wave rectified cosine shape) is

iq(t) = (U/pi) [1 + (2/3) cos(2wt) - (2/15) cos(4wt) + (2/35) cos(6wt) - ...]    (7.453)

The reactive power defined by Equation 7.443 equals zero, while the reactive power defined by Equation 7.449 equals

Q = U^2/4    (7.454)

However, the instantaneous power (Figure 7.39e) is always positive, so there is no energy oscillating back and forth between the source and the load. Therefore, we should expect the reactive power to equal zero. This requirement is satisfied by Budeanu's definition but not by Fryze's definition.

FIGURE 7.39 (a) A source of sine voltage loaded with a diode, (b) the voltage and current waveforms, (c) the in-phase component of the current, (d) the quadrature component of the current, and (e) the waveform of the instantaneous power.

7.20.5 Generalization of the Notion of Power for Signals with Finite Average Power

A generalized theory of electric power by use of Hilbert transforms was presented by Nowomiejski.24 He considered voltages and currents with finite average power, that is, with finite RMS values defined by the equations

U_RMS = lim[T -> inf] sqrt{ (1/2T) Integral[-T, T] u^2(t) dt }    (7.455)

I_RMS = lim[T -> inf] sqrt{ (1/2T) Integral[-T, T] i^2(t) dt }    (7.456)
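The Fryze decomposition of the diode example above can be checked numerically. A minimal sketch (my own discretization, G = 1, U = 1): the in-phase and quadrature components are orthogonal, and Fryze's reactive power comes out nonzero even though Budeanu's is zero.

```python
import numpy as np

# Assumed discretization of the ideal-diode load of Figure 7.39, u = cos(wt)
n = 1 << 16
wt = 2 * np.pi * np.arange(n) / n
u = np.cos(wt)
i = np.where(u > 0, u, 0.0)              # half-wave rectified cosine, G = 1

P = np.mean(u * i)                       # active power = 1/4
Urms2 = np.mean(u * u)                   # = 1/2
ip = (P / Urms2) * u                     # in-phase component, 0.5*cos(wt)
iq = i - ip                              # quadrature component, 0.5*|cos(wt)|

assert abs(np.mean(ip * iq)) < 1e-8                     # orthogonality (7.447)
Q_fryze = np.sqrt(Urms2 * np.mean(iq * iq))             # U_RMS * Iq,RMS (7.449)
assert np.isclose(Q_fryze, 0.25, rtol=1e-4)             # = U^2/4 for U = 1
```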
The apparent power is defined as

S = U_RMS I_RMS    (7.457)

and the active and reactive powers are defined by means of the relations

P_mean = lim[T -> inf] (1/2T) Integral[-T, T] u(t) i(t) dt    (7.458)

and

Q_mean = lim[T -> inf] (1/2T) Integral[-T, T] u(t) i^(t) dt    (7.459)

or

Q_mean = -lim[T -> inf] (1/2T) Integral[-T, T] u^(t) i(t) dt    (7.460)

where ^ indicates the Hilbert transform. Nowomiejski did not explicitly define the notion of the quadrature power (see Equation 7.422), but in fact the integrand in Equations 7.459 and 7.460 equals Q(t). However, a new quantity called the distortion power was defined. Generally, for each value of T, the Lagrange identity holds, which in the limit yields

S^2 - P_mean^2 = lim[T -> inf] (1/2T)^2 Integral[-T, T] Integral[-T, T] (1/2) [u(t) i(tau) - u(tau) i(t)]^2 dt d tau

In the case

i(t) = const u(t)    (7.465)

the quadrature component defined by Equation 7.446 equals zero and the distortion power equals zero, too. Otherwise, the distortion power is given by

D = U_RMS sqrt{ lim[T -> inf] (1/2T) Integral[-T, T] iq^2(t) dt }    (7.466)

Let us define a power factor rD using the relation

rD = P_mean / sqrt(P_mean^2 + D^2)    (7.467)

The power factor is a measure of the efficiency of the utilization of the power supplied to the load, being equal to unity only if the distortion power D = 0. The cross-correlation of the instantaneous voltage and current waveforms is defined by the integral

r_ui(tau) = lim[T -> inf] (1/2T) Integral[-T, T] u(t) i(t - tau) dt

This function enables us to introduce frequency-domain interpretations of the above-defined powers. The cross-power spectrum Q(w) is defined by the Fourier pair

r_ui(tau) <=F=> Q(w)
For N even, the transfer function of the discrete Hilbert filter is

H(k) = -j for k = 1, 2, ..., N/2 - 1;  0 for k = 0 and k = N/2;  j for k = N/2 + 1, N/2 + 2, ..., N - 1    (7.497)

This transfer function may be written in the closed form

H(k) = -j sgn(N/2 - k) sgn(k)    (7.498)

where sgn(x) equals 1 for x > 0, 0 for x = 0, and -1 for x < 0. The impulse response of this filter is

h(i) = (2/N) sin^2(pi i / 2) cot(pi i / N);    i = 0, 1, ..., N - 1 (N even)

and the DHT of a sequence u(i) is the circular convolution

y(i) = u(i) (*) h(i)    (7.503)

where (*) denotes circular convolution. This convolution may be written in the form

y(i) = Sum[r=0, N-1] h(i - r) u(r)    (7.504)

For N odd, the transfer function is

H(k) = -j for k = 1, 2, ..., (N - 1)/2;  0 for k = 0;  j for k = (N + 1)/2, ..., N - 1    (7.506)

and the impulse response is

h(i) = (2/N) Sum[k=1, (N-1)/2] sin(2 pi i k / N);    i = 0, 1, 2, ..., N - 1    (7.507)

or, in closed form,

h(i) = (1/N) [cot(pi i / N) - cos(pi i) / sin(pi i / N)]    (7.508)
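The two routes to the DHT, circular convolution with the kernel of Equation 7.503 and spectral multiplication by H(k) of Equation 7.498, should agree exactly. A minimal sketch with N = 8 (my own test sequence, the Kronecker delta of Equation 7.514):

```python
import numpy as np

# N even; kernel h(i) = (2/N) sin^2(pi*i/2) cot(pi*i/N), zero for even i
N = 8
i = np.arange(N)
h = np.zeros(N)
odd = i[1::2]
h[odd] = (2.0 / N) * np.sin(np.pi * odd / 2) ** 2 / np.tan(np.pi * odd / N)

def dht_conv(u):
    # circular convolution y(k) = sum_r h((k - r) mod N) u(r)
    return np.array([np.sum(h[(k - np.arange(N)) % N] * u) for k in range(N)])

def dht_fft(u):
    # multiply the DFT by H(k) = -j sgn(N/2 - k) sgn(k)
    k = np.arange(N)
    Hk = -1j * np.sign(N / 2 - k) * np.sign(k)
    return np.real(np.fft.ifft(Hk * np.fft.fft(u)))

u = np.zeros(N); u[0] = 1.0                       # Kronecker delta
assert np.allclose(dht_conv(u), dht_fft(u))
assert np.isclose(dht_conv(u)[1], (np.sqrt(2) + 1) / 4)   # cot(pi/8)/4
```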
7.21.1 Properties of the DFT and DHT Illustrated with Examples

7.21.1.1 Parseval's Theorem

Consider the discrete Fourier pair u(i) <=DFT=> U(k). The discrete form of Parseval's energy (or power) equality has the form

E[u(i)] = Sum[i=0, N-1] |u(i)|^2 = (1/N) Sum[k=0, N-1] |U(k)|^2    (7.509)

This equation may be used to check the correctness of calculations of DFTs and DHTs. However, the energies of the sequence u(i) and of its DHT, y(i), may differ; in general,

E[u(i)] != E[y(i)]    (7.510)

The explanation is given by Equation 7.505. The operator -j sgn(N/2 - k) sgn(k) cancels the spectral terms U(0) and U(N/2). The term U(0) has the form

U(0) = Sum[i=0, N-1] u(i) = N uDC    (7.511)

FIGURE 7.41 The noncausal impulse response of a Hilbert filter (see Equation 7.503), N = 24.
where uDC is the mean value of the signal sequence u(i) or, in electrical terminology, the DC term. The algorithm of the DHT cancels this term. Therefore, the sequence y(i) is defined by the DHT pair

uAC(i) <=DHT=> y(i)    (7.512)

where uAC(i) = u(i) - uDC is the alternating-current component of the signal sequence (with the DC term removed). The energies of the sequences uAC(i) and y(i) are related by the equation

Sum[i=0, N-1] |uAC(i)|^2 = Sum[i=0, N-1] |y(i)|^2 + |U(N/2)|^2 / N    (7.513)

that is, the energies differ by the energy of the spectral term U(N/2), and only if this term equals zero are both energies equal.

Example

Consider the signal given by a Kronecker delta: u(0) = 1 and u(i) = 0 for i >= 1, N = 8. This sequence and its DFT are shown in Figure 7.42a and b. The circular convolution (Equation 7.503) yields in this case

y(i) = deltaK(i) (*) (1/4) sin^2(pi i / 2) cot(pi i / 8)    (7.514)

that is, the following sequence:

i:     0       1          2       3           4        5            6        7
y(i):  0   cot(pi/8)/4    0   cot(3pi/8)/4    0   -cot(3pi/8)/4     0   -cot(pi/8)/4

where cot(pi/8)/4 = (sqrt(2) + 1)/4 = 0.6035... and cot(3pi/8)/4 = (sqrt(2) - 1)/4 = 0.1035....

The sequence y(i) and its DFT are shown in Figure 7.42c and d. The DC term defined by Equation 7.511 is uDC = 1/N = 0.125. For convenience, Figure 7.42e and f shows the sequence uAC(i) and its DFT. The energies are E[u(i)] = 1, E[uAC(i)] = 1 - 1^2/N = 0.875, and E[y(i)] = 1 - 1^2/N - 1^2/N = 1 - 2/N = 0.75.

7.21.1.2 Shifting Property

Consider the discrete Fourier pair u(i) <=DFT=> U(k). It can be shown that

u(i + m) <=DFT=> e^{j 2 pi m k / N} U(k)    (7.515)

where m is an integer.

Example

The spectrum of Figure 7.42b is real, with all samples equal to 1. The delta pulse shifted by one sampling interval (m = 1) and its spectrum are

deltaK(i - m) <=DFT=> e^{-j 2 pi m k / N}    (7.516)
FIGURE 7.42 (a) The sequence u(i) consisting of a single sample deltaK(i), (b) its spectrum U(k) given by the DFT, (c) the samples of the discrete Hilbert transform, (d) the corresponding spectrum V(k), (e) the samples of the AC component of u(i), and (f) the corresponding spectrum UAC(k).

This spectrum is complex and of the form:

k:        0        1          2         3          4         5          6        7
Ure(k):   1    sqrt(2)/2      0    -sqrt(2)/2     -1    -sqrt(2)/2     0    sqrt(2)/2
Uim(k):   0   -sqrt(2)/2     -1    -sqrt(2)/2      0     sqrt(2)/2     1    sqrt(2)/2
|U(k)|:   1        1          1         1          1         1          1        1

This example shows the general rule that a shift changes the phase relations but has no effect on the magnitude of the spectrum.
7.21.1.3 Linearity

Consider the discrete Fourier pairs u1(i) <=DFT=> U1(k) and u2(i) <=DFT=> U2(k). Due to the linearity property, the summation of the sequences yields

a u1(i) + b u2(i) <=DFT=> a U1(k) + b U2(k)    (7.517)

where a and b are constants. The linearity property applies also for the DHTs:

a u1(i) + b u2(i) <=DHT=> a y1(i) + b y2(i)    (7.518)
Example

Consider the sequence of two deltas, u(i) = deltaK(i) + deltaK(i - 1); that is, u(i) = 1 for i = 0 and 1, and u(i) = 0 for 1 < i <= N - 1, N = 8. The DFT of this sequence may be obtained by adding to each term of the spectrum given by Equation 7.516 the number 1, that is, the terms of the spectrum of deltaK(i) (see Figure 7.42b). This yields the complex spectrum:

k:        0         1           2          3           4          5           6           7
Ure(k):   2   1+sqrt(2)/2       1    1-sqrt(2)/2       0    1-sqrt(2)/2       1     1+sqrt(2)/2
Uim(k):   0    -sqrt(2)/2      -1    -sqrt(2)/2        0     sqrt(2)/2        1      sqrt(2)/2
|U(k)|:   2     1.847...    sqrt(2)   0.765...         0     0.765...     sqrt(2)    1.847...

Notice that the term U(N/2) = U(4) equals zero. Therefore, the energies E[uAC(i)] = E[y(i)] = 2 - 2^2/N = 1.5 are equal. The DC term is uDC = 2/N = 0.25.

Example

Consider the sequence

u(i) = e^{-0.05 pi [(N-1)/2 - i]^2};    N = 16    (7.519)

representing a sampled Gaussian pulse, as shown in Figure 7.43 (top). Figure 7.43 (middle and bottom) shows the samples of the DHT calculated via the DFT and the samples of the magnitude of the DFT of the pulse. The DC term equals uDC = 0.2795.... The energies are E[u(i)] = 3.1622... and E[uAC(i)] = E[y(i)] = 1.9122...; that is, the energy difference is negligible due to the negligible value of the term U(N/2).
FIGURE 7.43 (Top) A sequence of samples of a Gaussian pulse, (middle) the samples of the DHT, and (bottom) the samples of the magnitude of the DFT of the Gaussian pulse.

7.21.2 Complex Analytic Discrete Sequence

A sequence of complex samples of a signal and its discrete Hilbert transform does not represent an analytic signal in the sense of the definition of an analytic function. However, it is possible to define an analytic sequence of the form

c(i) = u(i) + jv(i)    (7.520)

where v(i) is the DHT of u(i). Let us derive the spectrum of the sequence c(i). If u(i) ⟷ U(k), then the spectrum of v(i) is given by Equation 7.505 and, due to the linearity property, the spectrum of the complex sequence c(i) is

c(i) ⟷ U(k) + j[−j sgn(N/2 − k) sgn(k)] U(k)

that is,

c(i) ⟷ [1 + sgn(N/2 − k) sgn(k)] U(k),  k = 0, 1, …, N − 1 (N even)    (7.521)

The spectrum is doubled at positive frequencies and canceled at negative frequencies.

Example

Consider the signals and spectra of Figure 7.42. Figure 7.44 shows the real spectra of the delta pulse and its DHT and the resulting spectrum of the complex sequence. The terms of the spectrum of u(i) are canceled at negative frequencies and doubled at positive frequencies. The DC term, that is, U(0), is unaltered. The property that analytic sequences have a one-sided spectrum makes it possible to implement antialiasing schemes of sampling.
FIGURE 7.44 (Top, middle) The spectra U(k) and V(k) of Figure 7.42; (bottom) the corresponding spectrum Ψ(k) of the analytic sequence.

7.21.3 Bilinear Transformation and the Cotangent Form of Hilbert Transformations

The transfer function of an analog LTI system is defined as the quotient of the output-to-input analytic signals (see Equation 7.325) and, if analytical, is an analytic function of the complex frequency s = α + jω. Similarly, the transfer function of the DLTI system defined by Equation 7.495, if analytical, is an analytic function of the complex variable z = x + jy. Let us study the problem of a conformal mapping of the s-plane into the z-plane by means of the bilinear transformations defined by the formulae

z = (1 + s)/(1 − s)    (7.522)

and

s = (z − 1)/(z + 1)    (7.523)

where s is a normalized complex frequency (normalized s = s/f_s = sΔt, where f_s is the sampling frequency and Δt is the sampling period). Inserting s = α + jω into Equation 7.522 and equating the real and imaginary parts yields

x = (1 − α² − ω²)/[(1 − α)² + ω²];  y = 2ω/[(1 − α)² + ω²]    (7.524)

These equations map a family of orthogonal lines α = const. and ω = const. of the s-plane into a family of orthogonal circles of the z-plane, as shown in Figure 7.45. The magnitude of the variable z is |z| = √(x² + y²), giving

|z| = √{[(1 + α)² + ω²]/[(1 − α)² + ω²]}    (7.525)

and the argument is

ψ = arg(z) = tan⁻¹[2ω/(1 − α² − ω²)]    (7.526)

FIGURE 7.45 The mapping of the s-plane, s = α + jω, into the z-plane, z = x + jy, defined by Equation 7.524.

This equation defines the nonlinear dependence between the angular frequency ω and the normalized frequency ψ defined by the representation z = e^{jψ} (see Equation 7.492). For s = jω, that is, α = 0, Equation 7.526 takes the form of a quadratic equation

tan(ψ)ω² + 2ω − tan(ψ) = 0    (7.527)

The roots of this equation are

ω = tan(ψ/2)    (7.528)

and

ω = −cot(ψ/2)    (7.529)

Let us use these nonlinear relations to derive a new form of Hilbert transformations. We start with the Hilbert transformation

B(ω) = −(1/π) P ∫_{−∞}^{∞} A(η)/(η − ω) dη    (7.530)

Let us introduce the notations

η = tan(φ/2);  ω = tan(ψ/2)    (7.531)

and dη = 0.5[1 + tan²(φ/2)] dφ. We get

B(ψ) = −(1/π) P ∫_{−π}^{π} A[tan(φ/2)] · 0.5[1 + tan²(φ/2)] / [tan(φ/2) − tan(ψ/2)] dφ    (7.532)
By means of the trigonometric relation

[1 + tan²(φ/2)] / [tan(φ/2) − tan(ψ/2)] = tan(φ/2) + cot[(φ − ψ)/2]    (7.533)

we get

B(ψ) = −(1/2π) ∫_{−π}^{π} A[tan(φ/2)] tan(φ/2) dφ − (1/2π) ∫_{−π}^{π} A[tan(φ/2)] cot[(φ − ψ)/2] dφ    (7.534)

If we start with the inverse Hilbert transformation

A(ω) = (1/π) P ∫_{−∞}^{∞} B(η)/(η − ω) dη    (7.535)

the same derivation gives

A(ψ) = (1/2π) ∫_{−π}^{π} B[tan(φ/2)] tan(φ/2) dφ + (1/2π) ∫_{−π}^{π} B[tan(φ/2)] cot[(φ − ψ)/2] dφ    (7.536)

The first term of Equation 7.534 is a constant depending only on the even part of A[tan(φ/2)], while the first term of Equation 7.536 depends only on the odd part of B[tan(φ/2)]. If we use, instead of Equation 7.528, the other root defined by Equation 7.529, then the Hilbert transformations (7.534) and (7.536) take the alternative form

B(ψ) = (1/2π) ∫₀^{2π} A[−cot(φ/2)] cot(φ/2) dφ − (1/2π) ∫₀^{2π} A[−cot(φ/2)] cot[(φ − ψ)/2] dφ    (7.537)

and

A(ψ) = −(1/2π) ∫₀^{2π} B[−cot(φ/2)] cot(φ/2) dφ + (1/2π) ∫₀^{2π} B[−cot(φ/2)] cot[(φ − ψ)/2] dφ    (7.538)

The Hilbert transforms in the cotangent form are periodic functions of the variable ψ.

Example

Consider the square function

A(ω) = 1 for |ω| < a;  1/2 for |ω| = a;  0 for |ω| > a    (7.539)

Introducing ω = tan(ψ/2) gives

A(ψ) = 1 for |ψ| < ψ_p = 2 tan⁻¹(a);  1/2 for |ψ| = ψ_p;  0 for |ψ| > ψ_p    (7.540)

The Hilbert transform defined by Equation 7.534 is here

B(ψ) = −(1/2π) ∫_{−ψ_p}^{ψ_p} tan(φ/2) dφ − (1/2π) ∫_{−ψ_p}^{ψ_p} cot[(φ − ψ)/2] dφ    (7.541)

The first integral equals zero, and the result of the second integration (a Cauchy principal value) is

B(ψ) = (1/π) ln | sin[(ψ + ψ_p)/2] / sin[(ψ_p − ψ)/2] |    (7.542)

FIGURE 7.46 The function B(ψ) given by Equation 7.542, for ψ_p = 0.4π (top) and ψ_p = 0.1π (bottom).

Figure 7.46 shows B(ψ) for the two values of ψ_p, 0.4π and 0.1π, corresponding to the normalized frequencies ω ≅ 0.726 and 0.155. The functions A(ψ) and B(ψ) are periodic with the period 2π.
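The closed form (7.542) can be cross-checked with a circular Hilbert transform computed by the DFT; the NumPy sketch below (not part of the handbook) applies the conjugate-function Fourier multiplier −j sgn(n) to samples of the square function and compares the result with Equation 7.542 away from the logarithmic singularities at ±ψ_p.

```python
import numpy as np

psi_p = 0.4 * np.pi
n = 1 << 16
psi = -np.pi + 2 * np.pi * np.arange(n) / n
A = (np.abs(psi) < psi_p).astype(float)            # the square function (7.540)

# circular Hilbert transform: multiply the coefficient of e^{j n psi} by -j sgn(n)
mult = -1j * np.sign(np.fft.fftfreq(n))
B_num = np.fft.ifft(mult * np.fft.fft(A)).real

B_closed = (1 / np.pi) * np.log(np.abs(np.sin((psi + psi_p) / 2)
                                       / np.sin((psi_p - psi) / 2)))

mask = np.minimum(np.abs(psi - psi_p), np.abs(psi + psi_p)) > 0.2
print(np.max(np.abs(B_num[mask] - B_closed[mask])))   # small away from +-psi_p
```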
7.22 Hilbert Transformers (Filters)

The Hilbert transformer, also called a quadrature filter or wide-band 90° phase shifter, is a device in the form of a linear two-port whose output signal is the Hilbert transform of the input signal. Hilbert transformers find numerous applications, for example, in radar systems, SSB modulators, speech processing, measurement systems, schemes of sampling band-pass signals, and many other systems. They are implemented as analog or digital filters. The transfer function of the ideal analog Hilbert filter is (see Equation 7.10)

H(jf) = F[1/(πt)] = |H(jf)| e^{jφ(f)} = −j sgn(f)    (7.543)

Hence, the transfer function is given by

H(jf) = −j for f > 0;  0 for f = 0;  j for f < 0    (7.544)

The magnitude is |H(jf)| = 1 and the phase function is

φ(f) = arg[H(jf)] = −(π/2) sgn(f)    (7.545)

Notice that the convention with a plus sign by φ(f) results in a negative slope of the phase function. The last equation explains the terminology "quadrature filter" or "wide-band 90° phase shifter." The ideal Hilbert filter is noncausal and physically unrealizable; causality implies the introduction of an infinite delay. In any practical implementation of the Hilbert filter, the output signal is a delayed and more or less distorted Hilbert transform of the input signal. The spectrum of the input signal should be band-limited between the low-frequency edge f₁ and the high-frequency edge f₂ of the pass-band. The necessary delay depends only on f₁. Inside the pass-band W = f₂ − f₁ it is possible to get an approximate version of the transfer function defined by Equation 7.543. Good approximations require sophisticated methods of design and implementation. Hilbert transformers can be implemented in the form of analog or digital convolvers using the time definition of the Hilbert transforms given by Equations 7.3 and 7.4 (analog convolutions) or by Equation 7.503 (discrete circular convolution). Another implementation uses so-called quadrature filters.

The performance of analog Hilbert transformers depends on design and alignment. Keeping in mind that ideal alignment is impossible, and that even a good initial alignment is deteriorated by aging and various physical changes (for example, temperature, humidity, pressure, and vibrations), the use of extremely sophisticated design methods and implementations may be unreasonable. In contrast, the performance of digital Hilbert transformers depends only on the design. Because the magnitude of the transfer function defined by Equation 7.544 equals 1, all-pass filters are frequently used in analog and digital implementations of Hilbert transformers.

7.22.1 Phase-Splitter Hilbert Transformers

Analog Hilbert transformers are mostly implemented in the form of a phase splitter consisting of two parallel all-pass filters with a common input port and separated output ports, as shown in Figure 7.47. The transfer functions of the all-pass filters are

H₁(jf) = e^{jφ₁(f)};  H₂(jf) = e^{jφ₂(f)}    (7.546)

The magnitude of both functions equals 1. The antisymmetry of the phase functions allows us to consider only the positive-frequency part. The phase difference of the harmonic signals at the output ports of the phase splitter should be

δ(f) = φ₁(f) − φ₂(f) = −π/2;  all f > 0    (7.547)

The realization of this requirement is possible in a limited frequency band between the low-frequency edge f₁ and the high-frequency edge f₂, as shown in Figures 7.52 through 7.55. Therefore, the spectrum of the input signal should be band-limited between f₁ and f₂. Due to unavoidable amplitude and phase errors, the output signals of the phase splitter form a Hilbert pair only approximately. The phase functions of the all-pass filters defined by Equation 7.546 should be, inside the band W = f₂ − f₁, approximately linear in the logarithmic frequency scale, but they are nonlinear in a linear scale. This nonlinearity introduces phase distortions; therefore, the output signals form a Hilbert pair distorted in relation to the input signal. The distortions can be removed using a suitable phase equalizer connected in series to the input port, as shown in Figure 7.48. By proper phase equalization the output signals form an undistorted pair of Hilbert transforms.

FIGURE 7.47 A phase-splitter Hilbert transformer, where H₁(jf) and H₂(jf) are all-pass transfer functions.

FIGURE 7.48 The series connection of a phase equalizer and the Hilbert transformer of Figure 7.47.
7.22.2 Analog All-Pass Filters

Hilbert transformers in the form of phase splitters are implemented using all-pass filters. A convenient choice is the all-pass consisting of two complementary filters, a low-pass and a high-pass, as shown in Figure 7.49a. The impedance Z(jω) = jX(ω) is a loss-less one-port (pure reactance). The transfer function of this all-pass has the form

H(jω) = [R − jX(ω)]/[R + jX(ω)],  ω = 2πf    (7.548)

The magnitude of this function equals one for all f, and the phase function is

φ(ω) = arg{[R − jX(ω)]²} = −tan⁻¹[2RX(ω)/(R² − X²(ω))]    (7.549)

The insertion X = −1/(ωC) (see Figure 7.49b) yields the phase function of a first-order all-pass

φ(y) = −tan⁻¹[2y/(1 − y²)],  y = ωRC = ωτ    (7.550)

The insertion X = ωL − 1/(ωC) (see Figure 7.49c) yields the phase function of a second-order all-pass

φ(y) = −tan⁻¹[2(1 − y²)qy / ((1 − y²)² − q²y²)]    (7.551)

where y = ω/ω_r, ω_r = 1/√(LC), and q = ω_r RC = R√(C/L).

FIGURE 7.49 An all-pass consisting of (a) a low-pass and a complementary high-pass, (b) a first-order RC low-pass and complementary CR high-pass, and (c) a second-order RLC low-pass and complementary RLC high-pass.

The phase functions defined by Equations 7.550 and 7.551 are shown in Figure 7.50 in linear and logarithmic frequency scales. The second-order function shows the best linearity in the logarithmic scale for q = 4. Notice that the phase functions are continuous if we remove the phase jumps of π by changing the branch of the multiple-valued tan⁻¹ function, similarly to Figure 7.22. To get a wider frequency range of Hilbert transformers, higher-order all-passes have to be applied. More practical, however, is the use of a series connection of first-order all-passes with appropriate staggering of the individual phase functions. For a given frequency band W = f₂ − f₁, optimum staggering yields the smallest value of the RMS phase error. The local value of the phase error is defined as the difference between δ(f), given by Equation 7.547, and −π/2. Therefore, the local error is

ε(f) = δ(f) + π/2    (7.552)

FIGURE 7.50 (a) Nonlinear phase functions of the first-order all-pass given by Equation 7.550 and the second-order all-pass given by Equation 7.551. (b) The same functions in a logarithmic frequency scale. The second-order function shows best linearity for q = 4.

The design methods of 90° phase splitters were described by Dome⁹ in 1946. Later Darlington,⁸ Orchard,²⁷ Weaver,³⁸ and Saraga³³ described design methods based on a Chebyshev approximation of a desired phase error. Tables and diagrams of these approximations can be found in Bedrosian.²
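The all-pass phases above can be sketched numerically. The following NumPy fragment (illustrative, not from the handbook) writes both sections directly in the standard all-pass form H = (1 − jT)/(1 + jT); the identification of T with circuit values follows the normalizations quoted in the text, and the component-level circuit of Figure 7.49 is abstracted away.

```python
import numpy as np

y = np.linspace(0.05, 20.0, 4000)      # normalized frequency grid

# First order: T = y = omega*R*C, so the continuous phase is -2*atan(y)
H1 = (1 - 1j * y) / (1 + 1j * y)
phi1 = -2 * np.arctan(y)

# Second order: T = (y**2 - 1)/(q*y), y = omega/omega_r, q = R*sqrt(C/L);
# the continuous phase runs from +pi down to -pi (a net span of 2*pi)
q = 4.0
T = (y ** 2 - 1) / (q * y)
H2 = (1 - 1j * T) / (1 + 1j * T)
phi2 = -2 * np.arctan(T)

ok1 = np.allclose(np.abs(H1), 1.0) and np.allclose(np.exp(1j * phi1), H1)
ok2 = np.allclose(np.abs(H2), 1.0) and np.allclose(np.exp(1j * phi2), H2)
print(ok1, ok2)        # both magnitudes are 1 and the phases are as claimed
```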
7.22.3 A Simple Method of Design of Hilbert Phase Splitters

Analog Hilbert transformers are designed using models of a given filter consisting of loss-less capacitors, low-loss inductors, ideal resistors, and ideal operational amplifiers. More accurate models that take into account spurious capacitances, inductances, and other spurious effects are sophisticated and rarely applied at the design stage. The alignment of circuits with an accuracy better than 0.5%–1% is difficult to achieve. With the above arguments in mind, the required accuracy of the design of the phase-splitter parameters is limited. Therefore, the simple method of design using a personal computer presented here may be effective in many applications. The method consists of two steps. In the first step, the phase function φ₁(f), given by Equation 7.546, is linearized in the logarithmic frequency scale. In the second step, the phase function φ₂(f) is obtained by shifting the function φ₁(f) in order to get a minimum value of the RMS phase error defined by Equation 7.547. The lower and upper frequency edges f₁ and f₂ are chosen as the abscissae at which the error function diverges. The method is illustrated by four examples of design of Hilbert transformers given by the circuit models in Figure 7.51.
Example

First example: The Hilbert transformer of this example is implemented using two first-order all-pass filters (see Figure 7.51a). The phase function of the first filter is (see Equation 7.550)

φ₁(f) = tan⁻¹[2y/(y² − 1)],  y = 2πfRC = 2πfτ    (7.553)

The first step is abandoned because φ₁(f) has no degree of freedom for linearization. In the second step we have to find the shift parameter, denoted a, in the phase function

φ₂(f) = tan⁻¹[2ay/(a²y² − 1)],  y = 2πfRC    (7.554)

giving the minimum RMS phase error. The functions φ₁(f), φ₂(f), and the error function ε(f) are shown in Figure 7.52. Simple computer calculations yield the value a = 0.167, giving the normalized frequency edges y₁ = 1.75 and y₂ = 3.5, and the RMS phase error ε_RMS = 0.012. The pass-band equals one octave.

Second example: The phase splitter of this example is implemented using two first-order all-pass filters in each chain (see Figure 7.51b). The phase function of the first filter is

φ₁(f) = tan⁻¹[2y/(y² − 1)] + tan⁻¹[2ay/(a²y² − 1)],  y = 2πfRC    (7.555)

In the first step, we have to find the shift parameter a that gives the best linearity of φ₁(f) in the logarithmic scale. Small changes of a introduce a tradeoff between the RMS phase error and the pass-band of the Hilbert transformer. In the second step we have to find the value of the shift parameter b in the phase function

φ₂(f) = tan⁻¹[2by/(b²y² − 1)] + tan⁻¹[2aby/(a²b²y² − 1)]    (7.556)

yielding the minimum of the RMS phase error. Figure 7.53 shows an example with a = 0.08 and b = 0.24, giving the normalized edge frequencies y₁ = 1.6 and y₂ = 30 (f₂/f₁ = 18.75, or more than 4 octaves) with ε_RMS = 0.016.

FIGURE 7.51 The phase-splitter Hilbert transformer using (a) first-order all-pass filters, (b) a series connection of two first-order all-passes in each chain, (c) three first-order all-passes, and (d) second-order all-passes.

FIGURE 7.52 The phase functions and the phase error of the Hilbert transformer of Figure 7.51a.
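The second design example can be reproduced with a few lines of NumPy (a sketch, not from the handbook). Using the continuous branch −2 tan⁻¹(·) of the all-pass phases, the numbers below do not match the quoted ε_RMS = 0.016 exactly (the band definition and averaging details of the original computation are not specified), but the local error of Equation 7.552 stays below about 0.1 rad across the band.

```python
import numpy as np

a, b = 0.08, 0.24                       # shift parameters from the example
y = np.geomspace(1.6, 30.0, 2000)       # normalized band y1..y2, log-spaced

phi1 = -2 * (np.arctan(y) + np.arctan(a * y))           # Eq. 7.555, continuous
phi2 = -2 * (np.arctan(b * y) + np.arctan(a * b * y))   # Eq. 7.556, continuous

err = (phi1 - phi2) + np.pi / 2         # local phase error, Eq. 7.552
print(np.max(np.abs(err)), np.sqrt(np.mean(err ** 2)))
```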
FIGURE 7.53 The phase functions and the phase error of the Hilbert transformer of Figure 7.51b (a = 0.08, b = 0.24).
Third example: The phase splitter consists of three first-order all-passes in each chain (see Figure 7.51c). The phase functions are

φ₁(f) = tan⁻¹[2y/(y² − 1)] + tan⁻¹[2ay/(a²y² − 1)] + tan⁻¹[2by/(b²y² − 1)]    (7.557)

and

φ₂(f) = tan⁻¹[2cy/(c²y² − 1)] + tan⁻¹[2cay/(c²a²y² − 1)] + tan⁻¹[2cby/(c²b²y² − 1)]    (7.558)

Good linearity of the phase function φ₁(f) depends on the shift parameters a and b. The first step yields a = 0.08 and b = 0.008. In the second step the parameter c = 0.24 yields the minimum value of the RMS phase error. Figure 7.54 shows the phase functions and the error distribution ε(f). The RMS phase error is ε_RMS = 0.025. The edge frequencies are y₁ = 1.8 and y₂ = 300, giving f₂/f₁ = 166 (more than 7 octaves). A smaller phase error may be achieved at the cost of the frequency range.

Fourth example: The phase splitter consists of one second-order all-pass in each chain (see Figure 7.51d). The phase functions are

φ₁(f) = −tan⁻¹[2(1 − y²)qy / ((1 − y²)² − q²y²)],  y = 2πfRC    (7.559)

FIGURE 7.54 The phase functions and the phase error of the Hilbert transformer of Figure 7.51c.
and

φ₂(f) = −tan⁻¹[2(1 − a²y²)qay / ((1 − a²y²)² − q²a²y²)],  y = 2πfRC    (7.560)

Good linearity of φ₁(f) yields the value q = 4 (see Figure 7.55). The minimum value of the RMS phase error yields the shift parameter a = 0.232. The phase functions and the error distribution are shown in Figure 7.55. The edge frequencies are y₁ = 0.5 and y₂ = 9, giving f₂/f₁ = 18 with ε_RMS = 0.0186. The bandwidth is about the same as in the second example with two first-order all-passes in each chain.
FIGURE 7.55 The phase functions and the phase error of the Hilbert transformer of Figure 7.51d (q = 4, a = 0.232, y₁ = 0.5, y₂ = 9, ε_RMS = 0.0186).

7.22.4 Delay, Phase Distortions, and Equalization

The phase functions of the all-pass filters used to implement the Hilbert transformer are, disregarding the small phase errors, linear in the logarithmic frequency scale, but nonlinear in a linear frequency scale. Let us investigate the phase distortions due to that nonlinearity for the Hilbert filter of the second example. Consider a wide-band test signal given by the Fourier series of a square wave truncated at the seventh harmonic term:

x(t) = (4/π)[sin(ω₁t) + (1/3) sin(3ω₁t) + (1/5) sin(5ω₁t) + (1/7) sin(7ω₁t)]    (7.561)

where ω₁ = 2πf₁ = 1.75/τ was chosen near the low-frequency edge of the pass-band W. The spectrum of this signal is enclosed inside W. The waveforms of this signal and its Hilbert transform are shown in Figure 7.56a. The phase-distorted Hilbert pair at the output ports of the phase splitter is shown in Figure 7.56b. The phase distortions can be removed by connecting a phase equalizer in series to the input port, predistorting the input signal (see the waveform of Figure 7.56d).

FIGURE 7.56 The waveforms of (a) the signal given by the truncated Fourier series (7.561) and of its Hilbert transform, (b) the distorted Hilbert pair at the output with no equalization, (c) the equalized undistorted and delayed Hilbert pair, and (d) the input signal predistorted by the equalizer.

FIGURE 7.57 The phase functions of the equalizer given by Equation 7.562 for the phase function φ₂(f) given by Equation 7.556.

The required phase function of the equalizer may have the form
φ_equalizer(f) = φ_L(f) − φ₂(f)    (7.562)

where φ₂(f) is given by Equation 7.556 and

φ_L(f) = φ₂(f₀) + [dφ₂(f)/df]_{f=f₀} (f − f₀)    (7.563)

is a linear phase function tangential to φ₂(f) at f = f₀. Figure 7.57 shows the phase function of the equalizer for three different values of the abscissa f₀. Figure 7.56c shows the delayed and practically undistorted output waveforms of the equalized Hilbert transformer with f₀ = 0. The delay is given by the slope of the phase function,

t₀ = −(1/2π) [dφ₂(f)/df]_{f₀=0} = 2τb(1 + a)    (7.564)

giving the delay t₀ = 0.5065 s (τ = 1). Another method of linearization of the phase function is given in Ref. 21.

7.22.5 Hilbert Transformers with Tapped Delay-Line Filters

Tapped delay-line filters, often referred to as transversal filters, may be used as phase equalizers. Such a filter enables the approximation of a given transmittance H(jf) with a desired accuracy. Therefore, a Hilbert filter may be implemented using a tapped delay line¹⁵,³⁴ (see Figure 7.58). If the spectrum of the input signal is band-limited such that X(f) = 0 for |f| > W, then the transfer function of the ideal Hilbert transformer given by Equation 7.544 may be truncated at |f| = W. The tapped delay-line Hilbert filter may be designed using a periodic repetition of this truncated function, as shown in Figure 7.59. The expansion of this function in a Fourier series yields, using truncation, the following approximate form of the transfer function:

H_N(jf) = −2j Σ_{i=1}^{(N−1)/2} b(i) sin(i2πf t₀),  t₀ = 1/(2W)    (7.565)

with

b(i) = [2/(πi)] sin²(πi/2)    (7.566)

Differently from the implementations of Hilbert transformers with all-pass filters, where the design amplitude error equals zero and the phase error is distributed over the pass-band, here the roles are interchanged: the amplitude error is distributed over the pass-band and there is no phase error (linear phase). The RMS amplitude ripple decreases with an increasing number of taps of the delay line (increasing number of coefficients b(n)). The transversal Hilbert transformer, disregarding the small distortions due to the amplitude ripple, produces at the output a delayed undistorted signal and its Hilbert transform. However, analog implementations are rarely used in favor of digital implementations in the form of FIR (finite impulse response) Hilbert transformers.

FIGURE 7.58 A tapped delay-line Hilbert transformer, with tap gains b(i) and outputs y₁(t) = x(t − nt₀) and y₂(t) = x̂(t − nt₀).

FIGURE 7.59 The transfer function of an ideal Hilbert transformer (see Equation 7.544), truncated at W and periodically repeated.
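Equations 7.565 and 7.566 can be evaluated directly; the NumPy sketch below (illustrative parameter values, not from the handbook) shows that the truncated series approaches −j in the middle of the positive band and +j at negative frequencies, as required.

```python
import numpy as np

W = 0.5                      # one-sided band limit; tap spacing t0 = 1/(2W)
t0 = 1 / (2 * W)
M = 31                       # number of series terms, (N - 1)/2

i = np.arange(1, M + 1)
b = 2 / (np.pi * i) * np.sin(np.pi * i / 2) ** 2   # b(i) vanishes for even i

def HN(f):
    """Approximate transfer function of Equation 7.565 at frequency f."""
    return -2j * np.sum(b * np.sin(i * 2 * np.pi * f * t0))

print(HN(W / 2))             # close to -1j: mid-band 90-degree phase shift
print(HN(-W / 2))            # close to +1j at negative frequencies
```

Increasing M reduces the amplitude ripple, consistent with the remark above that the error of this design is an amplitude error, not a phase error.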
7.22.6 Band-Pass Hilbert Transformers

The transfer function of a band-pass Hilbert transformer may be defined as the frequency-translated transfer function of a low-pass Hilbert transformer. The transfer function of an ideal low-pass with linear phase is given by the formula

H_LP(jf) = Π[f/(2W)] e^{−j2πfτ}    (7.567)

where τ is the time delay and Π(x) has the form

Π(x) = 1 for |x| < 0.5;  0.5 for |x| = 0.5;  0 for |x| > 0.5    (7.568)

This is illustrated in Figure 7.60. The impulse response of this filter is

h_LP(t) = F⁻¹[H_LP(jf)] = 2W (sin X)/X,  X = 2πW(t − τ)    (7.569)

The response, as shown in Figure 7.61, is noncausal, but for large delays τ it is nearly causal. The transfer function of the Hilbert transformer derived from Equation 7.567 is given by

H_H(jf) = H_LP(jf) e^{−j0.5π sgn(f)} = Π[f/(2W)] e^{−j[0.5π sgn(f) + 2πfτ]}    (7.570)

as illustrated in Figure 7.60a and c. The impulse response of such a Hilbert transformer is

h_H(t) = F⁻¹[H_H(jf)] = [1 − cos 2πW(t − τ)] / [π(t − τ)]    (7.571)

or

h_H(t) = 2 sin²[πW(t − τ)] / [π(t − τ)]    (7.572)

This is illustrated in Figure 7.61b. If W goes to infinity, the mean value of h_H(t) taken over the period T = 1/W approximates the distribution 1/[π(t − τ)]. The transfer function of an ideal band-pass filter is given by

H_BP(jf) = {Π[(f + f₀)/(2W)] + Π[(f − f₀)/(2W)]} e^{−j2πfτ}    (7.573)

This is illustrated in Figure 7.62a and b. The impulse response of this filter is

h_BP(t) = 2W (sin X)/X · cos[2πf₀(t − τ)]    (7.574)

FIGURE 7.60 The transfer function of the ideal low-pass: (a) magnitude, (b) linear phase function, and (c) phase function of a Hilbert transformer derived from the low-pass function.
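Equation 7.571 can be checked by numerically integrating the inverse Fourier integral of H_H(jf) over the band; the NumPy sketch below (not part of the handbook; W and τ are the illustrative values of Figure 7.61) compares quadrature values of h_H(t) with the closed form.

```python
import numpy as np

W, tau = 0.5, 5.0
f = np.linspace(-W, W, 20001)
H = np.exp(-1j * (0.5 * np.pi * np.sign(f) + 2 * np.pi * f * tau))  # Eq. 7.570

def trapezoid(vals, x):
    """Simple trapezoidal rule for complex-valued samples."""
    return np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2

t = np.array([2.0, 4.0, 6.5, 9.0])          # sample instants away from t = tau
h_num = np.array([trapezoid(H * np.exp(2j * np.pi * f * tc), f) for tc in t]).real
h_closed = (1 - np.cos(2 * np.pi * W * (t - tau))) / (np.pi * (t - tau))
print(np.max(np.abs(h_num - h_closed)))      # small discretization error
```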
FIGURE 7.61 Impulse responses of (a) the low-pass and (b) the corresponding Hilbert transformer, for W = 0.5 and τ = 5. The transfer functions are shown in Figure 7.60.
and is shown in Figure 7.63a. The transfer function of an ideal band-pass Hilbert transformer derived from the transfer function (Equation 7.573) is

H_HBP(jf) = H_BP(jf) exp{−j0.5π[sgn(f + f₀) + sgn(f − f₀)]}    (7.575)

This is illustrated in Figure 7.62a and c. The impulse response of this Hilbert transformer is

h_HBP(t) = {2 sin²[πW(t − τ)] / [π(t − τ)]} cos[2πf₀(t − τ)]    (7.576)

and is shown in Figure 7.63b. Consider the response of the band-pass Hilbert transformer to a band-pass signal u₁(t) = x(t) cos(2πf₀t), where x(t) has no spectral terms for |f| > W and f₀ > W. This response has the form

u₂(t) = x̂(t − τ) cos[2πf₀(t − τ)]    (7.577)

that is, the modulating signal x(t) is replaced by the delayed version of its Hilbert transform. Notice that, due to Bedrosian's theorem, the Hilbert transform of the input signal (see Section 7.13) has the form

u₂(t) = x(t − τ) sin[2πf₀(t − τ)]    (7.578)

that is, only the carrier is Hilbert transformed, compared to signal (7.577), for which the envelope is transformed. The transfer function of a band-pass Hilbert transformer producing at the output the Hilbert transform in agreement with Bedrosian's theorem is given by

H_HBP(jf) = −j sgn(f) H_BP(jf)    (7.579)

where H_BP(jf) is given by Equation 7.573, and is shown in Figure 7.64.
FIGURE 7.62 The transfer functions of an ideal band-pass filter and of the corresponding Hilbert transformer: (a) the magnitude, (b) the phase function of the band-pass, and (c) the phase function of the Hilbert transformer.
A possible implementation of a band-pass Hilbert transformer defined by Equation 7.573 is shown in Figure 7.65. It consists of a linear-phase lower-sideband band-pass, an analogous upper-sideband band-pass, and a subtractor. Figure 7.66 shows the implementation of such a Hilbert transformer by use of a SAW (surface acoustic wave) filter.
7.22.7 Generation of Hilbert Transforms Using SSB Filtering

The Hilbert transform of a given signal may be obtained by band-pass filtering of a DSB AM signal. The SSB signal has the form (see Section 7.17)

u_SSB(t) = x(t) cos(2πF₀t) − x̂(t) sin(2πF₀t)    (7.580)

where F₀ is the carrier frequency. Such a signal can be obtained by band-pass filtering of a DSB AM signal. A synchronous demodulator using the quadrature carrier sin(2πF₀t) generates at its output the Hilbert transform x̂(t).

7.22.8 Digital Hilbert Transformers

The ideal discrete-time Hilbert transformer is defined as an all-pass with a purely imaginary transfer function; that is, if H(e^{jψ}) = H_r(ψ) + jH_i(ψ), then

H_r(ψ) = 0 for all ψ    (7.581)

and

H(e^{jψ}) = jH_i(ψ) = −j for 0 < ψ < π;  0 for ψ = 0;  j for −π < ψ < 0
sgn q = +1 for q > 0;  0 for q = 0;  −1 for q < 0    (8.95)
The methods needed to work with these inverse Fourier transforms are given by Lighthill (1962) and Bracewell (1986). By use of the derivative theorem, F⁻¹{2πiq} = δ′(p), where the prime denotes the first-order derivative with respect to the variable p. The other transform is given in terms of a Cauchy principal value,

F⁻¹{(1/(2πi)) sgn q} = (1/(2π²)) P (1/p)    (8.96)

It follows that

F⁻¹{|q|} = δ′(p) * (1/(2π²)) P (1/p)

By using the derivative theorem for convolution and the properties of the delta function,

f̂(p, ξ) * δ′(p) = [∂f̂(p, ξ)/∂p] * δ(p) = f̂_p(p, ξ)

Now, Equation 8.94 becomes

f(x, y) = ∫₀^π dφ [ f̂(p, ξ) * δ′(p) * (1/(2π²)) P(1/p) ]_{p = ξ·x}

It follows that

f(x, y) = (1/(2π²)) ∫₀^π dφ P ∫_{−∞}^{∞} f̂_p(p, ξ)/(ξ·x − p) dp    (8.97)

Here, the Cauchy principal value refers to the integral over p; the symbol P has been placed outside for convenience. Sometimes the P is dropped altogether; in that case it is "understood" that the singular integral is interpreted in terms of the Cauchy principal value. The inversion formula (Equation 8.97) can be expressed in terms of a Hilbert transform (see also Chapter 7). The Hilbert transform of f(t) is defined by Sneddon (1972) and Bracewell (1986) as

ℋ[f(t); t → x] = (1/π) P ∫_{−∞}^{∞} f(t)/(t − x) dt    (8.98)

where the Cauchy principal value is understood. Thus, the inversion formula can be written as

f(x, y) = −(1/(2π)) ∫₀^π ℋ[f̂_p(p, ξ); p → ξ·x] dφ    (8.99)

For reasons that will become apparent in the subsequent discussion, it is extremely desirable to make the following definition for the Hilbert transform of the derivative of some function, say g:

ǧ(t) = −(1/(4π)) ℋ[g_p(p); p → t] for n = 2    (8.100)

If this is done, the inversion formula for n = 2 is given by

f(x, y) = 2 ∫₀^π [f̌(t, ξ)]_{t = ξ·x} dφ    (8.101)
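The |q| factorization above can be checked in one dimension with a short NumPy/SciPy sketch (not from the text): filtering with the ramp |q| in the Fourier domain should agree with (1/2π) times the Hilbert transform of the derivative g′(p), in scipy's sign convention. The grid and test function are illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

n, L = 4096, 20.0
dp = L / n
p = (np.arange(n) - n // 2) * dp
g = np.exp(-np.pi * p ** 2)                   # a smooth test "projection"

q = np.fft.fftfreq(n, d=dp)                   # frequency in cycles per unit
ramp = np.fft.ifft(np.abs(q) * np.fft.fft(g)).real   # F^{-1}{ |q| g~(q) }

gp = np.gradient(g, dp)                       # g'(p)
hil_gp = np.imag(hilbert(gp))                 # (1/pi) PV int g'(t)/(p - t) dt
print(np.max(np.abs(ramp - hil_gp / (2 * np.pi))))   # agreement to ~1e-4
```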
8-21
Radon and Abel Transforms
8.9.2 Three Dimensions

The inversion formula in three dimensions is actually easier to derive because no Hilbert transforms emerge. The path through Fourier space is used again, with the unit vector ξ given in terms of the polar angle θ and the azimuthal angle φ,

ξ = (sin θ cos φ, sin θ sin φ, cos θ)

The feature-space function f(x) = f(x, y, z) is found from the inverse 3D Fourier transform,

f(x) = F₃⁻¹{f̃(qξ)} = ∫₀^∞ dq q² ∫_{|ξ|=1} dξ f̃(qξ) e^{2πi q ξ·x}    (8.102)

Here, the integral over the unit sphere is indicated by

∫_{|ξ|=1} dξ = ∫₀^{2π} dφ ∫₀^π sin θ dθ

Now recall that f̃ is given by the 1D Fourier transform of f̂, and from the symmetry properties of f̂ the integral over q from 0 to ∞ can be replaced by one-half the integral from −∞ to ∞:

f(x) = (1/2) ∫_{|ξ|=1} dξ [∫_{−∞}^{∞} dq q² f̃(qξ) e^{2πiqp}]_{p=ξ·x} = (1/2) ∫_{|ξ|=1} dξ F⁻¹[q² f̃(qξ)]_{p=ξ·x}

Now from the inverse of the 1D derivative theorem,

F⁻¹[q² f̃] = −(1/(4π²)) ∂²f̂/∂p² = −(1/(4π²)) f̂_pp

so that

f(x) = −(1/(8π²)) ∫_{|ξ|=1} dξ [f̂_pp(p, ξ)]_{p=ξ·x}    (8.103)

Another form for Equation 8.103 comes from the observation that for any function of ξ·x,

∇²ψ(ξ·x) = |ξ|² [ψ_pp(p)]_{p=ξ·x} = [ψ_pp(p)]_{p=ξ·x}

The last equality follows because ξ is a unit vector. These observations lead to the inversion formula

f(x) = −(1/(8π²)) ∇² ∫_{|ξ|=1} f̂(ξ·x, ξ) dξ    (8.104)

8.10 Abel Transforms

In this section we focus attention on a particular class of singular integral equations and show how the transforms known as Abel transforms emerge. Actually, it is convenient to define four different Abel transforms. Although all of these transforms are called Abel transforms at various places in the literature, there is no agreement regarding the numbering; consequently, an arbitrary decision is made here in that respect. There is an intimate connection with the Radon transform; however, that discussion is delayed until Section 8.11. There are some very good recent references devoted primarily to Abel integral equations, Abel transforms, and applications. The monograph by Gorenflo and Vessella (1991) is especially recommended for both theory and applications. Also, the chapter by Anderssen and de Hoog (1990) contains many applications along with an excellent list of references. A recent book by Srivastava and Buschman (1992) is valuable for convolution integral equations in general. Other general references include Kanwal (1971), Widder (1971), Churchill (1972), Doetsch (1974), and Knill (1994). Another valuable resource is the review by Lonseth (1977). His remarks on page 247 regarding Abel's contributions "back in the springtime of analysis" are required reading for those who appreciate the history of mathematics. Other references to Abel transforms and relevant resource material are contained in Section 8.11 and in the following discussion.

8.10.1 Singular Integral Equations, Abel Type

An integral equation is called singular if either the range of integration is infinite or the kernel has singularities within the range of integration. Singular integral equations of Volterra type of the first kind are of the form (Tricomi, 1985)

g(x) = ∫₀ˣ k(x, y) f(y) dy    (8.105)

where the kernel satisfies the condition k(x, y) ≡ 0 if y > x. If k(x, y) = k(x − y), then the equation is of convolution type. The type of kernel of interest here is

k(x − y) = 1/(x − y)^α,  0 < α < 1

This leads to an integral equation of Abel type,

g(x) = ∫₀ˣ f(y)/(x − y)^α dy = f(x) * x^{−α},  x > 0,  0 < α < 1    (8.106)
0 < a < 1: Integral equations of the type in Equation 8.106 were studied by the Norwegian mathematician Niels H. Abel (1802–1829) with particular attention to the connection with the tautochrone
8-22
Transforms and Applications Handbook
problem. This work by Abel (1823, 1826a,b) served to introduce the subject of integral equations. The connection with the tautochrone problem emerges when a = 1/2 in the integral equation. This is the problem of determining a curve through the origin in a vertical plane such that the time required for a massive particle to slide without friction down the curve to the origin is independent of the starting position. It is assumed that the particle slides freely from rest under the action of its weight and the reaction of the curve (smooth wire) that constrains its movement. Details of this problem are discussed by Churchill (1972) and Widder (1971).

One way to solve Equation 8.105 when k(x, y) = k(x − y) is by use of the Laplace transform (see Chapter 5); this yields

G(s) = F(s)\,K(s).  (8.107)

The solution for F(s) can be written in two forms,

F(s) = \frac{G(s)}{K(s)} = [sG(s)]\left[\frac{1}{sK(s)}\right].  (8.108)

The second form is used when the inverse Laplace transform of 1/K(s) does not exist.

Example 8.29

Solve Equation 8.106 for f(x). From Equation 8.107 and Laplace transform tables (Chapter 5),

G(s) = \mathcal{L}\{f(x)\}\,\mathcal{L}\left\{\frac{1}{x^a}\right\} = F(s)\,s^{a-1}\,\Gamma(1 - a).

To find F(s) we must invert the equation

F(s) = \frac{s^a G(s)}{\Gamma(1 - a)} = \frac{s}{\Gamma(a)\Gamma(1 - a)}\left[\Gamma(a)\,s^{-a}\,G(s)\right].

The inversion gives

f(x) = \mathcal{L}^{-1}\left\{\frac{s}{\Gamma(a)\Gamma(1 - a)}\left[\Gamma(a)\,s^{-a}\,G(s)\right]\right\} = \frac{\sin a\pi}{\pi}\,\frac{d}{dx}\int_0^x \frac{g(y)}{(x - y)^{1-a}}\,dy, \qquad x > 0,  (8.109)

where Γ(a)Γ(1 − a) = π/sin aπ and the convolution theorem have been used.

Another form of Equation 8.109 can be found if g(y) is differentiable. One way to find this other solution is to use integration by parts, ∫u dv = uv − ∫v du, with u = g(y) and dv = (x − y)^{a−1} dy,

\int_0^x (x - y)^{a-1} g(y)\,dy = \frac{g(+0)\,x^a}{a} + \frac{1}{a}\int_0^x (x - y)^a\,g'(y)\,dy.

When this expression is multiplied by sin aπ/π and differentiated with respect to x, the alternative expression for Equation 8.109 follows,

f(x) = \frac{\sin a\pi}{\pi}\left[\frac{g(+0)}{x^{1-a}} + \int_0^x \frac{g'(y)}{(x - y)^{1-a}}\,dy\right].  (8.110)

Remark

It is tempting to take a quick look at Equation 8.106 and assume that g(0) = 0. This is wrong! The proper interpretation is to do the integral first and then take the limit as x → 0 through positive values. This is why we have written g(+0) in Equation 8.110. Equation 8.110 also follows by taking into consideration the convolution properties and derivatives for the Laplace transform. We observe that Equation 8.108 can be written in two alternative forms,

F(s) = s[G(s)H(s)] = [sG(s)][H(s)],

where H(s) is defined by

H(s) = \frac{1}{sK(s)}.

The first form corresponds to differentiating the convolution g * h, which yields Equation 8.109; in the second form, sG(s) transforms back to g′(x) plus the boundary term g(+0), and the convolution with h(x) yields Equation 8.110.
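Example 8.29's inversion formula can be checked numerically. The sketch below is not from the handbook; the function name abel_solve and all numerical parameters are illustrative. It evaluates Equation 8.109 for the known pair f(y) = 1, g(x) = 2√x with a = 1/2; the substitution y = s(1 − u²) removes the endpoint singularity of the integrand.

```python
import numpy as np

def abel_solve(g, x, a=0.5, n=4000, h=1e-4):
    # f(x) = (sin(a*pi)/pi) * d/dx  ∫_0^x g(y) (x-y)^(a-1) dy   (Eq. 8.109)
    # Substituting y = s(1 - u^2) removes the endpoint singularity when a = 1/2.
    def I(s):
        u = (np.arange(n) + 0.5) / n                  # midpoint rule on (0, 1)
        y = s * (1.0 - u**2)
        return np.sum(g(y) * (s * u**2)**(a - 1.0) * 2.0 * s * u) / n
    # central difference for the outer d/dx
    return np.sin(a*np.pi)/np.pi * (I(x + h) - I(x - h)) / (2.0*h)

# Known pair with a = 1/2:  f(y) = 1  gives  g(x) = ∫_0^x (x-y)^(-1/2) dy = 2*sqrt(x)
print(abel_solve(lambda y: 2.0*np.sqrt(y), x=1.0))    # ≈ 1.0
```

The same scheme applies to any g with a mild change of substitution; only the a = 1/2 case is singularity-free as written.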
8.10.2 Four Abel Transforms

The four Abel transforms are defined by

\hat f_1(x) = \mathcal{A}_1\{f_1(r); x\} = \int_0^x \frac{f_1(r)\,dr}{(x^2 - r^2)^{1/2}}, \qquad x > 0,  (8.118a)

\hat f_2(x) = \mathcal{A}_2\{f_2(r); x\} = \int_x^{\infty} \frac{f_2(r)\,dr}{(r^2 - x^2)^{1/2}}, \qquad x > 0,  (8.118b)

\hat f_3(x) = \mathcal{A}_3\{f_3(r); x\} = 2\int_x^{\infty} \frac{r\,f_3(r)\,dr}{(r^2 - x^2)^{1/2}}, \qquad x > 0,  (8.118c)

\hat f_4(x) = \mathcal{A}_4\{f_4(r); x\} = 2\int_0^x \frac{r\,f_4(r)\,dr}{(x^2 - r^2)^{1/2}}, \qquad x > 0.  (8.118d)

Note the change from y → r to agree with the short tables of transforms given in Appendix 8.B. Also note the change g → f̂, and the use of subscripts to keep track of which transform is being applied. The corresponding inversion expressions are

f_1(r) = \frac{2}{\pi}\frac{d}{dr}\int_0^r \frac{x\,\hat f_1(x)\,dx}{(r^2 - x^2)^{1/2}},  (8.119a)

f_2(r) = -\frac{2}{\pi}\frac{d}{dr}\int_r^{\infty} \frac{x\,\hat f_2(x)\,dx}{(x^2 - r^2)^{1/2}},  (8.119b)

f_3(r) = -\frac{1}{\pi r}\frac{d}{dr}\int_r^{\infty} \frac{x\,\hat f_3(x)\,dx}{(x^2 - r^2)^{1/2}},  (8.119c)

f_4(r) = \frac{1}{\pi r}\frac{d}{dr}\int_0^r \frac{x\,\hat f_4(x)\,dx}{(r^2 - x^2)^{1/2}}.  (8.119d)

Alternative forms involve the derivative of the transform:

f_1(r) = \frac{2}{\pi}\left[\hat f_1(+0) + r\int_0^r \frac{\hat f_1'(x)\,dx}{(r^2 - x^2)^{1/2}}\right],  (8.120a)

f_2(r) = -\frac{2r}{\pi}\int_r^{\infty} \frac{\hat f_2'(x)\,dx}{(x^2 - r^2)^{1/2}},  (8.120b)

f_3(r) = -\frac{1}{\pi}\int_r^{\infty} \frac{\hat f_3'(x)\,dx}{(x^2 - r^2)^{1/2}},  (8.120c)

f_4(r) = \frac{\hat f_4(+0)}{\pi r} + \frac{1}{\pi}\int_0^r \frac{\hat f_4'(x)\,dx}{(r^2 - x^2)^{1/2}}.  (8.120d)

In these equations it is assumed that the transform vanishes at infinity, f̂(∞) ≡ 0, and the prime means derivative with respect to x. There is yet another form that is useful for f₃. The result comes from a study of the Radon transform (Deans, 1983, 1993):

f_3(r) = -\frac{1}{\pi}\frac{d}{dr}\int_r^{\infty} \frac{r\,\hat f_3(x)\,dx}{x\,(x^2 - r^2)^{1/2}}.  (8.121)

To verify that this indeed reduces to Equation 8.120c, let the integration by parts be done in Equation 8.121 with

u = r\,\hat f_3(x), \qquad dv = \frac{dx}{x(x^2 - r^2)^{1/2}}, \qquad du = r\,\hat f_3'(x)\,dx, \qquad v = \frac{1}{r}\cos^{-1}\frac{r}{x}.

After doing the integration by parts, take the derivative with respect to r to get Equation 8.120c.

Some important observations. From the definitions of the transforms 𝒜ᵢ, it follows that

\mathcal{A}_3\{f(r)\} = 2\mathcal{A}_2\{r f(r)\},  (8.122a)

\mathcal{A}_4\{f(r)\} = 2\mathcal{A}_1\{r f(r)\},  (8.122b)

\mathcal{A}_4\{r^{-1} f_1(r)\} = 2\hat f_1(x),  (8.122c)

\mathcal{A}_3\{r^{-1} f_2(r)\} = 2\hat f_2(x),  (8.122d)

f_1(r) = \mathcal{A}_1^{-1}\{\hat f_1(x)\} = \frac{2}{\pi}\frac{d}{dr}\mathcal{A}_1\{x\,\hat f_1(x)\},  (8.122e)

f_2(r) = \mathcal{A}_2^{-1}\{\hat f_2(x)\} = -\frac{2}{\pi}\frac{d}{dr}\mathcal{A}_2\{x\,\hat f_2(x)\}.  (8.122f)

These equations (along with obvious variations) can be used to find transforms and inverse transforms. A few samples are provided in the examples of Section 8.10.4.
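As a numerical illustration of the definitions and of relation 8.122c, the sketch below (the function names A1 and A4 are mine, not the handbook's, and the parameter choices are illustrative) evaluates 𝒜₁ and 𝒜₄ by midpoint quadrature after the substitution r = x sin t, which removes the square-root singularity at r = x.

```python
import numpy as np

def A1(f, x, n=4000):
    # A1{f}(x) = ∫_0^x f(r) dr / sqrt(x^2 - r^2)   (Eq. 8.118a)
    # With r = x sin t the integral becomes ∫_0^{pi/2} f(x sin t) dt.
    t = (np.arange(n) + 0.5) * (np.pi/2) / n
    return np.sum(f(x*np.sin(t))) * (np.pi/2) / n

def A4(f, x, n=4000):
    # A4{f}(x) = 2 ∫_0^x r f(r) dr / sqrt(x^2 - r^2) = 2 A1{r f(r)}(x)   (Eq. 8.122b)
    return 2.0 * A1(lambda r: r*f(r), x, n)

a, x = 2.0, 1.0
print(A1(lambda r: a - r, x))        # pi*a/2 - x ≈ 2.1416
print(A4(lambda r: (a - r)/r, x))    # pi*a - 2x ≈ 4.2832   (Eq. 8.122c)
```

The two printed values reproduce the transform pair used in Example 8.32 below.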
8.10.3 Fractional Integrals

The Abel transforms are related to the Riemann–Liouville and Weyl (fractional) integrals of order 1/2; these are discussed along with an extensive tabulation in Chapter 13 of Erdélyi et al. (1954). In the notation of this reference, the Riemann–Liouville integral is given by

g(y; \mu) = \frac{1}{\Gamma(\mu)}\int_0^y f(x)\,(y - x)^{\mu-1}\,dx,  (8.123)

and the Weyl integral is given by

h(y; \mu) = \frac{1}{\Gamma(\mu)}\int_y^{\infty} f(x)\,(x - y)^{\mu-1}\,dx.  (8.124)

Now in Equation 8.123 let μ = 1/2, make the replacement y → x², and change the variable of integration x → r² to obtain

\sqrt{\pi}\,g\!\left(x^2; \tfrac{1}{2}\right) = 2\int_0^x \frac{r\,f(r^2)\,dr}{(x^2 - r^2)^{1/2}}.

Clearly, this form of the Riemann–Liouville integral can be converted to Equation 8.118d by the appropriate replacements. By a similar argument, the Weyl integral (Equation 8.124) can be converted to Equation 8.118c. This leads to the following useful rule for finding Abel transforms 𝒜₃ and 𝒜₄ from the tables in Chapter 13 of Erdélyi et al. (1954).

Rule
1. Replace μ → 1/2.
2. Replace x → r² (column on the left).
3. Replace y → x² and multiply the transform by √π (column on the right).

It is easy to verify that this rule works by its application to cases that yield results quoted in Appendix 8.B for 𝒜₃. Verification of the rule for 𝒜₄ follows immediately from the use of standard integral tables. Although the rule works most directly for the 𝒜₃ and 𝒜₄ transforms, it can be extended to apply to finding 𝒜₁ and 𝒜₂ transforms by use of the formulas in Equations 8.122a through f. Finally, it is interesting to note that these integrals lead to an interpretation for fractional differentiation and fractional integration. A good resource for details on this concept is the monograph by Gorenflo and Vessella (1991).

8.10.4 Some Useful Examples

We close this section with a few useful examples. These are especially valuable for those concerned with the analytic computation of Abel transforms or inverse Abel transforms.

Example 8.32

Consider the Abel transform

\hat f_1(x) = \mathcal{A}_1\{a - r\} = \frac{\pi a}{2} - x.

This is a simple case where f̂₁(x) is not zero at x = 0; here, f̂₁(0) = πa/2 and f̂₁′(x) = −1. If Equation 8.120a is used to verify the transform, the calculation is

f_1(r) = \frac{2}{\pi}\left[\frac{\pi a}{2} - r\int_0^r \frac{dx}{(r^2 - x^2)^{1/2}}\right] = a - \frac{2r}{\pi}\cdot\frac{\pi}{2} = a - r.

Verification of this inverse by Equation 8.119a follows by using the appropriate integral formulas from Appendix 8.A, and application of the derivative with respect to r:

\frac{2}{\pi}\frac{d}{dr}\int_0^r \frac{\frac{\pi a}{2}x - x^2}{(r^2 - x^2)^{1/2}}\,dx = \frac{2}{\pi}\frac{d}{dr}\left(\frac{\pi a r}{2} - \frac{\pi r^2}{4}\right) = a - r.

From Equation 8.122c we know the transform

\mathcal{A}_4\{r^{-1}(a - r)\} = \pi a - 2x.

Inversion formulas (Equation 8.119d) and (Equation 8.120d) apply for this case.

Example 8.33

It is instructive to apply inversion formulas (8.119c), (8.120c), and (8.121) to the same problem. From Appendix 8.B, we use

\mathcal{A}_3\{\chi(r/a)\} = 2(a^2 - x^2)^{1/2}\,\chi(x/a).

Application of Equation 8.119c gives

-\frac{1}{\pi r}\frac{d}{dr}\int_r^a \frac{2x(a^2 - x^2)^{1/2}\,dx}{(x^2 - r^2)^{1/2}}
= -\frac{2}{\pi r}\frac{d}{dr}\left[\int_r^a \frac{a^2 x\,dx}{(a^2 - x^2)^{1/2}(x^2 - r^2)^{1/2}} - \int_r^a \frac{x^3\,dx}{(a^2 - x^2)^{1/2}(x^2 - r^2)^{1/2}}\right]
= -\frac{2}{\pi r}\frac{d}{dr}\frac{a^2\pi}{2} + \frac{2}{\pi r}\frac{d}{dr}\frac{\pi}{4}(a^2 + r^2)
= 0 + 1 = 1.

Application of Equation 8.120c gives

-\frac{1}{\pi}\int_r^a \frac{-2x\,dx}{(a^2 - x^2)^{1/2}(x^2 - r^2)^{1/2}} = \frac{2}{\pi}\int_r^a \frac{x\,dx}{(a^2 - x^2)^{1/2}(x^2 - r^2)^{1/2}} = 1.
Application of Equation 8.121 gives

-\frac{1}{\pi}\frac{d}{dr}\int_r^a \frac{2r(a^2 - x^2)^{1/2}\,dx}{x(x^2 - r^2)^{1/2}}
= -\frac{2}{\pi}\frac{d}{dr}\left[\int_r^a \frac{r a^2\,dx}{x(a^2 - x^2)^{1/2}(x^2 - r^2)^{1/2}} - \int_r^a \frac{r x\,dx}{(a^2 - x^2)^{1/2}(x^2 - r^2)^{1/2}}\right]
= -\frac{2}{\pi}\frac{d}{dr}\left(\frac{r a^2\pi}{2ar}\right) + \frac{2}{\pi}\frac{d}{dr}\left(\frac{r\pi}{2}\right) = 0 + 1 = 1.

Evaluation of the various integrals above follows from material in Appendix 8.A.

Example 8.34

The following Bessel function identities are used in this example:

\frac{\partial}{\partial x}\{x^{v}J_v(bx)\} = b\,x^{v}J_{v-1}(bx),  (8.125a)

\frac{\partial}{\partial x}\{x^{-v}J_v(bx)\} = -b\,x^{-v}J_{v+1}(bx).  (8.125b)

It follows from the formulas

\frac{\pi}{2}J_0(bx) = \int_0^x \frac{\cos br\,dr}{(x^2 - r^2)^{1/2}}, \qquad \frac{\pi}{2}J_0(bx) = \int_x^{\infty} \frac{\sin br\,dr}{(r^2 - x^2)^{1/2}},

for the Bessel function J₀ that

\mathcal{A}_1\{\cos br\} = \frac{\pi}{2}J_0(bx) \qquad\text{and}\qquad \mathcal{A}_2\{\sin br\} = \frac{\pi}{2}J_0(bx).

Differentiation of the previous two expressions with respect to the parameter b yields the formulas

\mathcal{A}_1\{r\sin br\} = \frac{\pi x}{2}J_1(bx) \qquad\text{and}\qquad \mathcal{A}_2\{r\cos br\} = -\frac{\pi x}{2}J_1(bx).

From formula (Equation 8.122e) with f̂₁(x) = sin bx,

\mathcal{A}_1^{-1}\{\sin bx\} = \frac{2}{\pi}\frac{d}{dt}\mathcal{A}_1\{x\sin bx\} = \frac{2}{\pi}\frac{d}{dt}\left[\frac{\pi}{2}\,t\,J_1(bt)\right] = b\,t\,J_0(bt).

This means that

\mathcal{A}_1^{-1}\{\sin bx\} = b\,t\,J_0(bt),

or equivalently

\mathcal{A}_1\{r\,J_0(br)\} = \frac{\sin bx}{b}.

And by the same technique, from Equation 8.122f,

\mathcal{A}_2\{r\,J_0(br)\} = \frac{\cos bx}{b}.

From Equation 8.122f with f̂₂(x) = x⁻¹ sin bx,

\mathcal{A}_2^{-1}\{x^{-1}\sin bx\} = -\frac{2}{\pi}\frac{d}{dt}\mathcal{A}_2\{\sin bx\} = -\frac{2}{\pi}\frac{d}{dt}\left[\frac{\pi}{2}J_0(bt)\right] = b\,J_1(bt),

or

\mathcal{A}_2\{J_1(br)\} = \frac{\sin bx}{bx}.

From the formulas developed above for the 𝒜₂ transforms and Equation 8.122a,

\mathcal{A}_3\{\cos br\} = -\pi x\,J_1(bx), \qquad \mathcal{A}_3\{r^{-1}\sin br\} = \pi\,J_0(bx), \qquad \mathcal{A}_3\{J_0(br)\} = \frac{2\cos bx}{b},

and

\mathcal{A}_3\{r^{-1}J_1(br)\} = \frac{2\sin bx}{bx}.
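The first formula of Example 8.34 can be checked by simple quadrature; the sketch below is illustrative, not from the handbook, and the parameter values are arbitrary. With r = x sin t, 𝒜₁{cos br}(x) becomes ∫₀^{π/2} cos(bx sin t) dt, the classical integral representation of (π/2)J₀(bx).

```python
import numpy as np
from scipy.special import j0

# A1{cos br}(x) = ∫_0^x cos(br) dr / sqrt(x^2 - r^2); with r = x sin t this is
# ∫_0^{pi/2} cos(b*x*sin t) dt = (pi/2) J0(bx).
b, x, n = 2.0, 0.8, 20000
t = (np.arange(n) + 0.5) * (np.pi/2) / n     # midpoint grid on (0, pi/2)
val = np.sum(np.cos(b*x*np.sin(t))) * (np.pi/2) / n
print(val, np.pi/2 * j0(b*x))                # both ≈ 0.715
```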
Additional formulas similar to those in the previous example are contained in Sneddon (1972) and Gorenflo and Vessella (1991). These authors also make use of the formulas of this example to make the connection between the Abel transform and the Hankel transform. This connection is also discussed in Section 8.11 in the more general context of the Radon transform.
Example 8.35

Use the rule in Section 8.10.3 to compute 𝒜₄{r^{2v−2}}. From item (7) of Table 13.1 (Riemann–Liouville fractional integrals) of Erdélyi et al. (1954),

\mathcal{A}_4\{r^{2v-2}\} = \frac{\sqrt{\pi}\,\Gamma(v)\,x^{2v-1}}{\Gamma\!\left(v + \tfrac{1}{2}\right)}.

A special case is provided by v = 2. This leads to the expression

2\int_0^x \frac{r^3\,dr}{(x^2 - r^2)^{1/2}} = \frac{4x^3}{3}.
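The rule's output can be cross-checked against direct numerical integration; this sketch (parameter values illustrative) confirms the v = 2 special case, where the general formula reduces to 4x³/3.

```python
import numpy as np
from math import gamma, pi, sqrt

# Check A4{r^(2v-2)} = sqrt(pi)*Gamma(v)*x^(2v-1)/Gamma(v+1/2) for v = 2.
v, x, n = 2.0, 1.5, 20000
t = (np.arange(n) + 0.5) * (pi/2) / n                 # substitution r = x sin t
num = 2.0*x*np.sum((x*np.sin(t))**(2*v - 2) * np.sin(t)) * (pi/2)/n
ref = sqrt(pi)*gamma(v)*x**(2*v - 1)/gamma(v + 0.5)
print(num, ref, 4*x**3/3)    # all ≈ 4.5
```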
8.11 Related Transforms and Symmetry, Abel and Hankel

The direct connection of the Radon transform and the Fourier transform is used extensively throughout earlier sections of this chapter. Several other transforms are also related to the Radon transform. Some of these are related by circumstances that involve some type of symmetry. The Abel and Hankel transforms emerge naturally in this context. Other related transforms follow more naturally from considerations of orthogonal function series expansions. In this section some of these relations are explored and examples provided to help illustrate the connections.

8.11.1 Abel Transform

The Abel transform is closely connected with a generalization of the tautochrone problem. This is the problem of determining a curve through the origin in a vertical plane such that the time required for a particle to slide without friction down the curve to the origin is independent of the starting position. It was the generalization of this problem that led Abel to introduce the subject of integral equations (see Section 8.10). More recent applications of Abel transforms in the area of holography and interferometry with phase objects (of practical importance in aerodynamics, heat and mass transfer, and plasma diagnostics) are discussed by Vest (1979), Schumann et al. (1985), and Ladouceur and Adiga (1987). A very good description of the relation of the Abel and Radon transform to the problem of determining the refractive index from knowledge of a holographic interferogram is provided by Vest (1979); in particular, see Chapter 6, where many references to original work are cited. Minerbo and Levy (1969), Sneddon (1972), and Bracewell (1986) also contain useful material on the Abel transform. Many other references are contained in Section 8.10.

Suppose the feature space function f(x, y) is rotationally symmetric and depends only on (x² + y²)^{1/2}. Now, knowledge of one set of projections, for any angle φ, serves to define the Radon transform for all angles. For simplicity, let φ = 0 in the definition (Equation 8.5). Then f̂(p, φ) = f̂(p, 0); because there is no dependence on angle there is no loss of generality by writing this as f̂(p). With these modifications taken into account, the definition becomes

\hat f(p) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f\left(\sqrt{x^2 + y^2}\right)\delta(p - x)\,dx\,dy
= \int_{-\infty}^{\infty} f\left(\sqrt{p^2 + y^2}\right)dy
= 2\int_0^{\infty} f\left(\sqrt{p^2 + y^2}\right)dy.

Clearly, because p appears only as p², the function f̂(p) is even and it is sufficient to always choose p > 0. A change of variable r² = p² + y² yields

\hat f(p) = 2\int_{|p|}^{\infty} \frac{r\,f(r)}{(r^2 - p^2)^{1/2}}\,dr.

This equation is just the defining equation for the Abel transform (Bracewell, 1986), designated by

f_A(p) = \mathcal{A}\{f(r)\} = 2\int_{|p|}^{\infty} \frac{r\,f(r)}{(r^2 - p^2)^{1/2}}\,dr.  (8.126)

The absolute value can be removed if p is restricted to p > 0 and f_A(−p) = f_A(p).

Remark about notation: The Abel transform used here is 𝒜₃ of Section 8.10; that is, 𝒜 ≡ 𝒜₃.

The Abel transform can be inverted by using the Laplace transform, Section 8.10, or by using the Fourier transform (Bracewell, 1986). For purposes of illustration, the method employed by Barrett (1984) is used here. Equation 8.13, with n = 2, coupled with the observation that the Radon transform operator ℛ = 𝒜 when f(x, y) has rotational symmetry, becomes

\mathcal{F}_1\,\mathcal{A}f = \mathcal{F}_2\,f.  (8.127)

Moreover, for rotationally symmetric functions, the ℱ₂ operator is just the Hankel transform operator of order zero, ℋ₀. (More on the Hankel transform appears in Section 8.12 and in Chapter 9.) This means that

\mathcal{F}_2 f = f_H(q) = 2\pi\int_0^{\infty} f(r)\,J_0(2\pi qr)\,r\,dr.

From the observation that ℱ₂ = ℋ₀, and from the reciprocal property of the Hankel transform, ℋ₀ = ℋ₀⁻¹, we have

\mathcal{H}_0 f = \mathcal{F}_1 f_A
or

f = \mathcal{H}_0^{-1}\mathcal{F}_1 f_A = \mathcal{H}_0\mathcal{F}_1 f_A.

It follows that the inverse Abel transform operator is given by

\mathcal{A}^{-1} = \mathcal{H}_0\,\mathcal{F}_1.  (8.128)

From Equation 8.128 the first step in finding the inverse Abel transform is to determine the Fourier transform of f_A,

\mathcal{F}f_A = \int_{-\infty}^{\infty} f_A(p)\,e^{-i2\pi qp}\,dp = 2\int_0^{\infty} f_A(p)\cos(2\pi qp)\,dp.

The last step follows because f_A(p) is an even function. Integration by parts gives

\mathcal{F}f_A = -\frac{1}{\pi q}\int_0^{\infty} f_A'(p)\sin(2\pi qp)\,dp,

where it is assumed that f_A(p) → 0 as p → ∞. The prime means differentiation with respect to p. Now the inverse of Equation 8.126 is given by

f(r) = 2\pi\int_0^{\infty} dq\,q\,J_0(2\pi qr)\left[-\frac{1}{\pi q}\int_0^{\infty} f_A'(p)\sin(2\pi qp)\,dp\right]

or, after simplification and interchanging the order of integration,

f(r) = -2\int_0^{\infty} dp\,f_A'(p)\int_0^{\infty} dq\,\sin(2\pi qp)\,J_0(2\pi qr).

The integral over q is tabulated (Gradshteyn et al., 1994); it vanishes for 0 < p < r and gives

\frac{1}{2\pi}\,(p^2 - r^2)^{-1/2} \qquad \text{for } p > r.

Hence, the inverse is found from

f(r) = -\frac{1}{\pi}\int_r^{\infty} f_A'(p)\,(p^2 - r^2)^{-1/2}\,dp.  (8.129)

This equation and Equation 8.126 are an Abel transform pair. Other forms for the inversion are given in Section 8.10. It may be useful to observe that, for rotationally symmetric functions, if the angle φ in the Radon transform is chosen φ = 0, then the p that appears in these formulas is just the same as x, the projection of the radius r on the horizontal axis. For this reason, in many discussions of the Abel transform the variable p used here is replaced by the variable x. This notation is used in Section 8.10 and in Appendix 8.B.

Because the Abel transform is a special case of the Radon transform, all of the various basic theorems for the Radon transform apply to the Abel transform. One way to make use of this is to apply the theory of the Radon transform to obtain general results, and then observe that for all rotationally symmetric functions the same results apply to the Abel transform. Some examples of Radon transforms already worked out illustrate the idea.

Example 8.36

Consider Example 8.3 in Section 8.5. The feature space function has the required rotational symmetry, so it follows immediately that the corresponding Abel transform is

\mathcal{A}\{e^{-r^2}\} = \sqrt{\pi}\,e^{-p^2}.  (8.130)

From Example 8.9 of that same section, if χ(r) represents the characteristic function of a unit disk, then

\mathcal{A}\{\chi(r)\} = \begin{cases} 2(1 - p^2)^{1/2}, & \text{for } p < 1 \\ 0, & \text{for } p > 1. \end{cases}  (8.131)

Example 8.37

Another rotationally symmetric case worked out for the Radon transform is from the last part of Example 8.26 in Section 8.7. The corresponding Abel transform is

\mathcal{A}\{r^2 e^{-r^2}\} = \frac{\sqrt{\pi}}{2}\,(2p^2 + 1)\,e^{-p^2}.  (8.132)

In some cases it is just as easy to apply the definition of the Abel transform directly; for example, the transform of (a² + r²)^{−1} is given by

\mathcal{A}\{(a^2 + r^2)^{-1}\} = 2\int_{|p|}^{\infty} \frac{r\,dr}{(r^2 - p^2)^{1/2}(r^2 + a^2)}.

The change of variables z² = r² + a² leads to a form that is easy to evaluate; see Appendix 8.A,

\mathcal{A}\{(a^2 + r^2)^{-1}\} = \frac{\pi}{(p^2 + a^2)^{1/2}}.  (8.133)
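The transform pair of Example 8.36 is easy to confirm numerically. In the sketch below (function name and truncation parameters are illustrative choices, not from the handbook), the substitution r² = p² + u² turns Equation 8.126 into a singularity-free integral.

```python
import numpy as np

def abel(f, p, umax=10.0, n=20000):
    # A{f}(p) = 2 ∫_|p|^∞ r f(r) dr / sqrt(r^2 - p^2)   (Eq. 8.126)
    # With r^2 = p^2 + u^2 this becomes 2 ∫_0^∞ f(sqrt(p^2 + u^2)) du,
    # truncated at umax for rapidly decaying f.
    u = (np.arange(n) + 0.5) * (umax/n)
    return 2.0 * np.sum(f(np.sqrt(p**2 + u**2))) * (umax/n)

p = 0.7
print(abel(lambda r: np.exp(-r**2), p))    # ≈ sqrt(pi)*exp(-p^2)  (Eq. 8.130)
print(np.sqrt(np.pi) * np.exp(-p**2))      # ≈ 1.0858
```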
Example 8.38

Suppose the desired transform is of (1 − r²)^{1/2} restricted to the unit disk, or

f(r) = (1 - r^2)^{1/2}\,\chi(r).

One way to do this is to find the Radon transform of this function and identify the result with the Abel transform. From the definition of the Radon transform, taking φ = 0 and restricting the integral to the unit disk D,

\hat f(p, \phi) = \iint_D (1 - x^2 - y^2)^{1/2}\,\delta(p - x)\,dx\,dy.

The integral over x is easy using the delta function, and the remaining integral over y is accomplished by observing that over the unit disk y² + p² ≤ 1; thus

\hat f = \int_{-\sqrt{1-p^2}}^{\sqrt{1-p^2}} \left[(1 - p^2) - y^2\right]^{1/2}\,dy.  (8.134)

This integral can be evaluated by use of trigonometric substitution or from integral tables (Appendix 8.A). The result is the Abel transform

\hat f = \mathcal{A}\{(1 - r^2)^{1/2}\chi(r)\} = \frac{\pi}{2}(1 - p^2)\,\chi(p).  (8.135)

Now suppose it is desired to scale this result to a disk of radius a. The scaling can be accomplished by application of Section 8.3.2 in the form

\mathcal{R}f\!\left(\frac{x}{a}, \frac{y}{a}\right) = a^2\,\hat f(p, a\boldsymbol{\xi}) = a\,\hat f\!\left(\frac{p}{a}, \boldsymbol{\xi}\right).

The scaled Abel transform follows, with r → r/a,

\mathcal{A}\left\{\left(1 - \frac{r^2}{a^2}\right)^{1/2}\chi\!\left(\frac{r}{a}\right)\right\} = \frac{\pi a}{2}\left(1 - \frac{p^2}{a^2}\right)\chi\!\left(\frac{p}{a}\right),

or

\mathcal{A}\left\{(a^2 - r^2)^{1/2}\,\chi\!\left(\frac{r}{a}\right)\right\} = \frac{\pi}{2}\,(a^2 - p^2)\,\chi\!\left(\frac{p}{a}\right).

By following the approach used in the last example, it is possible to find a whole class of Abel transforms. These are listed in Appendix 8.B. More results for Abel transforms appear in sections that follow, especially in the section on transforms restricted to the unit disk.

8.11.2 Hankel Transform

See Chapter 9 for details about Hankel transforms. By using an approach similar to that in Section 8.11.1 it is possible to find the connection between the Hankel transform of order v and the Radon transform. Note that throughout this discussion, if v = 0 the results here correspond to results for the Abel transform. Let the feature space function be given by a rotationally symmetric function multiplied by e^{ivθ},

f(x, y) = f(r)\,e^{iv\theta}.

The polar form of the 2D Fourier transform is given by

\tilde f(q, \phi) = \int_0^{2\pi}\!\!\int_0^{\infty} e^{iv\theta}\,e^{-i2\pi qr\cos(\theta - \phi)}\,f(r)\,r\,dr\,d\theta.

Now, after the change of variables β = θ − φ, followed by an interchange of the order of integration,

\tilde f(q, \phi) = e^{iv\phi}\int_0^{\infty} dr\,r f(r)\int_0^{2\pi} d\beta\,e^{i(v\beta - 2\pi qr\cos\beta)}.

The integral over β can be related to a Bessel function identity from Appendix 8.A to yield

\tilde f(q, \phi) = 2\pi\,e^{iv\phi}\,e^{-iv\pi/2}\int_0^{\infty} f(r)\,J_v(2\pi qr)\,r\,dr.

This is where the Hankel transform of order v comes in; by definition,

\mathcal{H}_v\{f(r)\} = 2\pi\int_0^{\infty} f(r)\,J_v(2\pi qr)\,r\,dr.  (8.136)

Thus,

\tilde f(q, \phi) = (-i)^v\,e^{iv\phi}\,\mathcal{H}_v\{f(r)\}.  (8.137)

This equation can be related to the Radon transform by first finding the Radon transform of f, and then applying the Fourier transform as indicated in Equation 8.13. In polar form,

\hat f(p, \phi) = \int_0^{2\pi}\!\!\int_0^{\infty} e^{iv\theta}\,f(r)\,\delta[p - r\cos(\theta - \phi)]\,r\,dr\,d\theta.

Once again, the change of variables β = θ − φ is employed to obtain

\hat f(p, \phi) = e^{iv\phi}\int_0^{\infty} dr\,r f(r)\int_0^{2\pi} d\beta\,e^{iv\beta}\,\delta(p - r\cos\beta).

The integration over β in this expression has been discussed by many authors, including Cormack (1963, 1964) and Barrett (1984), where details can be found leading to

\hat f(p, \phi) = 2e^{iv\phi}\int_{|p|}^{\infty} f(r)\,T_v\!\left(\frac{p}{r}\right)\left(1 - \frac{p^2}{r^2}\right)^{-1/2} dr.  (8.138)

Some of the more useful properties of the Chebyshev polynomials of the first kind T_v are given in Appendix 8.A. For more details, see the summary by Arfken (1985) and the interesting discussion by Van der Pol and Weijers (1934).
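For v = 0 the chain above reduces to the Abel–Fourier–Hankel relation of Equation 8.127, which can be checked numerically. The sketch below (parameter values illustrative) compares the order-zero Hankel transform of e^{−r²} with the 1D Fourier transform of its Abel transform √π e^{−p²}; both should equal π e^{−π²q²}.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

q = 0.3
# Order-zero Hankel transform (Eq. 8.136) of f(r) = exp(-r^2)
H0, _ = quad(lambda r: 2*np.pi*r*np.exp(-r**2)*j0(2*np.pi*q*r), 0, 10)
# 1D Fourier transform of the Abel transform sqrt(pi)*exp(-p^2); the
# integrand is even, so the full-line integral is twice the half-line one.
FA, _ = quad(lambda p: np.sqrt(np.pi)*np.exp(-p**2)*np.cos(2*np.pi*q*p), 0, 10)
FA *= 2.0
print(H0, FA, np.pi*np.exp(-(np.pi*q)**2))    # all ≈ 1.29
```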
It is useful to identify a Chebyshev transform by

\mathcal{T}_v\{f(r)\} = 2\int_{|p|}^{\infty} f(r)\,T_v\!\left(\frac{p}{r}\right)\left(1 - \frac{p^2}{r^2}\right)^{-1/2} dr.  (8.139)

Then,

\hat f(p, \phi) = e^{iv\phi}\,\mathcal{T}_v\{f(r)\}.

The Fourier transform of Equation 8.138 must be equal to Equation 8.137. It follows that the Hankel transform is given in terms of the Radon transform by

(-i)^v\,e^{iv\phi}\,\mathcal{H}_v\{f(r)\} = \mathcal{F}\mathcal{R}f = e^{iv\phi}\,\mathcal{F}\mathcal{T}_v\{f(r)\}.  (8.140)

Or, in terms of the Chebyshev transform, because the e^{ivφ} term cancels,

\mathcal{H}_v\{f(r)\} = i^v\,\mathcal{F}\mathcal{T}_v\{f(r)\}.  (8.141)

Note that an operator identity follows immediately,

\mathcal{H}_v = i^v\,\mathcal{F}\mathcal{T}_v.  (8.142)

This relation between the Hankel transform and the Fourier transform of the Radon transform is a useful expression because it serves as the starting point for finding Hankel transforms without having to do integrals over Bessel functions. Several authors have made contributions in this area. For applications and references to the literature see Hansen (1985), Higgins and Munson (1987, 1988), and Suter (1991). In this section we have concentrated on how the Hankel transform relates to the Radon transform. A logical extension of some of the ideas presented in this discussion appears in Section 8.13 on circular harmonic decomposition.

8.11.3 Spherical Symmetry, Three Dimensions

An interesting generalization of the above cases arises when the function f(x, y, z) has spherical symmetry. In this case, the Radon transform of f can be found by letting both the polar angle θ and the azimuthal angle φ be zero. Now the unit vector ξ = (0, 0, 1), and formula (Equation 8.7) is given by

\hat f(p) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f\left(\sqrt{x^2 + y^2 + z^2}\right)\delta(p - z)\,dx\,dy\,dz
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f\left(\sqrt{x^2 + y^2 + p^2}\right)dx\,dy
= \int_0^{2\pi}\!\int_0^{\infty} f\left(\sqrt{\rho^2 + p^2}\right)\rho\,d\rho\,d\phi
= 2\pi\int_0^{\infty} f\left(\sqrt{\rho^2 + p^2}\right)\rho\,d\rho.

In these equations the transformation x = ρ cos φ, y = ρ sin φ is used. One more transformation, ρ² + p² = r², leads to

\hat f(p) = 2\pi\int_{|p|}^{\infty} f(r)\,r\,dr, \qquad p > 0.  (8.143)

Note that the lower limit follows from r = (p²)^{1/2} when ρ = 0. The interesting point is that for this highly symmetric case, the original function f can be found by differentiation,

\frac{d\hat f(p)}{dp} = -2\pi\,p\,f(p).

In this equation, the variable p is actually a dummy variable and it can be replaced by r,

f(r) = -\frac{1}{2\pi r}\,\hat f'(r).  (8.144)
This same result can be found directly from the inversion methods of Section 8.9.2. Also, Barrett (1984) does the same derivation, and he makes the interesting observation that Equation 8.144 was given in the optics literature by Vest and Steel (1978), but was actually known much earlier by Du Mond (1929) in connection with Compton scattering, and by Stewart (1957) and Mijnarends (1967) in connection with positron annihilation.
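Equations 8.143 and 8.144 are easy to verify for a Gaussian, for which the plane-integral projection is known in closed form. The sketch below (values illustrative) recovers f from f̂ by the simple differentiation formula.

```python
import numpy as np

# For f(r) = exp(-r^2) the plane-integral projection (Eq. 8.143) is
# fhat(p) = 2*pi ∫_|p|^∞ exp(-r^2) r dr = pi*exp(-p^2), and Eq. 8.144,
# f(r) = -fhat'(r)/(2*pi*r), recovers the original function.
r, h = 0.9, 1e-5
fhat = lambda p: np.pi * np.exp(-p**2)
f_rec = -(fhat(r + h) - fhat(r - h)) / (2*h) / (2*np.pi*r)   # central difference
print(f_rec, np.exp(-r**2))    # both ≈ 0.4449
```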
8.12 Methods of Inversion

The inversion formulas given by Radon (1917) and the formulas given in Section 8.9 serve only as a beginning for an applied problem. This point is emphasized by Shepp and Kruskal (1978). The main problem is that these formulas are rigorously valid for an infinite number of projections, and in practical situations the projections are a discrete set. This discrete nature of the projections gives rise to subtle and difficult questions. Most of these are related in some way to the "Indeterminacy Theorem" of Smith et al. (1977). After a little rephrasing, the theorem establishes that a function f(x, y) with compact support is uniquely determined by an infinite set of projections, but not by any finite set of projections. This clearly means that uniqueness must be sacrificed in applications. Experience with known images shows that this is not so serious if one can come close to the actual f̂ and then apply an approximate reconstruction algorithm. Moreover, some encouragement comes from another theorem by Hamaker et al. (1980). The main thrust of this theorem is that arbitrarily good approximations to f can be found by utilization of an arbitrarily large number of projections. Perhaps the way to express all of this is to say: even though you can't win, you must never give up! There are several other considerations about inversion. The inverse Radon transform is technically an ill-posed problem. Small errors in knowledge of the function f̂ can lead to very large errors in the reconstructed function f. Hence, problems of
stability, ill-posedness, accuracy, resolution, and optimal methods of sampling must be addressed when working with experimental data. These are obviously very important problems, and the subject of ongoing research. A thorough discussion would have to be highly technical and inappropriate for inclusion here. For those concerned with these matters, the papers by Lindgren and Rattey (1981), Rattey and Lindgren (1981), Louis (1984), Madych and Nelson (1986), Hawkins and Barrett (1986), Hawkins et al. (1988), Kruse (1989), Madych (1990), Faridani (1991), Faridani et al. (1992), Maass (1992), Desbat (1993), Natterer (1993), Olson and DeStefano (1994), and the books by Herman (1980) and Natterer (1986) are good starting points for methods and references to other important work. Good examples illustrating many of the difficulties encountered when dealing with real data along with defects in the reconstructed image associated with the performance of various algorithms are given in Chapter 7 of the book by Russ (1992). There are several methods that serve as the basis for the development of algorithms that can be viewed as discrete implementations of the inversion formula. Our purpose here is to present several of these along with reference to their implementation. Those interested in more detail and other flow charts may want to see Barrett and Swindell (1977) and Deans (1983, 1993). The first topic below, the operation of backprojection, is an essential step in some of the reconstruction algorithms. Also, this operation is closely related to the adjoint of the Radon transform, discussed in Section 8.14. More on inversion methods is contained in Section 8.13 on series.
8.12.1 Backprojection

Let G(p, φ) be an arbitrary function of a radial variable p and angle φ. The backprojection operation is defined by replacing p by x cos φ + y sin φ and integrating over the angle φ, to obtain a function of x and y,

g(x, y) = \mathcal{B}G(p, \phi) = \int_0^{\pi} G(x\cos\phi + y\sin\phi,\ \phi)\,d\phi.  (8.145)

Note: From the definition of the backprojection operator it follows that the inversion formula (Equation 8.101) can be written as

f(x, y) = 2\mathcal{B}\check f(t, \phi).  (8.146)

8.12.2 Backprojection of the Filtered Projections

The algorithm known as the filtered backprojection algorithm is presently the optimum computational method for reconstructing a function from knowledge of its projections. This algorithm can be considered as an approximate method for computer implementation of the inversion formula for the Radon transform. Unfortunately, there is some confusion associated with the name, because the filtering of the projections is done before the backprojection operation. Hence, a better name is the one chosen for the title of this section. There are several ways to derive the basic formula for this algorithm. Because we want to emphasize its relation to the inversion formula, the starting point is Equation 8.146. First, rewrite that equation as

f = 2\mathcal{B}\,\mathcal{F}^{-1}\mathcal{F}\check f.  (8.147)

Here, the identity operator for the 1D Fourier transform is used. Now, making use of various operations from Section 8.9.1, we obtain

f = 2\mathcal{B}\,\mathcal{F}^{-1}\mathcal{F}\left[\frac{1}{4\pi}\mathcal{H}\frac{\partial}{\partial p}\hat f(p, \phi)\right]
= \frac{1}{2\pi}\,\mathcal{B}\,\mathcal{F}^{-1}\left\{(-i\,\mathrm{sgn}\,k)(i2\pi k)\,\mathcal{F}\hat f(p, \phi)\right\}
= \mathcal{B}\,\mathcal{F}^{-1}\left\{|k|\,\mathcal{F}\hat f(p, \phi)\right\}.  (8.148)

The inverse Fourier transform operation converts a function of k to a function of some other radial variable, say s. This observation leads to a natural definition; for convenience of notation, define

F(s, \phi) = \mathcal{F}^{-1}\{|k|\,\mathcal{F}\hat f(p, \phi)\} = \mathcal{F}^{-1}\{|k|\,\tilde{\hat f}(k, \phi)\}.  (8.149)

Now the feature space function is recovered by backprojection of F,

f(x, y) = \mathcal{B}F(s, \phi) = \int_0^{\pi} F(x\cos\phi + y\sin\phi,\ \phi)\,d\phi.  (8.150)

The beautiful part of this formula is that the need to use the Hilbert transform has been eliminated. From a computational viewpoint this is a real plus. For additional information on computationally efficient algorithms based on these equations, see Rowland (1979) and Lewitt (1983).

8.12.2.1 Convolution Methods

Due to the presence of the |k| in Equation 8.149 the story is not over. This causes a problem with numerical implementation due to the behavior for large values of k. It would be desirable to have a well-behaved function, say g, such that ℱg = |k|. Then Equation 8.149 could be modified to read

F(s, \phi) = \mathcal{F}^{-1}[(\mathcal{F}g)(\mathcal{F}\hat f)].

And, by the convolution theorem,

F(s, \phi) = \hat f * g = \int_{-\infty}^{\infty} \hat f(p, \phi)\,g(s - p)\,dp.  (8.151)

A function g such that ℱg = |k| can be found, but it is not well behaved. In fact, it is a singular distribution (Lighthill, 1962).
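A minimal discrete sketch of Equations 8.149 and 8.150 is given below; it is illustrative only (the phantom, grid sizes, and the FFT-sampled ramp are arbitrary choices, and the discrete ramp only approximates |k|, so the recovered value is approximate). A centered disk has the same analytic projection at every angle, so the backprojection integral at the origin reduces to π times the filtered projection at s = 0.

```python
import numpy as np

# Disk phantom of radius a: fhat(p, phi) = 2*sqrt(a^2 - p^2) for |p| < a.
a, N = 0.5, 512
p = np.linspace(-1.0, 1.0, N, endpoint=False)
proj = np.where(np.abs(p) < a, 2.0*np.sqrt(np.maximum(a*a - p*p, 0.0)), 0.0)

# Ramp filter |k| applied in Fourier space (Eq. 8.149 with w(k) = 1)
k = np.fft.fftfreq(N, d=p[1] - p[0])
F = np.fft.ifft(np.abs(k) * np.fft.fft(proj)).real

# Backprojection (Eq. 8.150) at the origin: s = 0 for every angle, so the
# integral over phi reduces to pi * F(s = 0).
f0 = np.pi * F[N//2]
print(f0)    # close to 1.0, the phantom value at the center of the disk
```

In a full reconstruction, F would be interpolated at s = x cos φ + y sin φ and summed over a discrete set of angles.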
In view of these difficulties a slight compromise is in order. Rather than looking for a function whose Fourier transform equals jkj, try to find a well-behaved function with a Fourier transform that approximates jkj. The usual approach is to define a filter function in terms of a window function; that is, let Ig ¼ jkjw(k):
fˇ( p, φ)
f (x, y)
(8:152)
Then Equation 8.151 can be used to find the function F used in the backprojection equation. One advantage of this approach is that there is ^no need to find the Fourier transform of the projection data f ; however, it is necessary to compute the convolver function g(s) ¼ F1 {jkjw(k)}
Convolution methods
(8:153) –1
before implementing Equation 8.151. This signal space convolution approach is discussed in some detail by Rowland (1979). An approach directly aimed toward computer implementation is in Rosenfeld and Kak (1982). Excellent practical discussions of windows and filters are given by Harris (1978) and by Embree and Kimble (1991).
FIGURE 8.10 Filtered backprojection, convolution.
8.12.2.2 Frequency Space Implementation

It should be noted that there are times when it is desirable to implement the filter in Fourier space and use Equation 8.149 in the form

    F(s, φ) = F⁻¹{|k| w(k) (F f̂)(p, φ)} = F⁻¹{|k| w(k) f̂̃(k, φ)},   (8.154)

to approximate F before backprojecting. This has been emphasized by Budinger et al. (1979) for data where noise is an important consideration. A diagram of the options associated with the algorithm for backprojection of the filtered projections is given in Figure 8.10.

8.12.3 Filter of the Backprojections

In this approach to reconstruction, the backprojection operation is applied first and the filtering or convolution comes last. When the backprojection operator is applied to the projections, the result is a blurred image that is related to the true image by a 2D convolution with 1/r. Let this blurred image of the backprojected projections be designated by (ℬ denotes the backprojection operator)

    b(x, y) = ℬ f̂(p, φ) = ∫₀^π f̂(x cos φ + y sin φ, φ) dφ.   (8.155)

The true image is related to b by

    b(x, y) = f(x, y) ** (1/r) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x′, y′) dx′ dy′ / [(x − x′)² + (y − y′)²]^{1/2}.   (8.156)

This is not an obvious result; it can be deduced by considering Equation 8.13 in the form

    f̂ = F₁⁻¹ F₂ f.

Apply the backprojection operator to obtain

    b = ℬ f̂ = ℬ F₁⁻¹ F₂ f.   (8.157)
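A discrete version of the backprojection operator ℬ of Equation 8.155 can be sketched as follows, assuming projections sampled on a uniform p grid over [−1, 1] (unit-disk support) and equally spaced angles on [0, π); all conventions here are illustrative:

```python
import numpy as np

def backproject(sinogram, phis, xs, ys):
    """Discrete form of Equation 8.155: b(x, y) = integral over [0, pi) of
    fhat(x cos phi + y sin phi, phi) dphi.  sinogram[i, j] holds fhat(p_j, phi_i)
    on a uniform p grid covering [-1, 1] (unit-disk support assumed)."""
    n_phi, n_p = sinogram.shape
    p = np.linspace(-1.0, 1.0, n_p)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    b = np.zeros_like(X, dtype=float)
    for i, phi in enumerate(phis):
        s = X * np.cos(phi) + Y * np.sin(phi)   # s = x cos phi + y sin phi
        b += np.interp(s, p, sinogram[i])       # linear interpolation in p
    return b * (np.pi / len(phis))              # Riemann sum for dphi on [0, pi)

# For a sinogram that is identically 1, the dphi integral gives b = pi:
phis = np.linspace(0.0, np.pi, 180, endpoint=False)
b = backproject(np.ones((180, 65)), phis, np.array([0.0]), np.array([0.0]))
print(float(b[0, 0]))   # approximately pi
```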
(In this section, subscripts on the Fourier transform operator are shown explicitly to avoid any possible confusion.) There is a subtle point lurking in this equation. Suppose the 2D Fourier transform of f produces f̃(u, v). The inverse 1D operator F₁⁻¹ is understood to operate on a radial variable in Fourier space. This means f̃(u, v) must be converted to polar form, say f̃(q, φ), before doing the inverse 1D Fourier transform. The variable q is the radial variable in Fourier space, q² = u² + v². If we designate the inverse 1D Fourier transform of f̃(q, φ) by f̆(s, φ), then

    b(x, y) = ℬ f̆(s, φ) = ℬ ∫_{−∞}^{∞} f̃(q, φ) e^{i2πsq} dq.

Explicitly, the backprojection operation with s → x cos φ + y sin φ gives

    b(x, y) = ∫₀^π ∫_{−∞}^{∞} dq f̃(q, φ) e^{i2πq(x cos φ + y sin φ)}
            = ∫₀^{2π} ∫₀^∞ q⁻¹ f̃(q, φ) e^{i2πqr cos(θ − φ)} q dq dφ,

where the replacements x = r cos θ and y = r sin θ have been made, and the radial integral is over positive values of q. We observe that the expression on the right is just an inverse 2D Fourier transform,

    b(x, y) = F₂⁻¹{|q|⁻¹ f̃},   (8.158)

and from the convolution theorem,

    b(x, y) = [F₂⁻¹{f̃}] ** [F₂⁻¹{|q|⁻¹}].   (8.159)

The last term on the right is just the Hankel transform of |q|⁻¹, which gives |r|⁻¹, and the other term yields f(x, y). These substitutions immediately verify Equation 8.156. The desired algorithm follows by taking the 2D Fourier transform of Equation 8.158,

    F₂ b(x, y) = |q|⁻¹ f̃(u, v), or f̃(u, v) = |q| F₂ b.

Application of F₂⁻¹ to both sides of this equation, along with the replacement b = ℬ f̂, yields the basic reconstruction formula for filter of the backprojected projections,

    f(x, y) = F₂⁻¹{|q| F₂ ℬ f̂}.   (8.160)

Just as in the previous section a window function can be introduced, but this time it must be a 2D function. Let

    g̃(u, v) = |q| w(u, v).

Now Equation 8.160 becomes

    f(x, y) = F₂⁻¹{g̃ F₂ ℬ f̂} = F₂⁻¹{g̃} ** ℬ f̂ = g(x, y) ** b(x, y).   (8.161)

Once the window function is selected, g can be found in advance by calculating the inverse 2D Fourier transform, and the reconstruction is accomplished by a 2D convolution with the backprojection of the projections. Options for implementation of these results are illustrated in Figure 8.11. Important references for applications and numerical implementation of this algorithm are Bates and Peters (1971), Smith et al. (1973), Gullberg (1979), and Budinger et al. (1979).

FIGURE 8.11 Filter of backprojections and convolution, q = (u² + v²)^{1/2}.

8.12.4 Direct Fourier Method

The direct Fourier method follows immediately from the central-slice theorem, Section 8.2.5, in the form

    f = F₂⁻¹ F₁ f̂.   (8.162)

The important point is that the 1D Fourier transform of the projections produces f̃(q, φ), defined on a polar grid in Fourier space. An interpolation is needed to get f̃(u, v), and then F₂⁻¹ is applied to recover f(x, y). The procedure is illustrated in Figure 8.12. Although this appears to be the simplest inversion algorithm, it turns out that there are computational problems associated with the interpolation, and there is a need to do a 2D inverse Fourier transform. For a detailed discussion see Mersereau (1976), Stark et al. (1981), and Sezan and Stark (1984).
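In the discrete setting, the central-slice relation behind Equation 8.162 is exact for the φ = 0 projection, which makes a quick sanity check possible before any interpolation is attempted:

```python
import numpy as np

# Discrete sanity check of the central-slice relation behind Equation 8.162:
# the 1D DFT of the phi = 0 projection equals the v = 0 slice of the 2D DFT.
rng = np.random.default_rng(0)
f = rng.random((32, 32))          # f[x, y]: a discrete feature-space image

proj0 = f.sum(axis=1)             # projection onto the x axis (sum over y)
slice0 = np.fft.fft2(f)[:, 0]     # v = 0 slice of the 2D Fourier transform

print(np.allclose(np.fft.fft(proj0), slice0))   # exact in the discrete setting
```

For other angles the polar samples fall between Cartesian grid points, which is precisely where the interpolation difficulties mentioned above enter.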
FIGURE 8.12 Direct Fourier method.

8.12.5 Iterative and Algebraic Reconstruction Techniques

The so-called algebraic reconstruction techniques (ART) form a large family of reconstruction algorithms. They are iterative procedures that vary depending on how the discretization is performed. There is a high computational cost associated with ART, but there are some advantages, too. Standard numerical analysis techniques can be applied to a wide range of problems and ray configurations, and a priori information can be incorporated in the solution. Details about the various methods, the history, and extensive references to original work are provided by Herman (1980), Rosenfeld and Kak (1982), and Natterer (1986). Also, the discrete Radon transform and its inversion are described by Beylkin (1987) and Kelley and Madisetti (1993), where both the forward and inverse transforms are implemented using standard methods of linear algebra.

8.13 Series

There are many series approaches to finding an approximation to the original feature space function f when given sufficient information about the corresponding function f̂ in Radon space. The particular method selected usually depends on the physical situation and the quality of the data. The purpose of this section is to present some of the more useful approaches and observe that the basic ideas developed here carry over to other series techniques not discussed. The approach is to give details for some of the 2D cases and quote results and references for higher dimensional cases. The first method discussed, the circular harmonic expansion, is the method used by Cormack (1963, 1964) in his now famous work that many regard as the beginning of modern computed tomography.

8.13.1 Circular Harmonic Decomposition

The basic ideas developed in Section 8.11.2 can be extended to obtain the major results. First, note that in polar coordinates in feature space, functions that represent physical situations are periodic with period 2π. This immediately leads to a consideration of expanding the function in a Fourier series. If f(r, θ) is written for f(x, y), then the decomposition is

    f(r, θ) = Σ_l h_l(r) e^{ilθ}.   (8.163)

The sum is understood to be from −∞ to ∞, and the Fourier coefficient h_l is given by

    h_l(r) = (1/2π) ∫₀^{2π} f(r, θ) e^{−ilθ} dθ.   (8.164)

The Radon transform of f can also be expanded in a Fourier series of the same form,

    f̂(p, φ) = Σ_l ĥ_l(p) e^{ilφ},   (8.165)
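The algebraic reconstruction techniques of Section 8.12.5 are most easily illustrated by the classical Kaczmarz sweep, which cycles through the ray equations a_i · x = b_i and projects the current estimate onto each hyperplane in turn. A small, self-contained sketch (the 3 × 3 system here merely stands in for a ray-sum matrix):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relax=1.0):
    """Classical ART (Kaczmarz) iteration: cycle through the ray equations
    a_i . x = b_i and project the current estimate onto each hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            nrm2 = a_i @ a_i
            if nrm2 > 0.0:
                x += relax * (b_i - a_i @ x) / nrm2 * a_i
    return x

# A tiny consistent system standing in for a ray-sum matrix (3 rays, 3 pixels):
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
print(np.allclose(kaczmarz(A, A @ x_true), x_true, atol=1e-6))
```

The relaxation parameter and the row ordering are the main tuning knobs in practice; a priori constraints (for example, nonnegativity) can be imposed after each sweep.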
where

    ĥ_l(p) = (1/2π) ∫₀^{2π} f̂(p, φ) e^{−ilφ} dφ,   p ≥ 0,   (8.166a)

and

    ĥ_l(−p) = (−1)^l ĥ_l(p).   (8.166b)
The connection between the Fourier coefficients in the two spaces can be determined by taking the Radon transform of f, as given by Equation 8.163. The polar form of the transform gives

    f̂(p, φ) = Σ_l ∫₀^{2π} ∫₀^∞ e^{ilθ} h_l(r) δ[p − r cos(θ − φ)] r dr dθ.

Now, the change of variables β = θ − φ leads to an expression similar to one obtained in Section 8.11.2,

    f̂(p, φ) = Σ_l e^{ilφ} ∫₀^∞ dr r h_l(r) ∫₀^{2π} dβ e^{ilβ} δ(p − r cos β).   (8.167)
8-35
Radon and Abel Transforms
From the linear independence of the functions e^{ilφ}, it follows by comparison of Equations 8.165 and 8.167 that

    ĥ_l(p) = ∫₀^∞ dr r h_l(r) ∫₀^{2π} dβ e^{ilβ} δ(p − r cos β).

From Equation 8.138 this gives the connection between the Fourier coefficients in terms of a Chebyshev transform,

    ĥ_l(p) = 2 ∫_p^∞ (1 − p²/r²)^{−1/2} h_l(r) T_l(p/r) dr,   p ≥ 0.   (8.168a)

One form of the inverse is

    h_l(r) = −(1/πr) ∫_r^∞ ĥ_l′(p) T_l(p/r) (p²/r² − 1)^{−1/2} dp,   r > 0.   (8.168b)
Here the prime means derivative with respect to p. The inverse (Equation 8.168b) can be found by various techniques. These include use of the Mellin transform, contour integration, and orthogonality properties of the Chebyshev polynomials of the first and second kinds. The method used by Barrett (1984) is easy to follow, and he provides extensive references to other derivations and to some of the subtleties related to the stability and uniqueness of the inverse. The problem with this expression for the inverse is that T_l(p/r) increases exponentially as l → ∞ while ĥ_l is a rapidly oscillating function. The integration of the product of these two functions leads to severe cancellations and numerical instability. For a further discussion of stability, uniqueness, and other forms for the inverse, see Hansen (1981), Hawkins and Barrett (1986), and Natterer (1986). Additional details on the circular harmonic Radon transform are given by Chapman and Cary (1986).
8.13.1.1 Extension to Higher Dimensions

The extension to higher dimensions is presented in detail by Ludwig (1966). Other relevant references include Deans (1978, 1979) and Barrett (1984). The nD counterpart of the transform pair given by Equations 8.168a and b is a Gegenbauer transform pair for the radial functions,

    ĥ_l(p) = [(4π)^ν Γ(l + 1) Γ(ν) / Γ(l + 2ν)] ∫_p^∞ r^{2ν} h_l(r) C_l^ν(p/r) (1 − p²/r²)^{ν−1/2} dr,   (8.169a)

and

    h_l(r) = [(−1)^{2ν+1} Γ(l + 1) Γ(ν) / (2π^{ν+1} Γ(l + 2ν) r^{2ν})] ∫_r^∞ ĥ_l^{(2ν+1)}(p) C_l^ν(p/r) (p²/r² − 1)^{ν−1/2} dp.   (8.169b)

In these equations, r ≥ 0, p ≥ 0, ĥ_l^{(2ν+1)}(p) = (d/dp)^{2ν+1} ĥ_l(p), ĥ_l(−p) = (−1)^l ĥ_l(p), and ν is related to the dimension n by ν = (n − 2)/2. The Gegenbauer polynomials C_l^ν are orthogonal over the interval [−1, +1] (Rainville, 1960; Szegö, 1939). This leads to questions about the integration in Equation 8.169b, where the argument p/r exceeds unity. And, just as mentioned in connection with Equation 8.168b, this formula is not practical for numerical implementation. However, the integral can be understood because it is possible to define Gegenbauer functions G_l^ν(z) analytic in the complex z plane cut from −1 to +1. For a discussion and proofs, see Durand et al. (1976).

8.13.1.2 Three Dimensions

The 3D version of the expansion (Equation 8.163) is in terms of the real orthonormal spherical harmonics S_lm(ω), discussed by Hochstadt (1971),

    f(r, ω) = Σ_{l,m} A_lm h_l(r) S_lm(ω).   (8.170)

The A_lm are real constants and ω is a 3D unit vector,

    ω = (sin θ cos φ, sin θ sin φ, cos θ).

The corresponding expansion in Radon space is

    f̂(p, ξ) = Σ_{l,m} A_lm ĥ_l(p) S_lm(ξ).   (8.171)

It follows from the orthogonality of the spherical harmonics that

    A_lm ĥ_l(p) = ∫_{|ξ|=1} f̂(p, ξ) S_lm(ξ) dξ,   (8.172)

where dξ is the surface element on the unit sphere. The Gegenbauer transform pair (Equations 8.169a and b) reduces to a Legendre transform pair for n = 3, ν = ½, and the radial functions satisfy

    ĥ_l(p) = 2π ∫_p^∞ r h_l(r) P_l(p/r) dr,   (8.173a)

    h_l(r) = (1/2πr) ∫_r^∞ ĥ_l″(p) P_l(p/r) dp.   (8.173b)

The spherical harmonics Y_lm(θ, φ), discussed by Arfken (1985), are probably more familiar to engineers and physicists. These can be used in place of the S_lm suggested here. However, various properties (real, orthonormal, symmetry) of the S_lm make them more suitable for use in connection with problems involving the general nD Radon transform (Ludwig, 1966).
For the 3D case, one possible connection is given by

    S_lm = (Y_lm + Y_lm*)/√2,   for m = 1, 2, …, l,
    S_l0 = Y_l0,   for m = 0,
    S_lm = (Y_lm − Y_lm*)/(i√2),   for m = −1, −2, …, −l,

where Y_{l,−m} = (−1)^m Y_lm*. Note that under the parity operation

    (x → −x, y → −y, z → −z),

the well-known result Y_lm → (−1)^l Y_lm carries over to the S_lm(ω), giving

    S_lm(−ω) = (−1)^l S_lm(ω).

8.13.2 Orthogonal Functions on the Unit Disk

In most practical reconstruction problems the function in feature space is confined to a finite region. This region can always be scaled to fit inside a unit disk. Hence, the development of an orthogonal function expansion on the unit disk holds promise as a useful approach for inversion using series methods. (In this connection, note that when the problem is confined to the unit disk the infinite upper limit on all integrals in the previous section can be replaced by unity.) Orthogonal polynomials that have been used for many years in optics are especially good candidates. These are the Zernike polynomials; a standard reference is Born and Wolf (1975); also see Chapter 1. A more recent reference, Kim and Shannon (1987), contains a graphic library of 37 selected Zernike expansion terms. One reason why these functions are desirable is that their transforms (R and F) lead to orthogonal function expansions in both Radon and Fourier space. This choice of basis functions in reconstruction has been discussed by Cormack (1964), Marr (1974), Zeitler (1974), and Hawkins and Barrett (1986), and examples similar to those given here are given by Deans (1983, 1993).

The approach is to assume that f(x, y) can be approximated by a sum of monomials of the form x^k y^j. Then x^k y^j can be written as r^{k+j} multiplied by some function of sin θ and cos θ. This leads to the consideration of an expansion of the form

    f(r, θ) = Σ_{l=−∞}^{∞} h_l(r) e^{ilθ} = Σ_{s=0}^{∞} Σ_{l=−∞}^{∞} A_ls Z^{|l|}_{|l|+2s}(r) e^{ilθ},   (8.174)

in terms of complex constants A_ls and Zernike polynomials Z^l_m(r), with m = |l| + 2s. The Radon transform of this expression can be found exactly, and it contains the same constants. These constants are evaluated in Radon space, and the feature space function is found by the expansion (Equation 8.174). There are several subtle points associated with this process, and it is useful to break the problem into separate parts. First, we discuss relevant properties of the Zernike polynomials and give some simple examples. This is followed with the transform to Radon space, and more examples. Next, the expression for the constants A_ls is found in terms of f̂, which is assumed known from experiment. Finally, to emphasize that this application also extends to Fourier space, the transform to Fourier space is illustrated, along with some observations regarding three different orthonormal basis sets.

8.13.2.1 Zernike Polynomials

The Zernike polynomials (see Section 1.5) can be found by orthogonalizing the powers

    r^l, r^{l+2}, r^{l+4}, …

with weight function r over the interval [0, 1]. The exponent l is a nonnegative integer. The resulting polynomial Z^l_m(r) is of degree m = l + 2s, and it contains no powers of r less than l. The polynomials are even if l is even and odd if l is odd. This leads to an important symmetry relation,

    Z^l_m(−r) = (−1)^l Z^l_m(r).   (8.175)

The orthogonality condition is given by

    ∫₀¹ Z^l_{l+2s}(r) Z^l_{l+2t}(r) r dr = δ_st / [2(l + 2s + 1)].   (8.176)

It follows that the expansion coefficients are given by

    A_ls = [(l + 2s + 1)/π] ∫₀^{2π} ∫₀¹ f(r cos θ, r sin θ) Z^l_{l+2s}(r) e^{−ilθ} r dr dθ.   (8.177a)

In this equation l ≥ 0. To find the expansion coefficients for negative values of l, use the complex conjugate,

    A_{−l,s} = A_ls*.   (8.177b)

Some simple examples are useful to gain an understanding of just how the expansion works. A short table of Zernike polynomials is given in Appendix 8.A. Methods for extending the table and many other properties are given by Born and Wolf (1975).
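The radial Zernike polynomials have the standard closed-form sum Z^l_{l+2s}(r) = Σ_k (−1)^k (l+2s−k)! / [k! (s−k)! (l+s−k)!] r^{l+2s−2k}, which makes the orthogonality condition (Equation 8.176) easy to check numerically:

```python
import numpy as np
from math import factorial

def zernike_radial(l, s, r):
    """Z^l_{l+2s}(r) from the standard closed-form sum; the polynomial has
    degree m = l + 2s and contains no power of r below r^l."""
    return sum((-1)**k * factorial(l + 2*s - k)
               / (factorial(k) * factorial(s - k) * factorial(l + s - k))
               * r**(l + 2*s - 2*k) for k in range(s + 1))

# Orthogonality (Equation 8.176) with weight r on [0, 1], midpoint rule:
r = np.linspace(0.0, 1.0, 100001)
rm, dr = (r[1:] + r[:-1]) / 2, np.diff(r)
Z20 = zernike_radial(0, 1, rm)   # Z^0_2(r) = 2 r^2 - 1
Z40 = zernike_radial(0, 2, rm)   # Z^0_4(r) = 6 r^4 - 6 r^2 + 1
print(abs(np.sum(Z20 * Z40 * rm * dr)) < 1e-6)          # s != t: integral is 0
print(abs(np.sum(Z20 * Z20 * rm * dr) - 1/6) < 1e-6)    # 1/[2(l + 2s + 1)] = 1/6
```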
Example 8.39
Let the feature space function be given by f(x, y) = y in the unit circle and zero outside the circle. Thus, in terms of r and θ,

    f(x, y) = r sin θ.
Here, the degree is 1 and |l| + 2s ≤ 1. The series expansion (Equation 8.174) reduces to

    f(x, y) = A₀₀ Z₀⁰ + A₁₀ Z₁¹ e^{iθ} + A₋₁₀ Z₁¹ e^{−iθ}.

This case is easy enough to do by inspection of the table of Zernike polynomials in Appendix 8.A. The coefficients are A₀₀ = 0, A₁₀ = 1/(2i), and A₋₁₀ = −1/(2i). This choice gives

    f(x, y) = r (e^{iθ} − e^{−iθ})/(2i) = Z₁¹(r) sin θ.

Example 8.40
This time let f(x, y) = xy, so f(r, θ) = r² cos θ sin θ. It follows immediately from the angular part of the integral in Equation 8.177a that the only nonzero coefficients are given by A₂₀ = 1/(4i) and A₋₂₀ = −1/(4i). This leads to the expansion

    f(x, y) = (A₂₀ e^{i2θ} + A₋₂₀ e^{−i2θ}) Z₂²(r),

or

    f(x, y) = r² (e^{i2θ} − e^{−i2θ})/(4i) = r² cos θ sin θ.

Example 8.41
Let f(x, y) = x(x² + y²). Now, changing to r and θ gives f(r, θ) = r³ cos θ. It is tempting to take a quick look at the table and say the expansion must contain A₃₀ and Z₃³, because this polynomial is equal to r³. This is not the correct thing to do! A quick inspection of the angular part of Equation 8.177a reveals that A₃₀ vanishes. The nonzero constants are A₁₁ = A₋₁₁ = 1/6 and A₁₀ = A₋₁₀ = 1/3. This gives the correct expansion

    f(x, y) = (1/3) Z₃¹(r) cos θ + (2/3) Z₁¹(r) cos θ = r³ cos θ.

8.13.2.2 Transform of the Zernike Polynomials

We need to find the Radon transform of a function of the form

    f(x, y) = Z^l_m(r) e^{ilθ}.

It is adequate to consider l ≥ 0, because the negative case follows by complex conjugation. The angular part transforms to e^{ilφ}, and the radial part must satisfy Equation 8.168a with upper limit 1,

    ĥ_l(p) = 2 ∫_p^1 Z^l_m(r) T_l(p/r) (1 − p²/r²)^{−1/2} dr,   p ≥ 0.   (8.178)

There are various ways to evaluate this integral, and the details are not shown here. The method used by Zeitler (1974) and Deans (1983, 1993) makes use of the path through Fourier space to find the transformed function in Radon space. The important result is that the orthogonal set of Zernike polynomials transforms to the orthogonal set of Chebyshev polynomials of the second kind,

    R{Z^l_m(r) e^{ilθ}} = [2/(m + 1)] (1 − p²)^{1/2} U_m(p) e^{ilφ},   (8.179)

with m = l + 2s. Basic properties of the U_m are given in Appendix 8.A, and summaries are given by Arfken (1985) and Erdélyi et al. (1953). The Radon transform of Equation 8.174 follows immediately by use of Equation 8.179,

    f̂(p, φ) = Σ_{s=0}^∞ Σ_{l=−∞}^∞ A_ls [2/(|l| + 2s + 1)] (1 − p²)^{1/2} U_{|l|+2s}(p) e^{ilφ}.   (8.180)

Some more examples serve to illustrate how the method developed here relates to transforms found in earlier sections when the function is confined to the unit disk. Also, these examples are designed to point out ways certain pitfalls can be avoided.

Example 8.42
If f(x, y) = 1 on the unit disk and zero elsewhere, the expansion in terms of Zernike polynomials is just f = Z₀⁰, with A₀₀ = 1. From Equation 8.180, f̂ = 2(1 − p²)^{1/2}, because U₀ = 1. Note that this is just another way of doing Example 8.9.

Example 8.43
If f(x, y) = x² = ½ r²(1 + cos 2θ) on the unit disk, then

    f(x, y) = ¼ Z₀⁰ + ¼ Z₂⁰ + ½ Z₂² cos 2θ.

This serves to identify the coefficients A_ls, and by use of Equation 8.180,

    f̂ = (1 − p²)^{1/2} (1/4) [2U₀ + (2/3)U₂ + (4/3)U₂ cos 2φ].

After simplification,

    R{x²} = (2/3) (1 − p²)^{1/2} [3p² cos²φ + (1 − p²) sin²φ].

Now note that if f(x, y) = y² = ½ r²(1 − cos 2θ), the change is (cos φ ↔ sin φ) in the equation for R{x²}, and

    R{y²} = (2/3) (1 − p²)^{1/2} [3p² sin²φ + (1 − p²) cos²φ].

Finally, by linearity, the transform of f(x, y) = x² + y² is given by the sum of the above transforms,

    R{x² + y²} = (2/3) (1 − p²)^{1/2} (2p² + 1).
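The cos 2φ reduction in Example 8.43 can be confirmed numerically from Equation 8.180, using the coefficients identified above; a spot check at one (p, φ):

```python
import numpy as np

# Spot check of Example 8.43 against Equation 8.180, with the coefficients
# identified in the text: A00 = A01 = 1/4 and A(+-2)0 = 1/4.
p, phi = 0.3, 0.8
U0, U2 = 1.0, 4 * p**2 - 1            # Chebyshev polynomials of the second kind
root = np.sqrt(1 - p**2)
series = root * (2*(1/4)*U0 + (2/3)*(1/4)*U2 + (2/3)*(1/4 + 1/4)*U2*np.cos(2*phi))
closed = (2/3) * root * (3*p**2*np.cos(phi)**2 + (1 - p**2)*np.sin(phi)**2)
print(abs(series - closed) < 1e-12)
```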
Example 8.44
Let f(x, y) = 1 − r² on the unit disk. By using the methods of the earlier examples in this section,

    f = Z₀⁰ − ½ Z₀⁰ − ½ Z₂⁰ = ½ Z₀⁰ − ½ Z₂⁰.

From Equation 8.180,

    f̂ = 2(1 − p²)^{1/2} (½) U₀ − (2/3)(1 − p²)^{1/2} (½) U₂.

Or, after substitution for U₀ and U₂ from Appendix 8.A,

    f̂ = (4/3)(1 − p²)(1 − p²)^{1/2}.

Another way to obtain this is to use Examples 8.42 and 8.43 and linearity.

Example 8.45
For f(x, y) = x(x² + y²), as in Example 8.41, it follows from knowing the A_ls that

    f̂ = (1/3)(1 − p²)^{1/2} [½ U₃ + 2U₁] cos φ = (2/3) p (2p² + 1)(1 − p²)^{1/2} cos φ.

Example 8.46
It may be worthwhile to emphasize that there are certain transforms that cannot be found by a naive application of the Zernike polynomials. To illustrate, suppose f(x, y) = x (x² + y²)^{1/2}. Although this has the form f = xr = Z₂²(r) cos θ, it is not a simple sum over monomials x^k y^j, and the method of this section does not apply. The transform can be found by use of the technique in Example 8.11 of Section 8.5. The solution is

    f̂(p, φ) = 2p cos φ [ ½ (1 − p²)^{1/2} + (p²/2) log((1 + (1 − p²)^{1/2})/p) ].

Clearly, this does not follow by Zernike decomposition of xr.

8.13.2.3 Evaluation in Radon Space

In the previous section, Equation 8.180 was used to find Radon transforms when the constants A_ls could be determined by knowing the feature space function. Here the idea is to determine the same constants from knowledge of the Radon space function f̂. It is easy to solve for the constants directly from Equation 8.180: multiply both sides by e^{−il′φ} U_{|l′|+2t}(p) and integrate over p and φ. Then use the orthogonality equation for the U_m in Appendix 8.A to find the constants,

    A_ls = [(|l| + 2s + 1)/(2π²)] ∫₀^{2π} ∫_{−1}^{1} f̂(p, φ) e^{−ilφ} U_{|l|+2s}(p) dp dφ.   (8.181)

Example 8.47
The simplest test of Equation 8.181 is for the inverse problem of Example 8.42. We assume that f̂ = 2(1 − p²)^{1/2} with l = s = 0; then f = 1 on the unit disk,

    A₀₀ = (1/2π²) ∫₀^{2π} ∫_{−1}^{1} 2(1 − p²)^{1/2} dp dφ = (2/π) ∫_{−1}^{1} (1 − p²)^{1/2} dp = 1.

8.13.2.4 Transform to Fourier Space

The Radon transform of the basis set given in Equation 8.179 transformed one orthogonal set to another orthogonal set. It is interesting to examine the Fourier transform of the basis set. It turns out that this also leads to another orthogonal set. Details are given by Zeitler (1974) and Deans (1983, 1993). The important result is that

    F₂{Z^{|l|}_{|l|+2s}(r) e^{ilθ}} = (−i)^l (−1)^s [J_{|l|+2s+1}(2πq)/q] e^{ilφ}.   (8.182)

This equation is obtained using the symmetric form of the Fourier transform (see Equation 8.14). These Bessel functions are orthogonal with respect to the weight function q⁻¹, and have been studied by Wilkins (1948),

    ∫₀^∞ J_{|l|+2s+1}(q) J_{|l|+2t+1}(q) q⁻¹ dq = δ_st / [2(|l| + 2s + 1)].

The Fourier space version of Equation 8.174 is

    f̃(q, φ) = Σ_{s=0}^∞ Σ_{l=−∞}^∞ (−i)^l (−1)^s A_ls e^{ilφ} J_{|l|+2s+1}(2πq)/q.   (8.183)
Example 8.48
The Fourier transform of the characteristic function of the unit disk, Example 8.42, with A₀₀ = 1 and l = s = 0, is given by J₁(2πq)/q.

Example 8.49
For the function in Example 8.44, the expansion (Equation 8.183), with A₀₀ = ½ and A₀₁ = −½, yields

    F₂{1 − r²} = J₁(2πq)/(2q) + J₃(2πq)/(2q) = J₂(2πq)/(πq²).

The last equality follows from the Bessel function identity

    J_{n−1}(z) + J_{n+1}(z) = (2n/z) J_n(z)

with n = 2 and z = 2πq.

Example 8.50
Repeat Example 8.43 with transforms to Fourier space using Equation 8.183:

    F₂{x²} = J₁(2πq)/(4q) − J₃(2πq)/(4q) − [J₃(2πq)/(2q)] cos 2φ,

    F₂{y²} = J₁(2πq)/(4q) − J₃(2πq)/(4q) + [J₃(2πq)/(2q)] cos 2φ,

    F₂{x² + y²} = J₁(2πq)/(2q) − J₃(2πq)/(2q) = J₁(2πq)/q − J₂(2πq)/(πq²).

The last part follows from the identity in Example 8.49. Also, note that the result for x² + y² follows directly from Examples 8.48 and 8.49 and linearity.
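The Bessel identity used in Examples 8.49 and 8.50 is easy to check numerically from the integral representation J_n(z) = (1/π) ∫₀^π cos(nτ − z sin τ) dτ (a standard representation, not from the text):

```python
import numpy as np

def bessel_j(n, z, m=20001):
    """J_n(z) from the integral representation
    J_n(z) = (1/pi) * integral over [0, pi] of cos(n*tau - z*sin(tau)) dtau."""
    tau = np.linspace(0.0, np.pi, m)
    g = np.cos(n * tau - z * np.sin(tau))
    return np.sum((g[1:] + g[:-1]) / 2 * np.diff(tau)) / np.pi

# Identity behind Examples 8.49 and 8.50: J_{n-1}(z) + J_{n+1}(z) = (2n/z) J_n(z),
# here with n = 2 and z = 2*pi*q:
z = 2 * np.pi * 0.7
print(abs(bessel_j(1, z) + bessel_j(3, z) - (4.0 / z) * bessel_j(2, z)) < 1e-8)
```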
8.13.2.5 Some Final Observations

It is possible to find orthogonal function expansions that transform to each other in all three spaces. In feature space the Zernike polynomials, defined on the unit disk, are orthogonal with weight function r over the interval 0 ≤ r ≤ 1. In Radon space the Chebyshev polynomials of the second kind emerge, orthogonal on the interval −1 ≤ p ≤ 1 with weight function (1 − p²)^{1/2}. These are both defined on finite intervals; consequently, as is to be expected, in Fourier space the interval is infinite, 0 ≤ q < ∞. The orthogonal functions there are no longer polynomials; they are orthogonal Bessel functions with weight function q⁻¹. The orthogonality integrals over the three spaces, including the angles, are given by
    ∫₀^{2π} ∫₀^1 [Z^{|l|}_{|l|+2s}(r) e^{ilθ}]* Z^{|l′|}_{|l′|+2s′}(r) e^{il′θ} r dr dθ = [π/(|l| + 2s + 1)] δ_{ll′} δ_{ss′},   (8.184a)

    ∫₀^{2π} ∫_{−1}^1 [U_{|l|+2s}(p) e^{ilφ}]* U_{|l′|+2s′}(p) e^{il′φ} (1 − p²)^{1/2} dp dφ = π² δ_{ll′} δ_{ss′},   (8.184b)

    ∫₀^{2π} ∫₀^∞ [J_{|l|+2s+1}(q) e^{ilφ}]* J_{|l′|+2s′+1}(q) e^{il′φ} q⁻¹ dq dφ = [π/(|l| + 2s + 1)] δ_{ll′} δ_{ss′}.   (8.184c)
8.14 Parseval Relation

In the notation of Section 8.1.2, let inner products in nD be designated by

    ⟨f, g⟩ = ∫ f*(x) g(x) dx.

If the nD Fourier transforms of f and g are designated by f̃ and g̃, the Parseval relation for the Fourier transform is given by

    ⟨f, g⟩ = ⟨f̃, g̃⟩.   (8.185)

The integral on the right is over all of Fourier space. If g = f, then the integrals are normalization integrals. This guarantees that if f is normalized to unity, then its Fourier transform is also normalized to unity.

The corresponding expression for the Radon transform is considerably more complicated, and we need to extend some of the previous work in order to give a general result. First, define the adjoint for the Radon transform. If inner products in Radon space are designated by square brackets, then R† is defined in the sense that

    ⟨f, R† G⟩ = [R f, G].   (8.186)

Here, G is a function of the variables in Radon space, G = G(p, ξ), and the adjoint operator R† converts G to a function of x, designated by g(x) = R† G(p, ξ). For example, in 2D the adjoint is just two times the backprojection operator, R† = 2ℬ.
Example 8.51
It is instructive to see how Equation 8.186 comes from the definitions,

    ⟨f, R† G⟩ = ∫ dx f(x) g(x)
              = ∫ dx f(x) ∫_{|ξ|=1} dξ G(ξ·x, ξ)
              = ∫ dx f(x) ∫_{|ξ|=1} dξ ∫_{−∞}^∞ dp G(p, ξ) δ(p − ξ·x)
              = ∫_{|ξ|=1} dξ ∫_{−∞}^∞ dp [∫ dx f(x) δ(p − ξ·x)] G(p, ξ)
              = ∫_{|ξ|=1} dξ ∫_{−∞}^∞ dp f̂(p, ξ) G(p, ξ)
              = [Rf, G].

The significance of this result is more apparent after making a generalization of Section 8.9 to include the nD inversion formula. Define the operator Y to cover both the even and odd dimension cases (Ludwig, 1966; Deans, 1983, 1993):

    Yg(t) = N_n [(∂/∂p)^{n−1} g(p)]_{p=t},   n odd,
    Yg(t) = N_n [i H{(∂/∂p)^{n−1} g(p)}]_{p=t},   n even,   (8.187)

where H denotes the Hilbert transform and N_n = ½(2πi)^{1−n}. This reduces to Equation 8.100 for n = 2 and to Equation 8.103 for n = 3. With this definition the inversion formula for the Radon transform is given by

    f = R† Y f̂ = R† Y R f.   (8.188)

This leads to the operator identity operating in feature space,

    I = R† Y R.   (8.189)

By starting with

    R† G = g = I g = R† Y R R† G,   (8.190)

it follows that the identity operating on functions in Radon space is given by

    I = Y R R†.   (8.191)

When Equations 8.186 and 8.190 are combined, we obtain the desired form for the Parseval relation for the Radon transform,

    ⟨f, g⟩ = ⟨f, R† G⟩ = [Rf, G] = [Rf, Y R R† G] = [Rf, Y R g] = [f̂, Y ĝ].

An important special case is for g = f; then

    ⟨f, f⟩ = [f̂, Y f̂].   (8.192)

Example 8.52
Verify the Parseval relation (Equation 8.192) explicitly in all three spaces for f(x, y) = e^{−x²−y²}. This looks simple, but it demonstrates the difficulty of dealing with Radon space compared with feature space and Fourier space.

Feature space:

    ⟨f, f⟩ = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−x²−y²} e^{−x²−y²} dx dy
           = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−2x²} e^{−2y²} dx dy = (π/2)^{1/2} (π/2)^{1/2} = π/2.

Fourier space: Note that q² = u² + v². Then

    ⟨f̃, f̃⟩ = ∫_{−∞}^∞ ∫_{−∞}^∞ [π e^{−π²(u²+v²)}]² du dv
            = π² ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−2π²u²} e^{−2π²v²} du dv
            = π² [π/(2π²)]^{1/2} [π/(2π²)]^{1/2} = π/2.

Radon space: Verification in Radon space is not as easy as the other two cases, due to the presence of the Hilbert transform. The entire calculation is shown in detail, because there are some tricky parts. First note that f̂ = √π e^{−p²}, so ∂f̂/∂p = −2√π p e^{−p²}. Because there is no angle dependence, the integral over ξ gives ∫_{|ξ|=1} dξ = 2π. Then Equation 8.192, with Y f̂ = (1/4π) H{∂f̂/∂p} for n = 2, becomes

    [f̂, Y f̂] = ∫_{−∞}^∞ dp e^{−p²} p.v. ∫_{−∞}^∞ ds (s e^{−s²})/(s − p).

Now the problem is to demonstrate that this double integral yields π/2. Change the order of integration to get

    [f̂, Y f̂] = ∫_{−∞}^∞ ds s e^{−s²} p.v. ∫_{−∞}^∞ dp e^{−p²}/(s − p).

From page 227 of Davis and Rabinowitz (1984), this becomes

    [f̂, Y f̂] = ∫_{−∞}^∞ ds s e^{−s²} · 2√π s e^{−s²} ∫₀^1 dp e^{s²p²}.

Another change in the order of integration, followed by evaluation of the definite integrals (Appendix 8.A), yields the desired result,

    [f̂, Y f̂] = 2√π ∫₀^1 dp ∫_{−∞}^∞ ds s² e^{−(2−p²)s²}
              = 2√π ∫₀^1 dp (√π/2)(2 − p²)^{−3/2}
              = π ∫₀^1 dp/(2 − p²)^{3/2}
              = π [p/(2(2 − p²)^{1/2})]₀^1 = π/2.
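The feature space and Fourier space sides of Example 8.52 are easy to confirm numerically; the grid sizes below are arbitrary:

```python
import numpy as np

# Example 8.52, feature space:  <f, f> = integral of e^{-2x^2 - 2y^2} = pi/2.
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
f = np.exp(-X**2 - Y**2)
feature = np.sum(f * f) * dx * dx

# Fourier space, with ftilde(u, v) = pi * e^{-pi^2 (u^2 + v^2)} in the
# ordinary-frequency convention used in the text:
u = np.linspace(-3.0, 3.0, 1201)
du = u[1] - u[0]
U, V = np.meshgrid(u, u)
ft = np.pi * np.exp(-np.pi**2 * (U**2 + V**2))
fourier = np.sum(ft * ft) * du * du

print(abs(feature - np.pi / 2) < 1e-8, abs(fourier - np.pi / 2) < 1e-8)
```

The Radon-space side involves a principal-value integral and is best handled analytically, as in the text.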
8.15 Generalizations and Wavelets

Mathematical generalization of the Radon transform and some of the more technical applications are discussed in the recent publications edited by Grinberg and Quinto (1990) and Gindikin and Michor (1994). There are many other references, and the reader interested in some of the more abstract treatments will find these two books good entry points to the literature.

A generalization that has important applicability in the area of image reconstruction in nuclear medicine is known as the attenuated Radon transform. One way to define this transform is to modify Equation 8.4 to read

    f̂_μ(p, φ) = ∫_{−∞}^∞ f(pξ + tξ⊥) exp[−∫_t^∞ μ(pξ + sξ⊥) ds] dt.   (8.193)

If the attenuation term μ is a constant, say μ₀, that vanishes outside a finite region, then this equation reduces to what is often referred to as the exponential Radon transform,
    f̂_{μ₀}(p, φ) = ∫_{−∞}^∞ e^{μ₀ t} f(pξ + tξ⊥) dt.   (8.194)
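A discrete sketch of Equation 8.194 for the single angle φ = 0, where the line x = p runs along the second array axis; the axis conventions and the placement of t = 0 are my assumptions, for illustration only:

```python
import numpy as np

def exp_radon_phi0(f, mu0, dt=1.0):
    """Equation 8.194 at phi = 0 for a discrete image f[ix, it]: the line x = p
    runs along the second axis, and t is measured from the middle sample of
    that axis.  (Axis conventions and the origin of t are illustrative.)"""
    nt = f.shape[1]
    t = (np.arange(nt) - nt // 2) * dt
    return (f * np.exp(mu0 * t)).sum(axis=1) * dt

rng = np.random.default_rng(1)
img = rng.random((8, 8))
# With mu0 = 0 the exponential transform reduces to the ordinary ray sum:
print(np.allclose(exp_radon_phi0(img, 0.0), img.sum(axis=1)))
```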
These transforms are fundamental in single photon emission computed tomography (SPECT), and to a lesser degree in positron emission tomography (PET), where corrections can be introduced to compensate for attenuation (Budinger et al., 1979). For details see Natterer (1979, 1986), Tretiak and Metz (1980), Clough and Barrett (1983), Hawkins et al. (1988), Hazou and Solmon (1989), and Nievergelt (1991). One of the most recent and certainly one of the most exciting new developments is the use of the wavelet transform in connection with the Radon transform. The application of wavelets to inversion of the Radon transform has been investigated by Kaiser and Streater (1992). They make use of a change of variables to connect a generalized version of the Radon transform to a continuous wavelet transform. Work along related lines was done by Holschneider (1991), where the inverse wavelet transform is used to obtain a pointwise and uniformly convergent inversion formula for the Radon transform. Berenstein and Walnut (1994) use the theory of the continuous wavelet transform to derive inversion formulas for the Radon transform. The inversion formula they obtain is ‘‘local’’ in even dimensions in the following sense (stated for 2D): to recover f to a given accuracy in a circle of radius r about a point (x₀, y₀), it is sufficient to know only those projections through a circle of radius r + a about (x₀, y₀) for some a > 0. The accuracy increases as a increases. In a related paper, Walnut (1992) demonstrates how the Gabor and wavelet transforms relate to the Radon transform. He finds inversion formulas for f̂ based on Gabor and wavelet expansions, by a direct method and by the filtered backprojection method. More work on wavelet localization of the Radon transform is in the papers by Olson and DeStefano (1994) and Olson (1995).
As mentioned in Section 8.9, they emphasize that one problem with the Radon transform in two dimensions (most relevant in medical imaging) is that the inversion formula depends globally on the line integrals of the object function f. A fundamentally important aspect of their work is that they are able to develop a stable algorithm that uses properties of wavelets to ‘‘essentially localize’’ the Radon transform. This means collecting line integrals that pass through the region of interest, plus a small number of integrals not through the region. Recent work by Rashid-Farrokhi et al. (1997) makes use of the properties of wavelets with many vanishing moments to reconstruct local regions of a cross section using almost completely local data. A comprehensive discussion of the Radon transform and local tomography is given in the book by Ramm and Katsevich (1996). The work by Donoho (1992) on nonlinear solution of linear inverse problems by wavelet–vaguelette decomposition is relevant
to the inversion of Abel-type transforms and Radon transforms. This method serves as a substitute for the singular value decomposition of an inverse problem and applies to a large class of ill-posed inverse problems. Another important applied generalization is related to fan-beam and cone-beam tomography. Recent work in these areas can be found in papers by Natterer (1993), Kudo and Saito (1990, 1991), Rizo et al. (1991), Gullberg et al. (1991), and in the book by Natterer (1986). In recent work by Wood and Barry (1994), the Wigner distribution is combined with the Radon transform to facilitate the analysis of multicomponent linear FM signals. These authors provide several references to other applications of this combined transform, now known as the Radon–Wigner transform.
8.16 Discrete Periodic Radon Transform

A natural extension of the continuous Radon transform that can be used for discrete data sets has been developed by Gertner (1988). Further work by Hsung et al. (1996) demonstrates many details regarding both the forward and inverse discrete periodic Radon transform (DPRT). The application of this transform to N × N sets of data when N is prime will be demonstrated here. This is the simplest case; for further generalizations we refer the reader to Hsung et al. (1996). Clearly, a prime factor algorithm is not greatly restrictive, since there are primes just a little greater than any power of two and zeros can be added with no consequence of importance. The purpose here is to make as much contact with the continuous transform as possible while defining a discrete transform and its inverse. The extension here is applied directly to the continuous transform defined in Section 8.2.1 for two dimensions.
8.16.1 The Discrete Version of the Image

The function f defines the image in terms of coordinates (x, y). Here we let x and y be discrete and vary from 0 to P − 1, where P is prime. Moreover, for values of x and y greater than or equal to P we define the periodic extension of f such that for positive integers l, n

f(x + lP, y + nP) = f(x, y).

This means that knowledge of f(x, y) for x and y in the set {0, 1, 2, …, P − 1} serves to define f everywhere. For example, suppose P = 3; then f(4, 1) = f(1, 1) and f(6, 8) = f(0, 2). To make this more precise, if the residue of a modulo P is designated by a mod P ≡ [a]_P, then

f(x, y) = f([x]_P, [y]_P).

This square bracket notation is especially useful for some of the formulas. In terms of an image, this amounts to reproducing the image over and over again in both the vertical and horizontal directions. This will be illustrated when interpreting the discrete transform.

8.16.2 A Discrete Transform

The prime factor DPRT is defined by three equations:

f̂(b, ⊥) = Σ_{y=0}^{P−1} f(b, y)   (vertical)

f̂(b, 0) = Σ_{x=0}^{P−1} f(x, b)   (horizontal)

f̂(b, m) = Σ_{x=0}^{P−1} f(x, [b + mx]_P),   m = 1, 2, …, P − 1.

In all of these equations b = 0, 1, …, P − 1. Note that for computational and coding purposes the last two equations can be combined by letting m vary from 0 to P − 1. At this point, an example followed by a generalization will be especially valuable.

Example 8.53

Suppose P = 5 and we wish to calculate f̂(1, 2). Set b = 1 and m = 2; then

f̂(1, 2) = f(0, 1) + f(1, 3) + f(2, 0) + f(3, 2) + f(4, 4).

The graphical interpretation is shown in Figure 8.13, where the periodic extension is shown explicitly on the right. Also note that for this P = 5 case: tan θ = m for m = 1, 2 and tan θ = m − P for m = 3, 4.

FIGURE 8.13 Use of the periodic property.
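As a concrete check on these definitions, the three sums can be transcribed directly. A minimal sketch (the function and variable names are ours, not from the text), with the image stored as a list of rows f[x][y]:

```python
# Forward discrete periodic Radon transform (DPRT) of a P x P image, P prime.
# Direct transcription of the defining sums; names are illustrative only.
def dprt(f):
    P = len(f)
    # fhat[m][b] = sum over x of f(x, [b + m*x] mod P), for m = 0..P-1;
    # m = 0 reproduces the horizontal projection sum_x f(x, b).
    fhat = [[sum(f[x][(b + m * x) % P] for x in range(P)) for b in range(P)]
            for m in range(P)]
    # The extra "vertical" projection: sum over y of f(b, y).
    fhat_perp = [sum(f[b][y] for y in range(P)) for b in range(P)]
    return fhat, fhat_perp
```

For P = 5 this reproduces Example 8.53: the entry for b = 1, m = 2 equals f(0,1) + f(1,3) + f(2,0) + f(3,2) + f(4,4). Each projection sums every pixel exactly once, so all P + 1 projections have the same total.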
The generalization of the previous example to arbitrary values of P follows by induction. The slope variable takes on the values

m = 1, 2, …, (P − 1)/2, (P + 1)/2, …, P − 1.

The transform f̂(b, m) is given by the defining sum f̂(b, m) = Σ_{x=0}^{P−1} f(x, [b + mx]_P). The angles θ (slope angle) and φ (used in the continuous case) are related by …

… s0 > 1 is a fixed dilation step. The discrete translation factor is expressed as τ = k τ0 s0^i, where k is an integer; the translation depends on the dilation s0^i. The corresponding discrete wavelets are written as

h_{i,k}(t) = s0^{−i/2} h((t − k τ0 s0^i)/s0^i) = s0^{−i/2} h(s0^{−i} t − k τ0)   (10.26)

The discrete wavelet transform with the dyadic scaling factor s0 = 2 is effective in computer implementation.
10.3.1 Time-Scale Space Lattices

The discrete wavelet transform evaluated at discrete times and scales performs a sampling of the time-scale space. The time-scale joint representation of a discrete wavelet transform is a grid along the scale and time axes. To show the sampling, we consider the localization points of the discrete wavelets in the time-scale space. The sampling along the time axis has the interval τ0 s0^i, which is proportional to the scale s0^i. The time sampling step is small for small-scale wavelet analysis and large for large-scale wavelet analysis. With the varying scale, the wavelet analysis is able to "zoom in" on singularities of the signal using very concentrated wavelets of very small scale; for this detailed analysis the time sampling step is very small. As only the signal detail is of interest, only a few small time translation steps are needed. Therefore, the wavelet analysis provides an efficient way to represent transient signals. There is an analogy between wavelet analysis and the microscope: the scale factor s0^i corresponds to the magnification
or the resolution of the microscope. The translation factor τ corresponds to the location where one makes an observation with the microscope. If one looks at very small details, the magnification and the resolution must be large, which corresponds to a large negative i. In this case the wavelet is very concentrated, and the translation step is small, which justifies the choice τ = k τ0 s0^i. For large positive i, the wavelet is spread out, and the large translation steps k τ0 s0^i are adapted to the wide width of the wavelet analysis function. This is another interpretation of the constant-Q analysis property of the wavelet transform, discussed in Section 10.2.3. The behavior of the discrete wavelets depends on the scale and time steps s0 and τ0. When s0 is close to 1 and τ0 is small, the discrete wavelets are close to the continuous wavelets. For a fixed scale step s0, the localization points of the discrete wavelets along the scale axis are logarithmic, as log s = i log s0, as shown in Figure 10.7. The frequency sampling interval has the octave, as in music, as its unit: one octave is the interval between two frequencies having a ratio of two, and a one-octave frequency band has a bandwidth equal to one octave. The discrete time step is τ0 s0^i; we usually choose τ0 = 1. Hence, the time sampling step is a function of the scale and is equal to 2^i for the dyadic wavelets with s0 = 2. Along the τ-axis the localization points of the discrete wavelets depend on the scale. The intervals between the localization points at the same scale are equal and proportional to the scale s0^i. The translation steps are small for the small-scale wavelets (small or negative i) and large for the large-scale wavelets (large positive i).
Figure 10.7 shows the localization of the discrete wavelets in the time-scale space: the scale axis is logarithmic, log s = i log 2, and the localization is uniform along the time axis τ, with time steps proportional to the scale factor s = 2^i.
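The lattice just described can be generated directly; a small sketch for the dyadic case s0 = 2, τ0 = 1 (the function name and ranges are ours):

```python
# Localization points (tau, s) of the dyadic wavelet lattice:
# scale s = 2**i, translations tau = k * tau0 * 2**i with tau0 = 1.
# Illustrative sketch; names and ranges are ours.
def lattice(i_max, t_max):
    points = []
    for i in range(i_max + 1):           # scale levels i = 0..i_max
        s = 2 ** i
        for k in range(t_max // s + 1):  # translations within [0, t_max]
            points.append((k * s, s))    # the time step doubles at each level
    return points
```

At scale 2^i the points are spaced 2^i apart, so each coarser level holds roughly half as many points as the previous one.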
10.3.2 Wavelet Frame

With the discrete wavelet basis a continuous function f(t) is transformed into a sequence of wavelet coefficients:
FIGURE 10.7 Localization of the discrete wavelets in the time-scale space.
W_f(i, k) = ∫ f(t) h*_{i,k}(t) dt = ⟨f, h_{i,k}⟩   (10.27)
A natural question arising for the discrete wavelet transform is how well the function f(t) can be reconstructed from the discrete sequence of wavelet coefficients:

f(t) = A Σ_i Σ_k W_f(i, k) h_{i,k}(t)   (10.28)
where A is a constant that does not depend on f(t). Obviously, if s0 is close enough to 1 and τ0 is small enough, the set of wavelets approaches a continuum, and the reconstruction (Equation 10.28) is then close to the inverse continuous wavelet transform; the signal reconstruction holds without restrictive conditions other than the admissibility condition on the wavelet h(t). On the other hand, if the sampling is sparse, say s0 = 2 and τ0 = 1, the reconstruction (Equation 10.28) can be achieved only for some special choices of the wavelet h(t). The theory of wavelet frames provides a general framework that covers the two extreme situations above. It permits one to balance the redundancy, which is the sampling density in the scale-time space, against the restrictions on the wavelet h(t) needed for the reconstruction scheme (Equation 10.28) to work. If the redundancy is large (high over-sampling), only mild restrictions are put on the wavelet basis; if the redundancy is small (critical sampling), the wavelet basis functions are very constrained. Daubechies [8] has proven that the necessary and sufficient condition for the stable reconstruction of a function f(t) from its wavelet coefficients W_f(i, k) is that the energy, which is the sum of the squared moduli of W_f(i, k), must lie between two positive bounds:

A‖f‖² ≤ Σ_{i,k} |⟨f, h_{i,k}⟩|² ≤ B‖f‖²   (10.29)
where ‖f‖² is the energy of f(t), A > 0, B < ∞, and A, B are independent of f(t). When A = B, the energy of the wavelet transform is proportional to the energy of the signal. This is similar to the energy conservation relation (Equation 10.18) of the continuous wavelet transform. When A ≠ B there is still a bounded relation between the energies of the signal and its wavelet transform. When Equation 10.29 is satisfied, the family of wavelet basis functions {h_{i,k}(t)} with i, k ∈ Z is referred to as a frame, and A, B are termed frame bounds. Hence, when the ratio between the energy of the function and the energy of its discrete transform is bounded between something greater than zero and less than infinity for all square integrable functions, the transform is complete: no information is lost and the signal can be reconstructed from its decomposition. Daubechies has shown that the accuracy of the reconstruction is governed by the frame bounds A and B. The frame bounds A and B can be computed from the dilation step s0, the translation
step τ0, and the basis function h(t). The closer A and B, the more accurate the reconstruction. When A = B the frame is tight and the discrete wavelets behave exactly like an orthonormal basis. When A = B = 1, Equation 10.29 is simply the energy conservation equivalent to the Parseval relation of the Fourier transform. It is important to note that the reconstruction works even when the wavelets are not orthogonal to each other. When A ≠ B the reconstruction can still be exact if for reconstruction we use a synthesis function basis different from the decomposition function basis used for analysis; the former constitutes the dual frame of the latter.
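For an orthonormal wavelet family the frame is tight with A = B = 1, and the energy identity of Equation 10.29 can be observed numerically. A sketch using the orthonormal Haar system on a finite signal (the function name is ours; the coarsest scaling coefficient is kept so that the finite transform is complete):

```python
import math

# Tight-frame (Parseval) check: for an orthonormal Haar decomposition the
# energy of the coefficients equals the energy of the signal (A = B = 1).
def haar_coeffs(f):
    s = 1 / math.sqrt(2)
    coeffs, c = [], list(f)
    while len(c) > 1:
        d = [(c[2*k] - c[2*k+1]) * s for k in range(len(c) // 2)]
        c = [(c[2*k] + c[2*k+1]) * s for k in range(len(c) // 2)]
        coeffs.extend(d)          # wavelet coefficients at this scale
    coeffs.extend(c)              # single coarsest scaling coefficient
    return coeffs

f = [4.0, 2.0, 5.0, 5.0, 1.0, 0.0, 3.0, 7.0]
energy_f = sum(x * x for x in f)
energy_w = sum(w * w for w in haar_coeffs(f))
# energy_f == energy_w up to rounding: the frame bounds coincide at A = B = 1.
```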
10.4 Multiresolution Signal Analysis

Multiresolution signal analysis is a technique that permits us to analyze signals in multiple frequency bands. Two earlier approaches to multiresolution analysis are the Laplacian pyramid and subband coding, which were developed independently of the wavelet transform in the late 1970s and early 1980s. Meyer and Mallat [9] found in 1986 that the orthonormal wavelet decomposition and reconstruction can be implemented in the multiresolution signal analysis framework.
10.4.1 Laplacian Pyramid

The multiresolution signal analysis was first proposed by Burt and Adelson in 1983 [10] for image decomposition, coding, and reconstruction.

10.4.1.1 Gaussian Pyramid
The multiresolution signal analysis is based on a weighting function, also called a smoothing function. The original data, represented as a sequence of real numbers c0(n), n ∈ Z, are averaged over neighboring pixels by the weighting function, which can be a Gaussian function used as the impulse response of a low-pass filter. The correlation of the signal with the weighting function reduces the resolution of the signal. Hence, after the averaging process the data sequence is down-sampled by a factor of two. The resultant data sequence c1(n) is the averaged approximation of c0(n). The averaging and down-sampling process can be iterated. For instance, it is applied at the level i = 1 to the averaged approximation data c1(n) with the smoothing function, which is also dilated by a scale factor of two; then to c2(n) with the smoothing function again dilated by two, and so on. In the iteration process the smoothing function is dilated with dyadic scales 2^i, i ∈ Z, to average the signals at multiple resolutions. Hence, the original data are represented by a set of successive approximations, each corresponding to a smoothed version of the original data at a given resolution. Assume that the original data are of size 2^N. Then the smoothed sequence c1(n) has a reduced size 2^{N−1}. By iterating
the process, the successive averaging and down-sampling result in a set of data sequences of exponentially decreasing size. If we imagine these data sequences stacked on top of one another, they constitute a hierarchical pyramid structure with N pyramid levels. The original data c0(n) are at the bottom, or zero level, of the pyramid. At the ith pyramid level the signal sequence is obtained from the data sequence at the (i−1)th level by

c_i(n) = Σ_k p(k − 2n) c_{i−1}(k)   (10.30)
where p(n) is the weighting function. The operation described in Equation 10.30 is a correlation between c_{i−1}(k) and p(k) followed by down-sampling by two. Note that a shift by two in c_{i−1}(k) results in a shift by one in c_i(n): the sampling interval at level i is double that of the previous level i−1, and the sequence c_i(n) is half as long as its predecessor c_{i−1}(n). When the weighting function is the Gaussian function, the pyramid of the smoothed sequences is referred to as the Gaussian pyramid. Figure 10.8 shows a part of the Gaussian pyramid.

10.4.1.2 Laplacian Pyramid
By the low-pass filtering with the weighting function p(n), the high-frequency detail of the signal is lost. The lost information can be recovered by computing the difference between two successive Gaussian pyramid levels of different size. In this process we first expand the data sequence c_i(n) in two steps: (1) insert a zero between every two samples of c_i(n), that is, up-sample c_i(n) by two; (2) interpolate the sequence with a filter whose impulse response is p′(n). The expansion results in a sequence c′_{i−1}(n) that has the same size as c_{i−1}(n). In general c′_{i−1}(n) ≠ c_{i−1}(n). The difference can be represented by a sequence d_{i−1}(n):

d_{i−1}(n) = c_{i−1}(n) − c′_{i−1}(n)   (10.31)
which contains the lost detail information of the signal. All the differences between successive Gaussian pyramid levels form a set of sequences d_i(n) that constitute another pyramid, referred to as the Laplacian pyramid. The original signal can be reconstructed exactly by summing the Laplacian pyramid levels. The Laplacian pyramid contains the compressed signal data in the sense that the pixel-to-pixel correlation of the signal is removed by the averaging and subtracting process. If the original data form an image that is positively valued, then the values at the Laplacian pyramid nodes are both positive and negative; their absolute values are smaller and shifted toward zero, so they can be represented by fewer bits. The multiresolution analysis is useful for image coding and compression. The Laplacian pyramid signal representation is redundant: one stage of the pyramid decomposition leads to a half-size, low-resolution signal and a full-size difference signal, resulting in
FIGURE 10.8 Multiresolution analysis Gaussian pyramid. The weighting function is p(n) with n = 0, ±1, ±2. The even- and odd-numbered nodes in c0(n) have different connections to the nodes in c1(n).
FIGURE 10.9 Schematic pyramid decomposition and reconstruction.
an increase in the number of signal samples by 50%. Figure 10.9 shows the scheme for building the Laplacian pyramid.
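One decomposition and reconstruction stage can be sketched as follows, with a simple 3-tap weighting function standing in for the Gaussian (the filter values and function names are illustrative, not from the text; the sequence ends are handled periodically):

```python
# One Laplacian pyramid stage: reduce (Eq. 10.30), expand, difference
# (Eq. 10.31), and exact reconstruction by adding the expanded level back.
p = [0.25, 0.5, 0.25]   # weighting function p(-1), p(0), p(1) (illustrative)

def reduce(c):
    # Correlate with p(n) and down-sample by two (periodic boundary).
    N = len(c)
    return [sum(p[j] * c[(2*n + j - 1) % N] for j in range(3))
            for n in range(N // 2)]

def expand(c):
    # Up-sample by two (insert zeros), then interpolate with p'(n) = 2 p(n).
    N = 2 * len(c)
    up = [0.0] * N
    for n, v in enumerate(c):
        up[2 * n] = v
    return [sum(2 * p[j] * up[(n + j - 1) % N] for j in range(3))
            for n in range(N)]

c0 = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
c1 = reduce(c0)                                # Gaussian pyramid level
d0 = [a - b for a, b in zip(c0, expand(c1))]   # Laplacian level, Eq. (10.31)
rec = [a + b for a, b in zip(d0, expand(c1))]  # reconstruction: rec == c0
```

The redundancy noted above is visible here: one stage stores len(c1) + len(d0) = 4 + 8 = 12 samples for an 8-sample input.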
10.4.2 Subband Coding

Subband coding [11] is a multiresolution signal processing approach that is different from the Laplacian pyramid. The basic objective of subband coding is to divide the signal spectrum into independent subbands in order to treat the signal in individual subbands for different purposes. Subband coding is an efficient tool for multiresolution spectral analysis and has been successful in speech signal processing.

10.4.2.1 Analysis Filtering
Given an original data sequence c0(n), n ∈ Z, the lower resolution approximation of the signal is derived by low-pass filtering with a filter having impulse response p(n):
c1(n) = Σ_k p(k − 2n) c0(k)   (10.32)
which is the correlation between c0(k) and p(k), down-sampled by a factor of two. The process is exactly the same as the averaging process in the Laplacian pyramid decomposition, as described in Equation 10.30. In order to compute the detail information that is lost by the low-pass filtering with p(n), a high-pass filter with impulse response q(n) is applied to the data sequence c0(n) as

d1(n) = Σ_k q(k − 2n) c0(k)   (10.33)
which is the correlation between c0(k) and q(k), down-sampled by a factor of two. Hence, the subband decomposition leads to a half-size, low-resolution signal and a half-size detail signal.
10.4.2.2 Synthesis Filtering
To recover the signal c0(n) from the down-sampled approximation c1(n) and the down-sampled detail d1(n), both c1(n) and d1(n) are up-sampled by a factor of two. The up-sampling is performed by first inserting a zero between each pair of nodes in c1(n) and d1(n) and then interpolating with the filters p′(n) and q′(n), respectively. Finally, adding the two up-sampled sequences together yields c0(n). Figure 10.10 shows the scheme of the two-channel subband system. The reconstructed signal c′0(n) is, in general, not identical to the original c0(n) unless the filters meet a specific constraint: the analysis filters p(n), q(n) and the synthesis filters p′(n), q′(n) must satisfy the perfect reconstruction condition, which will be discussed in Section 10.6.2. The scheme shown in Figure 10.10 is a two-channel system with a bank of two-band filters. The two-band filter bank can be extended to an M-band filter bank by using a bank of M analysis filters followed by down-sampling, and a bank of M up-samplers followed by M synthesis filters. The two-band and M-band filter banks can also be iterated: the filter bank divides the input spectrum into two equal subbands, yielding the low (L) and high (H) bands; the two-band filter bank can then be applied again to these (L) and (H) half-bands to generate the quarter bands (LL), (LH), (HL), and (HH). The scheme of this multiresolution analysis has a tree structure.
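A minimal two-channel sketch, using the orthonormal Haar pair as one example of filters that satisfy the perfect reconstruction condition (the function names are ours):

```python
import math

# Two-channel subband analysis/synthesis with the orthonormal Haar filters.
s = 1 / math.sqrt(2)
p = [s, s]        # low-pass analysis filter p(n)
q = [s, -s]       # high-pass analysis filter q(n)

def analyze(c0):
    # Eqs. (10.32)/(10.33): correlations down-sampled by two.
    N = len(c0) // 2
    c1 = [p[0] * c0[2*n] + p[1] * c0[2*n + 1] for n in range(N)]
    d1 = [q[0] * c0[2*n] + q[1] * c0[2*n + 1] for n in range(N)]
    return c1, d1

def synthesize(c1, d1):
    # Up-sample by two, interpolate with the synthesis filters, and add.
    out = []
    for a, d in zip(c1, d1):
        out.append(p[0] * a + q[0] * d)
        out.append(p[1] * a + q[1] * d)
    return out
```

With this orthonormal pair the synthesis filters equal the analysis filters and the reconstruction is exact.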
10.4.3 Scale and Resolution

In the multiresolution signal analysis each layer of the pyramid is generated by a bank of low-pass and high-pass filters at a given scale, corresponding to the scale of that layer. In general, scale and resolution are different concepts. A scale change of a continuous signal does not alter its resolution; the resolution of a continuous signal is related to its frequency bandwidth. In a geographic map a large scale means a global view and a small scale means a detailed view. However, if the size of the map is fixed, then enlarging the map scale requires reducing the resolution. In the multiresolution signal analysis the term scale refers to the scale of the low-pass and high-pass filters. At each scale, the down-sampling by two that follows the low-pass filtering halves the resolution. When a signal is transferred from a scale level i to a
larger scale level i + 1, its resolution is reduced by two. The size of the approximation signal is also reduced by two. Therefore, each scale level corresponds to a specific resolution: the larger the scale, the lower the resolution.
10.5 Orthonormal Wavelet Transform

The first orthonormal wavelet basis was found by Meyer when he looked for orthonormal wavelets that are localized in both the time and frequency domains. The multiresolution Laplacian pyramid ideas of hierarchically averaging the signal and computing the differences led Mallat and Meyer to view the orthonormal wavelet bases as a vehicle for multiresolution analysis [9]. The multiresolution analysis is now a standard way to construct orthonormal wavelet bases and to implement the orthonormal wavelet transforms; most orthonormal wavelet bases are now constructed in the multiresolution analysis framework. In this framework the dyadic orthonormal wavelet decomposition and reconstruction use the tree algorithm, which permits very fast computation of the orthonormal wavelet transform on a computer.
10.5.1 Multiresolution Signal Analysis Bases

10.5.1.1 Scaling Functions
The multiresolution analysis is based on the scaling function. The scaling function is a continuous, square integrable and, in general, real-valued function. The scaling function does not satisfy the wavelet admissibility condition: the mean value of the scaling function φ(t) is not equal to zero, and is usually normalized to unity. The basic scaling function φ(t) is dilated by dyadic scale factors, and at each scale level it is shifted by discrete translation factors as

φ_{i,k}(t) = 2^{−i/2} φ(2^{−i} t − k)   (10.34)
where k ∈ Z and the coefficient 2^{−i/2} is a normalization constant. Here the scaling function basis is normalized in the L²(R) norm, similar to the normalization of the wavelet described by Equation 10.2. We shall restrict ourselves to the dyadic scaling with
FIGURE 10.10 Schematic two-channel subband coding decomposition and reconstruction.
the scaling factor 2^i for i ∈ Z. The scaling functions of all scales 2^i with i ∈ Z generated from the same φ(t) are all similar in shape. At each resolution level i the set of the discrete translates of the scaling function, φ_{i,k}(t), forms a function basis that spans a subspace V_i. A continuous signal function may be decomposed on the scaling function bases; at each resolution level i, the decomposition is evaluated at discretely translated points. The scaling functions play the role of the averaging and smoothing function in the multiresolution signal analysis. At each resolution level, the correlation between the scaling function and the signal produces the averaged approximation of the signal, sampled at a set of discrete points. After the averaging by the scaling functions, the signal is down-sampled by a factor of two, which halves the resolution. Then the approximated signal is decomposed on the dilated scaling function basis at the next coarser resolution.

10.5.1.2 Wavelets
In the multiresolution analysis framework the wavelet bases are generated from the scaling function bases. To emphasize the dependence of the wavelets on the scaling functions in this framework, from now on we change notation and use ψ(t) to denote the wavelet in the discrete wavelet transform, instead of h(t) used in the previous sections for the continuous wavelet transform. Similarly to the scaling function, the wavelet is scaled with dyadic scale factors and translated at each resolution level as

ψ_{i,k}(t) = 2^{−i/2} ψ(2^{−i} t − k)   (10.35)

where k ∈ Z and the coefficient 2^{−i/2} is a normalization constant. The wavelet basis is normalized in the L²(R) norm for energy normalization, as discussed in Section 10.1. At each resolution level i, the set of the discrete translates of the wavelets, ψ_{i,k}(t), forms a function basis that spans a subspace W_i. A signal function may be decomposed on the wavelet bases; at each resolution level, the decomposition is evaluated at discretely translated points. In the multiresolution analysis framework the orthonormal wavelet transform is the decomposition of a signal into approximations at lower and lower resolutions, with less and less detail information, by projections of the signal onto the orthonormal scaling function bases. The differences between each two successive approximations are computed with the projections of the signal onto the orthonormal wavelet bases, as shown next.
10.5.1.3 Two-Scale Relation
The two-scale relation is the basic relation in the multiresolution analysis with dyadic scaling. The scaling functions and the wavelets form two bases at every resolution level by their discrete translates. Let φ(t) be the basic scaling function whose translates with integer steps span the subspace V0. At the next finer resolution the subspace V_{−1} is spanned by the set {φ(2t − k)}, which is generated from the scaling function φ(t) by a contraction by a factor of two and by translations with half-integer steps. The set {φ(2t − k)} can also be considered as the union of two sets of even and odd translates, {φ(2t − 2k)} and {φ[2t − (2k + 1)]}, both with integer steps k ∈ Z. The scaling function at resolution i = 0 may be decomposed as a linear combination of the scaling functions at the finer resolution level, as

φ(t) = Σ_k p(k) φ(2t − k)   (10.36)
where the discrete decomposition coefficient sequence p(k) is called the interscale coefficients; it will be used in the wavelet decomposition as the discrete low-pass filter and will be discussed in Section 10.5.4. This decomposition may be considered as the projection of the basis function φ(t) ∈ V0 onto the finer resolution subspace V_{−1}. The two-scale relation, also called the two-scale difference equation (Equation 10.36), is the fundamental equation in the multiresolution analysis. The basic ingredient in the multiresolution analysis is a scaling function such that the two-scale relation holds for some p(k). The sequence p(k) of interscale coefficients in the two-scale relation governs the structure of the scaling function φ(t). Let ψ(t) be the basic wavelet; it can also be expanded on the scaling function basis {φ(2t − k)} in the finer resolution subspace V_{−1} as

ψ(t) = Σ_k q(k) φ(2t − k)   (10.37)
where the sequence q(k) is the interscale coefficients that will be used in the wavelet decomposition as the discrete high-pass filter, discussed in Section 10.5.4. Equation 10.37 is part of the two-scale relation and is useful for generating the wavelets from the scaling functions, as shown next. On the left-hand sides of the two-scale relations (Equations 10.36 and 10.37), φ(t) and ψ(t) are the continuous scaling function and wavelet; on the right-hand sides, the interscale coefficients p(k) and q(k) are discrete, with k ∈ Z. The two-scale relations thus express the relations between the continuous scaling function φ(t) and wavelet ψ(t) and the discrete sequences of interscale coefficients p(k) and q(k).
10.5.2 Orthonormal Bases

We show in this section first how the discrete translates of the scaling function and of the wavelet form orthonormal bases at each given scale level, and then how the scaling function generates the multiresolution analysis.

10.5.2.1 Orthonormal Scaling Function Basis
At a given scale level the discrete translates of a basic scaling function φ(t) can form an orthonormal basis, if φ(t) satisfies
some orthonormality conditions. The scaling function can be made orthonormal to its own translates. When a basic scaling function φ(t) has discrete translates that form an orthonormal set {φ(t − k)}, we have

⟨φ_{i,k}, φ_{i,k′}⟩ = 2^{−i} ∫ φ(2^{−i}t − k) φ(2^{−i}t − k′) dt = δ_{k,k′}   (10.38)

10.5.2.2 Orthonormal Wavelet Basis
Similarly, the discrete translates of a basic wavelet ψ(t) can form an orthonormal basis if ψ(t) satisfies some orthonormality condition. At the same scale level, the wavelet can be made orthonormal to its own translates. When a basic wavelet ψ(t) has discrete translates that form an orthonormal set {ψ(t − k)}, we have

⟨ψ_{i,k}, ψ_{i,k′}⟩ = 2^{−i} ∫ ψ(2^{−i}t − k) ψ(2^{−i}t − k′) dt = δ_{k,k′}   (10.39)

10.5.2.3 Cross-Orthonormality
The orthonormal wavelet basis is not only orthogonal to its own translates at the same scale level; the set of the wavelet translates is also orthogonal to the set of the scaling function translates at the same scale level:

⟨φ_{i,k}, ψ_{i,n}⟩ = 2^{−i} ∫ φ(2^{−i}t − k) ψ(2^{−i}t − n) dt = 0   (10.40)

for all k and n ∈ Z.

Example: Orthonormal Haar Bases
A simple example of an orthonormal wavelet basis is the historical Haar wavelet. The Haar scaling function is the simple rectangle function on the interval [0, 1):

φ(t) = 1 for 0 ≤ t < 1, and φ(t) = 0 otherwise.

It satisfies

φ(t) = φ(2t) + φ(2t − 1).

This is the two-scale relation (Equation 10.36) for the Haar basis, with the interscale coefficients p(0) = 1, p(1) = 1, and p(k) = 0 otherwise.

…

⟨φ_{1,k}, f⟩ = ⟨φ_{1,k}, P1 f⟩   (10.45)
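The Haar orthonormality relations (Equations 10.38 through 10.40) and the Haar two-scale relation can be checked numerically with crude Riemann sums; a sketch (function names are ours):

```python
# Numerical check of the Haar relations with a simple Riemann sum.
def phi(t):   # Haar scaling function: rectangle on [0, 1)
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def psi(t):   # Haar wavelet
    return 1.0 if 0.0 <= t < 0.5 else (-1.0 if 0.5 <= t < 1.0 else 0.0)

def inner(f, g, a=-4.0, b=4.0, n=8000):
    h = (b - a) / n
    return sum(f(a + i * h) * g(a + i * h) for i in range(n)) * h

norm = inner(phi, phi)                       # Eq. (10.38): should be 1
orth = inner(phi, lambda t: phi(t - 1))      # Eq. (10.38): should be 0
cross = inner(phi, psi)                      # Eq. (10.40): should be 0
gap = lambda t: phi(t) - phi(2 * t) - phi(2 * t - 1)
two_scale_gap = inner(gap, gap)              # two-scale relation: should be 0
```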
Taking the inner product of φ_{1,k} and ψ_{1,k} with both sides of Equation 10.44 and using Equation 10.45 yields

c1(k) = ⟨φ_{1,k}, P1 f⟩ = ⟨φ_{1,k}, f⟩   (10.46)

d1(k) = ⟨ψ_{1,k}, Q1 f⟩ = ⟨ψ_{1,k}, f⟩   (10.47)
The discrete sequences c1(k) and d1(k) are the coefficients in the decomposition of a continuous function f(t) onto the bases {φ_{1,k}} in V1 and {ψ_{1,k}} in W1, respectively, where both the scaling functions and the wavelets are continuous. The sequence c1 is the averaged approximation of f(t), referred to as the discrete approximation of f(t). The sequence d1 represents the difference between the original f(t) and the approximation P1 f and is referred to as the discrete wavelet transform coefficients of f(t) at the coarse resolution level i = 1.

10.5.4.2 Low-Pass and High-Pass Filters
The discrete expansion coefficient sequences c1(k) and d1(k) may be calculated by

c1(k) = 2^{−1/2} Σ_n p(n − 2k) c0(n)   (10.48)

d1(k) = 2^{−1/2} Σ_n q(n − 2k) c0(n)   (10.49)

These are the correlations between the signal data c0 and p(n) and q(n), respectively. The correlation results are down-sampled by a factor of two because of the double shift of p(n) and q(n) in the correlations. The discrete interscale coefficients p(n) and q(n) are called the discrete low-pass and high-pass filters, respectively. Equations 10.48 and 10.49 may be proved as follows: substituting Equation 10.41 into Equations 10.46 and 10.47 yields

c1(k) = Σ_n ⟨φ_{1,k}, φ_{0,n}⟩ c0(n)

d1(k) = Σ_n ⟨ψ_{1,k}, φ_{0,n}⟩ c0(n)   (10.50)
The inner products between the scaling function and wavelet sets {φ_{1,k}} and {ψ_{1,k}} at the scale i = 1 and the scaling function set {φ_{0,n}} at the next finer scale level i = 0 can be computed as

⟨φ_{1,k}, φ_{0,n}⟩ = 2^{−1/2} ∫ φ(t/2 − k) φ(t − n) dt = 2^{1/2} ∫ φ(t) φ[2t − (n − 2k)] dt   (10.51)

⟨ψ_{1,k}, φ_{0,n}⟩ = 2^{1/2} ∫ ψ(t) φ[2t − (n − 2k)] dt   (10.52)

Substituting the two-scale relations

φ(t) = Σ_n p(n) φ(2t − n)
ψ(t) = Σ_n q(n) φ(2t − n)

into Equations 10.51 and 10.52 and using the orthogonality of the set {φ(2t − k)}, we obtain

⟨φ_{1,k}, φ_{0,n}⟩ = 2^{−1/2} p(n − 2k)   (10.53)

⟨ψ_{1,k}, φ_{0,n}⟩ = 2^{−1/2} q(n − 2k)   (10.54)
Substituting Equations 10.53 and 10.54 into Equation 10.50 results in Equations 10.48 and 10.49.

10.5.4.3 Recursive Projections
The projection procedure can be iterated: the orthonormal projections at one resolution level can continue to the next coarser resolution. At the next coarser resolution the subspaces V2 and W2 are orthogonal complements, V1 = V2 ⊕ W2; that is, V1 is the direct sum of V2 and W2. We can decompose P1 f ∈ V1 into two components along V2 and W2:

P1 f = P2 f + Q2 f   (10.55)
10-23
Wavelet Transform
with

    P_2 f = Σ_n c_2(n) φ_{2,n}    (10.56)

    Q_2 f = Σ_n d_2(n) ψ_{2,n}    (10.57)

Multiplying both sides of the expansion (Equation 10.56) by φ_{2,k} and using the orthonormality of the set {φ_{2,n}}, and multiplying both sides of Equation 10.55 by φ_{2,k} and using the mutual orthogonality between φ_{2,n} and ψ_{2,k}, we obtain the discrete approximation c_2(k) as

    c_2(k) = ⟨φ_{2,k}, P_2 f⟩ = ⟨φ_{2,k}, P_1 f⟩ = Σ_n ⟨φ_{2,k}, φ_{1,n}⟩ c_1(n)

Similarly, multiplying both sides of Equations 10.55 and 10.57 by ψ_{2,k} and using the orthonormality of {ψ_{2,n}} and the mutual orthogonality between φ_{2,n} and ψ_{2,k} within the same scale, we obtain the discrete wavelet coefficients d_2(k) as

    d_2(k) = ⟨ψ_{2,k}, Q_2 f⟩ = ⟨ψ_{2,k}, P_1 f⟩ = Σ_n ⟨ψ_{2,k}, φ_{1,n}⟩ c_1(n)    (10.58)

It is easy to verify that, similarly to Equations 10.53 and 10.54 and independently of the scale level, we have for scale level i

    ⟨φ_{i,k}, φ_{i−1,n}⟩ = 2^{−1/2} p(n − 2k)
    ⟨ψ_{i,k}, φ_{i−1,n}⟩ = 2^{−1/2} q(n − 2k)    (10.59)

It follows that

    c_i(k) = 2^{−1/2} Σ_n p(n − 2k) c_{i−1}(n)
    d_i(k) = 2^{−1/2} Σ_n q(n − 2k) c_{i−1}(n)    (10.60)

We define the low-pass and high-pass filtering operators L and H, respectively, such that the operations on a sequence a(n) are

    (La)(k) = 2^{−1/2} Σ_n p(n − 2k) a(n)
    (Ha)(k) = 2^{−1/2} Σ_n q(n − 2k) a(n)

Then Equation 10.60 can be shortened to

    c_i = L c_{i−1}
    d_i = H c_{i−1}

10.5.4.4 Wavelet Series Decomposition

The approximation c_{i−1}(n) is recursively decomposed into the sequences c_i(n) and d_i(n) by iterating the low-pass and high-pass filters according to Equation 10.60. The successive discrete approximation sequences c_i(n) are lower and lower resolution versions of the original data c_0(n), each sampled twice as sparsely as its predecessor. The successive wavelet coefficient sequences d_i(n) represent the difference between the two approximations at resolution levels i and i − 1.

The decomposition into smoothed approximations and details at larger scales can be continued as far as wanted. The successive projections P_i f correspond to more and more blurred versions of f(t); the successive projections Q_i f correspond to the differences between the approximations of f(t) at two successive scale levels. At every step i one has the orthonormal projection of P_{i−1} f along the subspaces V_i and W_i:

    P_{i−1} f = P_i f + Q_i f = Σ_k c_i(k) φ_{i,k} + Σ_k d_i(k) ψ_{i,k}

Continuing up to the resolution M, we can represent the original function f(t) by a series of detail functions plus one smoothed approximation

    f(t) = P_M f + Q_M f + Q_{M−1} f + ··· + Q_1 f

and

    f(t) = Σ_{k∈Z} 2^{−M/2} c_M(k) φ(2^{−M} t − k) + Σ_{i=1}^{M} Σ_{k∈Z} 2^{−i/2} d_i(k) ψ(2^{−i} t − k)    (10.61)
Equation 10.61 is referred to as the wavelet series decomposition. The function f(t) is represented as an approximation at resolution i = M plus the sum of M detail components at the dyadic scales. The first term on the right-hand side of Equation 10.61 is the smoothed approximation of f(t) at the very low resolution i = M. When M approaches infinity, the projection of f(t) onto scaling functions of very large scale smooths out any signal detail and converges to a constant; the function f(t) is then represented as a series of its orthonormal projections on the wavelet bases. The wavelet series decomposition is a practical representation of the wavelet expansion and points out the complementary role of the scaling function in the wavelet decomposition. Note that in the wavelet series decomposition the function f(t), the scaling function bases φ(t), and the wavelet bases ψ(t) are all continuous, while the approximation coefficients c_M(k) and the wavelet coefficients d_i(k), i = 1, 2, ..., M, are discrete. In this sense the wavelet series decomposition is similar to the Fourier series decomposition. The discrete approximations c_i(n) and the discrete wavelet coefficients d_i(n) can be computed with an iterative algorithm, described by Equation 10.60. This is essentially a discrete algorithm implemented by recursive application of the discrete
FIGURE 10.11 Schematic wavelet series decomposition in the tree algorithm.
low-pass and high-pass filter bank to the discrete approximations c_i(n). The algorithm is called the tree algorithm. The first two stages of the tree algorithm for computing the wavelet decomposition are shown in Figure 10.11. The decomposition into coarser smoothed approximations and details can be continued as far as wanted.
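The tree algorithm of Figure 10.11 can be sketched by iterating the one-stage filter bank. A minimal Python sketch follows; the function names and the Haar filters are assumptions made here, and boundary handling is avoided by taking a dyadic-length input:

```python
import numpy as np

def analyze(c, filt):
    # one double-shift correlation stage of Eq. 10.60
    N = len(filt)
    K = (len(c) - N) // 2 + 1
    return np.array([filt @ c[2 * k:2 * k + N] for k in range(K)]) / np.sqrt(2)

def tree_decompose(c0, p, q, M):
    """Iterate the low-pass/high-pass filter bank M times (Figure 10.11)."""
    details, c = [], np.asarray(c0, float)
    for _ in range(M):
        details.append(analyze(c, q))   # d_i: wavelet coefficients
        c = analyze(c, p)               # c_i: half-rate approximation
    return c, details                   # c_M and [d_1, ..., d_M]

p, q = np.array([1.0, 1.0]), np.array([1.0, -1.0])   # Haar filters
c0 = np.array([4.0, 2.0, 6.0, 8.0, 5.0, 3.0, 1.0, 7.0])
cM, ds = tree_decompose(c0, p, q, 3)

# the orthonormal decomposition preserves the signal energy
energy = np.sum(cM**2) + sum(np.sum(d**2) for d in ds)
assert np.isclose(energy, np.sum(c0**2))
```

Each pass halves the number of approximation samples, so an input of length 2^M can be decomposed at most M times.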
Example: Decomposition with Haar Wavelets

A simple example of orthonormal wavelet decomposition is that with the Haar orthonormal bases. Let the subspace V_0 be spanned by the Haar scaling function basis {φ(t − k)}, where φ(t) is the rectangular function of unit width on [0, 1). The projection of a function f(t) on V_0 is an approximation of f(t) that is piecewise constant over the integer intervals. The projection of f(t) on V_{−1}, with the orthonormal basis {φ(2t − k)} of the next finer resolution, is piecewise constant over the half-integer intervals. Since V_{−1} = V_0 ⊕ W_0,

    P_{−1} f = P_0 f + Q_0 f

where the projection Q_0 f represents the difference between the approximation in V_0 and the approximation in V_{−1}. The approximations P_0 f in V_0 and P_{−1} f in V_{−1} and the detail Q_0 f are shown in Figure 10.12. In the figure the projection Q_0 f is constant over the half-integer intervals; added to the approximation P_0 f, it provides the next finer approximation P_{−1} f.

FIGURE 10.12 Orthogonal projections P_0 f and Q_0 f onto the Haar scaling function and wavelet bases. The projection at the next finer resolution is P_{−1} f = P_0 f + Q_0 f. (From Akansu, A. N. and Haddad, R. A., Multiresolution Signal Decomposition, Academic Press, Boston, 1992. With permission.)

When the scale level i approaches minus infinity, with finer and finer resolution, the approximations P_i f converge to the original function f(t) as closely as desired. The projection of f(t) onto the subspace V_i spanned by the Haar scaling function basis at resolution level i is

    P_i f = Σ_k c_i(k) φ_{i,k}

with the discrete approximation coefficients

    c_i(k) = ⟨f, φ_{i,k}⟩ = 2^{−i/2} ∫_{2^i k}^{2^i (k+1)} f(t) dt

As φ_{i+1,k} is a rectangular function of width 2^{i+1} and φ_{i,k} is of width 2^i, it is easy to verify that

    φ_{i+1,k} = 2^{−1/2} (φ_{i,2k} + φ_{i,2k+1})
    ψ_{i+1,k} = 2^{−1/2} (φ_{i,2k} − φ_{i,2k+1})    (10.62)

The discrete approximation c_{i+1}(k) can be obtained directly from the orthonormal projection of f(t) onto V_{i+1}:

    c_{i+1}(k) = ⟨f, φ_{i+1,k}⟩ = 2^{−1/2} (c_i(2k) + c_i(2k + 1))    (10.63)

The difference between the two successive approximations is obtained using Equation 10.62 as
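The Haar coefficients and Equation 10.63 can be checked numerically for a concrete function. In the sketch below, the choice f(t) = t and the helper name `haar_coeff` are made here for illustration; c_i(k) is computed by direct integration over each box and compared with the averaged form:

```python
import numpy as np

def haar_coeff(i, k):
    """c_i(k) = <f, phi_{i,k}> for f(t) = t: 2^{-i/2} * integral of t over [2^i k, 2^i (k+1)]."""
    a, b = 2.0**i * k, 2.0**i * (k + 1)
    return 2.0**(-i / 2) * (b**2 - a**2) / 2   # closed form of the integral

c0 = np.array([haar_coeff(0, k) for k in range(8)])
c1_direct = np.array([haar_coeff(1, k) for k in range(4)])
c1_averaged = (c0[0::2] + c0[1::2]) / np.sqrt(2)   # Eq. 10.63
assert np.allclose(c1_direct, c1_averaged)

# pairwise-difference details: constant for a linear ramp
d1 = (c0[0::2] - c0[1::2]) / np.sqrt(2)
assert np.allclose(d1, -1 / np.sqrt(2))
```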
    P_i f − P_{i+1} f = Σ_k [c_i(k) φ_{i,k} − c_{i+1}(k) φ_{i+1,k}]
                      = Σ_k [(c_i(2k) φ_{i,2k} + c_i(2k+1) φ_{i,2k+1}) − (1/2)(c_i(2k) + c_i(2k+1))(φ_{i,2k} + φ_{i,2k+1})]
                      = (1/2) Σ_k (c_i(2k) − c_i(2k+1))(φ_{i,2k} − φ_{i,2k+1})
                      = 2^{−1/2} Σ_k (c_i(2k) − c_i(2k+1)) ψ_{i+1,k}

Hence the projection of f(t) onto the subspace W_{i+1} is the difference between P_i f and P_{i+1} f,

    Q_{i+1} f = Σ_k d_{i+1}(k) ψ_{i+1,k} = P_i f − P_{i+1} f
provided that

    d_{i+1}(k) = 2^{−1/2} (c_i(2k) − c_i(2k + 1))    (10.64)

The interscale coefficients, that is, the discrete low-pass and high-pass filters p(n) and q(n), for the Haar bases were given in Section 10.5.2. Hence the iterated filtering by the low-pass and high-pass filters becomes

    c_{i+1}(k) = 2^{−1/2} Σ_n p(n − 2k) c_i(n) = 2^{−1/2} (c_i(2k) + c_i(2k + 1))
    d_{i+1}(k) = 2^{−1/2} Σ_n q(n − 2k) c_i(n) = 2^{−1/2} (c_i(2k) − c_i(2k + 1))

which agree with Equations 10.63 and 10.64.

10.5.5 Reconstruction

10.5.5.1 Recursive Reconstruction
The original signal sequence c_0(n) can be reconstructed from the sequences of the approximation coefficients c_i(n) and of the wavelet coefficients d_i(n) with 0 < i ≤ M, where i = M is the lowest resolution in the decomposition. At each resolution level i we have the wavelet decomposition described by Equations 10.58 and 10.60. Multiplying both sides of the expansion of P_{i−1} f by φ_{i−1,n} and integrating, we obtain

    c_{i−1}(n) = ⟨P_{i−1} f, φ_{i−1,n}⟩
               = Σ_k c_i(k) ⟨φ_{i,k}, φ_{i−1,n}⟩ + Σ_k d_i(k) ⟨ψ_{i,k}, φ_{i−1,n}⟩
               = 2^{−1/2} Σ_k c_i(k) p(n − 2k) + 2^{−1/2} Σ_k d_i(k) q(n − 2k)    (10.65)

where the inner products ⟨φ_{i,k}, φ_{i−1,n}⟩ and ⟨ψ_{i,k}, φ_{i−1,n}⟩ are given by Equation 10.59 as the interscale coefficients p(n − 2k) and q(n − 2k). Hence the discrete approximation c_{i−1}(n) at the next finer resolution can be obtained as the sum of two convolutions: between the discrete approximation c_i(n) and the low-pass synthesis filter p(n), and between the wavelet coefficients d_i(n) and the high-pass synthesis filter q(n). The synthesis filters are identical to the analysis filters, but the filtering operations become convolutions for synthesis and reconstruction instead of the correlations used for analysis and decomposition. To compute the convolutions with the synthesis filters in Equation 10.65, one must first insert zeros between the samples of the sequences c_i(n) and d_i(n) before convolving the resulting sequences with the synthesis low-pass and high-pass filters p(n) and q(n). The process is quite similar to the expand operation in the reconstruction algorithms of the multiresolution Laplacian pyramid and of subband coding. The reconstruction process can be repeated by iteration. We define the synthesis filtering operators L′ and H′ as

    (L′a)(n) = 2^{−1/2} Σ_k p(n − 2k) a(k)
    (H′a)(n) = 2^{−1/2} Σ_k q(n − 2k) a(k)

and rewrite Equation 10.65 in the shortened form

    c_{i−1} = L′ c_i + H′ d_i

The block diagram shown in Figure 10.13 illustrates the reconstruction algorithm, where the up-sampling by two means inserting zeros between the samples of the sequences. To reconstruct the original data c_0(n) we start from the lowest resolution approximation c_M. According to Equation 10.65 we have

    c_{M−1} = H′ d_M + L′ c_M
    c_{M−2} = H′ d_{M−1} + L′ (H′ d_M + L′ c_M) = H′ d_{M−1} + L′ H′ d_M + (L′)² c_M

When the discrete approximation c_{i−1} is obtained from c_i(n) and d_i(n), the next finer approximation c_{i−2}(n) can be obtained from the approximation c_{i−1}(n) and the wavelet coefficients d_{i−1}(n).

FIGURE 10.13 Schematic wavelet reconstruction.
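The synthesis step of Equation 10.65 and the iterated reconstruction of Figure 10.13 can be sketched as follows. This Python fragment assumes the Haar filters, helper names chosen here, and length-2 filters (so no boundary handling is needed); zeros are inserted between samples (up-sampling by two) and the result is convolved with the synthesis filters, which for orthonormal bases equal the analysis filters:

```python
import numpy as np

def synthesize(ci, di, p, q):
    """Eq. 10.65 for length-2 filters: up-sample by two (insert zeros),
    then convolve with the synthesis filters p and q."""
    out = np.zeros(2 * len(ci))
    for k in range(len(ci)):
        for m in range(len(p)):
            out[2 * k + m] += (ci[k] * p[m] + di[k] * q[m]) / np.sqrt(2)
    return out

def analyze(c, f):
    # double-shift correlation of Eq. 10.60 (length-2 filters)
    return np.array([f @ c[2 * k:2 * k + 2] for k in range(len(c) // 2)]) / np.sqrt(2)

p, q = np.array([1.0, 1.0]), np.array([1.0, -1.0])   # Haar
c0 = np.array([4.0, 2.0, 6.0, 8.0, 5.0, 3.0, 1.0, 7.0])

# two-level decomposition ...
c1, d1 = analyze(c0, p), analyze(c0, q)
c2, d2 = analyze(c1, p), analyze(c1, q)

# ... and recursive reconstruction: finer approximations are rebuilt in turn
c1_rec = synthesize(c2, d2, p, q)
c0_rec = synthesize(c1_rec, d1, p, q)
assert np.allclose(c0_rec, c0)   # perfect reconstruction
```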
The process can continue until the original sequence c_0(n) is reconstructed. The reconstruction formula for the original sequence is

    c_0 = Σ_{i=1}^{M} (L′)^{i−1} H′ d_i + (L′)^M c_M    (10.66)

In this reconstruction procedure it is the low-pass filtering operator L′ that is iteratively applied to generate the finer resolution discrete approximations.

10.5.5.2 Discussion
In summary, we have introduced in this section the orthonormal wavelet series decomposition and reconstruction. In the multiresolution analysis framework, the basic function of the orthonormal wavelet transform is the scaling function φ(t), which satisfies the two-scale relation. The discrete translates of the scaling functions form orthonormal bases within each resolution level, and the discrete translates of the wavelets also form orthonormal bases within each resolution level; the scaling and wavelet function bases are mutually orthogonal within the same resolution level. The recursive orthonormal projections on the multiresolution subspaces yield the wavelet series decomposition. The wavelet series decomposition and reconstruction are computed by iterating the discrete low-pass filters p(n) and the discrete high-pass filters q(n) in the tree algorithms, in order to compute a set of discrete wavelet transform coefficients d_i(n) and a set of discrete approximation coefficients c_i(n). The scaling function and wavelet bases are orthonormal only with discrete translations and dilations. The decomposition of a continuous function onto the orthonormal scaling and wavelet function bases yields discrete sequences of expansion coefficients. Hence, there is an analogy between the orthonormal wavelet transform and the Fourier series decomposition.

10.5.6 Biorthogonal Wavelet Bases
The biorthogonal wavelet bases give more flexibility to the filter design. We define in the multiresolution framework two hierarchies of approximation subspaces:

    ··· ⊃ V_{−2} ⊃ V_{−1} ⊃ V_0 ⊃ V_1 ⊃ V_2 ⊃ ···
    ··· ⊃ Ṽ_{−2} ⊃ Ṽ_{−1} ⊃ Ṽ_0 ⊃ Ṽ_1 ⊃ Ṽ_2 ⊃ ···

where the subspaces V_i are spanned by the translates of the scaling function φ(t), and the Ṽ_i are spanned by the translates of the dual scaling function φ̃(t). The wavelet subspace W_i is complementary to V_i in the finer resolution subspace V_{i−1}, but it is not an orthogonal complement. Instead, W_i is the orthogonal complement to Ṽ_i. Similarly, the dual wavelet subspace W̃_i is the orthogonal complement to V_i. Thus,

    W_i ⊥ Ṽ_i and V_{i−1} = V_i ⊕ W_i
    W̃_i ⊥ V_i and Ṽ_{i−1} = Ṽ_i ⊕ W̃_i

The orthogonality between the wavelet and the dual scaling function and between the scaling function and the dual wavelet can then be expressed as

    ⟨φ(t − k), ψ̃(t − n)⟩ = 0
    ⟨ψ(t − k), φ̃(t − n)⟩ = 0

for any n, k ∈ Z. We expect also the orthogonality between the scaling function and its dual and between the wavelet and its dual:

    ⟨φ(t − k), φ̃(t − n)⟩ = δ_{k,n}
    ⟨ψ(t − k), ψ̃(t − n)⟩ = δ_{k,n}

The orthogonality expressed in the four preceding equations is referred to as the biorthogonality. Indeed, biorthogonal scaling functions and wavelets can be found with the polynomial B-spline scaling functions and wavelets. The cross-scale orthogonality of the wavelet and its dual can also be obtained:

    ⟨ψ_{i,k}, ψ̃_{m,n}⟩ = δ_{i,m} δ_{k,n}

Any function f ∈ L²(R) can be expanded onto the biorthogonal scaling function and wavelet bases as

    f(t) = Σ_i Σ_k ⟨f, ψ̃_{i,k}⟩ ψ_{i,k}(t) = Σ_i Σ_k ⟨f, ψ_{i,k}⟩ ψ̃_{i,k}(t)

and also

    f(t) = Σ_i Σ_k ⟨f, φ̃_{i,k}⟩ φ_{i,k}(t) = Σ_i Σ_k ⟨f, φ_{i,k}⟩ φ̃_{i,k}(t)
The implementation of the wavelet transform on the biorthogonal bases is also with the discrete low-pass and high-pass filters p(n) and q(n) in the multiresolution framework. In the reconstruction from the biorthogonal wavelet transform, however, the discrete synthesis filters p′(n) and q′(n) are not identical to the analysis filters p(n) and q(n), and they need not have equal lengths. We shall discuss the low-pass and high-pass filters for the biorthogonal wavelet transform in detail in Section 10.6.4, in the framework of the subband coding theory. The discrete iterated filters are introduced with the two-scale relations

    φ(t) = Σ_n p(n) φ(2t − n)
    φ̃(t) = Σ_n p′(n) φ̃(2t − n)
FIGURE 10.14 Schematic wavelet decomposition and reconstruction with the biorthogonal scaling function and wavelet bases.
and

    ψ(t) = Σ_n q(n) φ(2t − n)
    ψ̃(t) = Σ_n q′(n) φ̃(2t − n)

One stage of the wavelet decomposition and reconstruction with the biorthogonal filter bank is shown in Figure 10.14.

10.5.6.1 Biorthogonal Wavelet Decomposition
Similarly to the orthogonal decomposition in Equation 10.60, the decomposition with the biorthogonal wavelets is implemented with the biorthogonal analysis filters p(n) and q(n) as

    c_i(k) = 2^{−1/2} Σ_n p(n − 2k) c_{i−1}(n)

and

    d_i(k) = 2^{−1/2} Σ_n q(n − 2k) c_{i−1}(n)

10.5.6.2 Biorthogonal Wavelet Reconstruction
Similarly to the reconstruction with the orthonormal wavelet transform (Equation 10.65), the signal is reconstructed with the biorthogonal synthesis filters p′(n) and q′(n) as

    c_{i−1}(n) = 2^{−1/2} Σ_k c_i(k) p′(n − 2k) + 2^{−1/2} Σ_k d_i(k) q′(n − 2k)

We shall discuss the biorthogonal analysis and synthesis filters in Section 10.6.

10.6 Filter Bank

The discrete and dyadic wavelet transforms are computed in the multiresolution signal analysis framework with recursive low-pass and high-pass filters, which can be designed with the subband coding theory. The properties of the filters may be studied equivalently in the time domain and in the frequency or z-transform domain.

10.6.1 FIR Filter Bank

10.6.1.1 Two-Channel Filter Bank
The two-channel filter bank shown in Figure 10.14 is a building block of the discrete wavelet transform and of subband coding. An input signal x(k) is filtered in the two channels by the low-pass filter p(k) and the high-pass filter q(k), which are the analysis filters for decomposing the signal. The analysis filtering is followed by down-sampling by two. The filters p′(k) and q′(k) are the synthesis filters for reconstructing the signal; up-sampling precedes the synthesis filtering. The low-pass and high-pass filters p(k) and q(k) are discrete and usually real-valued sequences with k ∈ Z. For the sake of consistency with the wavelet decomposition described in Section 10.5, we define the analysis filtering as a correlation operation and the synthesis filtering as a convolution operation.

10.6.1.2 Finite Impulse Response (FIR) Filters
Digital filters can be classified into two groups. In the first group the filter is a finite-extent sequence, called a finite impulse response (FIR) filter. In the second group the filter is of infinite extent, called an infinite impulse response (IIR) filter. The FIR filters have compact support: only a finite number of the p(k) and q(k) are nonzero, p(k) ≠ 0 and q(k) ≠ 0 only for 0 ≤ k ≤ N − 1. In the biorthogonal filter banks, however, the low-pass and high-pass filters can have different lengths. The Fourier transforms of the compactly supported FIR filters p(k) and q(k) cannot have fast decay; there is a trade-off between the compactness and the regularity of an FIR filter.

10.6.1.3 Transfer Functions
Both filters p(k) and q(k), with k ∈ Z, have limited length N (FIR filters). Their Fourier transforms exist as

    P(ω) = Σ_k p(k) exp(−jkω)
    Q(ω) = Σ_k q(k) exp(−jkω)    (10.67)
where k = 0, 1, 2, ..., N − 1, and the Fourier transforms of the filters, i.e., the transfer functions P(ω) and Q(ω), are complex-valued continuous functions. Note that Equation 10.67 may be considered as the Fourier series expansions of P(ω) and Q(ω); therefore both P(ω) and Q(ω) are periodic functions of period 2π.
The Fourier series expansions (Equation 10.67) are equivalent to the z-transforms of the sequences p(k) and q(k). With the definition z = exp(jω), Equation 10.67 can be rewritten as the Laurent polynomials

    P(z) = Σ_k p(k) z^{−k}
    Q(z) = Σ_k q(k) z^{−k}    (10.68)
where k = 0, 1, 2, ..., N − 1, and both P(z) and Q(z) are complex-valued continuous functions of the complex variable z. The degree of the Laurent polynomial P(z) is then equal to N − 1, i.e., the length of the FIR filter p(k) minus one. The analysis filtering is a correlation operation. In the z-domain, the filtering of the signal x(k) by the low-pass and high-pass filters p(k) and q(k) is equivalent to multiplying the transform X(z) of x(k) by P(z^{−1}) and Q(z^{−1}), yielding X(z)P(z^{−1}) and X(z)Q(z^{−1}), respectively, where on the unit circle P(z^{−1}) and Q(z^{−1}) are the complex conjugates of P(z) and Q(z), as used in the correlation operations. Note that the coefficients p(k) and q(k) are real valued in the Laurent polynomials of Equation 10.68.

10.6.1.4 Time Delay and Causality
If the signal x(n) is a time sequence, then the filtering convolution with a filter g(k) is written as

    Σ_k g(k) x(n − k) = g(0) x(n) + g(1) x(n − 1) + ···

where x(n) is the current input, x(n − 1) is the input earlier by one time step, and so on. The output has a time delay with respect to the input. A causal filter g(k) must have g(k) = 0 for k < 0, because the output cannot be a function of later inputs. For instance, if g(−1) ≠ 0, then the filtering output would contain a term g(−1)x(n + 1), where x(n + 1) is the input of one time step later. According to the causality principle the filtering output at a time step n cannot be a function of x(n + 1), so g(−1) must be zero.

10.6.1.5 Linear-Phase Filters
The low-pass FIR filters p(k) are usually real valued and symmetric. In this case the transfer function P(ω), as the Fourier transform of p(k), would itself be a real-valued and even function with zero phase. Because p(k) is causal, however, nonzero coefficients at negative indices are not allowed; therefore the filter p(k) must be shifted in the time domain and centered at N/2, which corresponds to a time delay, where N is the length of the FIR filter. Hence the symmetric and antisymmetric filters are

    Symmetric:     p(k) = p(N − k)
    Antisymmetric: p(k) = −p(N − k)

Their transfer function P(ω) then has a linear phase and becomes P(ω)exp(−jmω), or P(z)z^{−m}, with an even modulus |P(ω)|. In some wavelet bases, such as the Daubechies bases, the low-pass filters are not symmetric. A nonsymmetric filter introduces a nonlinear phase in the transfer function, which can distort the proper registration of different frequency components. In image processing, for instance, applications of nonsymmetric filters introduce important image distortions.

10.6.1.6 Binary Coefficient Filters
A binary, or dyadic, coefficient is an integer divided by a power of two. In a computer, multiplication by a binary number can be executed entirely by shifts and adds, without round-off error; on some architectures such filters also need less time and less space. We are therefore highly interested in binary coefficient filters.

10.6.2 Perfect Reconstruction

10.6.2.1 Down-Sampling
The down-sampling by a factor of two is a decimation operation: it saves the even-numbered and discards the odd-numbered components of the data sequence. The down-sampling by two can be considered as being achieved in two steps. First, the signal x(n) is sampled with the double sampling interval:

    x′(n) = x(n) for n = 0, 2, 4, ...; x′(n) = 0 otherwise    (10.69)

The intermediate signal x′(n) has the same time clock rate as that of x(n). Then the time clock rate is reduced by two to obtain the down-sampled signal y(n) as

    y(n) = x′(2n) for n = 0, 1, 2, ...

Its spectrum Y(ω) in the Fourier domain is stretched by a factor of two with respect to X′(ω), because Y(ω) = X′(ω/2). Discarding the odd-numbered components leads to a loss of information, and this loss is definitive; in the frequency domain it is an aliasing error. The down-sampling process is not invertible.

The down-sampling is also not shift-invariant. When the output of a filter is down-sampled by two, the convolution with the filter becomes a convolution with the filter shifted only by even numbers, i.e., a double-shift convolution. Therefore, when the input signal is shifted by an odd number of samples, the result of the double-shift convolution can change dramatically. The Laplacian pyramid, the discrete and dyadic wavelet transforms, subband coding, and all other multiresolution analyses using down-sampling are highly dependent on the relative alignment of the input signal with the subsampling lattice and are not shift-invariant.
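The down-sampling operation, its companion up-sampling (next subsection), and the shift-variance can be sketched directly; the helper names below are assumptions made here:

```python
import numpy as np

def down2(x):
    """Decimation by two: keep the even-numbered components."""
    return x[0::2]

def up2(y):
    """Expansion by two: insert zeros as the odd-numbered components."""
    out = np.zeros(2 * len(y))
    out[0::2] = y
    return out

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
assert np.array_equal(down2(x), [1.0, 3.0, 5.0])
assert np.array_equal(up2(down2(x)), [1.0, 0.0, 3.0, 0.0, 5.0, 0.0])

# down-sampling is not shift-invariant: a one-sample (odd) shift of the input
# selects the discarded polyphase component instead of shifting the output
x_shift = np.roll(x, 1)   # [6, 1, 2, 3, 4, 5]
assert not np.array_equal(down2(x_shift), down2(x))
```

Note that `up2(down2(x))` does not recover x: the odd-numbered samples are lost, which is the time-domain picture of the aliasing error.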
10.6.2.2 Up-Sampling
The up-sampling by a factor of two is an expansion operation: it inserts zeros as the odd-numbered components of the data sequence. The up-sampling is also implemented in two steps. First, insert zeros between the nodes of the down-sampled signal y(n), and then increase the time clock rate by two, letting

    y′(n) = y(n/2) for n = 0, 2, 4, ...; y′(n) = 0 otherwise

In the Fourier domain the spectrum of the up-sampled data Y′(ω) is compressed by two with respect to the original Y(ω), because Y′(ω) = Y(2ω). Thus a low-pass filter should be used for smoothing the up-sampled signal. The up-sampling process also is not shift-invariant: when the input signal is shifted by an odd number of samples, the result of the up-sampling can change dramatically.

10.6.2.3 Aliasing Cancellation
Aliasing error is introduced by the down-sampling. The signal x(k) is filtered in the two channels by the low-pass and high-pass filters; the filtered signals are described in the z-domain as X(z)P(z^{−1}) and X(z)Q(z^{−1}) before the down-sampling. The combination of the down-sampling followed by the up-sampling puts to zero all the odd-numbered components of the filtered signals. This corresponds in the z-domain to keeping only the even powers of z in X(z)P(z^{−1}) and X(z)Q(z^{−1}), which can be represented as [12]

    (1/2)[X(z)P(z^{−1}) + X(−z)P(−z^{−1})]    low-pass channel output
    (1/2)[X(z)Q(z^{−1}) + X(−z)Q(−z^{−1})]    high-pass channel output    (10.70)

or equivalently in the frequency domain

    (1/2)[X(ω)P*(ω) + X(ω + π)P*(ω + π)]
    (1/2)[X(ω)Q*(ω) + X(ω + π)Q*(ω + π)]    (10.71)

where the terms X(−z)P(−z^{−1}) and X(−z)Q(−z^{−1}) are aliasing errors introduced by the down-sampling, which are not canceled by the up-sampling. In the two-channel filter bank, the alias terms X(−z)P(−z^{−1}) and X(−z)Q(−z^{−1}) can be canceled in the synthesis step by the synthesis filters P′(z) and Q′(z), with the aliasing term X(−z)P(−z^{−1}) in the low-pass channel multiplied by P′(z) and the aliasing term X(−z)Q(−z^{−1}) in the high-pass channel multiplied by Q′(z). Hence, the aliasing terms are canceled if the condition

    P′(z)P(−z^{−1}) + Q′(z)Q(−z^{−1}) = 0    (10.72)

is satisfied. Equation 10.72 is the antialiasing condition.

10.6.2.4 Perfect Reconstruction Condition
The perfect reconstruction requires X̂(z) = cX(z)z^{−m}, i.e., the output is equal to the input in the building block shown in Figure 10.14, up to an extra constant c. When the input x(n) passes the two channels without being divided by two, but the data from the two channels are added up at the end for the output, we have c = 2; if the analysis and synthesis filters are normalized to remove the constant, we have c = 1. The extra phase shift z^{−m} with m ∈ Z corresponds to a possible time delay between the output and the input data sequences, so that the filter bank can be causal. Now the signal passes both the low-pass and the high-pass filter channels, each followed by down-sampling and up-sampling, respectively, as expressed in Equation 10.70. We then multiply each channel by the synthesis filters P′(z) and Q′(z), respectively, and sum them. When the antialiasing condition (Equation 10.72) is satisfied and the two aliasing terms X(−z)P(−z^{−1}) and X(−z)Q(−z^{−1}) in Equation 10.70 are canceled, the output is identical to the input, and the building block behaves like an identity operation, if

    P′(z)P(z^{−1}) + Q′(z)Q(z^{−1}) = c z^{−m}    (10.73)

The ensemble of Equation 10.73 for perfect reconstruction and Equation 10.72 for aliasing error cancellation is referred to as the perfect reconstruction condition.

10.6.2.5 Modulation Matrix
Two 2 × 2 modulation matrices are defined as

    M(z) = [ P(z)  P(−z) ]        M′(z) = [ P′(z)  P′(−z) ]
           [ Q(z)  Q(−z) ]                 [ Q′(z)  Q′(−z) ]    (10.74)

where M(z) is the analysis modulation matrix and M′(z) is the synthesis modulation matrix. The perfect reconstruction conditions (Equations 10.72 and 10.73) may be summarized as

    [ P′(z)  Q′(z) ] M(z^{−1}) = c [ 1  0 ]    (10.75)

where the extra time delay z^{−m} is removed for the sake of simplicity. If we want the synthesis filters P′(−z) and Q′(−z) to play the same role as P′(z) and Q′(z), then the aliasing cancellation (Equation 10.72) becomes

    P′(−z)P(z^{−1}) + Q′(−z)Q(z^{−1}) = 0

and the perfect reconstruction condition (Equation 10.73) becomes

    P′(−z)P(−z^{−1}) + Q′(−z)Q(−z^{−1}) = c z^{−m}
Then, combining with Equation 10.75, we have

    M′(z)^T M(z^{−1}) = cI    (10.76)

where I is the identity matrix. Note that in the time domain the filters with the transfer functions P′(−z) and Q′(−z) are only shifted by one with respect to those with the transfer functions P′(z) and Q′(z). Moreover, if we need the two modulation matrices to be invertible in either order, then we need also the perfect reconstruction condition

    M(z^{−1}) M′(z)^T = cI    (10.77)

which will introduce the cross-filter relations shown in the next sections.
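The modulation-matrix identity of Equation 10.76 can be checked numerically on the unit circle. The sketch below is an illustration with the normalized Haar pair (for which c = 2 with this normalization); the helper name `X` and the random frequency sampling are choices made here:

```python
import numpy as np

p = np.array([1.0, 1.0]) / np.sqrt(2)    # Haar low-pass, P(0) = sqrt(2)
q = np.array([1.0, -1.0]) / np.sqrt(2)   # Haar high-pass

def X(f, z):
    """Evaluate the Laurent polynomial sum_k f(k) z^{-k}."""
    return sum(fk * z**(-k) for k, fk in enumerate(f))

rng = np.random.default_rng(0)
for w in rng.uniform(0, 2 * np.pi, 8):   # sample points z = e^{jw} on the unit circle
    z = np.exp(1j * w)
    M  = np.array([[X(p, z),   X(p, -z)],   [X(q, z),   X(q, -z)]])     # M(z)
    Mi = np.array([[X(p, 1/z), X(p, -1/z)], [X(q, 1/z), X(q, -1/z)]])   # M(z^{-1})
    # orthonormal case: M'(z) = M(z), so M(z)^T M(z^{-1}) = c I with c = 2
    assert np.allclose(M.T @ Mi, 2 * np.eye(2))
```

The (0,0) and (0,1) entries of the product are exactly the left-hand sides of Equations 10.73 and 10.72, so the single matrix equation packages both conditions.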
10.6.3 Orthonormal Filter Bank

10.6.3.1 Paraunitary Matrix
The orthogonal filter bank is the perfect reconstruction filter bank with the synthesis filters equal to the analysis filters, P′(z) = P(z) and Q′(z) = Q(z), in the building block shown in Figure 10.14. Therefore, for the orthogonal filter bank we have from Equations 10.76 and 10.77

    M(z)^T M(z^{−1}) = cI and M(z^{−1}) M(z)^T = cI    (10.78)

The modulation matrix M(z) is a paraunitary matrix, if the constant factor c on the right-hand side of Equation 10.78 is not considered. Hence, the low-pass and high-pass filters for the discrete wavelet transform form a two-channel paraunitary filter bank.

10.6.3.2 Orthonormality Condition
From the paraunitary filter bank condition (Equation 10.78) it follows that

    |P(z)|² + |P(−z)|² = c and |Q(z)|² + |Q(−z)|² = c

or equivalently

    |P(ω)|² + |P(ω + π)|² = c and |Q(ω)|² + |Q(ω + π)|² = c    (10.79)

We call both |P(z)|² and |Q(z)|² half-band filters and shall discuss their properties later in this section. The relation (Equation 10.79) can also be obtained from the orthonormality condition of the scaling function and wavelet bases in the frequency domain, as will be shown in Section 10.7.

10.6.3.3 Cross-Filter Orthonormality
From the paraunitary filter bank condition (Equation 10.78) it also follows that

    P(z^{−1})Q(z) + P(−z^{−1})Q(−z) = 0
    Q(z^{−1})P(z) + Q(−z^{−1})P(−z) = 0    (10.80)

The relation (Equation 10.80) can also be obtained from the cross-orthonormality condition of the scaling function and wavelet bases in the frequency domain, as will be shown in Section 10.7. In this section we have introduced the paraunitary filter bank, which leads to the orthonormality and cross-filter orthonormality conditions. The paraunitary filter bank is a solution of the aliasing error cancellation and perfect reconstruction conditions (Equation 10.75), which become the modulation matrix Equations 10.76 and 10.77 with the additional constraints. In Section 10.7 we shall demonstrate the orthonormality and cross-filter orthonormality conditions on the low-pass and high-pass filters, and then use these conditions to demonstrate the paraunitary filter bank condition.

10.6.3.4 Alternating Flip
It is easy to verify that the solution

    Q(z) = (−z)^{−(N−1)} P(−z^{−1})    (10.81)

satisfies the cross-filter orthonormality condition (Equation 10.80), with an arbitrary even number N, which is the length of the filter in the time domain. The inverse z-transform of the solution (Equation 10.81) gives the relation between the low-pass filter p(n) and the high-pass filter q(n) in the time domain. In multiresolution signal analysis, the wavelet bases are generated by the basic scaling functions; similarly, the high-pass filters in the filter bank are generated from the low-pass filter by the alternating flip relation. In the orthonormal wavelet transform the high-pass filter q(n) is obtained from the low-pass filter p(n) by the inverse z-transform of Equation 10.81:

    q(n) = (−1)^n p(N − 1 − n) for n = 0, 1, 2, ..., N − 1    (10.82)

for an even N, such that the low-pass and high-pass filters p(n) and q(n) satisfy the cross-filter orthogonality, where N is the length of the FIR filters or can be any even number. Note that an arbitrary even number can be added to N, resulting in an even-number shift of p(N − 1 − n).

10.6.3.5 Quadrature Mirror Filters (QMF)
The alternating flip filters are a solution of the cross-filter orthonormality condition (Equation 10.80). For the alternating flip filter bank, the low-pass and high-pass filters satisfy

    |Q(z)|² = |P(−z^{−1})|² or equivalently |Q(ω)|² = |P(ω + π)|²    (10.83)

and are referred to as a pair of quadrature mirror filters. If the high-pass filter Q(ω) is such that Q(ω)|_{ω=0} = 0, then from the quadrature mirror property (Equation 10.83) we have P(ω)|_{ω=π} = 0. If the low-pass filter P(ω) is normalized such that P(ω)|_{ω=0} = 1, then from the quadrature mirror property (Equation 10.83) the high-pass filter would be such that |Q(ω)||_{ω=π} = 1.
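The alternating flip construction and the resulting quadrature mirror property can be verified numerically. The sketch below uses the Daubechies-4 low-pass filter normalized so that P(0) = √2, hence c = 2 (the filter choice and helper name are assumptions made here):

```python
import numpy as np

# Daubechies-4 low-pass filter, normalized so that P(0) = sqrt(2)
s3 = np.sqrt(3)
p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
N = len(p)

# alternating flip (Eq. 10.82): q(n) = (-1)^n p(N - 1 - n)
q = np.array([(-1) ** n * p[N - 1 - n] for n in range(N)])

def P(f, w):
    """Transfer function of Eq. 10.67: sum_k f(k) e^{-jkw}."""
    return sum(fk * np.exp(-1j * k * w) for k, fk in enumerate(f))

for w in np.linspace(0, 2 * np.pi, 9):
    # quadrature mirror property (Eq. 10.83): |Q(w)|^2 = |P(w + pi)|^2
    assert np.isclose(abs(P(q, w)) ** 2, abs(P(p, w + np.pi)) ** 2)
    # power complementary property: |P(w)|^2 + |Q(w)|^2 = c, here c = 2
    assert np.isclose(abs(P(p, w)) ** 2 + abs(P(q, w)) ** 2, 2.0)
```

The second assertion anticipates the power complementary property discussed below; it holds here because the Daubechies filter satisfies the orthonormality condition (Equation 10.79) with c = 2.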
10.6.3.6 Mirror Filters A pair of FIR filters L(v) and H(v) are referred to as mirror filters, if L(z) ¼ H(z) or equivalently
L(v) ¼ H(v þ p)
(10:84)
In the time domain the two FIR mirror filters with real-valued coefficients satisfy the relation l(k) ¼ (1)k h(k)
(10:85)
that is the inverse z-transform of Equation 10.84. The filter pair L(v) and H(v) are mirror filters because on substituting for v by vp=2 in Equation 10.84 and noting that the low-pass filter L(n) is real valued, and jL(v)j is an even function of v, we obtain p p þ v ¼ L v H 2 2
(10:86)
This is the mirror image property of |L(ω)| and |H(ω)| about ω = π/2.

10.6.3.7 Ideal Half-Band Filters

Intuitively, if a low-pass filter is ideal, L(ω) = 1 for −π/2 ≤ ω ≤ π/2 and L(ω) = 0 elsewhere, then its mirror filter H(ω) is the ideal half-band high-pass filter: H(ω) = 0 for −π/2 ≤ ω ≤ π/2 and equal to 1 elsewhere. Both are brickwall filters described by rectangular functions. Hence the input spectrum in the full band −π ≤ ω ≤ π is divided into two equal subbands by the analysis mirror filters L(ω) and H(ω).

10.6.3.8 Half-Band Filters

The ideal half-band filters are orthonormal. In the practice of multiresolution signal analysis it is not necessary to use the ideal low-pass and high-pass filters. A filter G(z) is a half-band filter if

G(z) + G(−z) = 2  or equivalently  G(ω) + G(ω + π) = 2   (10.87)

The filter G(ω) is half-band because, on substituting ω − π/2 for ω in Equation 10.87 and noting that the filter g(n) is real valued, so that |G(ω)| is an even function of ω, we obtain

G(π/2 − ω) + G(π/2 + ω) = 2   (10.88)

This is the mirror image property of |G(ω)| about ω = π/2, which is referred to as the half-band frequency. In the Laurent polynomial (Equation 10.68) of a half-band filter G(z) the coefficients of all the even powers of z must be zero, except the zero-power term g(0) = 1; the odd powers of z cancel each other in the sum G(z) + G(−z).

Note that the low-pass and high-pass filters in the multiresolution signal analysis are not themselves half-band, but their square moduli are half-band filters according to Equation 10.79, and they are quadrature mirror filters according to Equation 10.83.

10.6.3.9 Power Complementary Filters

The filter pair {L(ω), H(ω)} are referred to as power complementary filters if

|L(ω)|² + |H(ω)|² = c   (10.89)

where the constant c = 1 or 2. This relation shows the energy complementary property of the low-pass and high-pass filters. From the orthonormality condition (Equation 10.79) and the quadrature mirror condition (Equation 10.83) it follows that P(ω) and Q(ω) are power complementary:

|P(ω)|² + |Q(ω)|² = c  and  |P(ω + π)|² + |Q(ω + π)|² = c   (10.90)

10.6.4 Orthonormal Filters in Time Domain

From the orthonormal filter condition (Equation 10.79) both |P(z)|² and |Q(z)|² are half-band filters, so that in their Laurent polynomials all the even powers of z must vanish, except the zero-power term. Note that P(z) and Q(z) are the z-transforms of the time domain filters p(k) and q(k), respectively. The inverse transform of the square modulus |P(z)|² is the autocorrelation of p(k). Let the product filter be P_r(z) = |P(z)|². In the Laurent polynomial of P_r(z) the coefficients p_r(2n) = 0 for n ≠ 0, and p_r(0) = 1. Hence the time domain low-pass filter has the double-shift orthogonality

Σ_k p(k)p(k − 2n) = δ(n)

and similarly for the high-pass filter

Σ_k q(k)q(k − 2n) = δ(n)

From the cross-filter orthonormality (Equation 10.80), by the same process, we have the cross-filter orthonormality in the time domain:

Σ_k p(k)q(k − 2n) = 0

All the filters are shifted in the time domain by even integers in these correlations. Therefore, at the same resolution level, the low-pass filter p(n) is orthonormal to its own translates by two or by any even number. The high-pass filter q(n) is also orthonormal to its own translates by two or by any even number. Also, the low-pass
and high-pass filters, translated by two or by any even number, are mutually orthogonal. This is the double-shift orthonormality of the low-pass and high-pass filters in the time domain. In the multiresolution signal analysis the double shift of the low-pass and high-pass filters in the time domain corresponds to filtering followed by down-sampling by a factor of two. The double-shift orthonormality implies that the orthonormal wavelet transform filters p(n) and q(n) must have even lengths.
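The double-shift relations above can be checked directly in the time domain; the sketch below assumes the 4-tap Daubechies pair as an example, with the high-pass filter built by the alternating flip:

```python
import numpy as np

# Double-shift orthonormality in the time domain:
#   sum_k p(k) p(k-2n) = delta(n),  sum_k q(k) q(k-2n) = delta(n),
#   sum_k p(k) q(k-2n) = 0.
s3 = np.sqrt(3.0)
p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
q = np.array([(-1) ** n * p[len(p) - 1 - n] for n in range(len(p))])

def dshift(a, b, n):
    """Correlation of a and b at the even lag 2n (n >= 0)."""
    k = np.arange(2 * n, len(a))              # overlap region
    return np.sum(a[k] * b[k - 2 * n])

print([dshift(p, p, n) for n in range(2)])    # delta(n)
print([dshift(q, q, n) for n in range(2)])    # delta(n)
print([dshift(p, q, n) for n in range(2)])    # zeros
```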
10.6.5 Biorthogonal Filter Bank

In the two-channel filter bank shown in Figure 10.14 the synthesis filters may differ from the analysis filters, which brings more freedom into the filter design. The perfect reconstruction conditions must still be satisfied. The filter bank in this case is biorthogonal. The choice of the synthesis filters P0(z) and Q0(z) as

P0(z) = Q(−z⁻¹)  and  Q0(z) = −P(−z⁻¹)   (10.91)

satisfies the alias cancellation equation

P0(z)P(−z⁻¹) + Q0(z)Q(−z⁻¹) = 0

Thus the synthesis filters are associated with the analysis filters. The low-pass synthesis filter is obtained from the high-pass analysis filter, and they have the same length. The high-pass synthesis filter is obtained from the low-pass analysis filter, and they have the same length. The synthesis filters cancel the alias errors caused by the analysis filters and by the down- and up-sampling.
With the choice (Equation 10.91) for the synthesis filters the perfect reconstruction condition (Equation 10.73),

P0(z)P(z⁻¹) + Q0(z)Q(z⁻¹) = 2z^{−m}

becomes

P0(z)P(z⁻¹) − P0(−z)P(−z⁻¹) = 2z^{−m}   (10.92)

The left-hand side of Equation 10.92 is an odd function of z. Therefore, on the right-hand side of Equation 10.92 the power m of z must be odd.

10.6.5.1 Product Filters

We define the product filter P_r(z) and the normalized product filter P̃_r(z) as

P_r(z) = P0(z)P(z⁻¹)  and  P̃_r(z) = P0(z)P(z⁻¹)z^m   (10.93)

where m is an odd number. Then the perfect reconstruction condition (Equation 10.92) becomes

P̃_r(z) + P̃_r(−z) = 2   (10.94)

The normalized product filter P̃_r(z) has to be a half-band filter. Hence all the even powers of z in P̃_r(z) must have zero coefficients, except the zero-power term, whose coefficient equals one; the odd powers of z cancel each other in the sum P̃_r(z) + P̃_r(−z).

10.6.5.2 Degrees and Symmetries

Because the normalized product filter is half-band and m is an odd number, the product filter P_r(z) = P0(z)P(z⁻¹) must be a polynomial in z of even degree. Its two factors, the low-pass analysis and synthesis filters P(z⁻¹) and P0(z), must both have even degrees or both have odd degrees. In the time domain the low-pass analysis and synthesis filters must both be of odd lengths or both of even lengths.

The symmetric or antisymmetric filters are linear-phase filters. The product filter can be symmetric but cannot be antisymmetric, because the half-band filter has a nonzero zero-power term; also, the low-pass filter cannot have a zero mean: Σ p(k) ≠ 0. Hence the low-pass analysis and synthesis filters P(z⁻¹) and P0(z) can only both be symmetric. In this case the low-pass analysis and synthesis filters in the time domain are both symmetric and have either both odd lengths or both even lengths.

The high-pass synthesis filters are obtained from the analysis filters as shown in Equation 10.91 by changing z to −z, which alters the signs of the odd-indexed coefficients of the high-pass filters in the time domain, q(k) and q0(k). When P(z⁻¹) and P0(z) are symmetric and of odd lengths, changing z to −z does not change the symmetry of the high-pass filters; the high-pass filters are then also both symmetric and of odd lengths. When P(z⁻¹) and P0(z) are symmetric and of even lengths, changing z to −z changes the symmetry to antisymmetry, so that the high-pass filters of even lengths are both antisymmetric.

10.6.5.3 Design of Biorthogonal Filters

The biorthogonal filter bank is designed to satisfy the perfect reconstruction condition (Equation 10.92). First, one chooses the product filter P̃_r(z) satisfying the half-band condition (Equation 10.94). If the analysis and synthesis filters in the time domain have lengths N and N0, respectively, the degrees of the polynomials P(z⁻¹) and P0(z) are N − 1 and N0 − 1, respectively. The degree of the product filter is then N + N0 − 2, which is usually determined at the beginning. Then,
one factorizes the product filter P_r(z) into the low-pass analysis and synthesis filters P(z⁻¹) and P0(z). The high-pass filters are finally determined according to the relation (Equation 10.91). Splitting P_r(z) into P(z⁻¹) and P0(z) leaves some degrees of freedom, which can be used to provide useful properties such as linear-phase (symmetric or antisymmetric) filters.

In one design method the product filter P_r(z) takes the form

P_r(z) = ((1 + z⁻¹)/2)^M F(z)   (10.95)

where the first factor is the Fourier spectrum of a low-pass filter whose corresponding scaling function is a spline function, as will be discussed in Section 10.8.1. Note that z = e^{jω} and |(1 + e^{−jω})/2| = |cos(ω/2)|. Hence this factor ensures that |P_r(ω)| has a zero of order M at z = −1, that is, at ω = π, and has M vanishing derivatives at ω = 0. The second factor F(z) ensures that P_r(z) satisfies the perfect reconstruction condition and is a half-band filter.

The biorthogonal wavelet transform filter banks can also be designed with the lifting steps and the polyphase representation. Readers interested in the polyphase representation, lifting, and spectral factorization are referred to other reference books [24]. The lifting steps can be considered as a balancing operation between the smoothness of the analysis and synthesis filters [24], that is, moving the factor (1 + z⁻¹)/2 from the synthesis filter P0(z) to the analysis filter P(z⁻¹), where P0(z) and P(z⁻¹) are two factors of the same product filter P_r(z) = P0(z)P(z⁻¹). Multiplying P(z⁻¹) by (1 + z⁻¹)/2 corresponds in the time domain to

p_new(k) = [p_old(k) + p_old(k − 1)]/2

The synthesis filter P0(z) is divided by (1 + z⁻¹)/2, so that

P0,new(z) = P0,old(z)/[(1 + z⁻¹)/2]

and

P0,new(z) = 2P0,old(z) − z⁻¹P0,new(z)

Hence the synthesis filter in the time domain is changed as

p0,new(k) = 2p0,old(k) − p0,new(k − 1)

The biorthogonality is preserved because

P_new(z⁻¹)P0,new(z) = P_old(z⁻¹)P0,old(z)

This process also maintains the binary coefficients of the filters [24].

Example

The pair of low-pass analysis and synthesis filters

p1 = [1]  and  p0,7 = [−1 0 9 16 9 0 −1]/16

are symmetric filters with binary coefficients. Balancing will produce the 2/6 and 3/5 filter pairs

p2 = [1 1]/2  and  p0,6 = [−1 1 8 8 1 −1]/8

and

p3 = [1 2 1]/4  and  p0,5 = [−1 2 6 2 −1]/4

These filters are biorthogonal and have binary coefficients.
10.7 Wavelet Theory

The dyadic discrete wavelet decomposition and reconstruction are computed by iterating the discrete low-pass and high-pass filters in the tree algorithm of the multiresolution signal analysis framework. The low-pass and high-pass filters of the orthonormal wavelet transform form a paraunitary 2-band perfect reconstruction (PR) quadrature mirror filter (QMF) bank, which can be designed using subband coding theory.

When computing the discrete wavelet transform one is given a bank of low-pass and high-pass filters to iterate. The wavelet and scaling function are not given by explicit expressions during the wavelet transform computation; for many wavelets they do not even have closed forms. However, in the wavelet theory an extra regularity condition is imposed on the scaling function and the wavelets.

The orthonormal wavelet transform can be applied to continuous functions and therefore serves as a transform tool for analog signals. The multiresolution Laplacian pyramid and subband coding are discrete. The multiresolution wavelet transform algorithm is also essentially discrete, but it leads to a wavelet series expansion that decomposes a continuous function into a series of continuous wavelet functions. The novelties of the wavelet theory with respect to subband theory are the wavelet decomposition of continuous signal functions on the continuous scaling function and wavelet bases; the regularity of the scaling function, the wavelet, and the quadrature mirror filters; the localization of the scaling function and wavelet in both time and frequency domains; the zero-mean condition on the high-pass filter; and the generation of the continuous scaling function and wavelet by iterating the low-pass and high-pass filters.

The basic properties of the orthonormal scaling functions and wavelets are orthonormality and regularity, which apply to the discrete low-pass and high-pass filters as well. Most analysis of the filter properties will be done in the Fourier domain. Knowledge of these properties is useful for designing and using the wavelet bases.
10.7.1 Orthonormality

The scaling function and the wavelets can be orthonormal to their own discrete translates at each resolution level, constructing orthonormal bases, on the condition that the scaling function and wavelet satisfy the orthonormality conditions. The orthonormality conditions can be expressed in the frequency domain.

10.7.1.1 Orthonormality Conditions

Consider a generic basic scaling function φ(t) that is, in most cases, real valued. At a given scale its discrete translations form an orthonormal set {φ(t − k)}, such that

∫ φ(t − k)φ(t − k′)dt = δ_{k,k′}  for k, k′ ∈ Z

The orthonormality of the discrete translations of φ(t) is equivalent to the fact that the autocorrelation of φ(t), evaluated at the discrete time steps (k − k′), must be zero everywhere except at the origin k = k′. The Fourier transform of the autocorrelation of a function is equal to the squared modulus of the Fourier transform of that function. Hence the orthonormality of the basic scaling function in the Fourier domain may be written as

∫ |Φ(ω)|² exp(jnω)dω = 2πδ_{n,0}   (10.96)

where n = k − k′ with n ∈ Z, and Φ(ω) is the Fourier transform of φ(t). Hence the Fourier transform of the autocorrelation, |Φ(ω)|², evaluated at discrete frequency steps n must be equal to zero except at the origin n = 0. We shall prove that the orthonormality condition (Equation 10.96) for the basic scaling function may be expressed as

Σ_n |Φ(ω + 2nπ)|² = 1   (10.97)

The sum of the series of the Fourier spectrum intensity |Φ(ω)|², discretely translated by 2nπ, must be equal to one. Similarly, the orthogonality condition for a basic wavelet is that its Fourier spectrum satisfies

Σ_n |Ψ(ω + 2nπ)|² = 1   (10.98)

10.7.1.2 Poisson Summation Formula

To prove the orthonormality conditions (Equations 10.97 and 10.98) we need the Poisson summation formula

Σ_n f(x + 2πn) = (1/2π) Σ_n F(n) exp(jnx)   (10.99)

If f(x) is a delta function, then Equation 10.99 is the well-known Fourier transform of a comb function. If f(x) is continuous and has a compact support smaller than 2π, then the left-hand side of Equation 10.99 is a periodic function, and the right-hand side of Equation 10.99 is the Fourier series expansion of that periodic function, where F(n) is the Fourier transform of f(x). The Poisson summation formula then corresponds to the simple Fourier series decomposition of the periodic function Σ f(x + 2πn). The Poisson summation formula is valid when f(x) satisfies some regularity conditions and has a compact support such that the series Σ f(x + 2πn) converges to a periodic function of period 2π.

Assume that the Fourier spectrum |Φ(ω)|² of the basic scaling function φ(t) is regular and has a compact support. Letting |Φ(ω)|² play the role of f(x) in the Poisson summation formula (Equation 10.99), we obtain

Σ_n |Φ(ω + 2nπ)|² = (1/2π) Σ_n R(n) exp(jnω)

where R(n) is the Fourier transform of |Φ(ω)|². If Φ(ω) satisfies the orthonormality condition (Equation 10.96), R(n) is equal to zero for n ≠ 0 and equal to 2π for n = 0, which proves the orthonormality condition (Equation 10.97); similarly we obtain Equation 10.98.

10.7.1.3 Discussion

To gain insight into the orthonormality condition (Equation 10.97) we expand a function g(t) onto the orthonormal basis of the translates {φ(t − k)} with k ∈ Z, that is,

g(t) = Σ_k c(k)φ(t − k) = φ(t) * Σ_k c(k)δ(t − k)

where * denotes the convolution and c(k) are the coefficients of the expansion. In the Fourier domain this expansion becomes

G(ω) = Φ(ω) Σ_k c(k) exp(−jkω) = Φ(ω)M(ω)

where M(ω) is defined as

M(ω) = Σ_k c(k) exp(−jkω)

which is periodic with period 2π: M(ω) = M(ω + 2nπ). According to Parseval's relation of the Fourier transform,

(1/2π) ∫₀^{2π} |M(ω)|²dω = Σ_n |c(n)|²
We can compute the energy of g(t) by

∫ |g(t)|²dt = (1/2π) ∫ |Φ(ω)|²|M(ω)|²dω
 = (1/2π) Σ_n ∫_{2πn}^{2π(n+1)} |Φ(ω)|²|M(ω)|²dω
 = (1/2π) ∫₀^{2π} |M(ω)|² Σ_n |Φ(ω + 2nπ)|²dω

where we used the property that M(ω + 2nπ) is periodic with period 2π. From the orthogonality condition (Equation 10.97) we can write

∫ |g(t)|²dt = Σ_n |c(n)|²   (10.100)

This is the energy conservation relation for the expansion onto the orthonormal scaling function and wavelet bases, and it is similar to the energy conservation relation (Equation 10.18) for the continuous wavelet transform in Section 10.2.1. According to the wavelet frame theory in Section 10.3.2, Equation 10.100 means the frame is tight; the discrete scaling function basis behaves like an orthonormal basis.
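The orthonormality condition (Equation 10.97) can be illustrated numerically for the Haar scaling function, whose spectrum is |Φ(ω)|² = sin²(ω/2)/(ω/2)². A partial-sum sketch (the tail decays only as 1/n², so the accuracy here is modest):

```python
import numpy as np

# Eq. 10.97 for the Haar scaling function:
#   sum_n |Phi(w + 2 n pi)|^2 = sum_n sin^2(w/2) / (w/2 + n pi)^2 = 1
w = 0.7                                   # arbitrary test frequency
n = np.arange(-5000, 5001)
terms = np.sin(w / 2) ** 2 / (w / 2 + n * np.pi) ** 2
print(terms.sum())                        # close to 1
```

The same partial sum evaluated at any other ω also approaches 1, which is exactly the statement that the integer translates of the Haar box function are orthonormal.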
10.7.2 Two Scale Relations in Frequency Domain

The two scale relations in the multiresolution analysis are the basic relations between the continuous scaling function φ(t) and wavelet ψ(t) and the discrete low-pass and high-pass filters p(n) and q(n):

φ(t) = Σ_k p(k)φ(2t − k)
ψ(t) = Σ_k q(k)φ(2t − k)

In the multiresolution wavelet decomposition the low-pass filter p(n) plays the role of the weighting function and the high-pass filter q(n) is used to compute the detail information. The Fourier transform of the two scale relations gives

Φ(ω) = Σ_k p(k) ∫ φ(2t − k) exp(−jωt)dt
 = [(1/2) Σ_k p(k) exp(−jkω/2)] Φ(ω/2)
 = P(ω/2)Φ(ω/2)   (10.101)

and

Ψ(ω) = Q(ω/2)Φ(ω/2)   (10.102)

where P(ω) and Q(ω) are the Fourier transforms of the low-pass and high-pass filter sequences, as defined in Equations 10.67 and 10.68. Both P(ω) and Q(ω) are periodic functions of period 2π. According to Equation 10.101 the Fourier transform Φ(ω) of the coarser resolution scaling function φ(t) is the product of the twice wider Fourier transform Φ(ω/2) of the finer resolution scaling function φ(2t) and that of the low-pass filter, P(ω/2). Equation 10.101 is a recursion equation. The recursion can be repeated m times to yield Φ(ω/2), Φ(ω/4), and so on, which gives

Φ(ω) = [Π_{i=1}^{m} P(ω/2^i)] Φ(ω/2^m)   (10.103)

When m approaches infinity and 1/2^m tends to zero, we have the Fourier transform of the continuous scaling function expressed as

Φ(ω) = Π_{i=1}^{∞} P(ω/2^i)   (10.104)

provided that the scaling function φ(t) is normalized with respect to L¹(R) as

∫ φ(t)dt = Φ(0) = 1

Similarly, we can replace the factor Φ(ω/2) on the right-hand side of Equation 10.102 with the infinite product derived in Equation 10.104 and obtain

Ψ(ω) = Q(ω/2) Π_{i=2}^{∞} P(ω/2^i)   (10.105)

It can be proved that if for some ε > 0 the sequence of interscale coefficients p(n) satisfies

Σ_n |p(n)| |n|^ε < ∞

then the infinite product on the right-hand side of Equation 10.104 converges pointwise and the convergence is uniform. That is, only a mild decay of the low-pass filter p(n) is required. This is a very mild condition because in most practical cases the low-pass filters p(n) are FIR filters with only a limited number of nonzero coefficients. Equations 10.104 and 10.105 express the relations between the Fourier transforms of the continuous scaling function and wavelet and the infinite products of the Fourier transforms of the low-pass and high-pass filters.
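The infinite product (Equation 10.104) can be evaluated numerically. For the Haar filter, P(ω) = (1 + e^{−jω})/2 in the normalization of Equation 10.101, and the truncated product should converge to the known closed form Φ(ω) = e^{−jω/2} sin(ω/2)/(ω/2):

```python
import numpy as np

# Truncated infinite product (Eq. 10.104) for the Haar low-pass filter.
def P(w):
    return 0.5 * (1.0 + np.exp(-1j * w))

w = np.linspace(0.1, 10.0, 50)
phi = np.ones_like(w, dtype=complex)
for i in range(1, 40):                    # 39 factors is plenty here
    phi *= P(w / 2.0 ** i)

exact = np.exp(-1j * w / 2) * np.sin(w / 2) / (w / 2)
print(np.max(np.abs(phi - exact)))        # ~0
```

The rapid convergence reflects the uniform convergence claimed above: each additional factor P(ω/2^i) differs from 1 by O(ω/2^i).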
10.7.2.1 Filters Orthonormality

Using the Fourier domain two scale relation (Equation 10.101) and the orthonormality condition (Equation 10.97) we can write

Σ_n |Φ(2ω + 4nπ)|² = |P(ω)|² Σ_n |Φ(ω + 2nπ)|² = |P(ω)|²
Σ_n |Φ(2ω + 2(2n + 1)π)|² = |P(ω + π)|² Σ_n |Φ(ω + (2n + 1)π)|² = |P(ω + π)|²

which correspond to the summations of Φ(ω) translated by 4nπ and by (2n + 1)2π, respectively. Adding these two equations and applying again the orthonormality condition (Equation 10.97) to the summations of Φ(ω), we have

|P(ω)|² + |P(ω + π)|² = 2   (10.106)

This is the orthonormality condition for the square modulus of the low-pass filter P(ω) in the Fourier domain. Similarly, the orthonormality condition for the high-pass filter Q(ω) in the Fourier domain is

|Q(ω)|² + |Q(ω + π)|² = 2   (10.107)

Both Equations 10.106 and 10.107 are identical to Equation 10.79 introduced in Section 10.6.3.

10.7.2.2 Cross-Filter Orthogonality

The scaling functions and the wavelets must be mutually orthogonal within the same scale:

∫ φ(t − n′)ψ(t − k)dt = 0  for all n′, k ∈ Z

In the Fourier domain the condition for the cross-filter orthogonality can be written as

∫ Φ(ω)Ψ*(ω) exp(jnω)dω = 0   (10.108)

where n = n′ − k and n ∈ Z. Using the Poisson summation formula (Equation 10.99), and assuming that the product Φ(ω)Ψ*(ω) is regular and of finite support, we let it play the role of f(x) in the Poisson summation formula; by the cross-filter orthogonality condition (Equation 10.108) the Fourier transform of Φ(ω)Ψ*(ω) is equal to zero, and

Σ_n Φ(ω + 2nπ)Ψ*(ω + 2nπ) = 0   (10.109)

We separate the translations by 4nπ and by (2n + 1)2π of the product Φ(ω)Ψ*(ω) and rewrite Equation 10.109 as

Σ_n Φ(2ω + 4nπ)Ψ*(2ω + 4nπ) + Σ_n Φ(2ω + 2(2n + 1)π)Ψ*(2ω + 2(2n + 1)π) = 0

On substituting the Fourier domain two scale relations (Equations 10.101 and 10.102) for Φ(ω) and Ψ*(ω) into the above expression and using the periodicity of period 2π of P(ω) and Q(ω), we have

P(ω)Q*(ω) Σ_n |Φ(ω + 2nπ)|² + P(ω + π)Q*(ω + π) Σ_n |Φ(ω + (2n + 1)π)|² = 0

Using the orthonormality condition for Φ(ω) described in Equation 10.97, we have

P(ω)Q*(ω) + P(ω + π)Q*(ω + π) = 0
P*(ω)Q(ω) + P*(ω + π)Q(ω + π) = 0
This is the cross-filter orthogonality condition on the low-pass and high-pass filters P(ω) and Q(ω) in the Fourier domain, which is identical to Equation 10.80 introduced in Section 10.6.3.

10.7.2.3 Paraunitary Matrix

We observe the orthonormality conditions for the low-pass and high-pass filters and the cross-filter orthogonality in terms of the z-transform as

P(z)P(z⁻¹) + P(−z)P(−z⁻¹) = 2
Q(z)Q(z⁻¹) + Q(−z)Q(−z⁻¹) = 2
P(z)Q(z⁻¹) + P(−z)Q(−z⁻¹) = 0   (10.110)
P(z⁻¹)Q(z) + P(−z⁻¹)Q(−z) = 0

and choose the alternating flip filter bank as a solution of the cross-filter orthogonality, as in Equation 10.81:

Q(z) = (−z)^{−(N−1)} P(−z⁻¹)   (10.111)

where N is an arbitrary even number. That leads to

|Q(z)|² = |P(−z)|²  and  |Q(ω)|² = |P(ω + π)|²   (10.112)

The conjugate quadrature filters |P(z)|² and |Q(z)|² are mirror filters, as defined in Equation 10.84.
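On the unit circle the conditions above reduce to Equations 10.106 and the cross-filter orthogonality, which can be checked numerically; the sketch below again assumes the 4-tap Daubechies pair as an example:

```python
import numpy as np

# Fourier-domain checks:
#   |P(w)|^2 + |P(w+pi)|^2 = 2
#   P(w) Q*(w) + P(w+pi) Q*(w+pi) = 0
s3 = np.sqrt(3.0)
p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
q = np.array([(-1) ** n * p[len(p) - 1 - n] for n in range(len(p))])

def fr(h, w):
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * np.outer(w, n)), axis=1)

w = np.linspace(0, np.pi, 33)
P0, P1, Q0, Q1 = fr(p, w), fr(p, w + np.pi), fr(q, w), fr(q, w + np.pi)
print(np.max(np.abs(np.abs(P0) ** 2 + np.abs(P1) ** 2 - 2)))   # ~0
print(np.max(np.abs(P0 * Q0.conj() + P1 * Q1.conj())))         # ~0
```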
The first two equations in Equation 10.110 are

|P(z)|² + |P(−z)|² = 2  and  |Q(z)|² + |Q(−z)|² = 2   (10.113)

From Equations 10.112 and 10.113 we have

|P(z)|² + |Q(z)|² = 2  and  |P(ω)|² + |Q(ω)|² = 2
|P(−z)|² + |Q(−z)|² = 2  and  |P(ω + π)|² + |Q(ω + π)|² = 2   (10.114)

The filter pair {P(z), Q(z)} are power complementary as defined in Equation 10.90. The orthogonality conditions described in Equation 10.110 and the power complementary properties described in Equation 10.114 are equivalent to the requirement that the 2 × 2 modulation matrix defined in Equation 10.74 be paraunitary:

[ P(z⁻¹)  P(−z⁻¹) ] [ P(z)   Q(z)  ]     [ 1  0 ]
[ Q(z⁻¹)  Q(−z⁻¹) ] [ P(−z)  Q(−z) ]  = 2 [ 0  1 ]   (10.115)

The paraunitary properties are useful for designing the compactly supported orthonormal scaling function and wavelet bases. All the properties of orthonormality, cross-filter orthonormality, alternating flip filters, conjugate quadrature mirror filters, power complementary filters, and paraunitary filter banks, and the relations (Equations 10.106 through 10.115), were introduced and discussed in Section 10.6 from the perfect reconstruction property of the filter bank. Here, however, the orthonormality of P(ω) and Q(ω) and the cross-filter orthonormality are obtained from the orthonormality of the scaling function and wavelet bases.

Example: Orthonormality of the Haar Bases

Let us consider the orthonormality conditions for the Haar bases as an example. We know that the Haar bases are orthonormal at every scale. The Fourier transforms of the Haar scaling function and wavelet are

Φ(ω) = e^{−jω/2} sin(ω/2)/(ω/2)
Ψ(ω) = j e^{−jω/2} sin²(ω/4)/(ω/4)

We have Φ(ω)|_{ω=0} = 1 for the scaling function and Ψ(ω)|_{ω=0} = 0 for the wavelet. It can be verified that the orthonormality conditions

Σ_n |Φ(ω + 2nπ)|² = 1  and  Σ_n |Ψ(ω + 2nπ)|² = 1

are satisfied.

The two-scale relations of the Haar scaling functions and wavelets were obtained in Section 10.5.2. On substituting the interscale coefficients of the Haar bases, p(n) = 1/√2 for n = 0, 1 and p(n) = 0 otherwise, and q(0) = 1/√2, q(1) = −1/√2 and q(n) = 0 otherwise, according to Equations 10.63 and 10.64, into the Fourier transforms of p(n) and q(n), we obtain the quadrature mirror filters of the Haar bases as

P(ω) = 2^{1/2} cos(ω/2) exp(−jω/2)
Q(ω) = j2^{1/2} sin(ω/2) exp(−jω/2)

It is easy to verify that the Haar quadrature mirror filters satisfy all the orthonormality conditions, because

|P(ω)|² + |P(ω + π)|² = 2[cos²(ω/2) + sin²(ω/2)] = 2
|Q(ω)|² + |Q(ω + π)|² = 2[sin²(ω/2) + cos²(ω/2)] = 2

and

P(ω)Q*(ω) + P(ω + π)Q*(ω + π) = −2j cos(ω/2) sin(ω/2) + 2j sin(ω/2) cos(ω/2) = 0

We also have

|P(ω)|² + |Q(ω)|² = 2  and  |P(ω + π)|² + |Q(ω + π)|² = 2

and the matrix

[ P(ω)      Q(ω)     ]
[ P(ω + π)  Q(ω + π) ]

is paraunitary.

10.7.3 Orthogonal Filters in Time Domain

10.7.3.1 Double-Shift Orthonormality

When the basic scaling function and the wavelet satisfy the orthonormality conditions, their discrete translates with integer translation steps form two orthonormal bases, and those two bases are mutually orthogonal:

⟨φ_{0,k}, φ_{0,n}⟩ = δ_{k,n}
⟨ψ_{0,k}, ψ_{0,n}⟩ = δ_{k,n}
⟨ψ_{0,k}, φ_{0,n}⟩ = 0
In Section 10.5.4 we obtained (Equations 10.53 and 10.54) from the two scale relations

⟨φ_{1,k}, φ_{0,n}⟩ = 2^{−1/2} p(n − 2k)
⟨ψ_{1,k}, φ_{0,n}⟩ = 2^{−1/2} q(n − 2k)

Hence, at the resolution level i = 1 the inner products of two translated scaling functions and wavelets may be written in terms of p(n) and q(n) as

⟨φ_{1,k}, φ_{1,k′}⟩ = Σ_{n,m} p(n − 2k)p(m − 2k′)⟨φ_{0,n}, φ_{0,m}⟩ = Σ_n p(n − 2k)p(n − 2k′) = δ_{k,k′}
⟨ψ_{1,k}, ψ_{1,k′}⟩ = Σ_{n,m} q(n − 2k)q(m − 2k′)⟨φ_{0,n}, φ_{0,m}⟩ = Σ_n q(n − 2k)q(n − 2k′) = δ_{k,k′}
⟨ψ_{1,k}, φ_{1,k′}⟩ = Σ_{n,m} q(n − 2k)p(m − 2k′)⟨φ_{0,n}, φ_{0,m}⟩ = Σ_n q(n − 2k)p(n − 2k′) = 0

The double-shift orthonormality and the cross-filter orthonormality have also been obtained from the paraunitary matrix properties of the filter bank in Section 10.6.4.

10.7.3.2 Equal Contribution Constraint

The low-pass filter p(n) in the time domain should satisfy the equal contribution constraint, stipulating that all the nodes at one resolution level contribute the same total amount to the next level, so that the sum of all the weights for a given node n is independent of n. Hence the weighting function should satisfy

Σ_n p(2n) = Σ_n p(2n + 1)

In the example of multiresolution signal analysis shown in Figure 10.8 the odd-numbered nodes and the even-numbered nodes in the data sequence c_{i−1}(n) have two different connections with the low-pass filter p(n), because of the down-sampling by 2 of c_i(k). The even nodes in c_0(n) are connected to c_1(n) with the weighting factors p(−2), p(0), and p(2); the odd nodes are connected to c_1(n) with the weighting factors p(−1) and p(1). When the preceding relation is satisfied, the sums of the weights are equal for the odd and even nodes in c_1(n).

From the orthonormality condition (Equation 10.97),

Σ_k |Φ(ω + 2kπ)|² = 1

we find that if the scaling function is normalized such that its mean value is unity,

∫ φ(t)dt = Φ(0) = 1

then at ω = 0 we have Φ(2kπ) = 0 for k = ±1, ±2, . . . . Therefore Σ_k |Φ[2(2k + 1)π]|² = 0. Using the two scale relation

Φ(2ω) = P(ω)Φ(ω)

at ω = (2k + 1)π we have

Σ_k |P[(2k + 1)π]|² |Φ[(2k + 1)π]|² = 0

As Φ[(2k + 1)π] ≠ 0, we must have P[(2k + 1)π] = 0, and equivalently for the low-pass filter in the time domain, according to the Fourier transform (Equation 10.67),

P(π) = Σ_k (−1)^k p(k) = 0

but

P(0) = Σ_k p(k) = 2

Addition and subtraction of the two preceding equations yield, respectively,

Σ_k p(2k) = 1  and  Σ_k p(2k + 1) = 1

This is the equal contribution condition for the multiresolution signal analysis filters.
10.7.4 Wavelet and Subband Filters

The discrete orthonormal wavelet transform low-pass and high-pass filters are simply the 2-band paraunitary perfect reconstruction quadrature mirror filters developed in subband coding theory. The novelties of the wavelet transform are the following.

1. Continuous function bases: The wavelet transform is defined on the scaling function and wavelet bases, which are continuous function bases of continuous variables, so that the wavelet transform can serve as a mathematical transform tool for analog signal functions. The subband coding technique is based on discrete filters and is applied to discrete data.

2. Zero-mean high-pass filter: Applying the wavelet admissibility condition Ψ(ω)|_{ω=0} = 0 to the Fourier domain two-scale relation (Equation 10.105), it follows that

Q(ω)|_{ω=0} = 0   (10.116)

The high-pass filter q(n) in the time domain must have a zero mean.
3. Regularity of the scaling function and wavelet: On substituting the zero-mean property of the high-pass filter, Q(ω)|_{ω=0} = 0, into the quadrature mirror filter property (Equation 10.112), we have P(ω)|_{ω=π} = 0. The low-pass filter P(ω) must therefore contain at least one factor (1 + e^{−jω}), or (1 + z⁻¹), which equals zero at ω = π, that is, at z = −1. The regularity of the low-pass filter P(ω) ensures that the iterations of the low-pass filter converge, as will be discussed in Section 10.7.5.
10.7.5 Regularity

The regularity of the wavelet basis functions is an important property of the wavelet transform that results in the localization of the wavelet transform in both time and frequency domains. In Section 10.2.2 we discussed the regularity condition for the continuous wavelet transform. For the wavelet transform coefficients to decay as fast as s^{n+1/2} with an increase of (1/s), where s is the scale factor, the wavelet ψ(t) must have its first n + 1 moments, of orders 0, 1, . . . , n, equal to zero; equivalently, the Fourier transform Ψ(ω) of the wavelet must have its derivatives of orders up to n equal to zero at zero frequency ω = 0.

In this section we shall discuss the regularity condition on the orthonormal scaling functions and wavelets, and on the quadrature mirror filters P(ω) and Q(ω), in the multiresolution analysis framework. We shall discuss the regularity condition in a slightly different way from that in Section 10.2.2: here the regularity conditions are applied to ensure convergence of the reconstruction from the orthonormal wavelet decomposition. However, the regularity conditions obtained in both approaches are equivalent.

10.7.5.1 Smoothness Measure

The regularity is a measure of smoothness of the scaling functions and wavelets. The regularity of the scaling function is determined by the decay of its Fourier transform Φ(ω) and is defined as the maximum value of r such that

|Φ(ω)| ≤ c/(1 + |ω|)^r  for ω ∈ R

Hence |Φ(ω)| has a polynomial decay as ω^{−M}, where M ≤ r. This in turn implies that φ(t) is (M − 1)-times continuously differentiable, and both φ(t) and ψ(t) are smooth functions.

10.7.5.2 Convergence of Wavelet Reconstruction

The reconstruction from the wavelet series decomposition is described by Equation 10.66:

c₀ = Σ_{i=1}^{M} (L₀)^{i−1} H₀ d_i + (L₀)^M c_M
where the synthesis filtering operators applied to a sequence a(k), L0 and H0, are defined as 1 X p(n 2k)a(k) (L0 a)(n) ¼ pffiffiffi 2 k 1 X q(n 2k)a(k) (H0 a)(n) ¼ pffiffiffi 2 k
Note that in the reconstruction it is the low-pass filter L₀ that is iterated. The problem of the convergence of the wavelet reconstruction may be formulated for a particular example, where the original function to be decomposed is the scaling function itself. In this case the wavelet series coefficients must be c_M = δ_{0,n} and d₁ = ⋯ = d_M = 0, where the sequence δ_{0,n} has only one nonzero entry, at n = 0. The reconstruction formula becomes

c₀(n) = (L₀)^M c_M

It is therefore important to study the behavior of the iterated filtering operator (L₀)^i c_M for large i. Ideally we want (L₀)^i c_M to converge to a reasonably regular function as i tends to infinity. However, as i approaches infinity, (L₀)^i c_M can converge to a continuous function, to a function with finite discontinuities, or even to a fractal function. The sequence (L₀)^i c_M may also not converge at all. The condition for the reconstruction to converge is the regularity of the scaling function.

With the graphic representation shown in Figure 10.15, we represent the sequence c_M(n) at the resolution level i = M by a rectangular function η₀(t):

η₀(t) = 1 for −1/2 ≤ t ≤ 1/2, and 0 otherwise

Assume that the sequence c_M(n) has a time clock rate of 1. At the next finer resolution level, i = M−1, the sequence c_{M−1}(n) is

c_{M−1}(n) = (L₀ c_M)(n) = Σ_k p(n − 2k) δ_{0,k} = p(n)

In fact, to compute c_{M−1}(n) we first increase the time clock rate so that c_M(n) has a time interval of length 1/2. The c_M(n) is then convolved with the discrete filter p(n), which also has the time interval of 1/2. The amplitude of c_{M−1}(n) is exactly equal to p(n), as shown in Figure 10.15. We represent c_{M−1}(n) by a piecewise constant function η₁(t) that is constant over intervals of length 1/2. It is easy to see that η₁(t) may be expressed as

η₁(t) = Σ_n p(n) η₀(2t − n)
Continuing to compute c_{M−2}(n) = (L₀)² c_M(n), we put a zero between each node of the sequence c_{M−1}(n) and increase the time
10-40
Transforms and Applications Handbook
FIGURE 10.15 Reconstruction from c_M(n) = 1 for n = 0 and c_M(n) = 0 for n ≠ 0, and the corresponding rectangle function η₀(t); the panels show the pairs (c_M, η₀), (c_{M−1}, η₁), and (c_{M−2}, η₂). The time clock rate is equal to 1 for c_M(n), 1/2 for c_{M−1}(n), and 1/4 for c_{M−2}(n). (From Daubechies, I., Commun. on Pure and Appl. Math., XLI, 909, 1988. With permission.)
clock rate. Thus, at this resolution level both the data sequence c(n) and the filter p(n) have the time interval of 1/4. Their convolution yields the sequence c_{M−2}(n). We represent c_{M−2}(n) by the piecewise constant function η₂(t), with a step of length 1/4, as shown in Figure 10.15. It is easy to verify that

η₂(t) = Σ_n p(n) η₁(2t − n)
Similarly, η_i(t) = (L₀)^i c_M(n) is a piecewise constant function with a step of length 2^(−i), and

η_i(t) = Σ_n p(n) η_{i−1}(2t − n)

As i approaches infinity, (L₀)^i c_M can converge to η_∞ with

lim_{i→∞} η_i(t) = φ(t)

on the condition that the scaling function φ(t) is regular, which we shall discuss next.

10.7.5.3 Construction of Scaling Function

The above recursive process with the low-pass filter operator L₀, associated with the low-pass filter p(n), is a reconstruction of the scaling function φ(t). Starting from the rectangular function η₀, the recursion gives the values of φ(t) at the half-integers. Then the recursion gives φ(t) at the quarter-integers and, ultimately, at all dyadic points t = k/2^i. Finer and finer detail of φ(t) is achieved as the number of iterations i approaches infinity. Therefore, the basic scaling function φ(t) is constructed from the discrete low-pass filter p(n). This process is useful for computing the continuous scaling and wavelet functions φ(t) and ψ(t) from the discrete low-pass and high-pass filters p(n) and q(n).
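The recursion η_i(t) = Σ_n p(n) η_{i−1}(2t − n) (the cascade construction) can be sketched numerically; at each pass the refinement mask is dilated by a power of two, which is one of several equivalent ways to iterate the recursion. The function name `cascade` and the normalization m(n) = √2·p(n) are my assumptions; with the Haar filter the iteration must reproduce the box function exactly, which gives a self-check:

```python
import numpy as np

def cascade(p, levels):
    """Approximate the scaling function phi(t) on the dyadic grid t = k / 2**levels.

    Iterates eta_i(t) = sum_n m(n) * eta_{i-1}(2t - n) with the refinement
    mask m(n) = sqrt(2) * p(n), starting from the box function eta_0.
    Each pass convolves with the mask dilated by 2**(i-1); a sketch,
    not the book's exact numerical recipe.
    """
    m = np.sqrt(2.0) * np.asarray(p, dtype=float)
    v = m.copy()                               # eta_1, sampled with step 1/2
    for i in range(2, levels + 1):
        md = np.zeros((len(m) - 1) * 2 ** (i - 1) + 1)
        md[:: 2 ** (i - 1)] = m                # dilate the mask by 2**(i-1)
        v = np.convolve(v, md)
    return v

# Haar low-pass filter p = [1, 1]/sqrt(2): the cascade reproduces the
# box function phi(t) = 1 on [0, 1)
phi = cascade(np.array([1.0, 1.0]) / np.sqrt(2.0), levels=6)
```

Feeding in a longer filter (for example the Daubechies 4-tap filter of Section 10.8.3) produces the corresponding scaling function at all dyadic points, exactly as described above.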
10.7.5.4 Regularity of Quadrature Mirror Filter

The regularity of the scaling function should not only ensure that the reconstructed scaling function η_∞(t) converges, but also ensure that (1) η_∞(t) is sufficiently regular, i.e., the Fourier transform of η_∞(t) has sufficient decay, and (2) η_i(t) converges to η_∞(t) pointwise as i approaches infinity. Daubechies [13]
10-41
Wavelet Transform
has proven that the above two conditions can be satisfied when the Fourier transform P(ω) of the low-pass filter p(n) satisfies

P(ω) = ((1 + e^(−jω))/2)^M F(e^(−jω))   (10.117)

or, written in terms of the z-transform, as

P(z^(−1)) = ((1 + z^(−1))/2)^M F(z^(−1))

where M > 1 and F(z^(−1)) is a polynomial in z^(−1), or in e^(−jω), with real coefficients, satisfying conditions such that the infinite product

∏_{k=1}^{∞} F(z^(−1/2^k))   (10.118)

converges and is bounded. Because the low-pass filter p(n) is an FIR filter, with p(n) ≠ 0 only for n = 0, 1, …, N−1, its Fourier transform P(ω) is a polynomial in e^(−jω), or in z^(−1), of degree N−1. Hence, F(z^(−1)) is a polynomial in z^(−1) of degree N−1−M, where N is the length of p(n).

According to Equation 10.117 the quadrature mirror filter P(ω) must have M zeros at ω = π, or z = −1. We know that P(ω) must have at least one zero at ω = π because, according to the wavelet admissibility condition for the high-pass filter (Equation 10.116),

Q(ω)|_{ω=0} = 0

and the quadrature mirror filter condition (Equation 10.112),

|P(ω + π)|² = |Q(ω)|²

Hence,

P(ω)|_{ω=π} = 0

so P(ω) must contain at least one factor (1 + e^(−jω)), and the power M in Equation 10.117 must be at least equal to one. However, the regularity condition (Equation 10.117) requires P(ω) to have more zeros, with M > 1, to ensure convergence of the wavelet reconstruction.

10.7.5.5 Regularity of Scaling Function

The regularity condition (Equation 10.117) implies that

|P(ω)| = |cos(ω/2)|^M |F(e^(−jω))|

On substituting the preceding relation into the infinite product form (Equation 10.104) of the Fourier transform Φ(ω) of the scaling function, we obtain the regularity condition on Φ(ω) as

|Φ(ω)| = ∏_{i=1}^{∞} |cos(ω/2^(i+1))|^M ∏_{i=1}^{∞} |F(e^(−jω/2^i))|   (10.119)

but

cos(ω/2) = sin ω / (2 sin(ω/2))

The first infinite product term in Equation 10.119 is therefore

lim_{K→∞} ∏_{i=1}^{K} (sin(ω/2^i) / (2 sin(ω/2^(i+1))))^M = lim_{K→∞} (sin(ω/2) / (2^K sin(ω/2^(K+1))))^M = (sin(ω/2)/(ω/2))^M

and

|Φ(ω)| = |sin(ω/2)/(ω/2)|^M ∏_{i=1}^{∞} |F(e^(−jω/2^i))|   (10.120)

The first term on the right-hand side of Equation 10.120 contributes the decay of Φ(ω) as ω^(−M). The second term is bounded according to the condition (Equation 10.118). The number M of zeros of the quadrature mirror filter P(ω) at ω = π, or at z = −1, is a measure of the flatness of P(ω) at ω = π. From the regularity conditions (Equations 10.117 and 10.120) we see that the decay of the scaling function Φ(ω) as ω^(−M) and the flatness of the quadrature mirror filter P(ω) are equivalent: when Φ(ω) decays as ω^(−M), the quadrature mirror filter P(ω) has M zeros at ω = π. M is a measure of the regularity of the scaling function.

10.7.5.6 Smoothness in Time Domain

The regularity also implies the smoothness of the low-pass filter p(n) in the time domain. We rewrite Equation 10.117 as

P(ω) = exp(−jMω/2) cos^M(ω/2) F(e^(−jω))

Its rth derivative is

d^r P(ω)/dω^r = cos^(M−r)(ω/2) g_r(ω)   (10.121)

where cos^(M−r)(ω/2) is the minimum power of cos(ω/2) that the rth derivative contains and g_r(ω) collects the residual terms of the derivative. The term cos^(M−r)(ω/2) makes the rth derivative of P(ω) equal to zero at ω = π for r = 0, 1, …, M−1. On the other hand, because P(ω) is the Fourier transform of p(n), we have

d^r P(ω)/dω^r = Σ_n (−jn)^r p(n) exp(−jnω)
Then,

d^r P(ω)/dω^r |_{ω=π} = (−j)^r Σ_n n^r (−1)^n p(n) = 0   (10.122)

for r = 0, 1, …, M−1. Hence, the low-pass filter p(n) is a smooth filter. Substituting the alternating flip filter expression

Q(z) = (−z)^(−(N−1)) P(−z^(−1))

where N is an even number, into the regularity condition (Equation 10.117), we have

Q(z) = (−z)^(−(N−1)) ((1 − z^(−1))/2)^M F(−z^(−1))

or, in terms of the Fourier transform frequency ω,

Q(ω) = sin^M(ω/2) g(ω)

where g(ω) collects the residual terms. The sin^M(ω/2) term ensures that Q(ω) has vanishing derivatives at ω = 0:

d^r Q(ω)/dω^r |_{ω=0} = 0

As Q(ω) is the Fourier transform of q(n), the rth derivative of Q(ω) being equal to zero at ω = 0 for r = 0, 1, …, M−1 gives

Σ_n n^r q(n) = 0   (10.123)

The high-pass filter q(n) thus has vanishing moments of order r, for r = 0, 1, …, M−1. Both the low-pass filter p(n) and the high-pass filter q(n) are FIR filters: p(n) ≠ 0 only for n = 0, 1, …, N−1, where N is the length of p(n). The compactness of p(n) and q(n), described by N−1, is in contrast with the regularity of P(ω) and Q(ω) and of the scaling and wavelet functions, described by M, because according to the regularity condition (Equation 10.117) M ≤ N−1. There is a trade-off between the compactness of the filters and the regularity of the scaling function and wavelet. We will discuss this issue in Section 10.8.3.

10.8 Some Orthonormal Wavelet Bases

In this section we summarize some orthonormal wavelet bases. In general, the orthonormal wavelet bases generated in the multiresolution analysis framework are associated with orthonormal scaling function bases. Different orthonormal scaling function and wavelet bases are designed to satisfy the orthonormality condition and the regularity condition in slightly different ways. More filter banks and wavelet filter coefficients may be found in the MATLAB Wavelet Toolbox, in WaveLab (http://playfair.stanford.edu/wavelab), and in many other software packages.

10.8.1 B-Spline Bases

One of the basic methods for constructing orthonormal wavelet families involves the B-spline functions, which are familiar in approximation theory for interpolating a given sequence of data points. In this section we give a brief description of the multiresolution analysis with the B-spline scaling function and wavelet bases, and the related low-pass and high-pass filters.

10.8.1.1 B-Spline

The B-spline of degree n, where n is an arbitrary positive integer, is generated by repeated (n+1)-fold convolution [14] of the rectangular function

b⁰(t) = 1 for 0 ≤ t < 1, and 0 otherwise   (10.124)

The nth degree B-spline is then

b^n(t) = (b^(n−1) * b⁰)(t) = ∫ b^(n−1)(t − x) b⁰(x) dx = ∫₀¹ b^(n−1)(t − x) dx   (10.125)
Equation 10.125 is recursive. Because of the repeated convolutions, the nth degree B-spline has a support of size (0, n+1) in time. This support increases with the degree n. The B-spline is a bell-shaped function, symmetric with respect to the center of its support, t = (n+1)/2. There is also the central B-spline, which is defined so that it is symmetric with respect to the origin. The B-spline of zero degree, b⁰(t), has been used as the Haar scaling function. It is not even differentiable. The B-spline of first degree is the linear spline, which is a triangle function, called the hat function. Its first derivative is not continuous. The B-spline of second degree is the quadratic spline. It has a continuous first derivative. The B-spline of degree 3 is the cubic spline. It has continuous first and second derivatives. The higher-order B-splines with degree n > 1 are smooth bell-shaped functions and have continuous derivatives of orders up to n−1. According to the definition, the Fourier transform of the B-spline of degree n is
B^n(ω) = ((1 − e^(−jω))/(jω))^(n+1) = e^(−j(n+1)ω/2) (sin(ω/2)/(ω/2))^(n+1)   (10.126)
Its modulus, |B^n(ω)|, decays as 1/ω^(n+1). When n increases, the B-spline becomes more regular but its support becomes less compact. There is then the typical trade-off between the regularity and the compactness of the B-splines.
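The repeated-convolution definition (Equation 10.125) is easy to check numerically; a small Python sketch (the function name `bspline` and the sampling resolution are my choices) builds b^n by discretized convolution of boxes:

```python
import numpy as np

def bspline(n, res=64):
    """Sample the degree-n B-spline b^n(t) by (n+1)-fold convolution of the
    box b^0 (Equation 10.125), discretized with `res` samples per unit step.
    Each convolution carries the integration step 1/res."""
    box = np.ones(res)
    b = box.copy()
    for _ in range(n):
        b = np.convolve(b, box) / res
    return b

b1 = bspline(1)   # hat (triangle) function, supported on (0, 2)
b3 = bspline(3)   # cubic B-spline, supported on (0, 4)
```

The samples confirm the properties stated above: unit area, support (0, n+1), symmetry about the center of the support, and a peak value of 1 for the hat function.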
10.8.1.2 Spline Interpolation

The polynomial splines are linear combinations of the translated B-splines. The polynomial spline f^n(t) can be used to interpolate a given sequence of data points {s(k)}:

f^n(t) = Σ_k c(k) b^n(t − k)

When the polynomial coefficients equal the signal data, c(k) = s(k), f^n(t) is a cardinal spline interpolator that has piecewise polynomial segments of degree n. Hence, the B-spline of degree n is the interpolation function. When n = 0, the sequence of points {s(k)} with equal intervals is interpolated by b⁰(t), which is a staircase function, as shown in the case of the Haar scaling function approximation. When n = 1, the points are connected by a straight line segment in each interval [k, k+1]. When n ≥ 2, the data points are interpolated by a function that is, in each interval, a polynomial of degree n.

10.8.1.3 Spline Scaling Function

The B-spline itself can be used as the scaling function. The scaling function can be dilated and translated to form the spline scaling function bases at different resolution levels. However, the B-spline of degree n has a support of (0, n+1), and its integer translates do not necessarily form an orthonormal basis within a resolution level. When the B-spline of degree m−1 is used as the scaling function, Φ(ω) = B^(m−1)(ω), the corresponding low-pass filter P(ω) can be derived as follows. From the two-scale relation (Equation 10.101)

Φ(2ω) = P(ω) Φ(ω)

and Equation 10.126 we have

((1 − e^(−2jω))/(2jω))^m = P(ω) ((1 − e^(−jω))/(jω))^m

and

P(ω) = ((1 + e^(−jω))/2)^m = ((1 + z^(−1))/2)^m   (10.127)

The coefficients of the corresponding low-pass filter, p(k), in the time domain can be obtained from Equation 10.127 by the inverse z-transform.

Example

When the cubic spline is the scaling function in the multiresolution analysis, the corresponding low-pass filter is

P(z) = ((1 + z^(−1))/2)⁴ = (1 + 4z^(−1) + 6z^(−2) + 4z^(−3) + z^(−4))/16

Hence, the low-pass filter in time is p(k) = (1/16)[1, 4, 6, 4, 1].

10.8.2 Lemarie and Battle Wavelet Bases

The Lemarie and Battle wavelet bases are a family of orthonormal scaling functions and wavelets associated with the B-spline, with the additional condition that the Lemarie–Battle scaling functions and wavelets be orthonormal within a given resolution level. The orthonormality is obtained by imposing orthonormality constraints on the B-spline scaling functions and wavelets.

10.8.2.1 Nonorthonormal Spline Basis

When the spline of degree m−1, B^(m−1)(ω), is used as the scaling function, the orthonormality condition is not satisfied, because from Equation 10.126 we have

Σ_k |B^(m−1)(ω + 2πk)|² = Σ_k sin^(2m)((ω/2) + πk) / ((ω/2) + πk)^(2m) = sin^(2m)(ω/2) Σ_{2m}(ω/2)   (10.128)

where, with x = ω/2, we define

Σ_{2m}(x) = Σ_k 1/(x + πk)^(2m)

From complex analysis we have

cot x = lim_{n→∞} Σ_{k=−n}^{n} 1/(x + πk)

We can differentiate this identity 2m−1 times to obtain

Σ_{2m}(x) = Σ_k 1/(x + πk)^(2m) = −(1/(2m−1)!) d^(2m−1) cot x / dx^(2m−1)   (10.129)

Substituting Equation 10.129 into Equation 10.128, we obtain

Σ_k |B^(m−1)(ω + 2πk)|² = −(sin^(2m)(x)/(2m−1)!) d^(2m−1) cot x / dx^(2m−1)   (10.130)

where x = ω/2. Hence, for the zero-degree spline, m = 1, we have

Σ_k |B⁰(ω + 2πk)|² = 1
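The periodized energy sums can be evaluated numerically by truncating the sum over k; a sketch (function names are mine) for the Haar case m = 1 and the linear-spline case m = 2:

```python
import numpy as np

def B_mod(n, w):
    """|B^n(w)| = |sin(w/2)/(w/2)|^(n+1), from Equation 10.126."""
    return np.abs(np.sinc(w / (2.0 * np.pi))) ** (n + 1)  # np.sinc(y) = sin(pi y)/(pi y)

def periodized_sum(n, w, K=3000):
    """sum_k |B^n(w + 2 pi k)|^2, truncated to |k| <= K."""
    k = np.arange(-K, K + 1)
    return np.sum(B_mod(n, w + 2 * np.pi * k) ** 2)

w = 1.3
haar = periodized_sum(0, w)       # m = 1: the sum equals 1 for all w
linear = periodized_sum(1, w)     # m = 2: the sum falls below 1
target = 1.0 / 3.0 + (2.0 / 3.0) * np.cos(w / 2.0) ** 2
```

The m = 2 value agrees with the closed form 1/3 + (2/3)cos²(ω/2) quoted next in the text, confirming that the linear-spline translates are not orthonormal.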
The Haar scaling function is orthonormal. However, for the spline with m = 2, we have

Σ_k |B¹(ω + 2πk)|² = 1/3 + (2/3) cos²(ω/2)

which is between 1/3 and 1. The linear spline scaling function basis is not orthonormal. In general, the higher-degree spline scaling function bases are not orthogonal.

10.8.2.2 Lemarie–Battle Basis

The Lemarie and Battle multiresolution basis [15] is built from the B-spline. Lemarie found a scaling function that is associated with the (m−1)th degree splines and whose integer translates form an orthonormal basis within the same resolution level. The Lemarie–Battle scaling function is given by its Fourier transform as

Φ(ω) = (1/ω^m) [Σ_k 1/(ω + 2πk)^(2m)]^(−1/2)   (10.131)

It is easy to verify that the orthonormality condition

Σ_k |Φ(ω + 2πk)|² = 1

is satisfied. The Lemarie–Battle scaling function Φ(ω) defined in Equation 10.131 can be computed using Equation 10.129.

Example

The Lemarie–Battle scaling function basis from the cubic spline, with m−1 = 3, can be obtained from Equations 10.129 and 10.131 as [25]

Φ(ω) = (1/ω⁴) [Σ_k (ω + 2πk)^(−8)]^(−1/2)
     = 16√315 sin⁴(ω/2) / (ω⁴ [315 cos⁶(ω/2) + 525 cos⁴(ω/2) sin²(ω/2) + 231 cos²(ω/2) sin⁴(ω/2) + 17 sin⁶(ω/2)]^(1/2))

10.8.2.3 Quadrature Mirror Filters

The low-pass quadrature mirror filter P(ω) can be obtained from the two-scale relation in the Fourier domain (Equation 10.101)

Φ(2ω) = P(ω) Φ(ω)

According to Equation 10.131 we obtain

P(ω) = Φ(2ω)/Φ(ω) = (1/2^m) [Σ_k (ω + 2πk)^(−2m) / Σ_k (2ω + 2πk)^(−2m)]^(1/2)   (10.132)

The Fourier transform of the corresponding orthonormal wavelet can also be derived from the two-scale relation

Ψ(ω) = Q(ω/2) Φ(ω/2)

where the conjugate quadrature mirror filter Q(ω) satisfying the orthonormality condition is obtained from the alternating flip property as described in Equation 10.82,

Q(z) = (−z)^(−(N−1)) P(−z^(−1))

or

Q(ω) = e^(−j(N−1)(ω+π)) P*(ω + π)

where N is an even number. Hence,

Ψ(ω) = e^(−j(N−1)((ω/2)+π)) P*((ω/2) + π) Φ(ω/2)   (10.133)

From the expression (Equation 10.129) for the sums Σ_{2m}, the Fourier transforms Φ(ω) and Ψ(ω) of the scaling function and wavelet, and the quadrature mirror filters P(ω) and Q(ω), can be calculated. Table 10.1 gives the first 12 coefficients of the discrete low-pass filter p(n), the impulse response of P(ω), useful for the wavelet series decomposition. The coefficients of the
TABLE 10.1 Low-Pass Filter p(k) of the Lemarie–Battle Wavelet Basis Associated with the Cubic B-Spline

k    p(k)     q(k)        k     p(k)     q(k)
0    0.542    0.189       6     0.012    0.005
1    0.307    0.099       7     0.013    0.054
2    0.035    0.312       8     0.006    0.027
3    0.078    0.099       9     0.006    0.018
4    0.023    0.189      10     0.003    0.017
5    0.030    0.161      11     0.002    0.000
FIGURE 10.16 Lemarie–Battle wavelet ψ(t) and its Fourier transform Ψ(ω), associated with the second-order B-spline. (From Sheng, Y. et al., Opt. Eng., 31, 1840, 1992. With permission.)
high-pass filter q(n), obtained from the low-pass filter p(n) with Equation 10.133, are also given. Figure 10.16 shows the Lemarie–Battle wavelet ψ(t) and its Fourier transform Ψ(ω), which is given by Equation 10.133. The wavelet shown is associated with the linear spline (degree 1); hence the wavelet consists of straight line segments between the discrete nodes. When m increases, the Lemarie–Battle scaling functions and wavelets associated with higher-order B-splines become smoother. The Lemarie–Battle wavelet is symmetric about t = 1/2 and has no compact support; the wavelet ψ(t) decays slowly with time t.
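The orthonormalization in Equation 10.131 can be checked numerically: because the denominator sum is 2π-periodic, the periodized energy of Φ must equal 1. A sketch with truncated sums (the function name `phi_LB` and the truncation limits are my choices):

```python
import numpy as np

def phi_LB(m, w, K=3000):
    """Lemarie-Battle scaling function of Equation 10.131:
    Phi(w) = w^(-m) * [sum_k (w + 2 pi k)^(-2m)]^(-1/2), sums truncated at |k| <= K."""
    k = np.arange(-K, K + 1)
    S = np.sum(((w + 2 * np.pi * k) ** 2) ** (-m))   # square first: safe for w+2*pi*k < 0
    return np.abs(w) ** (-m) / np.sqrt(S)

# Orthonormality check for the cubic-spline case, m = 4
w = 0.9
m = 4
total = sum(phi_LB(m, w + 2 * np.pi * k) ** 2 for k in range(-40, 41))
```

The total comes out equal to 1 to numerical precision, confirming Σ_k |Φ(ω + 2πk)|² = 1 for the Lemarie–Battle basis.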
10.8.3 Daubechies Bases

The Daubechies wavelet basis [13] is a family of orthonormal, compactly supported scaling and wavelet functions that have the maximum regularity, and the maximum flatness at ω = 0 and ω = π, for a given length of support of the quadrature mirror filters. The Daubechies basis is not given in closed form. The decomposition and the reconstruction are implemented by iterating the discrete low-pass and high-pass filters p(n) and q(n).

10.8.3.1 Maximum Flatness Filter

The Daubechies scaling and wavelet functions are built from the regularity condition (Equation 10.117) and the orthonormality condition (Equation 10.106) on the quadrature mirror filter P(ω) in the Fourier domain, expressed as

P(ω) = ((1 + e^(−jω))/2)^M F(e^(−jω))

and

|P(ω)|² + |P(ω + π)|² = 1

The square modulus of the low-pass filter, |P(ω)|², is half-band. The length N of the discrete low-pass filter p(n) is chosen first; the discrete high-pass filter q(n) has the same length N. The low-pass filter p(n) with a compact support of length N is an FIR filter, also called an N-tap filter, with p(n) ≠ 0 only for n = 0, 1, …, N−1. Its Fourier transform P(ω) is a polynomial in e^(−jω) of degree N−1, according to the definition (Equation 10.67):

P(ω) = Σ_{k=0}^{N−1} p(k) exp(−jkω)

In the regularity condition (Equation 10.117), M > 1 is the regularity and F(e^(−jω)) is a polynomial in e^(−jω) with real coefficients. Since P(ω) is a polynomial in e^(−jω) of degree N−1, the polynomial F(e^(−jω)) is of degree N−1−M. The quadrature mirror filter P(ω) and its impulse response p(n) are determined by the choice of the polynomial F(e^(−jω)). We consider the square modulus of Equation 10.117:

|P(ω)|² = (cos²(ω/2))^M |F(e^(−jω))|²   (10.134)

where |P(ω)|² should be a polynomial in cos²(ω/2) and sin²(ω/2) of degree N−1. Because the polynomial F(e^(−jω)) has real-valued coefficients, F*(e^(−jω)) = F(e^(jω)), so |F(e^(−jω))|² is a symmetric polynomial and can be rewritten as a polynomial in cos ω or, equivalently, as a polynomial in sin²(ω/2), written G(sin²(ω/2)):

|F(e^(−jω))|² = G(sin²(ω/2))

which is of degree

L = N − 1 − M   (10.135)
Introducing the variable y = cos²(ω/2), Equation 10.134 can be written as

|P(ω)|² = y^M G(1 − y)

|P(ω + π)|² = (1 − y)^M G(y)   (10.136)

Combining these regularity conditions with the orthonormality condition, we obtain

y^M G(1 − y) + (1 − y)^M G(y) = 1

This equation can be solved for G(y), which is a polynomial in y of minimum degree M−1. Daubechies chose the minimum degree for G(y), L = M−1. Compared with Equation 10.135, we then have

M = N/2
Hence, the power M of the term (cos²(ω/2))^M takes its maximum value. The regularity M of the Daubechies scaling function is then maximal and increases linearly with the width of the support, i.e., with the length N of the discrete filters p(n) and q(n), because M = N/2. In Equation 10.136 the term (cos(ω/2))^M ensures that |P(ω)| has M zeros at ω = π and M vanishing derivatives at ω = π; the term G(sin²(ω/2)) ensures that |P(ω + π)| has L = M−1 zeros at ω = 0 and M−1 vanishing derivatives at ω = 0. This corresponds to the unique maximally flat magnitude response of the frequency responses of the Daubechies low-pass and high-pass filters. Daubechies solved for P(ω) from |P(ω)|² by spectral factorization.

10.8.3.2 Daubechies Filters in Time Domain

The values of the coefficients p_M(n) for the cases M = 2, 3, …, 10 are listed in Table 10.2, where the filter length is N = 2M. For the most compact support, M = 2 and N = 4, the discrete low-pass filter is given in closed form as

p(0) = (1 + √3)/(4√2) = 0.483
p(1) = (3 + √3)/(4√2) = 0.836
p(2) = (3 − √3)/(4√2) = 0.224
p(3) = (1 − √3)/(4√2) = −0.129   (10.137)

The discrete high-pass filter q(n) can be obtained from p(n) by the alternating flip relation

TABLE 10.2 The Low-Pass Filters p_M(n) of the Daubechies Wavelet Bases with Support N = 2M and M = 2, 3, …, 10 (coefficients listed in order n = 0, 1, …, 2M−1)

M = 2: 0.482962913145, 0.836516303738, 0.224143868042, −0.129409522551

M = 3: 0.332670552950, 0.806891509311, 0.459877502118, −0.135011020010, −0.085441273882, 0.035226291882

M = 4: 0.230377813309, 0.714846570553, 0.630880767930, −0.027983769417, −0.187034811719, 0.030841381836, 0.032883011667, −0.010597401785

M = 5: 0.160102397974, 0.603829269797, 0.724308528438, 0.138428145901, −0.242294887066, −0.032244869585, 0.077571493840, −0.006241490213, −0.012580751999, 0.003335725285

M = 6: 0.111540743350, 0.494623890398, 0.751133908021, 0.315250351709, −0.226264693965, −0.129766867567, 0.097501605587, 0.027522865530, −0.031582039318, 0.000553842201, 0.004777257511, −0.001077301085

M = 7: 0.077852054085, 0.396539319482, 0.729132090846, 0.469782287405, −0.143906003929, −0.224036184994, 0.071309219267, 0.080612609151, −0.038029936935, −0.016574541631, 0.012550998556, 0.000429577973, −0.001801640704, 0.000353713800

M = 8: 0.054415842243, 0.312871590914, 0.675630736297, 0.585354683654, −0.015829105256, −0.284015542962, 0.000472484574, 0.128747426620, −0.017369301002, −0.044088253931, 0.013981027917, 0.008746094047, −0.004870352993, −0.000391740373, 0.000675449406, −0.000117476784

M = 9: 0.038077947364, 0.243834674613, 0.604823123690, 0.657288078051, 0.133197385825, −0.293273783279, −0.096840783223, 0.148540749338, 0.030725681479, −0.067632829061, 0.000250947115, 0.022361662124, −0.004723204758, −0.004281503682, 0.001847646883, 0.000230385764, −0.000251963189, 0.000039347320

M = 10: 0.026670057901, 0.188176800078, 0.527201188932, 0.688459039454, 0.281172343661, −0.249846424327, −0.195946274377, 0.127369340336, 0.093057364604, −0.071394147166, −0.029457536822, 0.033212674059, 0.003606553567, −0.010733175483, 0.001395351747, 0.001992405295, −0.000685856695, −0.000116466855, 0.000093588670, −0.000013264203

Source: Daubechies, I., Commun. on Pure and Appl. Math., XLI, 909, 1988. With permission.
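The M = 2 row of Table 10.2 can be reproduced from the closed form of Equation 10.137, and the maximum-flatness property (M = N/2 = 2 zeros of P at ω = π) checked with the derivative formula of Section 10.7.5.6; a short Python sketch:

```python
import numpy as np

# Daubechies M = 2 (4-tap) low-pass filter from the closed form of Equation 10.137
s3 = np.sqrt(3.0)
p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

# First row of Table 10.2 (M = 2)
table_M2 = np.array([0.482962913145, 0.836516303738, 0.224143868042, -0.129409522551])

def P(w, r=0):
    """r-th derivative of P(w) = sum_k p(k) e^{-jkw}:
    d^r P / dw^r = sum_k (-jk)^r p(k) e^{-jkw}."""
    k = np.arange(4)
    return np.sum((-1j * k) ** r * p * np.exp(-1j * k * w))

# Maximum flatness at w = pi: P and P' vanish there, P'' does not
flat = [abs(P(np.pi, r)) for r in range(3)]
```

The filter is also unit-norm, Σ p(k)² = 1, consistent with the orthonormality condition |P(ω)|² + |P(ω+π)|² = 1.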
FIGURE 10.17 Daubechies scaling function φ(t) (a) and wavelet ψ(t) (b), and their Fourier transforms Φ(ω) and Ψ(ω), with the compact support N = 4. (From Daubechies, I., Commun. on Pure and Appl. Math., XLI, 909, 1988. With permission.)
q(n) = (−1)^n p(N − 1 − n)

where N is an even number and n = 0, 1, …, N−1. It is easy to verify that the translates of p(n), and those of q(n), by double integer steps are orthonormal. Figure 10.17a and b show the Daubechies scaling function and wavelet with the most compact support, M = 2 and N = 4, and their Fourier transforms. The scaling function is generated from the quadrature mirror filter p(n) by the reconstruction method discussed in Section 10.5.5. These functions have the most compact support, but they are neither smooth nor regular. When the length of the filters p(n) and q(n) increases, the Daubechies scaling functions and wavelets become smoother and more regular, at the cost of a larger number of nonzero coefficients of p(n) and q(n), which results in larger support widths for the scaling functions and wavelets. Another important feature of Figure 10.17 is the lack of any symmetry or antisymmetry axis for the Daubechies scaling function and wavelet. Daubechies has shown that it is impossible to obtain an orthonormal and compactly supported wavelet that is either symmetric or antisymmetric about any axis, except for the trivial Haar wavelets.
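The alternating flip relation and the resulting filter properties are easy to verify numerically for the 4-tap Daubechies filter; a sketch (the helper name `alternating_flip` is mine):

```python
import numpy as np

s3 = np.sqrt(3.0)
p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

def alternating_flip(p):
    """High-pass filter q(n) = (-1)^n p(N - 1 - n), N even."""
    n = np.arange(len(p))
    return (-1.0) ** n * p[::-1]

q = alternating_flip(p)

# Q(0) = sum q(n) = 0 (admissibility), sum n q(n) = 0 (vanishing moment,
# Equation 10.123 with M = 2), and p is orthogonal to q
checks = (q.sum(), (np.arange(4) * q).sum(), np.dot(p, q))
```

All three quantities vanish to machine precision, which is exactly the pair of sum rules p(3) − p(2) + p(1) − p(0) = 0 and 0·p(3) − 1·p(2) + 2·p(1) − 3·p(0) = 0 used in the wavelet-matrix discussion of Section 10.9.1.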
10.9 Fast Wavelet Transform

The wavelet transform generally cannot be evaluated in closed form; only for a few simple functions does the wavelet transform have an analytical solution, as given in Section 10.2.4. For most functions the wavelet transform must be computed numerically. In the multiresolution analysis framework, the orthonormal wavelet transform is implemented by iterating the quadrature mirror filters in the tree algorithm. In the computer, both the function to be transformed and the iterated quadrature mirror filters are discrete, so the orthonormal wavelet series decomposition and reconstruction are essentially discrete. The wavelet tree algorithm permits the fast wavelet transform. One of the main reasons for the recent success of the wavelet transform is the existence of this fast algorithm, which requires only O(L) operations, where L is the size of the initial data. The discrete wavelet decomposition and reconstruction algorithms are discussed in Sections 10.5.4 and 10.5.5. In this section we implement the algorithm by matrix operations, introduce the discrete wavelet matrix, and discuss the number of operations and the time–bandwidth product of the wavelet transform output.
10.9.1 Wavelet Matrices

The discrete orthonormal wavelet transform is a linear operation. Given a vector of data whose length is an integer power of two, the wavelet decomposition and reconstruction are numerically computed by recursively applying two conjugate quadrature mirror filters, p(n) and q(n), which are compactly supported FIR filters with a finite number N of nonzero coefficients. The degree of the Laurent polynomial of the transfer function of
the filters is N−1. The wavelet decomposition and reconstruction are computed with recursive applications of the wavelet filter bank in the tree algorithm. In the following we show an example of the wavelet decomposition and reconstruction with the Daubechies wavelets of N = 4, called DAUB4, in matrix form. Let f(n) be a data vector of length L. We generate a wavelet transform matrix [17] from the translated discrete filters:

[c(1), d(1), c(2), d(2), …, c(L/2), d(L/2)]ᵀ = W [f(1), f(2), …, f(L)]ᵀ   (10.138)

where the rows of W are

row 1:    p(0)  p(1)  p(2)  p(3)  0  …  0
row 2:    p(3) −p(2)  p(1) −p(0)  0  …  0
row 3:    0  0  p(0)  p(1)  p(2)  p(3)  0  …  0
row 4:    0  0  p(3) −p(2)  p(1) −p(0)  0  …  0
⋮
row L−1:  p(2)  p(3)  0  …  0  p(0)  p(1)
row L:    p(1) −p(0)  0  …  0  p(3) −p(2)

with wraparound in the last two rows. The high-pass filter q(n) is obtained from the low-pass filter p(n) by the cross-filter orthogonality (Equation 10.82)

q(n) = (−1)^n p(N − 1 − n)

With N = 4 and n = 0, 1, 2, 3, we have q(0) = p(3), q(1) = −p(2), q(2) = p(1), and q(3) = −p(0). The high-pass filter q(n) has the property that

p(3) − p(2) + p(1) − p(0) = 0

corresponding to Q(0) = 0 (Equation 10.116), obtained from the orthonormality condition, and

0·p(3) − 1·p(2) + 2·p(1) − 3·p(0) = 0

corresponding to Equation 10.123, obtained for the regularity of the wavelets. It is easy to see that in the wavelet transform matrix the orthonormality between the double-integer translates of p(n) and those of q(n), and the cross-filter orthogonality between p(n) and q(n), are ensured because

p(2)p(0) + p(3)p(1) = 0

In the wavelet transform matrix the odd rows are the low-pass filters p(n); the low-pass filter in the third row is translated by two with respect to that in the first row, and so on. The even rows are the high-pass filters q(n), which likewise are translated by two from the second row to the fourth row, and so on. The wavelet transform matrix acts on a column vector of data f(n), producing two related correlations between the data vector f(n) and the filters p(n) and q(n): the discrete approximation c(n) and the discrete wavelet coefficients d(n), respectively. It is easy to verify that the low-pass filter p(n) is a smoothing filter, as described in Equation 10.122. With the coefficients of the Daubechies bases given in Equation 10.137 we have

p(0)² + p(1)² + p(2)² + p(3)² = 1

It is also possible to reconstruct the original data f(n) of length L from the approximation sequence c(n) and the wavelet coefficients d(n), both of length L/2. From the preceding equations we see that the wavelet transform matrix in Equation 10.138 is orthonormal, so its inverse is just the transposed matrix:

row 1:    p(0)  p(3)  0  …  0  p(2)  p(1)
row 2:    p(1) −p(2)  0  …  0  p(3) −p(0)
row 3:    p(2)  p(1)  p(0)  p(3)  0  …  0
row 4:    p(3) −p(0)  p(1) −p(2)  0  …  0
⋮
row L−1:  0  …  0  p(2)  p(1)  p(0)  p(3)
row L:    0  …  0  p(3) −p(0)  p(1) −p(2)   (10.139)
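The DAUB4 matrix of Equation 10.138 and its orthonormality can be checked directly; a small Python sketch (the function name `daub4_matrix` is mine):

```python
import numpy as np

def daub4_matrix(L):
    """Build the L x L DAUB4 analysis matrix of Equation 10.138: odd rows are
    double-shifted copies of p(n), even rows of q(n), with wraparound in the
    last two rows. L must be even and >= 4."""
    s3 = np.sqrt(3.0)
    p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
    q = (-1.0) ** np.arange(4) * p[::-1]          # q(n) = (-1)^n p(3 - n)
    W = np.zeros((L, L))
    for i in range(L // 2):
        for n in range(4):
            W[2 * i, (2 * i + n) % L] += p[n]     # approximation (c) row
            W[2 * i + 1, (2 * i + n) % L] += q[n] # detail (d) row
    return W

W = daub4_matrix(8)
# Orthonormality: the inverse (Equation 10.139) is just the transpose
err = np.abs(W @ W.T - np.eye(8)).max()
```

The residual `err` is at machine-precision level, which is exactly the statement that the transposed matrix of Equation 10.139 inverts the transform.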
The discrete wavelet decomposition is computed by applying the wavelet transform matrix (Equation 10.138) hierarchically, with down-sampling by a factor of two after each iteration. The down-sampling by two is implemented by a permutation of the output vector on the left-hand
side of Equation 10.138 as shown in the following diagram with N ¼ 16. 0
f (1)
1
0
c(1)
1
0
c(1)
1
The first stage of the discrete wavelet transform (Equation 10.124), for a data vector of length 16, applies the wavelet transform matrix of Equation 10.138 to produce interleaved approximation coefficients c and wavelet coefficients d, which are then permuted so that the approximations come first:

$$
\begin{pmatrix} f(1)\\ f(2)\\ f(3)\\ f(4)\\ \vdots\\ f(15)\\ f(16) \end{pmatrix}
\longrightarrow
\begin{pmatrix} c(1)\\ d(1)\\ c(2)\\ d(2)\\ \vdots\\ c(8)\\ d(8) \end{pmatrix}
\xrightarrow{\text{permute}}
\begin{pmatrix} c(1)\\ c(2)\\ \vdots\\ c(8)\\ d(1)\\ d(2)\\ \vdots\\ d(8) \end{pmatrix}
\qquad (10.124)
$$

The following stages (Equation 10.140) apply the wavelet transform matrix only to the approximation coefficients, while the wavelet coefficients already generated pass through unchanged:

$$
\begin{pmatrix} c(1)\\ \vdots\\ c(8)\\ d(1)\\ \vdots\\ d(8) \end{pmatrix}
\longrightarrow
\begin{pmatrix} C(1)\\ D(1)\\ \vdots\\ C(4)\\ D(4)\\ d(1)\\ \vdots\\ d(8) \end{pmatrix}
\xrightarrow{\text{permute}}
\begin{pmatrix} C(1)\\ \vdots\\ C(4)\\ D(1)\\ \vdots\\ D(4)\\ d(1)\\ \vdots\\ d(8) \end{pmatrix}
\longrightarrow
\begin{pmatrix} C'(1)\\ D'(1)\\ C'(2)\\ D'(2)\\ D(1)\\ \vdots\\ D(4)\\ d(1)\\ \vdots\\ d(8) \end{pmatrix}
\xrightarrow{\text{permute}}
\begin{pmatrix} C'(1)\\ C'(2)\\ D'(1)\\ D'(2)\\ D(1)\\ \vdots\\ D(4)\\ d(1)\\ \vdots\\ d(8) \end{pmatrix}
\qquad (10.140)
$$

If the length of the data vector is greater than 16, there will be more stages of applying Equation 10.140 and permuting. The final output vector is always a vector with two approximation coefficients C'(1) and C'(2) at the lowest resolution and a hierarchy of wavelet coefficients: D'(1), D'(2) for the lowest resolution, D(1) through D(4) for the next higher resolution, d(1) through d(8) for a still higher resolution, etc. Notice that once the wavelet coefficients d are generated, they simply propagate through all subsequent stages without further computation.

The discrete wavelet reconstruction can be computed by the reversed procedure, starting with the lowest resolution level in the hierarchy and working from right to left in the diagram (Equation 10.140); the inverse wavelet transform matrix (Equation 10.139) is used instead of Equation 10.138. The wavelet transform matrix method gives a clear picture of the discrete wavelet decomposition and reconstruction. The wavelet transform can also be computed by other methods, iterating the discrete filters in the tree algorithm without using the wavelet transform matrix.

10.9.2 Number of Operations
We consider now the number of operations required for the discrete orthonormal wavelet transform of a vector of data. Let L be the length of the data vector and N the length of the FIR filters p(n) and q(n). As the wavelet transform is a local operation, usually N ≪ L. At the highest frequency band, the first stage of decomposition requires 2NL multiplies and adds. In the tree algorithm, at the next coarser frequency band the vector length of the discrete approximation c(n) is reduced to L/2; therefore, the next stage of decomposition requires 2N(L/2) multiplies and adds. The total number of operations of the orthonormal wavelet decomposition is then

$$
2NL + 2N\frac{L}{2} + 2N\frac{L}{4} + \cdots = 2NL\left(1 + \frac{1}{2} + \frac{1}{4} + \cdots\right) \le 4NL.
$$

As N is a small number, the orthonormal wavelet transform requires only O(L) operations. This is even faster than the FFT for the Fourier transform, which requires O(L log₂ L) multiplies and adds owing to its global nature.
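The stage-and-permute scheme and the O(L) operation count can be illustrated with a short sketch. This is an illustrative sketch only, using the 2-tap orthonormal Haar filters (the simplest choice, not the longer filters discussed earlier in the chapter); the function names are ours.

```python
import math

def wavelet_stage(v):
    # One matrix-and-permute stage (cf. Equations 10.124 and 10.140) with the
    # 2-tap orthonormal Haar filters: pairwise approximation c and detail d,
    # then permute so all c precede all d.
    s = 1 / math.sqrt(2)
    c = [s * (v[2 * i] + v[2 * i + 1]) for i in range(len(v) // 2)]
    d = [s * (v[2 * i] - v[2 * i + 1]) for i in range(len(v) // 2)]
    return c + d

def wavelet_transform(f):
    # Iterate the stage on the approximation part only, down to two
    # approximation coefficients; the detail coefficients d, D, ... simply
    # propagate through the later stages without further computation.
    out = list(f)
    n = len(out)
    while n > 2:
        out[:n] = wavelet_stage(out[:n])
        n //= 2
    return out

coeffs = wavelet_transform([1.0] * 16)
```

Because each stage works on a vector half as long as the previous one, the total work is bounded by a geometric series, in agreement with the 4NL bound above. For a constant input the detail coefficients vanish and the energy of the input (here 16) is preserved, as expected of an orthonormal transform.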
10.9.3 Time–Bandwidth Product
The wavelet transform is a mapping of a function of time, in the 1-D case, to the 2-D time-scale joint representation. At first glance, the time-bandwidth product of the wavelet transform output would be the square of that of the signal. In the multiresolution analysis framework, however, the size of the data vector is reduced by a factor of two in moving from one frequency band to the next coarser resolution frequency band. The time-bandwidth product is also reduced by a factor of two. If the original data vector c₀(n) has L samples, then in the tree algorithm for the wavelet decomposition shown in Figure 10.11 the first-stage wavelet coefficient output d₁(n) has L/2 samples, that of the second stage has L/4 samples, etc. Let the length of the data vector be L = 2^K; the total time-bandwidth product of the wavelet decomposition, including all the wavelet coefficients dᵢ(n) with i = 1, 2, . . . , K − 1 and the lowest resolution approximation c_{K−1}(n), is equal to the original data vector length:

$$
L\left(\frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^{K-1}}\right) + \frac{L}{2^{K-1}} = L.
$$
10-50
10.10 Applications of the Wavelet Transform
In this section we present some popular applications of the wavelet transform, with simple examples: multiresolution transient signal analysis and detection, image edge detection, and image compression.
10.10.1 Multiresolution Analysis of Power System Signal
In this section we show an example of multiresolution analysis for a simple transient signal. Transient signals in the power system are nonstationary time-varying voltages and currents that can occur as a result of changes in the electrical configuration and in industrial and residential loads, and of a variety of disturbances on transmission lines, including capacitor switching, lightning strikes, and short circuits. The waveform data of the transient signals are captured by digital transient recorders. Analysis and classification of the power system disturbances can help to provide more stability and efficiency in power delivery, by switching transmission lines to supply additional current or switching capacitor banks to balance inductive loads, and can help to prevent system failures. The power system transient signals contain a range of frequencies from a few hertz to impulse components with microsecond rise times. The normal 60 Hz sinusoidal voltage and current waveforms are interrupted or superimposed with impulses, oscillations, and reflected waves. An experienced power engineer can visually analyze the waveform data to determine the type of system disturbance. However, the Fourier analysis, with its global operation nature, is not as appropriate for the transient signals as the time-scale joint representation provided by the wavelet transform.

10.10.1.1 Multiresolution Wavelet Decomposition of Transient Signal
The wavelet transform provides a decomposition of power system transient signals into meaningful components in multiple frequency bands, and the digital wavelet transform is computationally efficient [18]. Figure 10.1 in Section 10.1 shows the wavelet components in the multiple frequency bands. At the top is the input voltage transient signal. The disturbance is caused by a capacitor bank switching on a three-phase transmission line.
Below the first line are the wavelet components as a function of the scale and time shift. The scales of the discrete wavelets increase by a factor of two successively from SCALE 1 to SCALE 64, corresponding to the dyadic frequency bands. The vertical axis in each discrete scale is the normalized magnitude of the signal component in voltage. The three impulses in high frequency band SCALE 1 correspond to the successive closing of each phase of the three-phase capacitor bank. SCALE 2 and SCALE 4 are the bands of system response frequencies. SCALE 4 contains most energy from the resonant frequency caused by the addition
of a capacitor bank to a primarily inductive circuit. The times of occurrence of all those components can be determined on the time axis. SCALE 64 contains only the basic continuous 60 Hz signal. The wavelet analysis decomposes the power system transient into meaningful components, whose modulus maxima can then be used for further classification. The nonorthogonal multiresolution analysis wavelets with FIR quadratic spline wavelet filters were used in this example of application.

10.10.1.2 Shift Invariance
One problem in this application, and in many other applications of the dyadic wavelet transform, is the lack of shift invariance. The dyadic wavelet transform is not shift invariant. In the wavelet decomposition the analysis low-pass and high-pass filters are double-shifted by two, as described by Equation 10.60. If the input signal is shifted by one sampling interval, the output of the dyadic wavelet transform is not simply shifted by the same distance; the values of the wavelet coefficients can change dramatically. This aliasing error is caused by the down-sampling by a factor of two in the multiresolution signal analysis and is discussed in Section 10.6.2. This is a disadvantage of the dyadic wavelet transform, because many applications, such as real-time signal analysis and pattern recognition, require a shift-invariant wavelet transform. In the above example of application, the orthonormal quadrature mirror filters were found to be sensitive to translations of the input. Hence, nonorthonormal quadratic spline wavelets have been used.
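The shift variance is easy to demonstrate with a toy computation. The following sketch (ours, using the Haar high-pass filter as the simplest illustrative choice) computes first-stage detail coefficients for an impulse and for the same impulse shifted by one sample:

```python
import math

def detail_coeffs(x):
    # First-stage detail coefficients d(n): the high-pass filter is
    # double-shifted by two (i.e., decimated), here the 2-tap Haar filter.
    s = 1 / math.sqrt(2)
    return [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]

x = [0.0] * 8
x[3] = 1.0                  # unit impulse at n = 3
y = [0.0] * 8
y[4] = 1.0                  # the same impulse shifted by one sample

dx = detail_coeffs(x)       # impulse lands in the sample pair (2, 3)
dy = detail_coeffs(y)       # impulse lands in the sample pair (4, 5)
```

The two detail sequences are not shifted copies of one another: the nonzero coefficient moves to a different position and flips sign, which is exactly the shift variance caused by the down-sampling by two.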
10.10.2 Signal Detection
The detection of weak signals embedded in a stronger stationary stochastic process, such as the detection of radar and sonar signals in zero-mean Gaussian white noise, is a well-studied problem. If the shape of the expected signal is known, the correlation and the matched filter provide the optimum solution in terms of the signal-to-noise ratio in the output correlation. In the detection of speech or biomedical signals, the exact shape of the signal is unknown. The Fourier spectrum analysis can be effective for those applications only when the expected signal has spectral features that are clearly distinguished from the noise. The effectiveness of the Fourier spectrum analysis is generally proportional to the ratio of the signal to noise energy. For short-time, low-energy transients, the change in the Fourier spectrum is not easily detected. Such transient signals can be detected by the wavelet transform. An example of electrocardiogram signal detection follows [19]. Figure 10.18 shows the clinical electrocardiogram with normal QRS peaks and an abnormality called ventricular late potentials (VLP) right after the second QRS peak. The amplitude of the VLP signal is about 5% of the QRS peaks. Its duration is about 0.1 s, or a little less than 10% of the pulse period. The VLPs are weak signals, swamped by noise, and they occur somewhat randomly. Figure 10.19 shows the magnitude of the continuous wavelet transform with the cos-Gaussian wavelets of scale s = 1/11,
FIGURE 10.18 (Left) Normal electrocardiogram, (right) electrocardiogram with VLP abnormality. (From Combes, J. M. et al., Wavelets, 2nd ed. Springer-Verlag, Berlin, 1990. With permission.)
FIGURE 10.19 Wavelet transform of the abnormal electrocardiogram for scale factors s = 1/11, 1/16, 1/22. The bulge to the right of the second QRS peak for s = 1/16 indicates the presence of the VLP. (From Combes, J. M. et al., Wavelets, 2nd ed. Springer-Verlag, Berlin, 1990. With permission.)
1/16, and 1/22. The peak after the second QRS spike observed for s = 1/16 is very noticeable and gives a clear indication of the presence of the VLP.
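The mechanism can be reproduced on synthetic data. In the sketch below everything (the wavelet shape, the frequencies, and the amplitudes) is our illustrative choice, not the data of [19]: a strong slow background stands in for the QRS rhythm and a weak short burst stands in for the VLP. At a scale tuned to the burst frequency, the CWT magnitude peaks sharply at the burst location.

```python
import math

def cwt_mag(signal, dt, s, f0=5.0, half_width=3.0):
    # |CWT| at scale s with a real cos-Gaussian (Morlet-like) wavelet,
    # psi(t) = exp(-t**2/2) * cos(2*pi*f0*t), sampled at t = k*dt/s.
    n = int(half_width * s / dt)
    taps = [math.exp(-0.5 * (k * dt / s) ** 2) * math.cos(2 * math.pi * f0 * k * dt / s)
            for k in range(-n, n + 1)]
    norm = 1.0 / math.sqrt(s)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k in range(-n, n + 1):
            j = i + k
            if 0 <= j < len(signal):
                acc += signal[j] * taps[k + n]
        out.append(abs(norm * acc * dt))
    return out

dt = 0.001
t = [i * dt for i in range(2000)]
# Strong 1 Hz background plus a weak, short 80 Hz burst near t = 1.0 s.
sig = [math.sin(2 * math.pi * ti)
       + 0.05 * math.exp(-((ti - 1.0) / 0.02) ** 2) * math.cos(2 * math.pi * 80 * ti)
       for ti in t]
mag = cwt_mag(sig, dt, s=1.0 / 16)   # wavelet center frequency f0/s = 80 Hz
peak = max(range(len(mag)), key=lambda i: mag[i])
```

The scale acts as a band-pass selector: at s = 1/16 the wavelet's center frequency matches the burst, so the weak transient dominates the response even though its energy is far below that of the background.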
10.10.3 Image Edge Detection
Edges and boundaries, representing shapes of objects, intersections between surfaces, and boundaries between textures, are among the most important features of images, useful for image segmentation and pattern recognition. An edge in an image is a set of locally connected pixels characterized by a sharp intensity variation in their neighborhood in one direction, i.e., the maximum of the gradient of intensity, and a smooth intensity variation in the direction perpendicular to the gradient. Edges are local features of an image, and the wavelet transform is a local operation. The wavelet transform of a constant is equal to zero, and the wavelet transform of a polynomial function of degree n is also equal to zero if the Fourier transform of the wavelet has a zero of order n + 1 at the frequency ω = 0, as described in Section 10.2.4. Hence, the wavelet transform is useful for detecting singularities of functions and edges of images.
10.10.3.1 Edge Detectors
Edge detectors first smooth an image at various scales and then detect sharp variations from the first- or second-order derivative of the smoothed images. The extrema of the first-order derivative correspond to the zero crossings of the second-order derivative and to the inflection points of the image. A simple example of an edge detector is the first- or second-order derivative of the Gaussian function g(x, y). The Gaussian function g_s(x, y) is scaled by a factor s. The first- and second-order derivatives, i.e., the gradient and the Laplacian, of g_s(x, y) are the Gaussian wavelets satisfying the wavelet admissibility condition, as described in Section 10.2.5. By definition, the wavelet transform of an image f(x, y) is the correlation between f(x, y) and the scaled wavelets. We derive that

$$Wf(s; x, y) = f * (s\nabla g_s) = s\nabla (f * g_s)(x, y)$$

where * denotes the correlation with the first-order derivative Gaussian wavelet and s is the scale factor, and

$$Wf(s; x, y) = f * (s^2 \Delta g_s) = s^2 \Delta (f * g_s)(x, y)$$
with the second-order derivative Gaussian wavelet, which is the Mexican-hat wavelet. The wavelet transform is then the gradient or the Laplacian of the image smoothed by the Gaussian function g_s(x, y) at the scale s. The local maxima of the modulus of the wavelet transform with the first-derivative Gaussian wavelet can be extracted as edges. This is the Canny edge detector. The zero crossings of the wavelet transform with the Mexican-hat wavelet correspond to the inflection points of the smoothed image f * g_s(x, y), which can be extracted as edges. This is the zero-crossing Laplacian edge detector.

10.10.3.2 Two-Dimensional Wavelet Transform
The wavelet transform can be easily extended to the 2-D case for image processing applications. The wavelet transform of a 2-D image f(x, y) is

$$Wf(s_x, s_y; u, v) = \frac{1}{\sqrt{s_x s_y}} \iint f(x, y)\, \psi\!\left(\frac{x - u}{s_x}, \frac{y - v}{s_y}\right) dx\, dy,$$

which is a four-dimensional function. It reduces to a set of two-dimensional functions of (u, v) with different scales when the scale factors s_x = s_y = s. When ψ(x, y) = ψ(r) with r = (x² + y²)^{1/2}, the wavelets are isotropic and have no selectivity for spatial orientation. Otherwise, the wavelet can have a particular orientation. The wavelet can also be a combination of 2-D wavelets with different particular orientations, so that the 2-D wavelet transform has orientation selectivity.
At each resolution the pair of 1-D low-pass and high-pass filters is first applied to each row of the image, which results in a horizontal approximation image and a horizontal detail image. Then the pair of 1-D filters is applied to each column of the two horizontally filtered images. The down-sampling by two is
applied after each filtering. The two-step filtering and down-sampling result in four subband images: (LL) for the image low-pass filtered both horizontally and vertically, (HH) for the image high-pass filtered both horizontally and vertically, (LH) for the image low-pass filtered in the horizontal direction and high-pass filtered in the vertical direction, and (HL) for the image high-pass filtered in the horizontal direction and low-pass filtered in the vertical direction, as shown in Figure 10.20 [16]. Each of the four images has half the size of the input image in each dimension. We put the detail images (LH), (HL), and (HH) in three respective quadrants, as shown in Figure 10.21. The image (LL) is the approximation image in both horizontal and vertical directions and is down-sampled in both directions. Then we apply the whole process of two-step filtering and down-sampling again to the image (LL) at this lower resolution level. The iteration can continue many times until, for instance, the image (LL) has a size of only 2 × 2. Figure 10.21 shows a disposition of the detail images (LH), (HL), and (HH) at three resolution levels (1, 2, 3) and the approximation image (LL) at the fourth, lowest resolution level (4). If the original image has L² pixels at the resolution i = 0, then each image (LH), (HL), and (HH) at resolution level i has (L/2^i)² pixels (i > 0). The total number of pixels of the orthonormal wavelet representation is therefore still equal to L², as shown in Figure 10.21. The dyadic wavelet transform does not increase the volume of data. This is owing to the orthonormality of the discrete wavelet decomposition.

10.10.3.3 Multiscale Edges
The wavelet transform of a 2-D image for edge detection is performed at a set of dyadic scales, generating a set of detail images. Similarly to the reconstruction process described in Section 10.5.5, the detail images from the wavelet decomposition can be used to reconstruct the original image.
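One level of the separable decomposition can be sketched as follows. This is an illustrative sketch with the 2-tap Haar filters standing in for p(n) and q(n); the function names and the 8 × 8 test image are ours.

```python
def analyze_1d(row, lo=(0.5**0.5, 0.5**0.5), hi=(0.5**0.5, -0.5**0.5)):
    # Filter a sequence with the low-pass and high-pass filters and
    # keep one sample out of two (down-sampling by two).
    a = [lo[0] * row[2*i] + lo[1] * row[2*i+1] for i in range(len(row) // 2)]
    d = [hi[0] * row[2*i] + hi[1] * row[2*i+1] for i in range(len(row) // 2)]
    return a, d

def dwt2_level(img):
    # One level of the separable 2-D DWT: filter the rows first, then the
    # columns, giving the four half-size subband images LL, LH, HL, HH.
    lo_rows, hi_rows = [], []
    for row in img:
        a, d = analyze_1d(row)
        lo_rows.append(a)
        hi_rows.append(d)
    def cols(m):
        t = [list(c) for c in zip(*m)]            # transpose to reach columns
        a_c, d_c = [], []
        for col in t:
            a, d = analyze_1d(col)
            a_c.append(a)
            d_c.append(d)
        return [list(r) for r in zip(*a_c)], [list(r) for r in zip(*d_c)]
    LL, LH = cols(lo_rows)   # horizontally low-passed; vertically low / high
    HL, HH = cols(hi_rows)   # horizontally high-passed; vertically low / high
    return LL, LH, HL, HH

img = [[float(x + y) for x in range(8)] for y in range(8)]   # smooth ramp
LL, LH, HL, HH = dwt2_level(img)
```

For this linear ramp the (HH) subband is identically zero, since the image has no variation that is high-frequency in both directions at once; (LH) and (HL) pick up the constant slopes.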
[Figure 10.20 schematic: each row of c_i is convolved with the filter p(n) or q(n) and one column out of two is kept; each resulting image is then convolved along its columns with p(n) or q(n) and one row out of two is kept, yielding the subband images LL, LH, HL, and HH.]
FIGURE 10.20 Schematic two-dimensional wavelet decomposition with quadrature mirror low-pass and high-pass filters p(n) and q(n).
[Figure 10.21 layout: quadrants (LL)3 and (HL)3, (LH)3, (HH)3 at level 3; (HL)2, (LH)2, (HH)2 at level 2; (HL)1, (LH)1, (HH)1 at level 1.]
FIGURE 10.21 Presentation of the two-dimensional wavelet decomposition. The detail images d_i and the approximations c_i are defined by Equations (10.9.5) and (10.9.7).
If the wavelet is the first-order derivative of the Gaussian smoothing function and we retain the modulus maxima of the wavelet components, then we obtain the edge images, which correspond to the maximum variation of the smoothed image at a scale s. The multiscale edge information provided by the wavelet transform can also be used to analyze the regularity of the image function by observing the propagation of the modulus maxima from one scale to the next. A similar approach is used to reduce the noise in the image, since the noise has a regularity different from that of the image and the image edges.
10.10.4 Image Compression
Image compression uses fewer bits to represent the image information for different purposes, such as image storage, image transmission, and feature extraction. The general idea is to remove the redundancy in an image so as to find a more compact representation. A popular method for removing the spatial redundancy is the so-called transform coding, which represents the image in a transformation basis such that the transformation coefficients are decorrelated. We saw in Section 10.5.4 that the multiresolution wavelet decomposition consists of projections onto subspaces spanned by the scaling function basis and the wavelet basis. The projections on the scaling function basis yield approximations of the signal, and the projections on the wavelet basis yield the differences between the approximations at two adjacent resolution levels. Therefore, the wavelet detail images are decorrelated and can be used for image compression. Indeed, the
detail images obtained from the wavelet transform consist of edges in the image. There is little correlation among the pixel values in the edge images. One example of image compression applications is the gray-scale fingerprint image compression using the wavelet transform [20]. The fingerprint images are captured at 500 pixels per inch with 256 gray levels. The wavelet subband decomposition is accomplished by the tree algorithm described by Figure 10.20. The dominant ridge frequency in fingerprint images lies roughly in the band from ω = π/8 to ω = π/4. Because the wavelet decomposition removes the correlation among image pixels, only the wavelet coefficients with large magnitude are retained. The wavelet decomposition uses pairs of symmetric biorthogonal wavelet filters with 7 and 9 taps. Most wavelet transform coefficients are equal or close to zero in the regions of smooth image intensity variation. After a thresholding on the wavelet coefficients, the retained coefficients are coded according to a scalar quantizer and are mapped to a set of 254 symbols for Huffman encoding using the classical image coding technique. The thresholding and the Huffman coding can achieve a high compression ratio. The analysis low-pass and high-pass filters, the quantization rule, and the Huffman code table are included with the compressed images, so that a decoder can reconstruct approximations of the original images by performing the inverse wavelet transform. After compression at 20:1, the reconstructed images preserve the ridge features: ridge endings and bifurcations, which are the definitive information used for identification.
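The threshold-and-keep step can be sketched in one dimension. This is an illustrative sketch only (Haar filters instead of the 7/9-tap biorthogonal pair used in [20], a toy 16-sample signal, and no quantizer or entropy coder):

```python
import math

def haar_fwd(v):
    # Full orthonormal Haar decomposition, approximations first at each scale.
    s = 1 / math.sqrt(2)
    out = list(v)
    n = len(out)
    while n >= 2:
        half = n // 2
        c = [s * (out[2 * i] + out[2 * i + 1]) for i in range(half)]
        d = [s * (out[2 * i] - out[2 * i + 1]) for i in range(half)]
        out[:n] = c + d
        n = half
    return out

def haar_inv(w):
    # Inverse transform: undo the stages from coarsest to finest.
    s = 1 / math.sqrt(2)
    out = list(w)
    n = 2
    while n <= len(out):
        half = n // 2
        merged = []
        for i in range(half):
            merged += [s * (out[i] + out[half + i]), s * (out[i] - out[half + i])]
        out[:n] = merged
        n *= 2
    return out

sig = [1.0] * 16
sig[8] = 5.0                          # smooth background with one sharp feature
w = haar_fwd(sig)
thr = 0.1
kept = [x if abs(x) > thr else 0.0 for x in w]
nonzero = sum(1 for x in kept if x != 0.0)
rec = haar_inv(kept)
```

The smooth-plus-edge signal concentrates its information in a handful of coefficients (here 5 of 16); the coefficients discarded by the threshold are those in the regions of smooth intensity variation, so the reconstruction stays close to the original.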
References
1. D. Gabor, Theory of communication, J. Inst. Elec. Eng. 93, 429–457 (1946).
2. E. Wigner, On the quantum correction for thermodynamic equilibrium, Phys. Rev. 40, 749–759 (1932).
3. A. W. Rihaczek, Principles of High Resolution Radar, McGraw-Hill, New York (1969).
4. C. Gasquet and P. Witomski, Analyse de Fourier et Applications, Chap. XII, Masson, Paris (1990).
5. A. Haar, Zur Theorie der orthogonalen Funktionensysteme, Math. Ann. 69, 331–371 (1910).
6. R. K. Martinet, J. Morlet, and A. Grossmann, Analysis of sound patterns through wavelet transforms, Int. J. Pattern Recogn. Artif. Intell. 1(2), 273–302 (1987).
7. D. Marr and E. Hildreth, Theory of edge detection, Proc. R. Soc. Lond. B 207, 187–217 (1980).
8. I. Daubechies, The wavelet transform: Time-frequency localization and signal analysis, IEEE Trans. Inform. Theory 36(5), 961–1005 (1990).
9. S. G. Mallat, Multifrequency channel decompositions of images and wavelet models, IEEE Trans. Acoust. Speech Signal Process. 37(12), 2091–2110 (1989).
10. P. J. Burt and E. H. Adelson, The Laplacian pyramid as a compact image code, IEEE Trans. Commun. 31(4), 532–540 (1983).
11. A. N. Akansu and R. A. Haddad, Multiresolution Signal Decomposition, Academic Press, Boston, MA (1992).
12. G. Strang, Wavelets and dilation equations: A brief introduction, SIAM Rev. 31(4), 614–627 (1989).
13. I. Daubechies, Orthonormal bases of compactly supported wavelets, Commun. Pure Appl. Math. XLI, 909–996 (1988).
14. C. K. Chui, Ed., Wavelets: A Tutorial in Theory and Applications, Vol. 2, Academic Press, Boston, MA (1992).
15. P. G. Lemarié and Y. Meyer, Ondelettes et bases hilbertiennes, Rev. Mat. Iberoam. 2, 1–18 (1986).
16. S. Mallat, A theory for multiresolution signal decomposition: The wavelet representation, IEEE Trans. Pattern Anal. Machine Intell. 11(7), 674–693 (1989).
17. W. H. Press et al., Numerical Recipes, 2nd edn., Chap. 13, Cambridge University Press, Cambridge, U.K. (1992).
18. D. C. Robertson, O. I. Camps, and J. Mayer, Wavelets and power system transients: Feature detection and classification, Proc. SPIE 2242, 474–487 (1994).
19. J. M. Combes, A. Grossmann, and P. Tchamitchian, Eds., Wavelets, 2nd edn., Springer-Verlag, Berlin, Germany (1990).
20. T. Hopper, Compression of gray-scale fingerprint images, Proc. SPIE 2242, 180–187 (1994).
21. Y. Sheng, D. Roberge, and H. Szu, Optical wavelet transform, Opt. Eng. 31, 1840–1845 (1992).
22. H. Szu, Y. Sheng, and J. Chen, The wavelet transform as a bank of matched filters, Appl. Opt. 31(17), 3267–3277 (1992).
23. M. O. Freeman, Wavelet signal representations with important advantages, Opt. Photonics News 4(8), 8–14 (1995).
24. G. Strang and T. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, Wellesley, MA (1996).
25. R. L. Allen et al., Laplacian and orthogonal wavelet pyramid decomposition in coarse-to-fine registration, IEEE Trans. Signal Process. 41(12), 3536–3540 (1993).
11
Finite Hankel Transforms, Legendre Transforms, Jacobi and Gegenbauer Transforms, and Laguerre and Hermite Transforms

Lokenath Debnath
University of Texas-Pan American

11.1 Finite Hankel Transforms .................... 11-1
    Introduction • Definition of the Finite Hankel Transform and Examples • Basic Operational Properties • Applications of Finite Hankel Transforms • Additional Relations
11.2 Legendre Transforms .................... 11-5
    Introduction • Definition of the Legendre Transform and Examples • Basic Operational Properties of Legendre Transforms • Applications of Legendre Transforms to Boundary Value Problems • Additional Relations
11.3 Jacobi and Gegenbauer Transforms .................... 11-11
    Introduction • Definition of the Jacobi Transform and Examples • Basic Operational Properties • Applications of Jacobi Transforms to the Generalized Heat Conduction Problem • The Gegenbauer Transform and Its Basic Operational Properties • Application of the Gegenbauer Transform
11.4 Laguerre Transforms .................... 11-16
    Introduction • Definition of the Laguerre Transform and Examples • Basic Operational Properties • Applications of Laguerre Transforms • Additional Relations
11.5 Hermite Transforms .................... 11-21
    Introduction • Definition of the Hermite Transform and Examples • Basic Operational Properties • Additional Relations
References .................... 11-27
11.1 Finite Hankel Transforms

11.1.1 Introduction
This section is devoted to the study of the finite Hankel transform and its basic operational properties. The usefulness of this transform is shown by solving several initial-boundary value problems of physical interest. The method of finite Hankel transforms was first introduced by Sneddon (1946).
11.1.2 Definition of the Finite Hankel Transform and Examples
Just as problems on finite intervals $-a < x < a$ lead to Fourier series, problems on finite intervals $0 < r < a$, where $r$ is the cylindrical polar coordinate, lead to the Fourier–Bessel series representation of a function $f(r)$, which can be stated in the following theorem.

THEOREM 11.1
If $f(r)$ is defined in $0 \le r \le a$ and

$$\tilde{f}_n(k_i) = \int_0^a r f(r) J_n(rk_i)\,dr, \qquad (11.1)$$

then $f(r)$ can be represented by the Fourier–Bessel series as

$$f(r) = \frac{2}{a^2} \sum_{i=1}^{\infty} \tilde{f}_n(k_i)\, \frac{J_n(rk_i)}{J_{n+1}^2(ak_i)}, \qquad (11.2)$$

where $k_i$ ($0 < k_1 < k_2 < \cdots$) are the roots of the equation $J_n(ak_i) = 0$, which means

$$J_n'(ak_i) = J_{n-1}(ak_i) = -J_{n+1}(ak_i), \qquad (11.3)$$
due to the standard recurrence relations among $J_n'(x)$, $J_{n-1}(x)$, and $J_{n+1}(x)$.

Proof. We write the Bessel series expansion of $f(r)$ formally as

$$f(r) = \sum_{i=1}^{\infty} c_i J_n(rk_i), \qquad (11.4)$$

where the summation is taken over all the positive zeros $k_1, k_2, \ldots$ of the Bessel function $J_n(ak)$. Multiplying Equation 11.4 by $rJ_n(rk_i)$, integrating both sides of the result from 0 to $a$, and then using the orthogonality property of the Bessel functions, we obtain

$$\int_0^a r f(r) J_n(rk_i)\,dr = c_i \int_0^a r J_n^2(rk_i)\,dr.$$

Or,

$$\tilde{f}_n(k_i) = \frac{a^2}{2}\, c_i\, J_{n+1}^2(ak_i), \qquad (11.5)$$

hence, we obtain

$$c_i = \frac{2}{a^2}\, \frac{\tilde{f}_n(k_i)}{J_{n+1}^2(ak_i)}.$$

Substituting the value of $c_i$ into Equation 11.4 gives Equation 11.2. ∎

Definition 11.1: The finite Hankel transform of order $n$ of a function $f(r)$ is denoted by $\mathcal{H}_n\{f(r)\} = \tilde{f}_n(k_i)$ and is defined by

$$\mathcal{H}_n\{f(r)\} = \tilde{f}_n(k_i) = \int_0^a r f(r) J_n(rk_i)\,dr. \qquad (11.6)$$

The inverse finite Hankel transform is then defined by

$$\mathcal{H}_n^{-1}\{\tilde{f}_n(k_i)\} = f(r) = \frac{2}{a^2} \sum_{i=1}^{\infty} \tilde{f}_n(k_i)\, \frac{J_n(rk_i)}{J_{n+1}^2(ak_i)}, \qquad (11.7)$$

where the summation is taken over all positive roots of $J_n(ak) = 0$.

The zero-order finite Hankel transform and its inverse are defined by

$$\mathcal{H}_0\{f(r)\} = \tilde{f}_0(k_i) = \int_0^a r f(r) J_0(rk_i)\,dr, \qquad (11.8)$$

$$\mathcal{H}_0^{-1}\{\tilde{f}_0(k_i)\} = f(r) = \frac{2}{a^2} \sum_{i=1}^{\infty} \tilde{f}_0(k_i)\, \frac{J_0(rk_i)}{J_1^2(ak_i)}, \qquad (11.9)$$

where the summation is taken over the positive roots of $J_0(ak) = 0$. Similarly, the first-order finite Hankel transform and its inverse are

$$\mathcal{H}_1\{f(r)\} = \tilde{f}_1(k_i) = \int_0^a r f(r) J_1(rk_i)\,dr, \qquad (11.10)$$

$$\mathcal{H}_1^{-1}\{\tilde{f}_1(k_i)\} = f(r) = \frac{2}{a^2} \sum_{i=1}^{\infty} \tilde{f}_1(k_i)\, \frac{J_1(rk_i)}{J_2^2(ak_i)}, \qquad (11.11)$$

where $k_i$ is chosen as a positive root of $J_1(ak) = 0$.
We now give examples of finite Hankel transforms of some functions.

Example 11.1
If $f(r) = r^n$, then

$$\mathcal{H}_n\{r^n\} = \int_0^a r^{n+1} J_n(rk_i)\,dr = \frac{a^{n+1}}{k_i}\, J_{n+1}(ak_i). \qquad (11.12)$$

When $n = 0$,

$$\mathcal{H}_0\{1\} = \frac{a}{k_i}\, J_1(ak_i). \qquad (11.13)$$

Example 11.2
If $f(r) = (a^2 - r^2)$, then

$$\mathcal{H}_0\{a^2 - r^2\} = \int_0^a r(a^2 - r^2) J_0(rk_i)\,dr = \frac{4a}{k_i^3}\, J_1(ak_i) - \frac{2a^2}{k_i^2}\, J_0(ak_i).$$

Since $k_i$ are the roots of $J_0(ak) = 0$, we find

$$\mathcal{H}_0\{a^2 - r^2\} = \frac{4a}{k_i^3}\, J_1(ak_i). \qquad (11.14)$$
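The transform pairs and examples above are easy to check numerically. The sketch below (ours, not from the text) verifies Equation 11.14 for a = 1: it finds the first root k₁ of J₀(k) = 0 by bisection, evaluates the defining integral (Equation 11.6) by Simpson's rule, and compares the result with the closed form. The Bessel functions are computed from their standard integral representation.

```python
import math

def bessel_j(n, x, m=400):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt, evaluated with
    # the trapezoidal rule (very accurate here because the integrand's odd
    # derivatives vanish at both endpoints).
    h = math.pi / m
    total = 0.5 * (math.cos(0.0) + math.cos(n * math.pi))
    for k in range(1, m):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

# First positive root k1 of J0(ak) = 0 with a = 1, by bisection on [2, 3].
lo, hi = 2.0, 3.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if bessel_j(0, lo) * bessel_j(0, mid) <= 0.0:
        hi = mid
    else:
        lo = mid
k1 = 0.5 * (lo + hi)

# H_0{a^2 - r^2} for a = 1 by Simpson's rule, to compare with Eq. 11.14.
a, N = 1.0, 2000
step = a / N
integral = 0.0
for j in range(N + 1):
    r = j * step
    wgt = 1.0 if j in (0, N) else (4.0 if j % 2 == 1 else 2.0)
    integral += wgt * r * (a * a - r * r) * bessel_j(0, r * k1)
integral *= step / 3.0

closed_form = 4.0 * a * bessel_j(1, a * k1) / k1 ** 3   # Equation 11.14
```

Since J₀(ak₁) = 0, the term −(2a²/k₁²)J₀(ak₁) of the intermediate result drops out, leaving exactly the closed form of Equation 11.14.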
11.1.3 Basic Operational Properties
We state the following operational properties of finite Hankel transforms:

$$\mathcal{H}_n\{f'(r)\} = \frac{k_i}{2n}\left[(n-1)\,\mathcal{H}_{n+1}\{f(r)\} - (n+1)\,\mathcal{H}_{n-1}\{f(r)\}\right], \quad n \ge 1, \qquad (11.15)$$

provided $f(r)$ is finite at $r = 0$.
When $n = 1$, we obtain the finite Hankel transform of derivatives

$$\mathcal{H}_1\{f'(r)\} = -k_i\,\mathcal{H}_0\{f(r)\} = -k_i\,\tilde{f}_0(k_i). \qquad (11.16)$$

Also,

$$\mathcal{H}_n\left\{\frac{1}{r}\frac{d}{dr}\big(r f'(r)\big) - \frac{n^2}{r^2}\, f(r)\right\} = -k_i^2\,\tilde{f}_n(k_i) - a k_i f(a)\, J_n'(ak_i). \qquad (11.17)$$

When $n = 0$,

$$\mathcal{H}_0\left\{f''(r) + \frac{1}{r}\, f'(r)\right\} = -k_i^2\,\tilde{f}_0(k_i) + a k_i f(a)\, J_1(ak_i). \qquad (11.18)$$

If $n = 1$, Equation 11.17 becomes

$$\mathcal{H}_1\left\{f''(r) + \frac{1}{r}\, f'(r) - \frac{1}{r^2}\, f(r)\right\} = -k_i^2\,\tilde{f}_1(k_i) - a k_i f(a)\, J_1'(ak_i). \qquad (11.19)$$

Results (Equations 11.18 and 11.19) are very useful for finding solutions of differential equations in cylindrical polar coordinates. The proofs of the above results are elementary exercises for the reader.

11.1.4 Applications of Finite Hankel Transforms

Example 11.3 (Temperature Distribution in a Long Circular Cylinder)
Find the solution of the axisymmetric heat conduction equation

$$\frac{\partial u}{\partial t} = \kappa\left(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}\right), \quad 0 \le r \le a,\ t > 0, \qquad (11.20)$$

with the boundary and initial conditions

$$u(r, t) = f(t) \quad \text{on } r = a,\ t > 0, \qquad (11.21)$$
$$u(r, 0) = 0 \quad \text{on } 0 \le r \le a. \qquad (11.22)$$

Application of the finite Hankel transform defined by

$$\tilde{u}(k_i, t) = \mathcal{H}_0\{u(r, t)\} = \int_0^a r J_0(rk_i)\, u(r, t)\,dr \qquad (11.23)$$

to the given system, with the boundary condition (11.21), yields

$$\frac{d\tilde{u}}{dt} + \kappa k_i^2\,\tilde{u} = \kappa a k_i J_1(ak_i)\, f(t), \qquad \tilde{u}(k_i, 0) = 0. \qquad (11.24a,b)$$

The solution of the first-order system is

$$\tilde{u}(k_i, t) = \kappa a k_i J_1(ak_i) \int_0^t f(\tau)\, \exp\!\big(-\kappa k_i^2 (t - \tau)\big)\,d\tau. \qquad (11.25)$$

Using the inverse version of Equation 11.7 gives the final solution

$$u(r, t) = \frac{2\kappa}{a} \sum_{i=1}^{\infty} \frac{k_i J_0(rk_i)}{J_1(ak_i)} \int_0^t f(\tau)\, \exp\!\big(-\kappa k_i^2 (t - \tau)\big)\,d\tau. \qquad (11.26)$$

In particular, if $f(t) = T_0 = \text{constant}$, the inverse transform gives the formal solution

$$u(r, t) = \frac{2T_0}{a} \sum_{i=1}^{\infty} \frac{J_0(rk_i)}{k_i J_1(ak_i)}\left[1 - \exp(-\kappa k_i^2 t)\right], \qquad (11.27)$$

which, in view of Equation 11.13, can be written as

$$u(r, t) = T_0 - \frac{2T_0}{a} \sum_{i=1}^{\infty} \frac{J_0(rk_i)}{k_i J_1(ak_i)}\, \exp(-\kappa k_i^2 t). \qquad (11.28)$$

This solution, representing the temperature distribution, consists of the steady-state term and the transient term, which decays to zero as $t \to \infty$. Consequently, the steady temperature is attained in the limit as $t \to \infty$.

Example 11.4 (Unsteady Viscous Flow in a Rotating Long Circular Cylinder)
The axisymmetric unsteady motion of a viscous fluid in an infinitely long circular cylinder of radius $a$ is governed by

$$u_t = \nu\left(u_{rr} + \frac{1}{r}\, u_r - \frac{u}{r^2}\right), \quad 0 \le r \le a,\ t > 0, \qquad (11.29)$$

where $u = u(r, t)$ is the tangential fluid velocity and $\nu$ is the constant kinematic viscosity of the fluid. The cylinder is initially at rest at $t = 0$, and it is then allowed to rotate with constant angular velocity $\Omega$. Thus, the boundary and initial conditions are

$$u(r, t) = a\Omega \quad \text{on } r = a,\ t > 0, \qquad (11.30)$$
$$u(r, t) = 0 \quad \text{at } t = 0 \ \text{for } 0 < r < a. \qquad (11.31)$$

We solve the problem by using the joint Laplace and finite Hankel transform of order one defined by

$$\tilde{\bar{u}}(k_i, s) = \int_0^{\infty} e^{-st}\,dt \int_0^a r J_1(k_i r)\, u(r, t)\,dr, \qquad (11.32)$$

where $k_i$ are the positive roots of $J_1(ak_i) = 0$. Application of the joint transform gives

$$s\,\tilde{\bar{u}}(k_i, s) = -\nu k_i^2\,\tilde{\bar{u}}(k_i, s) - \frac{\nu a^2 \Omega k_i}{s}\, J_1'(ak_i).$$

Or,

$$\tilde{\bar{u}}(k_i, s) = -\frac{\nu a^2 \Omega k_i\, J_1'(ak_i)}{s\,(s + \nu k_i^2)}. \qquad (11.33)$$
The inverse Laplace transform gives

$$\tilde{u}(k_i, t) = -\frac{a^2 \Omega}{k_i}\, J_1'(ak_i)\left[1 - \exp(-\nu t k_i^2)\right]. \qquad (11.34)$$

Thus, the final solution is found from Equation 11.34 by using the inverse Hankel transform, with $J_1'(ak_i) = -J_2(ak_i)$, in the form

$$u(r, t) = 2\Omega \sum_{i=1}^{\infty} \frac{J_1(rk_i)}{k_i J_2(ak_i)}\left[1 - \exp(-\nu t k_i^2)\right]. \qquad (11.35)$$

This solution is the sum of the steady-state and the transient fluid velocities. In view of Equation 11.12 for $n = 1$, we can write

$$r = \mathcal{H}_1^{-1}\left\{\frac{a^2}{k_i}\, J_2(ak_i)\right\} = 2 \sum_{i=1}^{\infty} \frac{J_1(rk_i)}{k_i J_2(ak_i)}. \qquad (11.36)$$

This result is used to simplify Equation 11.35 so that the final solution for $u(r, t)$ takes the form

$$u(r, t) = r\Omega - 2\Omega \sum_{i=1}^{\infty} \frac{J_1(rk_i)}{k_i J_2(ak_i)}\, \exp(-\nu t k_i^2). \qquad (11.37)$$

In the limit as $t \to \infty$, the transient velocity component decays to zero, and the ultimate steady-state flow is attained in the form

$$u(r, t) = r\Omega. \qquad (11.38)$$

Physically, this represents the rigid-body rotation of the fluid inside the cylinder.

Example 11.5 (Vibrations of a Circular Membrane)
The free symmetric vibration of a thin circular membrane of radius $a$ is governed by the wave equation

$$u_{tt} = c^2\left(u_{rr} + \frac{1}{r}\, u_r\right), \quad 0 < r < a,\ t > 0, \qquad (11.39)$$

with the initial and boundary data

$$u(r, 0) = f(r), \quad \frac{\partial u}{\partial t}(r, 0) = g(r) \quad \text{for } 0 < r < a, \qquad (11.40a,b)$$
$$u(a, t) = 0 \quad \text{for all } t > 0. \qquad (11.41)$$

Application of the zero-order finite Hankel transform of $u(r, t)$, defined by Equation 11.23, to Equations 11.39 through 11.41 gives

$$\frac{d^2\tilde{u}}{dt^2} + c^2 k_i^2\,\tilde{u} = 0, \qquad (11.42)$$
$$\tilde{u} = \tilde{f}(k_i) \quad \text{and} \quad \left.\frac{d\tilde{u}}{dt}\right|_{t=0} = \tilde{g}(k_i). \qquad (11.43a,b)$$

The solution of this system is

$$\tilde{u}(k_i, t) = \tilde{f}(k_i)\, \cos(ctk_i) + \frac{\tilde{g}(k_i)}{ck_i}\, \sin(ctk_i). \qquad (11.44)$$

The inverse transform yields the formal solution

$$u(r, t) = \frac{2}{a^2} \sum_{i=1}^{\infty} \tilde{f}(k_i)\, \cos(ctk_i)\, \frac{J_0(rk_i)}{J_1^2(ak_i)} + \frac{2}{ca^2} \sum_{i=1}^{\infty} \tilde{g}(k_i)\, \sin(ctk_i)\, \frac{J_0(rk_i)}{k_i J_1^2(ak_i)}, \qquad (11.45)$$

where the summation is taken over all positive roots of $J_0(ak_i) = 0$.

We consider a more general form of the finite Hankel transform associated with the more general boundary condition

$$f'(r) + h f(r) = 0 \quad \text{at } r = a, \qquad (11.46)$$

where $h$ is a constant. We define the finite Hankel transform of $f(r)$ by

$$\mathcal{H}_n\{f(r)\} = \tilde{f}_n(k_i) = \int_0^a r J_n(rk_i)\, f(r)\,dr, \qquad (11.47)$$

where $k_i$ are the roots of the equation

$$k_i J_n'(ak_i) + h J_n(ak_i) = 0. \qquad (11.48)$$

The corresponding inverse transform is given by

$$f(r) = \mathcal{H}_n^{-1}\{\tilde{f}_n(k_i)\} = 2 \sum_{i=1}^{\infty} \frac{k_i^2\,\tilde{f}_n(k_i)\, J_n(rk_i)}{\left[(k_i^2 + h^2)a^2 - n^2\right] J_n^2(ak_i)}. \qquad (11.49)$$

This finite Hankel transform has the following operational property:

$$\mathcal{H}_n\left\{\frac{1}{r}\frac{d}{dr}\big(r f'(r)\big) - \frac{n^2}{r^2}\, f(r)\right\} = -k_i^2\,\tilde{f}_n(k_i) + a\left[f'(a) + h f(a)\right] J_n(ak_i), \qquad (11.50)$$

which is, by Equation 11.48,

$$= -k_i^2\,\tilde{f}_n(k_i) - \frac{a k_i}{h}\left[f'(a) + h f(a)\right] J_n'(ak_i). \qquad (11.51)$$

Thus, result (Equation 11.51) involves $f'(a) + h f(a)$ as the boundary condition. We apply these more general finite Hankel transform pairs (Equations 11.47 and 11.49) to solve the following axisymmetric initial-boundary value problem.
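In practice, before these pairs can be used, the roots k_i of Equation 11.48 must be found numerically. A minimal sketch for n = 0 (so that J₀′ = −J₁), with a = h = 1 as our assumed illustrative values:

```python
import math

def bessel_j(n, x, m=400):
    # J_n(x) from the integral representation
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoidal rule).
    h = math.pi / m
    total = 0.5 * (math.cos(0.0) + math.cos(n * math.pi))
    for k in range(1, m):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def eq_11_48(k, a=1.0, h=1.0):
    # Left-hand side of Equation 11.48 for n = 0:
    # k*J0'(ak) + h*J0(ak) = -k*J1(ak) + h*J0(ak).
    return -k * bessel_j(1, a * k) + h * bessel_j(0, a * k)

# Bracket and bisect the first positive root (a = h = 1 assumed).
lo, hi = 0.5, 3.0
for _ in range(70):
    mid = 0.5 * (lo + hi)
    if eq_11_48(lo) * eq_11_48(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
```

For a = h = 1 the first root is k₁ ≈ 1.2558, a value that also appears in standard transient-conduction tables for Biot number 1, which is the physical situation of Example 11.6 below.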
11-5
Finite Hankel Transforms, Legendre Transforms, Jacobi and Gegenbauer Transforms
Example 11.6 (Temperature Distribution of Cooling of a Circular Cylinder)

Solve the axisymmetric heat conduction problem for an infinitely long circular cylinder of radius r = a with the initial constant temperature T_0, the cylinder cooling by radiation of heat from its boundary surface at r = a to the outside medium at zero temperature according to Newton's law of cooling, which satisfies the boundary condition

∂u/∂r + h u = 0 at r = a, t > 0,   (11.52)

where h is a constant. The problem is governed by the axisymmetric heat conduction equation

u_t = κ (u_rr + (1/r) u_r), 0 ≤ r ≤ a, t > 0,   (11.53)
u(r, 0) = T_0 at t = 0, for 0 < r < a.   (11.54)

Application of the zero-order finite Hankel transform (Equation 11.47) with (Equation 11.48) to the system (Equations 11.52 through 11.54) gives

dũ/dt + κ k_i² ũ = 0, t > 0,   (11.55)
ũ(k_i, 0) = T_0 ∫_0^a r J_0(r k_i) dr = (a T_0/k_i) J_1(a k_i).   (11.56)

The solution of Equations 11.55 and 11.56 is

ũ(k_i, t) = (a T_0/k_i) J_1(a k_i) exp(−κ t k_i²).   (11.57)

The inverse transform (Equation 11.49) with n = 0 and k_i J_0′(a k_i) + h J_0(a k_i) = 0, that is, k_i J_1(a k_i) = h J_0(a k_i), leads to the formal solution

u(r, t) = (2 h T_0/a) Σ_{i=1}^∞ J_0(r k_i) exp(−κ t k_i²) / [(k_i² + h²) J_0(a k_i)],   (11.58)

where the summation is taken over all the positive roots of k_i J_1(a k_i) = h J_0(a k_i).

11.1.5 Additional Relations

1. ℋ_n{ J_n(α r)/J_n(α a) } = a k_i J_n′(a k_i)/(α² − k_i²).
2. If ℋ_n{f(r)} is the finite Hankel transform of f(r) defined by Equation 11.6, and if n > 0, then
   (a) ℋ_n{r^{−1} f′(r)} = ½ k_i [ℋ_{n+1}{r^{−1} f(r)} − ℋ_{n−1}{r^{−1} f(r)}],
   (b) ℋ_0{r^{−1} f′(r)} = k_i ℋ_1{r^{−1} f(r)} − f(a).
3. If we define the finite Hankel transform of f(r) by
   ℋ_n{f(r)} = f̃_n(k_i) = ∫_a^b r f(r) A_n(r k_i) dr, b > a,
   where
   A_n(r k_i) = J_n(r k_i) Y_n(a k_i) − Y_n(r k_i) J_n(a k_i)
   and Y_n(x) is the Bessel function of the second kind of order n, then the inverse transform is
   f(r) = ℋ_n^{−1}{f̃_n(k_i)} = (π²/2) Σ_{i=1}^∞ k_i² f̃_n(k_i) A_n(r k_i) J_n²(b k_i) / [J_n²(a k_i) − J_n²(b k_i)],
   where the k_i are the positive roots of A_n(b k_i) = 0.
4. For the transform defined in problem 3,
   ℋ_n{ f″(r) + (1/r) f′(r) − (n²/r²) f(r) } = −k_i² f̃_n(k_i) + (2/π) [ f(b) J_n(a k_i)/J_n(b k_i) − f(a) ].

11.2 Legendre Transforms

11.2.1 Introduction

We consider in this chapter the Legendre transform with a Legendre polynomial as kernel and discuss basic operational properties including the convolution theorem. Legendre transforms are then used to solve boundary value problems in potential theory. This chapter is based on papers by Churchill (1954) and Churchill and Dolph (1954).

11.2.2 Definition of the Legendre Transform and Examples

Churchill (1954) defined the Legendre transform of a function f(x) defined in −1 < x < 1 by the integral

𝒯_n{f(x)} = f̃(n) = ∫_{−1}^1 P_n(x) f(x) dx,   (11.59)

provided the integral exists and where P_n(x) is the Legendre polynomial of degree n (≥ 0). Obviously, 𝒯_n is a linear integral transformation. When x = cos θ, Equation 11.59 becomes

𝒯_n{f(cos θ)} = f̃(n) = ∫_0^π P_n(cos θ) f(cos θ) sin θ dθ.   (11.60)
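The eigenvalue condition of the Robin-type cooling problem can be explored numerically. The sketch below (Python with SciPy; the values a = 1 and h = 2 and the bracketing grid are illustrative assumptions) finds the first few positive roots of k J_1(a k) = h J_0(a k) and checks that the corresponding eigenfunctions J_0(k_i r) are orthogonal with weight r, which is what the transform pair relies on:

```python
import numpy as np
from scipy import special, integrate, optimize

a, h = 1.0, 2.0
g = lambda k: k * special.j1(a * k) - h * special.j0(a * k)

# bracket sign changes of g on a grid, then refine each root with brentq
grid = np.linspace(0.05, 40.0, 4000)
vals = g(grid)
roots = [optimize.brentq(g, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]

# eigenfunctions for two distinct roots are orthogonal with weight r
k1, k2 = roots[0], roots[1]
ortho, _ = integrate.quad(
    lambda r: r * special.j0(k1 * r) * special.j0(k2 * r), 0.0, a)
```

The orthogonality integral vanishes to quadrature accuracy, as Sturm–Liouville theory predicts for eigenfunctions satisfying the same Robin condition.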
The inverse Legendre transform is given by

f(x) = 𝒯_n^{−1}{f̃(n)} = Σ_{n=0}^∞ ((2n+1)/2) f̃(n) P_n(x).   (11.61)

This follows from the expansion of any function f(x) in the form

f(x) = Σ_{n=0}^∞ a_n P_n(x),   (11.62)

where the coefficients a_n can be determined from the orthogonality property of P_n(x). It turns out that

a_n = ((2n+1)/2) ∫_{−1}^1 P_n(x) f(x) dx = ((2n+1)/2) f̃(n),   (11.63)

and hence, result (Equation 11.61) follows.

Example 11.7

𝒯_n{exp(iax)} = √(2π/a) iⁿ J_{n+1/2}(a),   (11.64)

where J_ν(x) is the Bessel function. We have, by definition,

𝒯_n{exp(iax)} = ∫_{−1}^1 exp(iax) P_n(x) dx,

which is, by a result in Copson (1935, p. 341),

= √(2π/a) iⁿ J_{n+1/2}(a).

Similarly,

𝒯_n{exp(ax)} = √(2π/a) I_{n+1/2}(a),   (11.65)

where I_ν(x) is the modified Bessel function of the first kind.

Example 11.8

a. 𝒯_n{(1−x²)^{−1/2}} = π P_n²(0),   (11.66)
b. 𝒯_n{1/(2(t−x))} = Q_n(t), |t| > 1,   (11.67)

where Q_n(t) is the Legendre function of the second kind given by

Q_n(t) = ½ ∫_{−1}^1 (t−x)^{−1} P_n(x) dx.

These results are easy to verify with the aid of results given in Copson (1935, pp. 292, 310).

Example 11.9

If |r| < 1, then

a. 𝒯_n{(1−2rx+r²)^{−1/2}} = 2rⁿ/(2n+1),   (11.68)
b. 𝒯_n{(1−2rx+r²)^{−3/2}} = 2rⁿ/(1−r²).   (11.69)

We have, from the generating function of P_n(x),

(1−2rx+r²)^{−1/2} = Σ_{n=0}^∞ rⁿ P_n(x), |r| < 1.   (11.70)

Multiplying this result by P_n(x) and using the orthogonality condition of the Legendre polynomials gives

∫_{−1}^1 (1−2rx+r²)^{−1/2} P_n(x) dx = 2rⁿ/(2n+1).

In particular, when r = 1, we obtain

𝒯_n{(1−x)^{−1/2}} = 2√2/(2n+1).   (11.71)

Differentiating Equation 11.70 with respect to r gives

½ ∫_{−1}^1 (2rx − 2r²)(1−2rx+r²)^{−3/2} P_n(x) dx = 2nrⁿ/(2n+1),

so that

−𝒯_n{(1−2rx+r²)^{−1/2}} + (1−r²) 𝒯_n{(1−2rx+r²)^{−3/2}} = 4nrⁿ/(2n+1).

Using Equation 11.68, we obtain Equation 11.69.

Example 11.10

If |r| < 1 and a > 0, then
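The generating-function result can be confirmed numerically. The sketch below (Python with NumPy; r = 0.4 and n = 3 are illustrative choices) evaluates the Legendre transform of (1 − 2rx + r²)^{−1/2} by Gauss–Legendre quadrature and compares it with 2rⁿ/(2n+1):

```python
import numpy as np

# Gauss-Legendre quadrature on [-1, 1]; 200 nodes are ample for this
# smooth integrand
x, w = np.polynomial.legendre.leggauss(200)

def legendre_transform(func, n):
    Pn = np.polynomial.legendre.Legendre.basis(n)(x)
    return np.sum(w * Pn * func(x))

r, n = 0.4, 3
lhs = legendre_transform(lambda t: (1.0 - 2.0*r*t + r**2) ** (-0.5), n)
rhs = 2.0 * r**n / (2*n + 1)
gap = abs(lhs - rhs)
```

The two values agree to machine precision, since the integrand is analytic on the interval.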
However, for n ≥ 1, we use the recurrence relation for P_n(x),

(2n+1) P_n(x) = P′_{n+1}(x) − P′_{n−1}(x),   (11.74)

to derive

𝒯_n{H(x)} = (1/(2n+1)) ∫_0^1 [P′_{n+1}(x) − P′_{n−1}(x)] dx = (1/(2n+1)) [P_{n−1}(0) − P_{n+1}(0)].

Debnath and Harrel (1976) introduced the associated Legendre transform defined by

𝒯_{n,m}{f(x)} = f̃(n, m) = ∫_{−1}^1 (1−x²)^{m/2} P_n^m(x) f(x) dx,   (11.75)

where P_n^m(x) is the associated Legendre function of the first kind. The inverse transform is given by

f(x) = 𝒯_{n,m}^{−1}{f̃(n, m)} = Σ_{n=0}^∞ ((2n+1)/2) ((n−m)!/(n+m)!) f̃(n, m) (1−x²)^{m/2} P_n^m(x).   (11.76)

The reader is referred to Debnath and Harrel (1976) for a detailed discussion of this transform.

Proof (of Equation 11.78): We have, by definition,

𝒯_n{R[f(x)]} = ∫_{−1}^1 (d/dx)[(1−x²)(df/dx)] P_n(x) dx,   (11.79)

which is, by integrating by parts together with Equation 11.77,

= −∫_{−1}^1 (1−x²) P′_n(x) f′(x) dx.

Integrating this result by parts again, we obtain

𝒯_n{R[f(x)]} = −[(1−x²) P′_n(x) f(x)]_{−1}^1 + ∫_{−1}^1 (d/dx)[(1−x²) P′_n(x)] f(x) dx.

Using Equation 11.77 and the differential equation for the Legendre polynomial,

(d/dx)[(1−x²)(dy/dx)] + n(n+1) y = 0,   (11.80)

we obtain the desired result

𝒯_n{R[f(x)]} = −n(n+1) f̃(n).

We may extend this result to evaluate the Legendre transforms of the differential forms R²[f(x)], R³[f(x)], …, R^k[f(x)].
Clearly,

𝒯_n{R²[f(x)]} = 𝒯_n{R[R[f(x)]]} = −n(n+1) 𝒯_n{R[f(x)]} = (−1)² n² (n+1)² f̃(n),   (11.81)

provided f′(x) and f″(x) satisfy the conditions of Theorem 11.2. Similarly,

𝒯_n{R³[f(x)]} = (−1)³ n³ (n+1)³ f̃(n).   (11.82)

More generally, for a positive integer k,

𝒯_n{R^k[f(x)]} = (−1)^k n^k (n+1)^k f̃(n).   (11.83)

COROLLARY 11.1

If 𝒯_n{R[f(x)]} = −n(n+1) f̃(n), then

𝒯_n{ R[f(x)] − ¼ f(x) } = −(n + ½)² f̃(n).   (11.84)

Proof: We replace −n(n+1) by ¼ − (n + ½)² in Equation 11.78 to obtain

𝒯_n{R[f(x)]} = [¼ − (n + ½)²] f̃(n).   (11.85)

Rearranging the terms in Equation 11.85 gives

𝒯_n{ R[f(x)] − ¼ f(x) } = −(n + ½)² f̃(n).

In general, this result can be written as

(−1)^k 𝒯_n{ R^k[f(x)] − (¼)^k f(x) } = Σ_{r=0}^{k−1} (−1)^r C(k, r) (¼)^r (n + ½)^{2k−2r} f̃(n),   (11.86)

where C(k, r) is the binomial coefficient. The proof of Equation 11.86 from Equation 11.83 follows by replacing −n(n+1) with ¼ − (n + ½)² and using the binomial expansion.

Example 11.12

𝒯_n{log(1−x)} = 2(log 2 − 1) for n = 0, and 𝒯_n{log(1−x)} = −2/(n(n+1)) for n > 0.   (11.87)

Although (d/dx) log(1−x) does not satisfy the conditions of Theorem 11.2, we integrate by parts to obtain

𝒯_n{R[log(1−x)]} = ∫_{−1}^1 (d/dx)[(1−x²)(d/dx) log(1−x)] P_n(x) dx,

which is, since (1−x²)(d/dx) log(1−x) = −(1+x), and by integrating by parts,

= [−(1+x) P_n(x)]_{−1}^1 + ∫_{−1}^1 (1+x) P′_n(x) dx = −2 + ∫_{−1}^1 (1+x) P′_n(x) dx.   (11.88)

By integrating by parts twice, result (Equation 11.88) gives

𝒯_n{R[log(1−x)]} = −2 + ∫_{−1}^1 log(1−x) (d/dx)[(1−x²) P′_n(x)] dx,

which is, by Equation 11.78,

= −2 − n(n+1) f̃(n),   (11.89)

where f̃(n) = 𝒯_n{log(1−x)}. However, R[log(1−x)] = (d/dx)[−(1+x)] = −1, so that 𝒯_n{R[log(1−x)]} = 0 for all n > 0, and hence result (Equation 11.89) gives

f̃(n) = −2/(n(n+1)).

On the other hand, since P_0(x) = 1, we have

𝒯_0{log(1−x)} = ∫_{−1}^1 log(1−x) dx,

which is, by direct integration,

= [−(1−x) log(1−x) − x]_{−1}^1 = 2(log 2 − 1).

THEOREM 11.3

If f(x) and f′(x) are piecewise continuous in −1 < x < 1, R^{−1}[f(x)] = h(x), and f̃(0) = ∫_{−1}^1 f(x) dx = 0, then
𝒯_n^{−1}{ f̃(n)/(n(n+1)) } = A − ∫_0^x [ds/(1−s²)] ∫_{−1}^s f(t) dt,   (11.90)

where A is an arbitrary constant of integration.

Proof: We have R[h(x)] = f(x), or

(d/dx)[(1−x²)(dh/dx)] = f(x).

Integrating over (−1, x) gives

∫_{−1}^x f(t) dt = (1−x²)(dh/dx),   (11.91)

which is a continuous function of x in |x| < 1 with limit zero as |x| → 1. Integration of Equation 11.91 gives

h(x) = ∫_0^x [ds/(1−s²)] ∫_{−1}^s f(t) dt − A,

where A is an arbitrary constant. Clearly, h(x) satisfies the conditions of Theorem 11.2, and there exists a positive real constant m < 1 such that

|h(x)| = O{(1−x²)^{−m}} as |x| → 1.

Hence, 𝒯_n{R[h(x)]} exists, and by Theorem 11.2, it follows that

𝒯_n{R[h(x)]} = −n(n+1) 𝒯_n{h(x)} = −n(n+1) 𝒯_n{R^{−1}[f(x)]},   (11.92)

from which it turns out that

𝒯_n{R^{−1}{f(x)}} = −f̃(n)/(n(n+1)).   (11.93)

Inversion leads to the results

𝒯_n^{−1}{ −f̃(n)/(n(n+1)) } = R^{−1}{f(x)} = h(x) = ∫_0^x [ds/(1−s²)] ∫_{−1}^s f(t) dt − A.   (11.94)

This proves the theorem.

THEOREM 11.4

If f(x) is continuous in each subinterval of (−1, 1) and a continuous function g(x) is defined by

g(x) = ∫_{−1}^x f(t) dt,   (11.95)

then

𝒯_n{g′(x)} = f̃(n) = g(1) − ∫_{−1}^1 g(x) P′_n(x) dx.   (11.96)

Proof: We have, by definition,

𝒯_n{g′(x)} = ∫_{−1}^1 g′(x) P_n(x) dx,

which is, by integrating by parts,

= [P_n(x) g(x)]_{−1}^1 − ∫_{−1}^1 g(x) P′_n(x) dx.

Since P_n(1) = 1 and g(−1) = 0, the preceding result becomes Equation 11.96.

COROLLARY 11.2

If result (Equation 11.96) is true and g(x) is given by Equation 11.95, then

𝒯_n{g(x)} = f̃(0) − f̃(1) when n = 0, and 𝒯_n{g(x)} = [f̃(n−1) − f̃(n+1)]/(2n+1) when n ≥ 1.   (11.97)

Proof: We write f̃(n−1) and f̃(n+1) using Equation 11.96 and then subtract, so that the resulting expression gives Equation 11.97 with the help of Equation 11.74.

COROLLARY 11.3

If g′(x) is a sectionally continuous function and g(x) is the continuous function given by Equation 11.95, then
𝒯_n{g′(x)} = g(1) when n = 0,
𝒯_n{g′(x)} = g(1) − (2n−1) g̃(n−1) − (2n−5) g̃(n−3) − ⋯ − g̃(0) when n = 1, 3, 5, …,
𝒯_n{g′(x)} = g(1) − (2n−1) g̃(n−1) − (2n−5) g̃(n−3) − ⋯ − 3 g̃(1) when n = 2, 4, 6, ….   (11.98)

These results can readily be verified using Equations 11.74 and 11.96.
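Corollary 11.2 is easy to exercise numerically. The sketch below (Python with NumPy; the test function f(x) = eˣ and the degree n = 4 are illustrative assumptions) compares 𝒯_n of the antiderivative g(x) = ∫_{−1}^x f(t) dt with [f̃(n−1) − f̃(n+1)]/(2n+1):

```python
import numpy as np

x, w = np.polynomial.legendre.leggauss(120)
basis = np.polynomial.legendre.Legendre.basis
tr = lambda vals, n: np.sum(w * basis(n)(x) * vals)   # T_n via quadrature

fv = np.exp(x)                      # f(x) = e^x
gv = np.exp(x) - np.exp(-1.0)       # g(x) = int_{-1}^x f(t) dt, in closed form

n = 4
lhs = tr(gv, n)
rhs = (tr(fv, n - 1) - tr(fv, n + 1)) / (2 * n + 1)
gap = abs(lhs - rhs)
```

Since both sides are evaluated with the same high-order quadrature, the identity holds to machine precision.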
THEOREM 11.5 (Convolution)

If 𝒯_n{f(x)} = f̃(n) and 𝒯_n{g(x)} = g̃(n), then

𝒯_n{f(x) * g(x)} = f̃(n) g̃(n),   (11.99)

where the convolution f(x) * g(x) is given by

f(x) * g(x) = h(x) = (1/π) ∫_0^π ∫_0^π f(cos μ) sin μ dμ g(cos λ) dβ,   (11.100)

with x = cos ν and

cos λ = cos μ cos ν + sin μ sin ν cos β.   (11.101)

Proof: We have, by definition (Equation 11.60),

f̃(n) g̃(n) = ∫_0^π f(cos μ) P_n(cos μ) sin μ dμ ∫_0^π g(cos λ) P_n(cos λ) sin λ dλ
= ∫_0^π f(cos μ) sin μ [ ∫_0^π g(cos λ) P_n(cos λ) P_n(cos μ) sin λ dλ ] dμ,   (11.102)

where f(x) = f(cos μ) and g(x) = g(cos λ). With the aid of an addition formula (see Sansone, 1959, p. 169) given as

P_n(cos λ) P_n(cos μ) = (1/π) ∫_0^π P_n(cos ν) dα,   (11.103)

where cos ν = cos λ cos μ + sin λ sin μ cos α, the product can be rewritten in the form

f̃(n) g̃(n) = (1/π) ∫_0^π f(cos μ) sin μ [ ∫_0^π ∫_0^π g(cos λ) P_n(cos ν) sin λ dα dλ ] dμ.   (11.104)

We next use Churchill and Dolph's (1954, pp. 94–96) geometrical arguments to replace the double integral inside the square bracket by

∫_0^π ∫_0^π g(cos μ cos ν + sin μ sin ν cos β) P_n(cos ν) sin ν dν dβ.   (11.105)

Substituting this result in Equation 11.102 and changing the order of integration, we obtain

f̃(n) g̃(n) = ∫_0^π P_n(cos ν) sin ν [ (1/π) ∫_0^π ∫_0^π f(cos μ) sin μ g(cos λ) dμ dβ ] dν = ∫_0^π h(cos ν) P_n(cos ν) sin ν dν,   (11.106)

where

cos λ = cos μ cos ν + sin μ sin ν cos β,   (11.107)

and

h(cos ν) = (1/π) ∫_0^π ∫_0^π f(cos μ) sin μ dμ g(cos λ) dβ.

This proves the theorem. In particular, when ν = 0, result Equation 11.100 becomes

h(1) = ∫_{−1}^1 f(t) g(t) dt,   (11.108)

and when ν = π, Equation 11.100 gives

h(−1) = ∫_{−1}^1 f(t) g(−t) dt.   (11.109)
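The convolution theorem can be tested directly by quadrature. The sketch below (Python with SciPy; the data f = g = x² and the degree n = 2 are illustrative assumptions) builds h(cos ν) from the double-integral definition and checks that its Legendre transform equals the product of the individual transforms:

```python
import numpy as np
from scipy import integrate

f = lambda t: t**2                   # illustrative data on [-1, 1]
g = lambda t: t**2

def h(v):                            # convolution h(cos v) as a double integral
    inner = lambda beta, mu: (f(np.cos(mu)) * np.sin(mu)
        * g(np.cos(mu)*np.cos(v) + np.sin(mu)*np.sin(v)*np.cos(beta)))
    val, _ = integrate.dblquad(inner, 0.0, np.pi, 0.0, np.pi)
    return val / np.pi

def ltr(func, n):                    # T_n{func} in the theta form
    Pn = np.polynomial.legendre.Legendre.basis(n)
    val, _ = integrate.quad(
        lambda th: func(np.cos(th)) * Pn(np.cos(th)) * np.sin(th), 0.0, np.pi)
    return val

n = 2
Pn = np.polynomial.legendre.Legendre.basis(n)
lhs, _ = integrate.quad(lambda v: h(v) * Pn(np.cos(v)) * np.sin(v),
                        0.0, np.pi)
rhs = ltr(f, n) * ltr(g, n)
gap = abs(lhs - rhs)
```

The nested quadrature is the expensive part of this check; for production use one would tabulate h on a grid first.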
11.2.4 Applications of Legendre Transforms to Boundary Value Problems

We solve the Dirichlet problem for the potential u(r, θ) inside a unit sphere r = 1, which satisfies the Laplace equation

(∂/∂r)(r² ∂u/∂r) + (∂/∂x)[(1−x²) ∂u/∂x] = 0, 0 < r < 1,   (11.110)

with the boundary condition (x = cos θ)
u(1, x) = f(x), −1 < x < 1.   (11.111)

We introduce the "Legendre transform" ũ(r, n) = 𝒯_n{u(r, θ)} defined by Equation 11.59. Application of this transform to Equations 11.110 and 11.111 gives

r² d²ũ(r, n)/dr² + 2r dũ/dr − n(n+1) ũ(r, n) = 0,   (11.112)
ũ(1, n) = f̃(n),   (11.113)

where ũ(r, n) is to be a continuous function of r for 0 ≤ r < 1. The bounded solution of Equations 11.112 and 11.113 is

ũ(r, n) = f̃(n) rⁿ.   (11.114)

Thus, the solution for u(r, x) can be found by the inverse transform, so that

u(r, x) = Σ_{n=0}^∞ (n + ½) f̃(n) rⁿ P_n(x) for 0 < r ≤ 1, |x| < 1.   (11.115)

The convolution theorem allows us to give another representation of the solution. In view of Equation 11.69, we find

𝒯_n^{−1}{rⁿ} = ½ (1−r²)(1−2rx+r²)^{−3/2},

and hence, by the convolution theorem 11.5,

u(r, cos θ) = (1/2π) ∫_0^π f(cos μ) sin μ dμ ∫_0^π (1−r²) dλ / (1 − 2r cos ν + r²)^{3/2},   (11.116)

where

cos ν = cos μ cos θ + sin μ sin θ cos λ.   (11.117)

Integral Equation 11.116 is called the Poisson integral formula for the potential inside the unit sphere for the Dirichlet problem.

On the other hand, for the Dirichlet exterior problem, the potential w(r, cos θ) outside the unit sphere (r > 1) can be obtained with the boundary condition w(1, cos θ) = f(cos θ). The solution of the Legendre transformed problem is

w̃(r, n) = f̃(n) r^{−(n+1)}, n = 0, 1, 2, …,   (11.118)

which is, in terms of w,

w(r, x) = Σ_{n=0}^∞ (n + ½) f̃(n) r^{−(n+1)} P_n(x), r ≥ 1,   (11.119)

and, in convolution form,

w(r, cos θ) = (1/2π) ∫_0^π f(cos μ) sin μ dμ ∫_0^π (r² − 1) dλ / (1 − 2r cos ν + r²)^{3/2},   (11.120)

where cos ν is given by Equation 11.117.

The following additional relations hold.

1. If |r| < 1,
   a. 𝒯_n{xⁿ} = 2^{n+1} (n!)² / (2n+1)!.
   b. 𝒯_n[ log{ (r − x + (1−2rx+r²)^{1/2}) / (1−x) } ] = 2 r^{n+1} / ((n+1)(2n+1)).
   c. 𝒯_n{ 2r (1−2rx+r²)^{−1/2} − log[ (r − x + (1−2rx+r²)^{1/2}) / (1−x) ] } = 2 r^{n+1} / (n+1).
   d. 𝒯_n{ log[ 2 / (1 − rx + (1−2rx+r²)^{1/2}) ] } = 0 for n = 0, and = 2 rⁿ / (n(2n+1)) for n > 0.

11.3 Jacobi and Gegenbauer Transforms
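The equivalence of the series solution and the Poisson-integral representation of the interior Dirichlet problem can be checked numerically. The sketch below (Python with SciPy; the boundary data f(x) = x³ and the evaluation point are illustrative assumptions) evaluates both forms at the same interior point:

```python
import numpy as np
from scipy import integrate

f = lambda x: x**3                   # illustrative boundary data, x = cos(theta)

xg, wg = np.polynomial.legendre.leggauss(80)
basis = np.polynomial.legendre.Legendre.basis
ftil = lambda n: np.sum(wg * basis(n)(xg) * f(xg))

def u_series(r, xc, N=10):           # interior solution as a Legendre series
    return sum((n + 0.5) * ftil(n) * r**n * basis(n)(xc) for n in range(N))

def u_poisson(r, theta):             # Poisson-integral representation
    inner = lambda lam, mu: (f(np.cos(mu)) * np.sin(mu) * (1 - r**2)
        / (1 - 2*r*(np.cos(mu)*np.cos(theta)
                    + np.sin(mu)*np.sin(theta)*np.cos(lam)) + r**2) ** 1.5)
    val, _ = integrate.dblquad(inner, 0.0, np.pi, 0.0, np.pi)
    return val / (2.0 * np.pi)

r, theta = 0.5, 1.1
gap = abs(u_series(r, np.cos(theta)) - u_poisson(r, theta))
```

Because the boundary data here is a cubic polynomial, the series terminates after four terms, making the comparison with the double integral particularly clean.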
11.3.1 Introduction This chapter deals with Jacobi and Gegenbauer transforms and their basic operational properties. The former is a fairly general finite integral transform in the sense that both Gegenbauer and Legendre transforms follow as special cases of the Jacobi transform. Some applications of both Jacobi and Gegenbauer transforms are discussed. This chapter is based on the papers by Debnath (1963, 1967), Scott (1953), Conte (1955), and Lakshmanarao (1954). All these special transforms have been unified by Eringen (1954) in his paper on the finite Sturm–Liouville transform.
11.3.2 Definition of the Jacobi Transform and Examples

Debnath (1963) introduced the Jacobi transform of a function F(x) defined in −1 < x < 1 by the integral

J{F(x)} = f^{(α,β)}(n) = ∫_{−1}^1 (1−x)^α (1+x)^β P_n^{(α,β)}(x) F(x) dx,   (11.121)

where P_n^{(α,β)}(x) is the Jacobi polynomial of degree n and orders α (> −1) and β (> −1). We assume that F(x) admits the series expansion

F(x) = Σ_{n=0}^∞ a_n P_n^{(α,β)}(x).   (11.122)

In view of the orthogonality relation

∫_{−1}^1 (1−x)^α (1+x)^β P_n^{(α,β)}(x) P_m^{(α,β)}(x) dx = δ_n δ_{mn},   (11.123)

where δ_{mn} is the Kronecker delta symbol and

δ_n = 2^{α+β+1} Γ(n+α+1) Γ(n+β+1) / [n! (α+β+2n+1) Γ(n+α+β+1)],   (11.124)

the coefficients a_n in Equation 11.122 are given by

a_n = (1/δ_n) ∫_{−1}^1 (1−x)^α (1+x)^β F(x) P_n^{(α,β)}(x) dx = f^{(α,β)}(n)/δ_n.   (11.125)

Thus, the inverse Jacobi transform is given by

J^{−1}{f^{(α,β)}(n)} = F(x) = Σ_{n=0}^∞ (δ_n)^{−1} f^{(α,β)}(n) P_n^{(α,β)}(x).   (11.126)

In particular, when α = β = 0, the above results reduce to the corresponding results for the Legendre transform defined by Equation 11.59. Note that both J and J^{−1} are linear transformations.

Example 11.13
If F(x) is a polynomial of degree m < n, then

J{F(x)} = 0.   (11.127)

Example 11.14

J{P_m^{(α,β)}(x)} = δ_n δ_{mn}.   (11.128)

This follows directly from the definition (Equation 11.121) and the orthogonality relation (Equation 11.123).

Example 11.15
From the uniformly convergent expansion of the generating function for |z| < 1,

2^{α+β} Q^{−1} (1 − z + Q)^{−α} (1 + z + Q)^{−β} = Σ_{n=0}^∞ zⁿ P_n^{(α,β)}(x),   (11.129)

where Q = (1 − 2xz + z²)^{1/2}, it turns out that

J{2^{α+β} Q^{−1} (1−z+Q)^{−α} (1+z+Q)^{−β}} = Σ_{n=0}^∞ zⁿ ∫_{−1}^1 (1−x)^α (1+x)^β [P_n^{(α,β)}(x)]² dx = Σ_{n=0}^∞ δ_n zⁿ.   (11.130)

Example 11.16

J{xⁿ} = ∫_{−1}^1 (1−x)^α (1+x)^β P_n^{(α,β)}(x) xⁿ dx = 2^{n+α+β+1} Γ(n+α+1) Γ(n+β+1) / Γ(2n+α+β+2).   (11.131)

Example 11.17
If p > β − 1, then

J{(1+x)^{p−β}} = ∫_{−1}^1 (1−x)^α (1+x)^p P_n^{(α,β)}(x) dx = 2^{α+p+1} Γ(n+α+1) Γ(p+1) Γ(p−β+1) / [n! Γ(α+p+n+2) Γ(p−β−n+1)].   (11.132)

In particular, when α = β = 0, this reduces to the corresponding result for the Legendre transform defined by Equation 11.59, so that

𝒯_n{(1+x)^p} = ∫_{−1}^1 (1+x)^p P_n(x) dx = 2^{p+1} {Γ(1+p)}² / [Γ(p+n+2) Γ(p−n+1)], (p > −1).   (11.133)
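The normalization constant δ_n is the workhorse of the inverse transform, so it is worth checking numerically. The sketch below (Python with SciPy; the orders α = 0.5, β = 1.5 and the degrees used are illustrative assumptions) verifies both the vanishing off-diagonal inner product and the diagonal value against the Γ-function formula:

```python
import math
from scipy import special, integrate

alpha, beta = 0.5, 1.5               # illustrative Jacobi orders, both > -1

def inner(j, k):
    f = lambda x: ((1 - x)**alpha * (1 + x)**beta
                   * special.eval_jacobi(j, alpha, beta, x)
                   * special.eval_jacobi(k, alpha, beta, x))
    return integrate.quad(f, -1.0, 1.0)[0]

def delta_n(n):
    return (2**(alpha + beta + 1) * math.gamma(n + alpha + 1)
            * math.gamma(n + beta + 1)
            / (math.factorial(n) * (alpha + beta + 2*n + 1)
               * math.gamma(n + alpha + beta + 1)))

off = inner(3, 5)          # should vanish by orthogonality
diag = inner(4, 4)         # should equal delta_4
```

Setting α = β = 0 in delta_n reproduces the Legendre value 2/(2n+1), which is a quick consistency check on the formula.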
Example 11.18
If Re s > −1, then

J{(1−x)^{s−α}} = ∫_{−1}^1 (1−x)^s (1+x)^β P_n^{(α,β)}(x) dx = 2^{s+β+1} Γ(s+1) Γ(n+β+1) Γ(α−s+n) / [n! Γ(α−s) Γ(β+s+n+2)].   (11.134)

Example 11.19
If Re s > −1, then

J{(1+x)^{s−β} P_m^{(α,s)}(x)} = ∫_{−1}^1 (1−x)^α (1+x)^s P_n^{(α,β)}(x) P_m^{(α,s)}(x) dx
= 2^{α+s+1} Γ(n+α+1) Γ(α+β+m+n+1) Γ(s+m+1) Γ(s−β+1) / [m! (n−m)! Γ(α+β+n+1) Γ(α+s+m+n+2) Γ(s−β−n+m+1)].   (11.135)

11.3.3 Basic Operational Properties

THEOREM 11.6
If J{F(x)} = f^{(α,β)}(n),

lim_{|x|→1} (1−x)^{α+1} (1+x)^{β+1} F(x) = 0,   (11.136a)
lim_{|x|→1} (1−x)^{α+1} (1+x)^{β+1} F′(x) = 0,   (11.136b)

and

R[F(x)] = (1−x)^{−α} (1+x)^{−β} (d/dx)[ (1−x)^{α+1} (1+x)^{β+1} (dF/dx) ],   (11.137)

then J{R[F(x)]} exists and is given by

J{R[F(x)]} = −n(n+α+β+1) f^{(α,β)}(n),   (11.138)

where n = 0, 1, 2, 3, ….

Proof: We have, by definition,

J{R[F(x)]} = ∫_{−1}^1 (d/dx)[ (1−x)^{α+1} (1+x)^{β+1} (dF/dx) ] P_n^{(α,β)}(x) dx,

which is, by integrating by parts and using the orthogonality relation (Equation 11.123),

= −n(n+α+β+1) ∫_{−1}^1 (1−x)^α (1+x)^β P_n^{(α,β)}(x) F(x) dx = −n(n+α+β+1) f^{(α,β)}(n).

This completes the proof.

If F(x) and R[F(x)] satisfy the conditions of Theorem 11.6, then J{R[R[F(x)]]} exists and is given by

J{R²[F(x)]} = J{R[R[F(x)]]} = (−1)² n² (n+α+β+1)² f^{(α,β)}(n).   (11.139)

More generally, if F(x) and R^k[F(x)] satisfy the conditions of Theorem 11.6, where k = 1, 2, …, m−1, and m is a positive integer, then

J{R^m[F(x)]} = (−1)^m n^m (n+α+β+1)^m f^{(α,β)}(n).   (11.140)

When α = β = 0, P_n^{(0,0)}(x) becomes the Legendre polynomial P_n(x), and the Jacobi transform pairs (Equations 11.121 and 11.126) reduce to the Legendre transform pairs (Equations 11.59 and 11.61). All results for the Jacobi transform then reduce to the corresponding Legendre transform results.

11.3.4 Applications of Jacobi Transforms to the Generalized Heat Conduction Problem

The one-dimensional generalized heat equation for temperature u(x, t) is

(∂/∂x)(k ∂u/∂x) + Q(x, t) = ρc ∂u/∂t,   (11.141)

where k is the thermal conductivity, Q(x, t) is a continuous heat source within the medium, and ρ and c are density and specific heat, respectively.

If the thermal conductivity is k = a(1−x²), where a is a real constant, and the source is Q(x, t) = (mx + n) ∂u/∂x, then the heat Equation 11.141 reduces to

(∂/∂x)[(1−x²) ∂u/∂x] + ((mx+n)/a) ∂u/∂x = (ρc/a) ∂u/∂t.   (11.142)

We consider a nonhomogeneous beam with ends at x = ±1 whose lateral surface is insulated. Since k = 0 at the ends, the ends of the beam are also insulated. We assume the initial condition

u(x, 0) = G(x) for all −1 < x < 1,   (11.143)

where G(x) is a suitable function so that J{G(x)} exists.
If we write m/a = −(α+β) and n/a = β−α, so that (α, β) = (−(m+n)/(2a), (n−m)/(2a)), the left-hand side of Equation 11.142 becomes

(∂/∂x)[(1−x²) ∂u/∂x] + [(β−α) − (β+α)x] ∂u/∂x
= (∂/∂x)[(1−x²) ∂u/∂x] + [(1−x)β − (1+x)α] ∂u/∂x
= (1−x)^{−α} (1+x)^{−β} (∂/∂x)[ (1−x)^{α+1} (1+x)^{β+1} ∂u/∂x ]
= R[u(x, t)].   (11.144)

Thus, Equation 11.142 reduces to

R[u(x, t)] = (1/δ) ∂u/∂t, δ = a/(ρc).

Application of the Jacobi transform to Equations 11.143 and 11.144 gives

(d/dt) u^{(α,β)}(n, t) = −δ n(n+α+β+1) u^{(α,β)}(n, t),   (11.145)
u^{(α,β)}(n, 0) = g^{(α,β)}(n).   (11.146)

The solution of this system is

u^{(α,β)}(n, t) = g^{(α,β)}(n) exp[−n(n+α+β+1) t δ].   (11.147)

The inverse Jacobi transform gives the formal solution

u(x, t) = Σ_{n=0}^∞ δ_n^{−1} g^{(α,β)}(n) P_n^{(α,β)}(x) exp[−n(n+α+β+1) t δ],   (11.148)

where α = −(m+n)/(2a) and β = (n−m)/(2a).

11.3.5 The Gegenbauer Transform and Its Basic Operational Properties

When α = β = ν − ½, the Jacobi polynomial P_n^{(α,β)}(x) becomes the Gegenbauer polynomial C_n^ν(x), which satisfies the self-adjoint differential form

(d/dx)[ (1−x²)^{ν+½} (dy/dx) ] + n(n+2ν)(1−x²)^{ν−½} y = 0,   (11.149)

and the orthogonality relation

∫_{−1}^1 (1−x²)^{ν−½} C_m^ν(x) C_n^ν(x) dx = δ_n δ_{mn},   (11.150)

where

δ_n = 2^{1−2ν} π Γ(n+2ν) / { n! (n+ν) [Γ(ν)]² }.   (11.151)

Thus, when α = β = ν − ½, the Jacobi transform pairs (Equations 11.121 and 11.126) reduce to the Gegenbauer transform pairs, in the form

G{F(x)} = f^{(ν)}(n) = ∫_{−1}^1 (1−x²)^{ν−½} C_n^ν(x) F(x) dx,   (11.152)
G^{−1}{f^{(ν)}(n)} = F(x) = Σ_{n=0}^∞ δ_n^{−1} C_n^ν(x) f^{(ν)}(n), −1 < x < 1.   (11.153)

Obviously, G and G^{−1} stand for the Gegenbauer transformation and its inverse, respectively. They are linear integral transformations. When α = β = ν − ½, the differential form (Equation 11.137) becomes

R[F(x)] = (1−x²) (d²F/dx²) − (2ν+1) x (dF/dx),   (11.154)

which can be expressed as

R[F(x)] = (1−x²)^{½−ν} (d/dx)[ (1−x²)^{ν+½} (dF/dx) ].   (11.155)

Under the Gegenbauer transformation G, the differential form (Equation 11.154) is reduced to the algebraic form

G{R[F(x)]} = −n(n+2ν) f^{(ν)}(n).   (11.156)

This follows directly from the relation (Equation 11.138). Similarly, we obtain

G{R²[F(x)]} = (−1)² n² (n+2ν)² f^{(ν)}(n).   (11.157)

More generally,

G{R^k[F(x)]} = (−1)^k n^k (n+2ν)^k f^{(ν)}(n),   (11.158)

where k = 1, 2, 3, ….
CONVOLUTION THEOREM 11.7

If G{F(x)} = f^{(ν)}(n) and G{G(x)} = g^{(ν)}(n), then

f^{(ν)}(n) g^{(ν)}(n) = G{H(x)} = h^{(ν)}(n),   (11.159)

where

H(x) = G^{−1}{h^{(ν)}(n)} = G^{−1}{f^{(ν)}(n) g^{(ν)}(n)} = F(x) * G(x),   (11.160)

and H(x) is given by

H(cos ψ) = A (sin ψ)^{1−2ν} ∫_0^π ∫_0^π F(cos θ) G(cos φ)(sin θ)^{2ν} (sin φ)^{2ν−1} (sin λ)^{2ν−1} dθ dα,   (11.161)

where λ and α are defined by Equations 11.165 and 11.167.

Proof: We have, by definition,

f^{(ν)}(n) g^{(ν)}(n) = ∫_{−1}^1 F(x)(1−x²)^{ν−½} C_n^ν(x) dx ∫_{−1}^1 G(x)(1−x²)^{ν−½} C_n^ν(x) dx
= ∫_0^π F(cos θ)(sin θ)^{2ν} C_n^ν(cos θ) dθ ∫_0^π G(cos φ)(sin φ)^{2ν} C_n^ν(cos φ) dφ
= ∫_0^π F(cos θ)(sin θ)^{2ν} [ ∫_0^π G(cos φ) C_n^ν(cos θ) C_n^ν(cos φ)(sin φ)^{2ν} dφ ] dθ.   (11.162)

The addition formula for the Gegenbauer polynomial (see Erdélyi, 1953, p. 177) is

C_n^ν(cos θ) C_n^ν(cos φ) = A ∫_0^π C_n^ν(cos ψ)(sin λ)^{2ν−1} dλ,   (11.163)

where

A = Γ(n+2ν) / [ n! 2^{2ν−1} Γ²(ν) ],   (11.164)

and

cos ψ = cos θ cos φ + sin θ sin φ cos λ.   (11.165)

In view of this formula, result (Equation 11.162) assumes the form

f^{(ν)}(n) g^{(ν)}(n) = A ∫_0^π F(cos θ)(sin θ)^{2ν} [ ∫_0^π ∫_0^π G(cos φ) C_n^ν(cos ψ)(sin φ)^{2ν} (sin λ)^{2ν−1} dλ dφ ] dθ.   (11.166)

We next introduce a new variable α defined by the relation

cos φ = cos θ cos ψ + sin θ sin ψ cos α.   (11.167)

Thus, under the transformation of coordinates defined by Equations 11.165 and 11.167, the elementary area dλ dφ = (sin ψ / sin φ) dψ dα, where (sin ψ / sin φ) is the Jacobian of the transformation. In view of this transformation, the square region of the φ−λ plane given by (0 ≤ φ ≤ π, 0 ≤ λ ≤ π) transforms into a square region of the same dimension in the ψ−α plane. Consequently, the double integral inside the square bracket in Equation 11.166 reduces to

∫_0^π ∫_0^π G(cos φ) C_n^ν(cos ψ)(sin φ)^{2ν−1} (sin λ)^{2ν−1} sin ψ dψ dα,   (11.168)

where cos ψ is defined by Equation 11.165 and cos φ is defined by Equation 11.167. If the double integral (Equation 11.168) is substituted into Equation 11.166, and if the order of integration is interchanged, Equation 11.166 becomes

f^{(ν)}(n) g^{(ν)}(n) = ∫_0^π (sin ψ)^{2ν} C_n^ν(cos ψ) H(cos ψ) dψ = G{H(cos ψ)},   (11.169)

where

H(cos ψ) = A (sin ψ)^{1−2ν} ∫_0^π ∫_0^π F(cos θ) G(cos φ)(sin θ)^{2ν} (sin φ)^{2ν−1} (sin λ)^{2ν−1} dθ dα.   (11.170)

When ν = ½, C_n^{1/2}(x) becomes the Legendre polynomial P_n(x); the Gegenbauer transform pairs (Equations 11.152 and 11.153) reduce to the Legendre transform pairs (Equations 11.59 and 11.61), and the convolution theorem 11.7 reduces to the corresponding convolution theorem 11.5 for the Legendre transform.
11.3.6 Application of the Gegenbauer Transform

The generalized one-dimensional heat equation in a nonhomogeneous solid beam for the temperature u(x, t) is

(1−x²) ∂²u/∂x² − (2ν+1) x ∂u/∂x = (1/δ) ∂u/∂t,   (11.171)

where k = a(1−x²) is the thermal conductivity, δ = a/(ρc), and the second term on the left-hand side represents the continuous source of heat within the solid beam. We assume that the beam is bounded by the planes at x = ±1 and its lateral surfaces are insulated. The initial condition is

u(x, 0) = G(x) for −1 < x < 1,   (11.172)

where G(x) is a given function so that its Gegenbauer transform exists.

Application of the Gegenbauer transform to Equations 11.171 and 11.172 and the use of Equation 11.156 gives

(d/dt) u^{(ν)}(n, t) = −δ n(n+2ν) u^{(ν)}(n, t),   (11.173)
u^{(ν)}(n, 0) = g^{(ν)}(n).   (11.174)

The solution of this system is

u^{(ν)}(n, t) = g^{(ν)}(n) exp[−n(n+2ν) t δ].   (11.175)

The inverse transform gives the formal solution

u(x, t) = Σ_{n=0}^∞ δ_n^{−1} C_n^ν(x) g^{(ν)}(n) exp[−n(n+2ν) t δ],   (11.176)

where δ_n is given by Equation 11.151.

11.4 Laguerre Transforms

11.4.1 Introduction

This chapter is devoted to the study of the Laguerre transform and its basic operational properties. It is shown that the Laguerre transform can be used effectively to solve the heat conduction problem in a semi-infinite medium with variable thermal conductivity in the presence of a heat source within the medium.

11.4.2 Definition of the Laguerre Transform and Examples

Debnath (1960) introduced the Laguerre transform of a function f(x) defined in 0 ≤ x < ∞ by means of the integral

L{f(x)} = f̃_α(n) = ∫_0^∞ e^{−x} x^α L_n^α(x) f(x) dx,   (11.177)

where L_n^α(x) is the Laguerre polynomial of degree n (≥ 0) and order α (> −1), which satisfies the ordinary differential equation expressed in the self-adjoint form

(d/dx)[ e^{−x} x^{α+1} (d/dx) L_n^α(x) ] + n e^{−x} x^α L_n^α(x) = 0.   (11.178)

In view of the orthogonality of the Laguerre polynomials,

∫_0^∞ e^{−x} x^α L_n^α(x) L_m^α(x) dx = C(n+α, n) Γ(α+1) δ_{mn} = δ_n δ_{nm},   (11.179)

where δ_{mn} is the Kronecker delta symbol and δ_n is given by

δ_n = C(n+α, n) Γ(α+1),   (11.180)

the inverse Laguerre transform is given by

f(x) = L^{−1}{f̃_α(n)} = Σ_{n=0}^∞ (δ_n)^{−1} f̃_α(n) L_n^α(x).   (11.181)

When α = 0, the Laguerre transform pairs due to McCully (1960) follow from Equations 11.177 and 11.181 in the form

L{f(x)} = f̃_0(n) = ∫_0^∞ e^{−x} L_n(x) f(x) dx,   (11.182)
L^{−1}{f̃_0(n)} = f(x) = Σ_{n=0}^∞ f̃_0(n) L_n(x),   (11.183)

where L_n(x) is the Laguerre polynomial of degree n and order zero. Obviously, L and L^{−1} are linear integral transformations. The following examples (Debnath, 1960) illustrate the Laguerre transform of some simple functions.

Example 11.20
If f(x) = L_m^α(x), then

L{L_m^α(x)} = δ_n δ_{nm}.   (11.184)

This follows directly from the definitions (Equations 11.177 and 11.179).

Example 11.21
If f(x) = x^{s−1}, where s is a positive real number, then

L{x^{s−1}} = ∫_0^∞ e^{−x} x^{α+s−1} L_n^α(x) dx = Γ(s+α) Γ(n−s+1) / [n! Γ(1−s)],   (11.185)

in which a result due to Howell (1938) is used.
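Generalized Gauss–Laguerre quadrature gives a quick way to exercise Laguerre transform formulas. The sketch below (Python with SciPy; the parameters α = 1, a = 0.7, n = 4 are illustrative assumptions) checks the exponential transform L{e^{−ax}} = Γ(n+α+1) aⁿ / [n!(a+1)^{n+α+1}]:

```python
import math
import numpy as np
from scipy import special

alpha, a, n = 1.0, 0.7, 4            # illustrative parameters, a > -1

# the generalized Gauss-Laguerre rule integrates e^{-x} x^alpha times a
# smooth, decaying factor with geometric convergence
x, w = special.roots_genlaguerre(80, alpha)
lhs = np.sum(w * special.eval_genlaguerre(n, alpha, x) * np.exp(-a * x))

rhs = (math.gamma(n + alpha + 1) * a**n
       / (math.factorial(n) * (a + 1)**(n + alpha + 1)))
gap = abs(lhs - rhs)
```

The 80-node rule matches the closed form essentially to machine precision, which also illustrates why these transforms are convenient to tabulate numerically.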
Example 11.22
If a > −1 and f(x) = e^{−ax}, then

L{e^{−ax}} = ∫_0^∞ e^{−x(1+a)} x^α L_n^α(x) dx = Γ(n+α+1) aⁿ / [n! (a+1)^{n+α+1}],   (11.186)

where the result in Erdélyi et al. (1954, vol. 2, p. 191) is used.

Example 11.23
If f(x) = e^{−ax} L_m^α(x), then

L{e^{−ax} L_m^α(x)} = ∫_0^∞ e^{−x(a+1)} x^α L_n^α(x) L_m^α(x) dx,

which is, due to Howell (1938),

= [Γ(n+α+1) Γ(m+α+1) / (n! m! Γ(1+α))] [(a−1)^{n+m+α+1} / a^{n+m+2α+2}] ₂F₁(n+α+1, m+α+1; α+1; 1/a²),   (11.187)

where ₂F₁(a, b; c; x) is the hypergeometric function.

Example 11.24
Using the relation from Erdélyi (1953, vol. 2, p. 192),

L_n^α(x) = Σ_{m=0}^n (m!)^{−1} (α−β)_m L_{n−m}^β(x),   (11.188)

we obtain

L{x^{β−α} f(x)} = ∫_0^∞ e^{−x} x^β L_n^α(x) f(x) dx = Σ_{m=0}^n (m!)^{−1} (α−β)_m f̃_β(n−m).   (11.189)

In particular, when β = α − 1, since (1)_m = m!, we obtain

L{f(x)/x} = Σ_{m=0}^n f̃_{α−1}(n−m).

Example 11.25

L{eˣ x^{−α} Γ(α, x)} = δ_n/(n+1), −1 < α < 0.   (11.190)

We use a result from Erdélyi (1953, vol. 2, p. 215),

eˣ x^{−α} Γ(α, x) = Σ_{n=0}^∞ (n+1)^{−1} L_n^α(x), (α > −1, x > 0),

in the definition (Equation 11.177) to derive Equation 11.190.

Example 11.26
If b > 0, then

L{x^b} = Γ(α+b+1) (−b)_n δ_n / Γ(n+α+1).   (11.191)

Using the result from Erdélyi (1953, vol. 2, p. 214),

x^b = Γ(α+b+1) Σ_{n=0}^∞ [(−b)_n / Γ(n+α+1)] L_n^α(x), (x > 0, α > −1),

we can easily obtain Equation 11.191.

Example 11.27
If |z| < 1 and α ≥ 0, then

a. L{ (1−z)^{−(α+1)} exp( xz/(z−1) ) } = δ_n zⁿ,   (11.192)
b. L{ (xz)^{−α/2} e^z J_α(2√(xz)) } = δ_n zⁿ / Γ(n+α+1).   (11.193)

We have the following generating functions (Erdélyi, 1953, vol. 2, p. 189):

(1−z)^{−(α+1)} exp( xz/(z−1) ) = Σ_{n=0}^∞ L_n^α(x) zⁿ, |z| < 1,
(xz)^{−α/2} e^z J_α(2√(xz)) = Σ_{n=0}^∞ zⁿ L_n^α(x) / Γ(n+α+1), |z| < 1.

In view of these results combined with the orthogonality relation (Equation 11.179), we obtain Equations 11.192 and 11.193.
Example 11.28 (Recurrence Relations)

a. f̃_{α+1}(n) = (n+α+1) f̃_α(n) − (n+1) f̃_α(n+1),   (11.194)
b. n! f̃_{m−n}(n) = (−1)^{n−m} m! Σ_{k=0}^m (k!)^{−1} (2n−2m)_k f̃_{m−n}(m−k).   (11.195)

We have

f̃_{α+1}(n) = ∫_0^∞ e^{−x} x^{α+1} L_n^{α+1}(x) f(x) dx,

which is, by using the recurrence relation x L_n^{α+1}(x) = (n+α+1) L_n^α(x) − (n+1) L_{n+1}^α(x) for the Laguerre polynomial,

= ∫_0^∞ e^{−x} x^α [ (n+α+1) L_n^α(x) − (n+1) L_{n+1}^α(x) ] f(x) dx = (n+α+1) f̃_α(n) − (n+1) f̃_α(n+1).

Similarly, we find

n! f̃_{m−n}(n) = ∫_0^∞ e^{−x} x^{m−n} n! L_n^{m−n}(x) f(x) dx.

We next use the following result due to Howell (1938),

n! L_n^{m−n}(x) = (−1)^{n−m} m! x^{n−m} L_m^{n−m}(x),

to obtain

n! f̃_{m−n}(n) = (−1)^{n−m} m! ∫_0^∞ e^{−x} x^{m−n} x^{n−m} L_m^{n−m}(x) f(x) dx = (−1)^{n−m} m! Σ_{k=0}^m (k!)^{−1} (2n−2m)_k f̃_{m−n}(m−k),

where the last step uses the expansion (Equation 11.188).

The Laguerre transform of derivatives follows from the definition (Equation 11.177) by integration by parts. We have

L{f′(x)} = ∫_0^∞ e^{−x} x^α L_n^α(x) f′(x) dx
= [e^{−x} x^α L_n^α(x) f(x)]_0^∞ + ∫_0^∞ e^{−x} x^α L_n^α(x) f(x) dx − α ∫_0^∞ e^{−x} x^{α−1} L_n^α(x) f(x) dx − ∫_0^∞ e^{−x} x^α [(d/dx) L_n^α(x)] f(x) dx,

which is, due to Erdélyi et al. (1954, vol. 2, p. 192), with the boundary term vanishing,

L{f′(x)} = f̃_α(n) − α Σ_{k=0}^n f̃_{α−1}(k) + Σ_{k=0}^{n−1} f̃_α(k).   (11.196)

Applying this result twice gives

L{f″(x)} = f̃_α(n) − 2α Σ_{m=0}^{n} f̃_{α−1}(n−m) + 2 Σ_{m=0}^{n−1} f̃_α(n−m−1) − 2α Σ_{m=0}^{n−1} (m+1) f̃_{α−1}(n−m−1) + α(α−1) Σ_{m=0}^{n} (m+1) f̃_{α−2}(n−m) + Σ_{m=0}^{n−2} (m+1) f̃_α(n−m−2),   (11.197)

and so on for the Laguerre transforms of higher derivatives.

THEOREM 11.8

If g(x) = ∫_0^x f(t) dt, so that g(x) is absolutely continuous and g′(x) exists, and if g′(x) is bounded and integrable, then

f̃_α(n) − f̃_α(n−1) = g̃_α(n) − α g̃_{α−1}(n).   (11.198)
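The first recurrence relation can be verified directly. The sketch below (Python with SciPy; the choices α = 0.5, n = 3, and f(x) = e^{−0.3x} are illustrative assumptions) computes both sides of f̃_{α+1}(n) = (n+α+1) f̃_α(n) − (n+1) f̃_α(n+1) with generalized Gauss–Laguerre quadrature:

```python
import numpy as np
from scipy import special

alpha, n = 0.5, 3
f = lambda x: np.exp(-0.3 * x)       # illustrative f with a convergent transform

def ftil(al, k):
    # Laguerre transform of order al and degree k by Gauss-Laguerre quadrature
    x, w = special.roots_genlaguerre(120, al)
    return np.sum(w * special.eval_genlaguerre(k, al, x) * f(x))

lhs = ftil(alpha + 1, n)
rhs = (n + alpha + 1) * ftil(alpha, n) - (n + 1) * ftil(alpha, n + 1)
gap = abs(lhs - rhs)
```

The recurrence rests on the Laguerre identity x L_n^{α+1}(x) = (n+α+1) L_n^α(x) − (n+1) L_{n+1}^α(x), so the numerical agreement also checks that identity indirectly.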
S = (1/2πj) ∫_{c−j∞}^{c+j∞} Γ(s) cos(πs/2) ζ(s+2) y^{−s} ds, 0 < c < 1.   (12.119)

The integral is easily computed by the method of residues, closing the contour to the left where the integrand goes to zero. The function ζ(s) is analytic everywhere except at s = 1, where it has a simple pole with residue equal to 1. The result is

S = y²/4 − πy/2 + π²/6.   (12.120)

One also encounters integrals of the form

K(t) = ∫_0^∞ K_0(τ) K_1(t/τ) dτ/τ.   (12.121)

One recognizes the expression of a multiplicative convolution. Such an integral can be computed by performing the following steps:

- Mellin transform the functions K_0 and K_1 to obtain ℳ[K_0; s] and ℳ[K_1; s].
- Multiply the transforms to obtain ℳ[K; s] = ℳ[K_0; s] ℳ[K_1; s].
- Find the inverse Mellin transform of ℳ[K; s] using the tables. The result will in general be expressed as a combination of generalized hypergeometric series.

For the last operation, the book by Marichev⁵ can be of great help, as previously mentioned in Section 12.2.1.6. The method can be extended; essentially, the technique concerns integrals which can be brought to the form

∫_0^∞ Π_{i=1}^N K_i(t) t^{s−1} dt,   (12.124)

whose treatment involves the product of the individual transforms ℳ[K_i; s_i]. Techniques of inversion for this expression are developed in a book by Sasiela,¹⁸ devoted to the propagation of electromagnetic waves in turbulent media.

The Mellin transform is also well suited to differential equations of Euler–Cauchy type,

Σ_{k=0}^n a_k (t (d/dt))^k u(t) = g(t).   (12.125)

Since

(t (d/dt))^k u(t) = [ (t (d/dt))^k δ(t−1) ] ∨ u(t),   (12.126)

where ∨ denotes the multiplicative convolution, they can be written as a convolution:

[ Σ_{k=0}^n a_k (t (d/dt))^k δ(t−1) ] ∨ u(t) = g(t).   (12.127)

The more usual Euler–Cauchy differential equation, which is written as

[ b_n tⁿ (d/dt)ⁿ + b_{n−1} t^{n−1} (d/dt)^{n−1} + ⋯ + b_0 ] u(t) = g(t),   (12.128)

can be brought to the form (Equation 12.125) by using relations such that
12-14
Transforms and Applications Handbook
(t d/dt)² = t(d/dt) + t²(d²/dt²).   (12:129)

It can also be transformed directly into a convolution which reads:

[Σ_{k=0}^{n} b_k t^k δ^{(k)}(t − 1)] ∨ u(t) = g(t).   (12:130)

The Mellin treatment of convolution equations will be explained in the case of Equation 12.127 since it is the most characteristic. Suppose that the known function g has a Mellin transform ℳ[g; s] ≡ G(s) that is holomorphic in the strip S(s_l, s_r). We shall seek solutions u which admit a Mellin transform U(s) holomorphic in the same strip or in some substrip. The Mellin transform of Equation 12.127 is obtained by using the convolution property and relation (Equation 12.62):

A(s)U(s) = G(s),   (12:131)

where

A(s) ≡ Σ_{k=0}^{n} a_k (−1)^k s^k.   (12:132)

Two different situations may arise.

1. Either A(s) has no zeros in the strip S(s_l, s_r). In that case, U(s) given by G(s)/A(s) can be inverted in the strip. According to Theorems 12.2 and 12.3, the unique solution is a distribution belonging to 7′(s₁, s₂).
2. Or A(s) has m zeros in the strip. The main strip S(s_l, s_r) can be decomposed into adjacent substrips

s_l < Re(s) < s₁, s₁ < Re(s) < s₂, …, s_m < Re(s) < s_r.   (12:133)

The solution in the k-substrip is given by the Mellin inverse formula:

u(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} [G(s)/A(s)] t^{−s} ds,   (12:134)

where s_k < c < s_{k+1}. There is a different solution in each strip, two solutions differing by a solution of the homogeneous equation.7,8,17
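The symbol A(s) in (12.132) rests on the operational rule ℳ[(t d/dt)u; s] = −s ℳ[u; s]. A quick numerical sanity check with the illustrative choice u(t) = e^{−t}, for which ℳ[u; s] = Γ(s):

```python
import numpy as np
from math import gamma

def mellin(g, s, t):
    # classical Mellin transform  M[g; s] = ∫_0^∞ g(t) t^{s-1} dt  (real s here)
    return np.trapz(g * t**(s - 1.0), t)

t = np.linspace(1e-8, 80.0, 400001)
u = np.exp(-t)
tdu = -t * np.exp(-t)          # (t d/dt) u  for u = exp(-t)

s = 1.5
print(mellin(tdu, s, t), -s * mellin(u, s, t), -s * gamma(s))
```

All three values coincide, confirming that each factor t(d/dt) contributes a factor (−s) to the Mellin symbol.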
12.2.2.4 Solution of a Potential Problem in a Wedge
The problem is to solve Laplace's equation in an infinite two-dimensional wedge with Dirichlet boundary conditions. Polar coordinates with origin at the apex of the wedge are used and the sides are located at θ = ±α. The unknown function u(r, θ) is supposed to verify:

Δu = 0, 0 < r < ∞, −α < θ < α,   (12:135)

with the following boundary conditions:

1. On the sides of the wedge, if R is a given positive number:

u(r, ±α) = { 1 if 0 < r < R; 0 if r > R }   (12:136)

or, equivalently:

u(r, ±α) = H(R − r).   (12:137)

2. When r is finite, u(r, θ) is bounded.
3. When r tends to infinity, u(r, θ) ∼ r^{−b}, b > 0.

In polar coordinates, Equation 12.135 multiplied by r² yields:

r² (∂²u/∂r²) + r (∂u/∂r) + ∂²u/∂θ² = 0.   (12:138)

The above conditions on u(r, θ) ensure that its Mellin transform U(s, θ) with respect to r exists as a holomorphic function in some region 0 < Re(s) < b. The equation satisfied by U is obtained from Equation 12.138 by using property (Equation 12.59) of the Mellin transformation and reads:

d²U(s, θ)/dθ² + s² U(s, θ) = 0.   (12:139)

The general solution of this equation can be written as

U(s, θ) = A(s)e^{jsθ} + B(s)e^{−jsθ}.   (12:140)

Functions A, B are to be determined by the boundary condition (Equation 12.137), which leads to the following requirement on U:

U(s, ±α) = R^s s^{−1} for Re(s) > 0.   (12:141)

Explicitly, this is written as

A(s)e^{jsα} + B(s)e^{−jsα} = R^s s^{−1},   (12:142)
A(s)e^{−jsα} + B(s)e^{jsα} = R^s s^{−1},   (12:143)

and leads to the solution:

A(s) = B(s) = R^s / (2s cos(sα)).   (12:144)

The solution of the form (Equation 12.140) which verifies Equation 12.141 is given by

U(s, θ) = R^s cos(sθ) / (s cos(sα)).   (12:145)
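The solution (12.145) can be inverted numerically along a vertical contour inside the strip of holomorphy. In this sketch the wedge half-angle α = π/4, the radius R = 1, and the contour abscissa are illustrative choices; near the apex u approaches the boundary value 1, and it decays for r ≫ R:

```python
import numpy as np

alpha, R, c = np.pi / 4, 1.0, 1.0   # contour Re(s) = c lies in 0 < Re(s) < pi/(2*alpha) = 2
b = np.arange(-25.0, 25.0, 1e-3)
s = c + 1j * b

def u(r, theta):
    # u(r, theta) = (1/2πj) ∫ U(s, theta) r^{-s} ds, with U from (12.145);
    # the integrand decays like exp(-alpha*|Im s|), so truncation is safe
    U = R**s * np.cos(s * theta) / (s * np.cos(s * alpha))
    return (np.trapz(U * r**(-s), b) / (2 * np.pi)).real

print(u(0.01, 0.0))    # close to 1 near the apex
print(u(100.0, 0.0))   # small far from the apex
```

Closing the contour to the left picks up the pole at s = 0 (residue 1), reproducing u → 1 as r → 0; for r > R the nearest pole of 1/cos(sα) gives the r^{−2} decay.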
12-15
Mellin Transform
This function U is holomorphic in the strip 0 < Re(s) < π/(2α). Its inverse Mellin transform is a function u(r, θ) that is obtained from the result of Example 12.10.

12.2.2.5 Asymptotic Expansion of Integrals

The Laplace transform I[f; λ] defined by

I[f; λ] = ∫₀^∞ e^{−λt} f(t) dt   (12:146)

has an asymptotic expansion as λ goes to infinity which is characterized by the behavior of the function f when t → 0⁺.9,14,19 With the help of Mellin's transformation, one can extend this type of study to other transforms of the form:

I[f; λ] = ∫₀^∞ h(λt) f(t) dt,   (12:147)

where h is a general kernel. Examples of such h-transforms9 are

. Fourier transform: h(λt) = e^{jλt}
. Cosine and Sine transforms: h(λt) = cos(λt) or sin(λt)
. Laplace transform: h(λt) = e^{−λt}
. Hankel transform: h(λt) = J_ν(λt)(λt)^{1/2}, where J_ν is the Bessel function of the first kind
. Generalized Stieltjes transform: I[f; λ] = λ^ν ∫₀^∞ f(t)/(1 + λt)^ν dt

A short formal overview of the procedure will be given below. The theory is exposed in full generality in Ref. [9]. It includes the study of asymptotic expansions when λ → 0⁺ in relation with the behavior of f at infinity and the extension to complex values of λ. The case of oscillatory h-kernels is given special attention. Suppose from now on that f and h are locally integrable functions such that the transform I[f; λ] exists for large λ. The different steps leading to an asymptotic expansion of I[f; λ] in the limit λ → +∞ are the following:

1. Mellin transform the functions h and f and apply Parseval's formula. The Mellin transforms ℳ[f; s] and ℳ[h; s] are supposed to be holomorphic in the strips η₁ < Re(s) < η₂ and α₁ < Re(s) < α₂, respectively. Assuming that Parseval's formula may be applied and using property (Equation 12.56), one can write Equation 12.147 as

I[f; λ] = (1/2πj) ∫_{r−j∞}^{r+j∞} λ^{−s} ℳ[h; s] ℳ[f; 1 − s] ds   (12:148)
 ≡ (1/2πj) ∫_{r−j∞}^{r+j∞} λ^{−s} G(s) ds.   (12:149)

2. Shift the contour of integration to the right and use Cauchy's formula. Suppose G(s) can be analytically continued in the right half-plane Re(s) ≥ min(α₂, 1 − η₁) as a meromorphic function. Remark that this assumption implies that ℳ[f; s] may be continued to the right half-plane Re(s) > α₂ and ℳ[h; s] to the left half-plane Re(s) < η₁. Suppose moreover that the contour of integration in Equation 12.148 can be displaced to the right as far as the line Re(s) = R > r. A sufficient condition ensuring this property is that

lim_{|b|→∞} G(a + jb) = 0   (12:150)

for all a in the interval [r, R]. Under these conditions, Cauchy's formula may be applied and yields:

I[f; λ] = − Σ_{r<Re(s_j)<R} Res[λ^{−s} G(s); s = s_j] + (1/2πj) ∫_{R−j∞}^{R+j∞} λ^{−s} G(s) ds,

where the sum is over the poles s_j of G(s) lying between the two contours.
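For the Laplace kernel this machinery reduces to Watson's lemma. As an illustrative check (the function f(t) = 1/(1 + t) is an assumed example, not one from the text), the residues at s = 1, 2, 3, … reproduce I[f; λ] ∼ Σ (−1)^n n! λ^{−n−1}:

```python
import numpy as np
from math import factorial

lam = 10.0
t = np.linspace(0.0, 80.0, 800001)
exact = np.trapz(np.exp(-lam * t) / (1.0 + t), t)   # I[f; lam] by quadrature

# first four terms of the asymptotic expansion as lam -> infinity
series = sum((-1)**n * factorial(n) / lam**(n + 1) for n in range(4))
print(exact, series)
```

Already at λ = 10 the four-term series matches the integral to about the size of the first omitted term, as expected for an alternating asymptotic expansion.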
With this result, condition (Equation 12.226) becomes

λ² σ_v² + (λ/2π) v̄ + σ_β² ≥ 0.   (12:229)

The left member is a quadratic expression of the parameter λ. Its positivity whatever the value of λ means that the coefficients of the expression verify:

σ_v σ_β ≥ v̄/4π.   (12:230)

The functions for which this product is minimal are such that there is equality in Equation 12.224. Hence, they are solutions of the equation:

[jλ(v − v̄) + (∂ − β̄)] Z(v) = 0,   (12:231)

and are found to be

K(v) = e^{−2πλv} v^{2πλv̄ − r − 1 − 2πjβ̄}.   (12:232)

These functions, first introduced by Klauder,27 are the analogs of Gaussians in Fourier theory.

12.3.1.3 Extension of the Mellin Transformation to Distributions

The definition of the transformation has to be extended to distributions to be able to treat generalized functions such as Dirac's which are currently used in electrical engineering. Section 12.2.1.3 can be read at this point for a general view of the possible approaches. Here we only give a succinct definition. One introduces a space 7 of test functions with support in (0, ∞) that contains the functions v^{2πjβ+r}, β ∈ IR, for a fixed value of r. Examples of such spaces are the spaces 7(a₁, a₂) of Section 12.2.1.3 provided a₁, a₂ are chosen verifying the inequality a₁ < r + 1 < a₂.7,8 Then the space of distributions 7′ is defined as usual as a linear space of continuous functionals on 7. It can be shown that the space 7′ contains the distributions of bounded support on the positive axis and, in particular, the Dirac distributions. The Mellin transform of a distribution Z in a space 7′ can always be obtained as the result of the application of Z to the set of test functions v^{2πjβ+r}, β ∈ IR, i.e., as

ℳ[Z](β) ≡ ⟨Z, v^{2πjβ+r}⟩,   (12:233)

together with

ℳ[Z](β + c) = ⟨Z, v^{2πj(β+c)+r}⟩ = ⟨Z v^{2πjc}, v^{2πjβ+r}⟩   (12:234)

and the result

ℳ[Z](β + c) = ℳ[Z v^{2πjc}](β).   (12:235)

Example 12.14

The above formula (Equation 12.233) allows to compute the Mellin transform of δ(v − v₀) by applying the usual definition of the Dirac distribution:

⟨δ(v − v₀), φ⟩ = φ(v₀)   (12:236)

to the function φ(v) = v^{2πjβ+r}, thus giving

ℳ[δ(v − v₀)](β) ≡ ⟨δ(v − v₀), v^{2πjβ+r}⟩ = v₀^{2πjβ+r}.   (12:237)
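The bound (12.230) and its saturation by the Klauder functions can be verified numerically. A sketch with r = −1/2 and K(v) = v² e^{−2πv}, i.e., (12.232) with λ = 1, β̄ = 0 and an illustrative exponent; the grids and tolerances are assumptions of the experiment:

```python
import numpy as np

r = -0.5
x = np.linspace(-14.0, 5.0, 60001)       # x = ln v
v = np.exp(x)
Z = v**2 * np.exp(-2 * np.pi * v)        # Klauder-type function, beta_bar = 0
Z /= np.sqrt(np.trapz(Z**2, v))          # unit norm in L2(R+, dv) for r = -1/2

vbar = np.trapz(v * Z**2, v)
sig_v = np.sqrt(np.trapz((v - vbar)**2 * Z**2, v))

beta = np.linspace(-3.0, 3.0, 1201)
# M[Z](beta) = ∫ Z(v) v^{2πjβ + r} dv, written as an integral over x = ln v
M = np.array([np.trapz(Z * v**(2j * np.pi * bb + r + 1), x) for bb in beta])
P = np.abs(M)**2
P /= np.trapz(P, beta)
sig_b = np.sqrt(np.trapz(beta**2 * P, beta))

print(sig_v * sig_b, vbar / (4 * np.pi))   # near equality for a Klauder function
```

The product σ_v σ_β lands on the bound v̄/4π to within the quadrature error, while generic signals give a strictly larger product.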
Example 12.15 The Geometric Dirac Comb

In problems involving dilations, it is natural to introduce a special form of the Dirac comb defined by

Δ_A^r(v) ≡ Σ_{n=−∞}^{+∞} A^{−nr} δ(v − A^n),   (12:238)
where A is a positive number. The values of v which are picked out by this distribution form a geometric progression of ratio A. Moreover, the comb Δ_A^r is invariant in a dilation by an integer power of A. Indeed, using definition (Equation 12.214), we have

𝒟_A Δ_A^r(v) ≡ A^{r+1} Δ_A^r(Av)   (12:239)
 = Σ_{n=−∞}^{+∞} A^{−(n−1)r} δ(v − A^{n−1}) = Δ_A^r(v).   (12:240)

The distribution Δ_A^r will be referred to as the geometric Dirac comb and is represented in Figure 12.3. Distribution Δ_A^r does not belong to 7′ and, hence, its Mellin transform cannot be obtained by formula (Equation 12.233). However, the property of linearity of the Mellin transformation and result (Equation 12.237) allow to write:

ℳ[Δ_A^r](β) = Σ_{n=−∞}^{+∞} A^{2jπβn}   (12:241)
 = Σ_{n=−∞}^{+∞} e^{2jπnβ ln A}.   (12:242)

The right-hand side of Equation 12.242 is a Fourier series which can be summed by Poisson's formula:

Σ_{n=−∞}^{+∞} e^{2jπnβ ln A} = (1/ln A) Σ_{n=−∞}^{+∞} δ(β − n/ln A).   (12:243)

This leads to

ℳ[Δ_A^r](β) = (1/ln A) Σ_{n=−∞}^{+∞} δ(β − n/ln A).   (12:244)

Thus, the Mellin transform of a geometric Dirac comb on IR⁺ is an arithmetic Dirac comb on IR (Figure 12.3).

12.3.1.4 Transformations of Products and Convolutions

The relations between product and convolution that are established by a Fourier transformation have analogs here. Classical convolution and usual product in the space of Mellin transforms correspond respectively to a special invariant product and a multiplicative convolution in the original space. The latter operations can also be defined directly by their transformation properties under a dilation as will now be explained.

Invariant product

The dilation-invariant product of the functions Z₁ and Z₂, which will be denoted by the symbol ⊙, is defined as

(Z₁ ⊙ Z₂)(v) ≡ v^{r+1} Z₁(v) Z₂(v).   (12:245)

It is nothing but the usual product of the functions multiplied by the (r + 1)th power of the variable. Relation (Equation 12.245) defines an internal law on the set of functions that is stable by dilation since:

𝒟_a[Z₁] ⊙ 𝒟_a[Z₂] = 𝒟_a[Z₁ ⊙ Z₂],   (12:246)

where 𝒟_a is the operation (Equation 12.180). Now, we shall compute the Mellin transform of the product Z₁ ⊙ Z₂. According to definition (Equation 12.211), this is given by

ℳ[Z₁ ⊙ Z₂](β) = ∫₀^∞ v^{r+1} Z₁(v) Z₂(v) v^{2jπβ+r} dv.   (12:247)

Replacing Z₁ and Z₂ by their inverse Mellin transforms given by Equation 12.212 and using the orthogonality relation (Equation 12.195) to perform the v-integration, we obtain:

ℳ[Z₁ ⊙ Z₂](β) = ∫_{−∞}^{+∞} dβ₁ ∫_{−∞}^{+∞} dβ₂ ℳ[Z₁](β₁) ℳ[Z₂](β₂) δ(β − β₁ − β₂)   (12:248a)
 = ∫_{−∞}^{+∞} ℳ[Z₁](β₁) ℳ[Z₂](β − β₁) dβ₁,   (12:248b)

where we recognize the classical convolution of the Mellin transforms.

FIGURE 12.3 Geometrical Dirac comb in IR⁺-space and corresponding arithmetical Dirac comb in the Mellin space (case r = −1/2).
THEOREM 12.7

The Mellin transform of the invariant product (Equation 12.245) of the two functions Z₁ and Z₂ is equal to the convolution of their Mellin transforms:

ℳ[Z₁ ⊙ Z₂](β) = (ℳ[Z₁] * ℳ[Z₂])(β).   (12:249)

Multiplicative convolution

For a given function Z₁ (resp. Z₂), the usual convolution Z₁ * Z₂ can be seen as the most general linear operation commuting with translations that can be performed on Z₁ (resp. Z₂). By analogy, the multiplicative convolution of Z₁ and Z₂ is defined as the most general linear operation on Z₁ (resp. Z₂) that commutes with dilations. More precisely, suppose that a linear operator Λ is defined in terms of a kernel function A(v, v′) according to

Λ[Z₁](v) = ∫₀^{+∞} A(v, v′) Z₁(v′) dv′.   (12:250)

Then the requirement that transformation 𝒟_a applied either on Z₁ or on Λ[Z₁] yield the same result implies that:

a^{r+1} Λ[Z₁](av) = a^{r+1} ∫₀^{+∞} A(v, v′) Z₁(av′) dv′   (12:251)

must be true for any function Z₁. Comparing Equation 12.251 to Equation 12.250, we thus obtain the following constraint on the kernel A(v, v′):

A(v, v′) = a A(av, av′),   (12:252)

valid for any a. For a = v′^{−1}, we obtain the identity:

A(v, v′) = (1/v′) A(v/v′, 1),   (12:253)

which shows that the operator Λ can be expressed by using a function of a single variable. Thus, any linear transformation acting on function Z₁ and commuting with dilations can be written in the form:

∫₀^{+∞} Z₁(v′) Z₂(v/v′) dv′/v′,   (12:254)

where Z₂(v) ≡ A(v, 1) is an arbitrary function. It can be verified, by changing variables, that the above expression is symmetrical with respect to the two functions Z₁ and Z₂. It defines the multiplicative convolution of these functions, which is usually denoted by Z₁ ∨ Z₂:

(Z₁ ∨ Z₂)(v) ≡ ∫₀^{+∞} Z₁(v′) Z₂(v/v′) dv′/v′.   (12:255)

On this definition, it can be observed that dilating one of the factors Z₁ or Z₂ of the multiplicative convolution is equivalent to dilating the result, i.e.,

𝒟_a[(Z₁ ∨ Z₂)(v)] = [Z₁ ∨ (𝒟_a Z₂)](v)   (12:256)
 = [(𝒟_a Z₁) ∨ Z₂](v),   (12:257)

where 𝒟_a is defined in Equation 12.214. For applications, an essential property of the multiplicative convolution is that it is converted into a classical product when a Mellin transformation is performed:

ℳ[Z₁ ∨ Z₂](β) = ℳ[Z₁](β) ℳ[Z₂](β).   (12:258)

To prove this result, we write the definition of ℳ[Z₁ ∨ Z₂](β), which is, according to Equations 12.211 and 12.255:

ℳ[Z₁ ∨ Z₂](β) = ∫₀^∞ v^{2πjβ+r} [∫₀^∞ Z₁(v′) Z₂(v/v′) dv′/v′] dv.   (12:259)

The change of variables from v to x = v/v′ yields the result.

THEOREM 12.8

The Mellin transform of the multiplicative convolution (Equation 12.255) of functions Z₁ and Z₂ is equal to the product of their Mellin transforms:

ℳ[Z₁ ∨ Z₂](β) = ℳ[Z₁](β) ℳ[Z₂](β).   (12:260)

Remark

It can be easily verified that the above theorems remain true if Z₁, Z₂ are distributions provided the composition laws involved in the formulas may be applied.
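Theorem 12.8 can be checked numerically in the classical convention ℳ[Z; s] = ∫₀^∞ Z(v) v^{s−1} dv. With the illustrative choice Z₁ = Z₂ = e^{−v}, both sides at s = 3/2 equal Γ(3/2)² = π/4; the log-spaced grid is an assumption of the experiment:

```python
import numpy as np
from math import gamma, pi

s = 1.5
y = np.linspace(-12.0, 5.0, 4001)
v = np.exp(y)                             # log-spaced grid on (0, ∞)

# multiplicative convolution  (Z1 ∨ Z2)(v) = ∫ Z1(v') Z2(v/v') dv'/v'
conv = np.array([np.trapz(np.exp(-v) * np.exp(-vi / v) / v, v) for vi in v])

lhs = np.trapz(conv * v**(s - 1.0), v)    # M[Z1 ∨ Z2](s)
rhs = gamma(s) ** 2                        # M[Z1](s) * M[Z2](s)
print(lhs, rhs, pi / 4)
```

Here (Z₁ ∨ Z₂)(v) = 2K₀(2√v), whose Mellin transform is indeed Γ(s)², so the quadrature reproduces π/4 closely.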
12.3.2 Discretization and Fast Computation of the Transform Discretization of the Mellin transform (Equation 12.211) is performed along the same lines as discretization of the Fourier transform. It concerns signals with support practically limited, both in v-space and in b-space. The result is a discrete formula giving a linear relation between N geometrically spaced samples of Z(v) and N arithmetically spaced samples of }[Z](b).24,25,28 The fast computation of this discretized transform involves the same algorithms as used in the fast Fourier transformation (FFT).
Before proceeding to the discretization itself, we introduce the special notions of sampling and periodizing that will be applied to the function Z(v).

12.3.2.1 Sampling in Original and in Mellin Variables

Sampling and periodizing are operations that are well defined in the Mellin space of functions Z̃(β) and can be expressed in terms of Dirac combs. We shall show that the corresponding operations in the space of original functions Z(v) involve the geometric Dirac combs introduced in Section 12.3.1.3.

12.3.2.1.1 Arithmetic Sampling in Mellin Space

Given a function M(β) ≡ ℳ[Z](β), the arithmetically sampled function M_s(β) with sample interval 1/ln Q, Q real, is usually defined by

M_s(β) ≡ (1/ln Q) Σ_{n=−∞}^{+∞} ℳ[Z](β) δ(β − n/ln Q).   (12:261)

Remark that besides sampling, this definition contains a factor 1/ln Q that is a matter of convenience. To compute the inverse Mellin transform of this function M_s(β), we remark that, due to relation (Equation 12.244), it can also be written as a product of Mellin transforms in the form:

M_s(β) = ℳ[Z](β) ℳ[Δ_Q^r](β),   (12:262)

where Δ_Q^r is the geometric Dirac comb (Equation 12.238). Applying now Theorem 12.8, we write M_s as

M_s(β) = ℳ[Z ∨ Δ_Q^r](β).   (12:263)

This relation implies that the inverse Mellin transform of the impulse function M_s(β) is the function Z_D(v) given by

Z_D(v) = (Z ∨ Δ_Q^r)(v).   (12:264)

The definition of Z_D can be cast into a more explicit form by using the definition of the multiplicative convolution and the expression (Equation 12.238) of Δ_Q^r:

Z_D(v) = ∫₀^{+∞} Z(v′) [Σ_{n=−∞}^{+∞} Q^{−nr} δ(v/v′ − Q^n)] dv′/v′.   (12:265)

The expression (Equation 12.264) finally becomes

Z_D(v) = Σ_{n=−∞}^{+∞} Q^{n(r+1)} Z(Q^n v).   (12:266)

As seen on Figure 12.4, function Z_D is constructed by juxtaposing dilated replicas of Z. This operation will be referred to as dilatocycling and the function Z_D itself as the dilatocycled form of Z with ratio Q. In the special case where the support of function Z is the interval [v₁, v₂] and the ratio Q verifies Q ≥ v₂/v₁, the restriction of Z_D to the support [v₁, v₂] is equal to the original function Z.

Result

The Mellin transform M_s(β) of the dilatocycled form Z_D of a signal Z is equal to a regularly sampled form of the Mellin transform of Z. Explicitly, we have

Z_D(v) = (Δ_Q^r ∨ Z)(v),   (12:267)

where

Δ_Q^r(v) ≡ Σ_{n=−∞}^{+∞} Q^{−nr} δ(v − Q^n),   (12:268)

and the result is

M_s(β) ≡ (1/ln Q) Σ_{n=−∞}^{+∞} ℳ[Z](β) δ(β − n/ln Q).   (12:269)

12.3.2.1.2 Geometric Sampling in the Original Space

Given a function Z(v), its geometrically sampled version is defined as the function Z_S equal to the invariant product (Equation 12.245) of Z with the geometric Dirac comb Δ_q^r, i.e., as

Z_S ≡ Z ⊙ Δ_q^r,   (12:270)

FIGURE 12.4 Correspondence between the dilatocycled form of a function and its Mellin transform.
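Dilatocycling (Equation 12.266) is easy to sketch numerically with r = −1/2: for a signal supported in [v₁, v₂] and Q ≥ v₂/v₁ the dilated replicas do not overlap, so Z_D restricted to [v₁, v₂] reproduces Z. The bump signal and the choice Q = v₂/v₁ are illustrative assumptions:

```python
import numpy as np

r = -0.5
v1, v2 = 1.0, 4.0
Q = v2 / v1                     # smallest admissible ratio, Q >= v2/v1

def Z(v):                       # smooth bump supported in [v1, v2]
    inside = (v > v1) & (v < v2)
    out = np.zeros_like(v)
    out[inside] = np.sin(np.pi * (v[inside] - v1) / (v2 - v1)) ** 2
    return out

def ZD(v):                      # dilatocycled form, truncated version of Eq. (12.266)
    return sum(Q**(n * (r + 1)) * Z(Q**n * v) for n in range(-6, 7))

v = np.linspace(1.001, 3.999, 2001)
print(np.max(np.abs(ZD(v) - Z(v))))   # replicas do not overlap on (v1, v2)
```

With Q < v₂/v₁ the replicas would overlap and the restriction would no longer equal Z, which is exactly the aliasing condition (12.280) below.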
or, using the expression (Equation 12.238):

Z_S(v) = Z(v) Σ_{n=−∞}^{+∞} q^{−nr} δ(v − q^n) v^{r+1}   (12:271)
 = Σ_{n=−∞}^{+∞} q^n Z(q^n) δ(v − q^n).   (12:272)

The result is a function made of impulses located at points forming a geometric progression in v-space (Figure 12.5).

FIGURE 12.5 Correspondence between the geometrically sampled function and its Mellin transform.

Let us compute the Mellin transform M_P(β) of Z_S(v). Using definition (Equation 12.270) and property (Equation 12.249), we can write:

M_P(β) ≡ ℳ[Z_S](β)   (12:273)
 = ℳ[Z ⊙ Δ_q^r](β)   (12:274)
 = (ℳ[Z] * ℳ[Δ_q^r])(β).   (12:275)

Thus, function M_P(β) is equal to the convolution between ℳ[Z] and the transform ℳ[Δ_q^r], which has been shown in Equation 12.244 to be a classical Dirac comb. As a consequence, it is equal to the classical periodized form of ℳ[Z](β) which is given explicitly by

M_P(β) = (1/ln q) Σ_{n=−∞}^{+∞} ℳ[Z](β − n/ln q).   (12:276)

If the function M(β) ≡ ℳ[Z](β) is equal to zero outside the interval [β₁, β₂], then to avoid aliasing, the period 1/ln q must be chosen such that:

1/ln q ≥ β₂ − β₁.   (12:277)

In that case, the functions M_P(β) and (1/ln q)M(β) coincide on the interval [β₁, β₂].

Result

The geometrically sampled form of Z(v) defined by

Z_S(v) = Σ_{n=−∞}^{+∞} q^n Z(q^n) δ(v − q^n)   (12:278)

is connected by Mellin's correspondence to the periodized form of ℳ[Z](β) given by

M_P(β) = (1/ln q) Σ_{n=−∞}^{+∞} ℳ[Z](β − n/ln q).   (12:279)

12.3.2.2 The Discrete Mellin Transform

Let Z(v) be a function with Mellin transform M(β) and suppose that these functions can be approximated by their restriction to the intervals [v₁, v₂] and [β₁, β₂], respectively (see Figure 12.6a and b). For such functions, it is possible to write down a discretized form of the transform which is very similar to what is done for the Fourier transformation. One may obtain the explicit formulas by performing the following steps:

Dilatocycle function Z(v) with ratio Q. This operation leads to the function Z_D defined by Equation 12.267. To avoid aliasing, the real number Q must be chosen such that:

Q ≥ v₂/v₁.   (12:280)

The Mellin transform of Z_D is the sampled function M_s defined by Equation 12.269 in terms of ℳ[Z](β) ≡ M(β) (Figure 12.6d).

Periodize M_s(β) with a period 1/ln q. This is performed by rule (Equation 12.279) and yields a function M_{SP}(β) given by

M_{SP}(β) = (1/ln q) Σ_{n=−∞}^{+∞} M_s(β − n/ln q).   (12:281)

To avoid aliasing in β-space, the period must be chosen greater than the approximate support of ℳ[Z](β) and this leads to the condition:

1/ln q ≥ β₂ − β₁.   (12:282)

The inverse Mellin transform of M_{SP} is the geometrically sampled form of Z_D (Figure 12.6e) given, according to Equation 12.278, by

Z_{SD}(v) = Σ_{n=−∞}^{+∞} q^n Z_D(q^n) δ(v − q^n).   (12:283)
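The periodization rule (12.276) and the nonaliasing condition (12.277) can be observed directly. In this sketch (r = −1/2; Z(v) = e^{−v}, whose transform ℳ[Z](β) = Γ(1/2 + 2πjβ) is concentrated in |β| ≲ 0.5, and all grids are illustrative assumptions), a fine geometric step reproduces the transform while a coarse step aliases it:

```python
import numpy as np

r = -0.5

def M_direct(beta):
    # M[Z](beta) = ∫ exp(-v) v^{2πjβ - 1/2} dv, computed via v = u^2 (regular at 0)
    u = np.linspace(1e-9, 8.0, 200001)
    return 2.0 * np.trapz(np.exp(-u**2) * u**(4j * np.pi * beta), u)

def M_sampled(beta, lnq):
    # ln q * M[Z_S](beta) = ln q * sum_n q^{n(r+1)} Z(q^n) e^{2πjnβ ln q},
    # i.e. the base period of (12.276) rescaled by ln q
    n = np.arange(-int(40 / lnq), int(40 / lnq) + 1)
    qn = np.exp(n * lnq)
    return lnq * np.sum(np.exp(-qn) * qn**(r + 1) * np.exp(2j * np.pi * n * beta * lnq))

beta = 0.25
fine = M_sampled(beta, 0.01)    # period 1/ln q = 100 >> spread of M: no aliasing
coarse = M_sampled(beta, 2.0)   # period 1/ln q = 0.5: comparable to the spread
print(abs(fine - M_direct(beta)), abs(coarse - M_direct(beta)))
```

With ln q = 0.01 the shifted copies in (12.276) fall where the transform is negligible; with ln q = 2 the copy at β − 1/ln q overlaps the base period and the sampled transform visibly departs from ℳ[Z](β).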
FIGURE 12.6 Steps leading to the discrete Mellin transform. Continuous form of the function Z(v) (a) and its Mellin transform M(β) (b). Dilatocycled function (c) and its Mellin transform (d). Correspondence between samples of a cycle (e) in v-space and samples of a period (f) in β-space.

The use of Equation 12.269 allows to rewrite definition (Equation 12.281) as

M_{SP}(β) = (1/(ln q ln Q)) Σ_{n,p=−∞}^{+∞} M(p/ln Q) δ(β − n/ln q − p/ln Q).   (12:284)

We now impose that the real numbers q and Q be connected by the relation:

Q = q^N, N positive integer.   (12:285)

This ensures that the function M_{SP} defined by Equation 12.281 is of periodic impulse type, which can be written as

M_{SP}(β) = (1/(ln q ln Q)) Σ_{n,p=−∞}^{+∞} M(p/(N ln q)) δ(β − (nN + p)/(N ln q)),   (12:286)

or, changing the p-index to k ≡ p + nN:

M_{SP}(β) = (1/(ln q ln Q)) Σ_{n,k=−∞}^{+∞} M(k/(N ln q) − n/ln q) δ(β − k/(N ln q)).   (12:287)

Thus, recalling definition (Equation 12.279):

M_{SP}(β) = (1/ln Q) Σ_{k=−∞}^{+∞} M_P(k/ln Q) δ(β − k/ln Q).   (12:288)

Connect the v and β samples. This is done by writing explicitly that M_{SP} as given by Equation 12.288 is the Mellin transform (Equation 12.211) of Z_{SD} and computing:

M_{SP}(β) = Σ_{n=−∞}^{+∞} q^{n(r+1)} Z_D(q^n) e^{2jπnβ ln q}.   (12:289)

This formula shows that the q^{n(r+1)} Z_D(q^n) for different values of n are the Fourier series coefficients of the periodic function M_{SP}(β). They are computed as

Z_D(q^n) = q^{−n(r+1)} ln q ∫₀^{1/ln q} M_{SP}(β) e^{−j2πnβ ln q} dβ
 = (q^{−n(r+1)}/N) Σ_{k=K}^{K+N−1} M_P(k/ln Q) e^{−2jπkn/N},   (12:290)

where the summation is on those values of β lying inside the interval [β₁, β₂]. The integer K is thus given by the integer part of β₁ ln Q. Inversion of Equation 12.290 is performed using the classical techniques of the discrete Fourier transform (DFT). This leads to the discrete Mellin transform formula:

M_P(m/ln Q) = Σ_{n=J}^{J+N−1} q^{n(r+1)} Z_D(q^n) e^{2jπnm/N},   (12:291)

where the integer J is given by the integer part of ln v₁/ln q. In fact, since the definition of the periodized M_P contains a factor N/ln Q = 1/ln q, the true samples of M(β) are given by (ln Q/N) M_P(m/ln Q). It is clear on formulas (Equation 12.290) and (Equation 12.291) that their implementation can be performed with an FFT algorithm.

Choose the number of samples to handle. The number of samples N is related to q and Q according to Equation 12.285 by

N = ln Q / ln q.   (12:292)

The conditions for nonaliasing given by Equations 12.280 and 12.282 lead to the sampling condition:

N ≥ (β₂ − β₁) ln(v₂/v₁),   (12:293)

which gives the minimum number of samples to consider in terms of the spreads of Z(v) and ℳ[Z](β). In practice, the spread of the Mellin transform of a function is seldom known. However, as we will see in the applications, there are methods to estimate it.

12.3.2.3 Interpolation Formula in v-Space

In the same way as the Fourier transformation is used to reconstruct a band-limited function from its regularly spaced samples, Mellin's transformation allows to recover a function Z(v) with limited spread in the Mellin space from its samples spaced according to a geometric progression. If the Mellin transform ℳ[Z] has a bounded support [−β₀/2, β₀/2], it will be equal on this interval to its periodized form with period 1/ln q = β₀. Thus,

ℳ[Z](β) = Σ_{n=−∞}^{+∞} ℳ[Z](β − n/ln q) g(β/β₀),   (12:294)

where the window function g is the characteristic function of the [−1/2, 1/2] interval. The inverse Mellin transform of this product is the multiplicative convolution of the two functions Z₁ and Z₂ defined as

Z₁(v) = ln q Σ_{n=−∞}^{+∞} q^n Z(q^n) δ(v − q^n)   (12:295)

and

Z₂(v) = ∫_{−∞}^{+∞} g(β/β₀) v^{−2jπβ−r−1} dβ   (12:296)
 = v^{−r−1} sin(πβ₀ ln v)/(π ln v).   (12:297)

The multiplicative convolution between Z₁ and Z₂ takes the following form:

Z(v) = ∫₀^{+∞} ln q Σ_{n=−∞}^{+∞} q^n Z(q^n) δ(v′ − q^n) (v/v′)^{−r−1} [sin(πβ₀ ln(v/v′))/(π ln(v/v′))] dv′/v′,   (12:298)

which reduces to

Z(v) = v^{−r−1} Σ_{n=−∞}^{+∞} q^{n(r+1)} Z(q^n) [sin(π(ln v/ln q − n))/(π(ln v/ln q − n))],   (12:299)

where the relation β₀ = 1/ln q has been used. This is the interpolation formula of a function Z(v) from its geometrically spaced samples Z(q^n).
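The discretization of Section 12.3.2.2 can be sketched end to end in a few lines (r = −1/2; the test signal Z(v) = e^{−v}, the grid sizes and the tolerances are illustrative assumptions; Z_D ≈ Z here because the geometric grid covers the signal's effective support). The DFT samples of form (12.291), rescaled by ln Q/N = ln q, agree with a direct numerical evaluation of the transform:

```python
import numpy as np

r = -0.5
N = 4096
lnq = 0.01                       # ln q; ln Q = N ln q
n = np.arange(-N // 2, N // 2)
vn = np.exp(n * lnq)             # geometric sampling points q^n
Zn = np.exp(-vn)                 # Z_D(q^n) ~ Z(q^n) for this well-covered signal

def M_direct(beta):
    # M[Z](beta) = ∫ exp(-v) v^{2πjβ - 1/2} dv, computed via v = u^2
    u = np.linspace(1e-9, 8.0, 200001)
    return 2.0 * np.trapz(np.exp(-u**2) * u**(4j * np.pi * beta), u)

# (ln Q / N) M_P(m / ln Q) = ln q * Σ_n q^{n(r+1)} Z_D(q^n) e^{2jπnm/N}
m = 5
M_dft = lnq * np.sum(vn**(r + 1) * Zn * np.exp(2j * np.pi * n * m / N))
print(M_dft, M_direct(m / (N * lnq)))
```

Evaluating the sum for all m at once is a single length-N FFT of the sequence q^{n(r+1)} Z_D(q^n) (up to an index shift), which is the fast implementation alluded to in the text.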
12.3.3 Practical Use in Signal Analysis

12.3.3.1 Preliminaries

As seen above, Mellin's transformation is essential in problems involving dilations. Thus, it is not surprising that it has come to play a dominant role in the development of analytical studies of wideband signals. In fact, expressions involving dilations arise in signal theory any time the approximation of small relative bandwidth is not appropriate. Recent examples of the use of the Mellin transform in this context can be found in time-frequency analysis, where it has contributed to the introduction of several classes of distributions.29–36 This fast-growing field cannot be explored here, but an illustration of the essential role played by Mellin's transformation in the analysis of wide-band signals will be given in Section 12.3.3.2, where the Cramer–Rao bound for velocity estimation is derived.37 Numerical computation of Mellin's transform has been undertaken in various domains such as signal analysis,38,39 optical image processing,40 or pattern recognition.41–43 In the past, however, all these applications have been restricted by the difficulty of assessing the validity of the results, due to the lack of definite sampling rules. Such a limitation no longer exists, as we will show in Section 12.3.3.3 by deriving a sampling theorem and a practical way to use it. The technique will be applied in Sections 12.3.3.4 and 12.3.3.5 to the computation of a wavelet coefficient and of an affine time-frequency distribution.47–49
12.3.3.2 Computation of Cramer–Rao Bounds for Velocity Estimation in Radar Theory37

In a classical radar or sonar experiment, a real signal is emitted and its echo is processed in order to find the position and velocity of the target. In simple situations, the received signal will differ from the original one only by a time shift and a Doppler compression. In fact, the signal will also undergo an attenuation and a phase shift; moreover, the received signal will be embedded in noise. The usual procedure, which is adapted to narrow-band signals, is to represent the Doppler effect by a frequency shift.44 This approximation will not be made here, so that the results will be valid whatever the extent of the frequency band. Describing the relevant signals by their positive frequency parts (so-called analytic signals), we can write the expression of the received signal x(t) in terms of the emitted signal z(t) and noise n(t) as

x_{a⁰}(t) = (a₁⁰)^{−1/2} A₀ z((a₁⁰)^{−1} t − a₂⁰) e^{jφ} + n(t),   (12:300)

where A₀ and φ characterize the unknown changes in amplitude and phase and the vector a⁰ ≡ (a₁⁰, a₂⁰) represents the unknown parameters to be estimated. The parameter a₂⁰ is the delay and a₁⁰ is the Doppler compression given in terms of the target velocity v by

a₁⁰ = (c + v)/(c − v), c velocity of light.   (12:301)

The noise n(t) is supposed to be a zero mean Gaussian white noise with variance equal to σ². Relation (Equation 12.300) can be written in terms of the Fourier transforms Z, X, N of z, x, n (defined by Equation 12.19):

X_{a⁰}(f) = (a₁⁰)^{1/2} A₀ e^{−2jπf a₁⁰ a₂⁰} Z(a₁⁰ f) e^{jφ} + N(f).   (12:302)

The signal Z(f) is supposed normalized so that:

‖Z(f)‖² ≡ ∫₀^∞ |Z(f)|² df = 1.   (12:303)

Hence, the delayed and compressed signal will also be of norm equal to one. Remark that here we work in the space L²(IR⁺, f^{2r+1} df) with r = −1/2 (cf. Section 12.3.1.1). We will consider the maximum-likelihood estimates â_i of the parameters a_i⁰. They are obtained by maximizing the likelihood function L(a⁰, a), which is given in the present context by

L(a⁰, a) ≡ (1/2σ²) |A(a⁰, a)|²,   (12:304)

where

A(a⁰, a) ≡ a₁^{1/2} ∫₀^{+∞} X_{a⁰}(f) Z*(a₁ f) e^{2jπ a₁ a₂ f} df   (12:305)

is the broad-band ambiguity function.45

The efficiency of an estimator â_i is measured by its variance σ²_{ij} defined by

σ²_{ij} ≡ E[(â_i − a_i)(â_j − a_j)],   (12:306)

where the mean value operation E includes an average on noise. For an unbiased estimator (E(â_i) = a_i), this variance satisfies the Cramer–Rao inequality46 given by

σ²_{ij} ≥ (J^{−1})_{ij},   (12:307)

where the matrix J, the so-called Fisher information matrix, is defined by

J_{ij} = −E[∂²L/∂a_i ∂a_j],   (12:308)

with the partial derivatives evaluated at the true values of the parameters. The minimum value of the variance given by

(σ⁰_{ij})² = (J^{−1})_{ij}   (12:309)

is called the Cramer–Rao bound and is attained in the case of an efficient estimator such as the maximum-likelihood one.

The determination of the matrix (Equation 12.308) by classical methods is intricate and does not lead to an easily interpretable result. On the contrary, we shall see how the use of Mellin's transformation allows a direct computation and leads to a physical interpretation of the matrix coefficients. The computation of J is done in the vicinity of the value a = a⁰ which maximizes the likelihood function L and, without loss of generality, all partial derivatives will be evaluated at the point a₁ = 1, a₂ = 0. Using Parseval's formula (Equation 12.213), we can write the ambiguity function A(a⁰, a) as

A(a⁰, a) = ∫_{−∞}^{+∞} ℳ[X](β) ℳ*[Z_{a₂}](β) a₁^{−2jπβ} dβ,   (12:310)

with

Z_{a₂} ≡ Z(f) e^{2jπ a₂ f}.   (12:311)

On this form, the partial derivatives with respect to a are easily computed and the result is

∂A/∂a₁ = −2jπ ∫_{−∞}^{+∞} β ℳ[X](β) ℳ*[Z](β) dβ,   (12:312)

∂A/∂a₂ = 2jπ ∫_{−∞}^{+∞} ℳ[X](β) ℳ*[fZ(f)](β) dβ   (12:313)
 = 2jπ ∫₀^{+∞} f X(f) Z*(f) df,   (12:314)
∂²A/∂a₁∂a₂ = −4π² ∫_{−∞}^{+∞} β ℳ[X](β) ℳ*[fZ(f)](β) dβ,   (12:315)

∂²A/∂a₁² = 2jπ ∫_{−∞}^{+∞} β(2jπβ − 1) ℳ[X](β) ℳ*[Z](β) dβ,   (12:316)

∂²A/∂a₂² = −4π² ∫₀^{+∞} f² X(f) Z*(f) df.   (12:317)

The corresponding Fisher information matrix can now be computed. To obtain J₁₁, we substitute the expression (Equation 12.304) in definition (Equation 12.308) and use (Equation 12.312) and (Equation 12.316):

J₁₁ = −(1/σ²) E[ Re{ A* ∂²A/∂a₁² + |∂A/∂a₁|² } ],   (12:318)

where the curly brackets mean that the functions are evaluated for the values a₁ = a₁⁰ = 1, a₂ = a₂⁰ = 0. This gives

J₁₁ = −(1/σ²) Re ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} E[ℳ[X](β₁) ℳ*[X](β₂)] ℳ*[Z](β₁) ℳ[Z](β₂) [2jπβ₁(2jπβ₁ − 1) + 4π²β₁β₂] dβ₁ dβ₂.   (12:319)

The properties of the zero mean white Gaussian noise n(t) lead to the following expression for the covariance of the Mellin transform of X:

E[ℳ[X](β₁) ℳ*[X](β₂)] = A₀² ℳ[Z](β₁) ℳ*[Z](β₂) + σ² δ(β₁ − β₂).   (12:320)

Substituting this relation in Equation 12.319, we obtain the expression of the J₁₁ coefficient:

J₁₁ = (4π² A₀²/σ²) σ_β²,   (12:321)

where the variance σ_β² of the parameter β defined in Equation 12.217 is given explicitly by

σ_β² = ∫_{−∞}^{+∞} (β − β̄)² |ℳ[Z](β)|² dβ,  β̄ = ∫_{−∞}^{+∞} β |ℳ[Z](β)|² dβ.   (12:322)

The computation of the J₂₂ coefficient is performed in the same way and leads to

J₂₂ = (4π² A₀²/σ²) σ_f²,   (12:323)

where

σ_f² = ∫₀^{+∞} (f − f̄)² |Z(f)|² df,  f̄ = ∫₀^{+∞} f |Z(f)|² df.   (12:324)

The computation of the symmetrical coefficient J₁₂ = J₂₁ is a little more involved. Writing the definition in the form:

J₁₂ = −(1/σ²) E[ Re{ A* ∂²A/∂a₁∂a₂ + (∂A/∂a₁)(∂A*/∂a₂) } ]   (12:325)

and using relations (Equations 12.312 through 12.315) and (Equation 12.320), we get

J₁₂ = (4π² A₀²/σ²) [ Re ∫_{−∞}^{+∞} β₁ ℳ*[fZ(f)](β₁) ℳ[Z](β₁) dβ₁ − ∫_{−∞}^{+∞} ℳ*[fZ(f)](β₁) ℳ[Z](β₁) dβ₁ ∫_{−∞}^{+∞} β₂ |ℳ[Z](β₂)|² dβ₂ ].   (12:326)

This expression is then transformed to the frequency domain using the Parseval formula (Equation 12.213) and the property (Equation 12.208) of the operator ∂ defined by Equation 12.187 (with r = −1/2). The result is

J₁₂ = (4π² A₀²/σ²) [ Re ∫₀^{+∞} ∂Z(f) f Z*(f) df − β̄ f̄ ] = (4π² A₀²/σ²) [M − β̄ f̄],   (12:327)

where M is the broad-band modulation index defined by

M ≡ (1/2π) Im ∫₀^{+∞} f² (dZ*/df) Z(f) df.   (12:328)

The inversion of the matrix J just obtained leads, according to Equation 12.307, to the explicit expression of the Cramer–Rao bound for the case of delay and velocity estimation with broad-band signals:

(σ⁰_{ij})² = σ² / (4π² A₀² [σ_f² σ_β² − (M − β̄ f̄)²]) × [ σ_f²  β̄ f̄ − M ; β̄ f̄ − M  σ_β² ].   (12:329)
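The ingredients of this analysis can be exercised numerically. In the sketch below, the broad-band ambiguity function (12.305) is evaluated for a noise-free echo (X = Z, A₀ = 1, φ = 0); the test spectrum Z(f) ∝ f² e^{−2πf} and the grids are illustrative assumptions, and the a₁^{1/2} factor keeps the compressed replica of unit norm per (12.303). By the Cauchy–Schwarz inequality the modulus of A peaks, with value 1, at the true parameters:

```python
import numpy as np

f = np.linspace(1e-6, 20.0, 200001)
raw = lambda f: f**2 * np.exp(-2 * np.pi * f)
norm = np.sqrt(np.trapz(raw(f)**2, f))
Z = lambda f: raw(f) / norm          # normalized analytic spectrum, (12.303)

def ambiguity(a1, a2):
    # A(a) = a1^{1/2} ∫ X(f) Z*(a1 f) exp(2jπ a1 a2 f) df  with X = Z
    integrand = Z(f) * np.conj(Z(a1 * f)) * np.exp(2j * np.pi * a1 * a2 * f)
    return a1**0.5 * np.trapz(integrand, f)

print(abs(ambiguity(1.0, 0.0)))                            # ≈ 1 at the true point
print(abs(ambiguity(1.3, 0.0)), abs(ambiguity(1.0, 0.4)))  # strictly smaller
```

Maximizing |A| over (a₁, a₂) is exactly the maximum-likelihood search of (12.304), and the sharpness of this peak along the two axes is what the Fisher coefficients J₁₁ and J₂₂ quantify.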
Relation (Equation 12.301) allows to deduce from this result the minimum variance of the velocity estimator:

E[(v − v̂)²] = (c²/4) E[(a₁ − â₁)²]   (12:330)
 = (c²/4) (σ⁰₁₁)².   (12:331)

Comparing these results to the narrow-band case, we see that the delay resolution measured by σ⁰₂₂ is still related to the spread of the signal in frequency:

(σ⁰₂₂)² ≥ (σ²/4π²A₀²)(1/σ_f²),   (12:332)

while the velocity resolution now depends in an essential way on the spread in Mellin's space:

E[(v − v̂)²] ≥ c²σ²/(16π²A₀² σ_β²).   (12:333)

Consider, for example, the signal

Z(f) = f^{−2jπβ₀ − 1/2}.   (12:334)

Its Mellin transform, which is equal to δ(β − β₀), can be considered to have zero spread in β. Hence, such a signal cannot be of any help if seeking a finite velocity resolution. These remarks can be developed and applied to the construction of radar codes with given characteristics in the variables f and β.37

The above results can be seen as a generalization to arbitrary signals of a classical procedure since, in the limit of narrow band, the variance of the velocity estimator can be shown to tend toward its usual expression:

E[(v − v̂)²] ≃ c²σ²σ_f² / (16π²A₀² f₀² [σ_t²σ_f² − (m − f₀ t̄)²]),   (12:335)

where the modulation index m is given by

m = (1/2π) Im ∫_{−∞}^{+∞} t z*(t) (dz/dt) dt   (12:336)

and the variance σ_t² by

σ_t² = ∫_{−∞}^{+∞} (t − t̄)² |z(t)|² dt,  t̄ = ∫_{−∞}^{+∞} t |z(t)|² dt.   (12:337)

Consider a signal defined by a function of time z(t) such that its Fourier transform Z(f) has only positive frequencies (so-called analytic signal). In that case a Mellin transformation can be applied to Z(f) and yields a function ℳ[Z](β). But while the variables t and f have a well-defined physical meaning as time and frequency, the interpretation of the variable β and its relation to physical parameters of the signal has still to be worked out. This will be done in the present paragraph, thus allowing a formulation of the sampling condition (Equation 12.293) for the Mellin transform in terms of the time and frequency spreads of the signal. As seen in Section 12.3.1.1, the Mellin transform ℳ[Z](β) gives the coefficients of the decomposition of Z on the basis {E_β(f)}:

Z(f) = ∫_{−∞}^{+∞} ℳ[Z](β) E_β(f) dβ.   (12:338)

The elementary parts

E_β(f) = f^{−2πjβ − r − 1} ≡ f^{−r−1} e^{jφ(f)}   (12:339)

can be considered as filters with group delay given by

T(f) ≡ −(1/2π) dφ(f)/df = β/f.   (12:340)

As seen on this expression, the variable β has no dimension and labels hyperbolas in a time-frequency half-plane f > 0. Hyperbolas displaced in time, corresponding to a group delay law t = ξ + β/f, are obtained by time shifting the filters E_β to E_β^ξ(f) defined by

E_β^ξ(f) = e^{−2πjξf} f^{−2πjβ − r − 1}.   (12:341)

A more precise characterization of signals (Equation 12.339) and, hence, of the variable β is obtained from a study of a particular affine time-frequency distribution which is to dilations what Wigner–Ville's is to frequency translations. We give only the practical results of the study, referring the interested reader to the literature.28–31 The explicit form of the distribution is
Consider a signal defined by a function of time z(t) such that its Fourier transform Z( f) has only positive frequencies (so-called analytic signal). In that case a Mellin transformation can be applied to Z( f) and yields a function }[Z](b). But while variables t and f have a well defined physical meaning as time and frequency, the interpretation of variable b and its relation to physical parameters of the signal has still to be worked out. This will be done in the present paragraph, thus allowing a formulation of the sampling condition (Equation 12.293) for the Mellin transform in terms of the time and frequency spreads of the signal. As seen in Section 12.3.1.1, the Mellin transform }[Z](b) gives the coefficients of the decomposition of Z on the basis {Eb( f)}:
(12:333)
Thus, for wide-band signals, it is not the duration of the signal that determines the velocity resolution, but the spread in the dual Mellin variable measured by the variance s2b . As an illustrative example, consider the hyperbolic signal defined by
E[(v ^v)2 ] ¼
12.3.3.3 Interpretation of the Dual Mellin Variable in Relation to Time and Frequency
P0 (t,f ) ¼ f
2rþ2
þ1 ð
1
(l(u)l(u))rþ1 Z(f l(u))Z*(f l(u)) e2 jpftu du (12:342)
Mellin Transform
where the function λ is given by

  λ(u) = u e^{u/2} / (2 sinh(u/2))   (12.343)

This distribution realizes an exact localization of the hyperbolic signals defined by Equation 12.341 on hyperbolas of the time–frequency half-plane as follows:

  Z(f) = e^{−2πjξf} f^{−r−1} f^{−2jπb}  →  P₀(t, f) = f^{−1} δ(t − ξ − b/f)   (12.344)

It can be shown that the affine time–frequency distribution (Equation 12.342) has the so-called tomographic property²⁹⁻³¹ which reads:

  ∫_{−∞}^{+∞} dt ∫₀^{+∞} P₀(t, f) δ(t − ξ − b/f) f^{−1} df = |M[Z](b)|²   (12.345)

Formulas (Equation 12.344) and (Equation 12.345) are basic for the interpretation of the b variable. It can be shown that for a signal z(t) ↔ Z(f) having a duration T = t₂ − t₁ and a bandwidth B = f₂ − f₁, the distribution P₀ has a support approximately localized in a bounded region of the half-plane f > 0 (see Figure 12.7) around the time ξ = (t₁ + t₂)/2 and the mean frequency f₀ = (f₁ + f₂)/2. Writing that the hyperbolas at the limits of this region have the equation

  t = ξ ± b₀/f   (12.346)

and pass through the points of coordinates ξ ± T/2, f₀ + B/2, we find:

  b₀ = (f₀ + B/2)(T/2)   (12.347)

The support [b₁, b₂] of the Mellin transform M[Z](b) thus can be written in terms of B and T as

  b₂ − b₁ = 2b₀   (12.348)

FIGURE 12.7 Support of the distribution P₀(t, f) in the time–frequency half-plane, bounded by the hyperbolas t = ξ ± b₀/f.

The condition (Equation 12.293) to avoid aliasing when performing a discrete Mellin transform can now be written in terms of the time–bandwidth product BT and the relative bandwidth R defined by

  R = B/f₀   (12.349)

The result giving the minimum number of samples to treat is

  N ≥ BT (1/2 + 1/R) ln[(1 + R/2)/(1 − R/2)]   (12.350)
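The bound (12.350) is easy to tabulate. A small sketch (the function name is ours):

```python
import numpy as np

def min_mellin_samples(B, T, f0):
    """Minimum number of samples N >= BT (1/2 + 1/R) ln((1+R/2)/(1-R/2)),
    Equation 12.350, with relative bandwidth R = B/f0 (Equation 12.349)."""
    R = B / f0
    return B * T * (0.5 + 1.0 / R) * np.log((1 + R / 2) / (1 - R / 2))

# Narrow-band limit: for R -> 0 the bracket tends to 1, so N -> BT,
# the usual time-bandwidth product; for wide-band signals N exceeds BT.
print(min_mellin_samples(B=0.1, T=100.0, f0=10.0))   # close to BT = 10
print(min_mellin_samples(B=5.0, T=10.0, f0=10.0))    # larger than BT = 50
```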
12.3.3.4 The Mellin Transform and the Wavelet Transform⁴⁷,⁴⁸

The Mellin transform is well suited to the computation of expressions containing dilated functions and, in particular, of scalar products such as

  (Z₁, D_a Z₂) = a^{r+1} ∫₀^{+∞} Z₁(f) Z₂*(af) f^{2r+1} df   (12.351)

Because of the dilation parameter, a numerical computation of these functions of a by standard techniques (such as the DFT) requires the use of oversampling and interpolation. By contrast, the Mellin transform allows a direct and more efficient treatment. The method will be explained on the example of the wavelet transform for one-dimensional signals, but it can also be used in more general situations such as those encountered in radar imaging.⁴⁷,⁴⁸

Let s(t) be a real signal with Fourier transform S(f) defined by

  S(f) = ∫_{−∞}^{+∞} s(t) e^{−2jπft} dt   (12.352)

The reality of s implies that

  S(−f) = S*(f)   (12.353)

Given a real function φ(t) (the so-called mother wavelet), one defines the continuous wavelet transform of the signal s(t) as a function C(a, b) of two variables, a > 0 and b real, given by

  C(a, b) = (1/√a) ∫_{−∞}^{+∞} s(t) φ*((t − b)/a) dt   (12.354)

Transposed to the frequency domain by a Fourier transformation and the use of property (Equation 12.353), the definition becomes

  C(a, b) = 2√a Re{ ∫₀^{+∞} S(f) Φ*(af) e^{2jπbf} df }

where Φ(f) is the Fourier transform of φ(t).
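The passage to the frequency domain can be checked numerically. The sketch below compares the time-domain definition (12.354) with the frequency-domain form above for an assumed test signal and a Mexican-hat mother wavelet (both illustrative choices, not from the text); S(f) is obtained from the FFT, while Φ(f) is used in its known closed form.

```python
import numpy as np

# Check C(a,b) = 2 sqrt(a) Re{ int_0^inf S(f) Phi(a f) e^{2j pi b f} df }
# (Phi is real here) against the time-domain definition (12.354).
N, T = 2048, 40.0
dt = T / N
t = -T / 2 + dt * np.arange(N)
s = np.exp(-(t - 1.0) ** 2) * np.cos(3.0 * t)          # real test signal
phi = lambda u: (1.0 - u ** 2) * np.exp(-u ** 2 / 2)    # Mexican hat (real)
a, b = 0.5, 0.4

# Time-domain definition (12.354)
C_td = np.sum(s * phi((t - b) / a)) * dt / np.sqrt(a)

# Frequency-domain form: S(f) from the FFT (phase-corrected for t0 = -T/2),
# Phi(f) = sqrt(2 pi) (2 pi f)^2 exp(-2 pi^2 f^2) analytically
f = np.fft.rfftfreq(N, dt)                              # f >= 0
S = dt * np.exp(2j * np.pi * f * (T / 2)) * np.fft.rfft(s)
Phi = lambda nu: np.sqrt(2 * np.pi) * (2 * np.pi * nu) ** 2 * np.exp(-2 * np.pi ** 2 * nu ** 2)
df = 1.0 / (N * dt)
C_fd = 2 * np.sqrt(a) * np.real(np.sum(S * Phi(a * f) * np.exp(2j * np.pi * b * f))) * df

print(C_td, C_fd)   # the two values agree
```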
(From Table 13.9, definitions of the TFRs of Cohen's shift covariant class; the Pseudo Wigner and Reduced interference entries are listed in the original table but are not legible in this copy. The legible definitions are:)

  Rihaczek: RD_x(t, f) = x(t) X*(f) e^{−j2πtf}

  Smoothed Pseudo Wigner: SPWD_x(t, f; Γ, s) = ∫ s(t − t′) PWD_x(t′, f; Γ) dt′
                                            = ∫∫ s(t − t′) WD_g(0, f − f′) WD_x(t′, f′) dt′ df′

  Spectrogram: SPEC_x(t, f; Γ) = |∫ x(t′) g*(t′ − t) e^{−j2πft′} dt′|² = |∫ X(f′) G*(f′ − f) e^{j2πtf′} df′|²

  Wigner: WD_x(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^{−j2πfτ} dτ = ∫ X(f + ν/2) X*(f − ν/2) e^{j2πtν} dν

Note: Here, rect_a(t) = 1 for |t| < |a| and 0 for |t| > |a|; AF_x(τ, ν) is defined in Equation 13.18; and μ̃(τ̃, ν̃; α, r, β, γ) = τ̃²(ν̃²)^α + ν̃²(τ̃²)^α + 2r((τ̃ν̃)^β)^γ. Functions with lower- and uppercase letters, e.g., g(t) and G(f), are Fourier transform pairs.

TABLE 13.10 Kernels of Cohen's Shift Covariant Class of TFRs Defined in Table 13.9

The table gives, for each TFR (AC, ACK, BJD, BUD, CAS, CDS, CKD, CWD, GED, GRD, GWD, LD, MH, MT, ND, PD, PWD, RGWD, RID, Spectrogram, Wigner), its kernel in the four equivalent forms ψ_C(t, f), Ψ_C(τ, ν), φ_C(t, τ), and Φ_C(f, ν). The ambiguity-domain product kernels Ψ_C(τ, ν) legible here are:

  BJD: sin(πτν)/(πτν)
  BUD: [1 + (τ/τ₀)^{2M}(ν/ν₀)^{2N}]^{−1}
  CWD: exp[−(2πτν)²/σ]
  GED: exp[−(τ/τ₀)^{2M}(ν/ν₀)^{2N}]
  GWD: e^{j2πα̃τν}
  MH: cos(πτν)
  MT: exp{−π[μ̃(τ/τ₀, ν/ν₀; α, r, β, γ)]^{2λ}}
  RGWD: cos(2πα̃τν)
  RID: S(τν), with suitable conditions on S(β) required for convergence

The remaining entries of the table are not legible in this copy.
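The Wigner definition above can be evaluated directly on sampled data. A minimal discrete sketch (the zero-padding of the lag window at the edges is our simplification, and the function name is ours):

```python
import numpy as np

def wigner(x, dt=1.0):
    """Discrete pseudo-Wigner distribution W[n, k] ~ WD_x(n*dt, f_k), a
    minimal sketch of WD_x(t, f) = int x(t + tau/2) x*(t - tau/2) e^{-j2pi f tau} dtau,
    sampled at lags tau = 2*m*dt and zero-padded at the signal edges."""
    N = len(x)
    W = np.zeros((N, N))
    m = np.arange(-(N // 2), N // 2)
    for n in range(N):
        idx1, idx2 = n + m, n - m
        valid = (idx1 >= 0) & (idx1 < N) & (idx2 >= 0) & (idx2 < N)
        r = np.zeros(N, dtype=complex)
        r[m[valid] + N // 2] = x[idx1[valid]] * np.conj(x[idx2[valid]])
        # FFT over the lag variable; the effective lag step is 2*dt
        W[n] = 2 * dt * np.real(np.fft.fft(np.fft.ifftshift(r)))
    freqs = np.fft.fftfreq(N, d=2 * dt)
    return W, freqs

# A complex exponential concentrates on its own frequency line
N, dt, f0 = 128, 0.05, 1.5
t = np.arange(N) * dt
x = np.exp(2j * np.pi * f0 * t)
W, freqs = wigner(x, dt)
k_peak = np.argmax(W[N // 2])
print(freqs[k_peak])   # close to f0
```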
14.5 Basic and Operational Properties of the Fractional Fourier Transform

Here we present a list of the more important basic and operational properties of the FRT. Readers can easily verify that the operational properties, such as those for scaling, coordinate multiplication, and differentiation, reduce to the corresponding property for the ordinary Fourier transform when a = 1.

Linearity: Let F^a denote the ath order fractional Fourier transform operator. Then F^a[Σ_k b_k f_k(u)] = Σ_k b_k [F^a f_k(u)].
Fractional Fourier Transform
Integer orders: F^k = (F)^k, where F denotes the ordinary Fourier transform operator. This property states that when a is equal to an integer k, the ath order fractional Fourier transform is equivalent to the kth integer power of the ordinary Fourier transform, defined by repeated application. It also follows that F² = P (the parity operator), F³ = F^{−1} = (F)^{−1} (the inverse transform operator), F⁴ = F⁰ = I (the identity operator), and F^j = F^{j mod 4}.
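The integer-order identities are easy to confirm numerically, with the unitary DFT standing in for the ordinary Fourier transform:

```python
import numpy as np

# F^2 = P (parity) and F^4 = I, checked for the unitary DFT matrix.
N = 16
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
F2 = F @ F
F4 = F2 @ F2

P = np.zeros((N, N))              # parity: (Pf)[n] = f[-n mod N]
P[0, 0] = 1.0
P[1:, 1:] = np.eye(N - 1)[::-1]

print(np.allclose(F2, P), np.allclose(F4, np.eye(N)))   # True True
```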
Inverse: (F^a)^{−1} = F^{−a}. In terms of the kernel, this property is stated as K_a^{−1}(u, u′) = K_{−a}(u, u′).

Unitarity: (F^a)^{−1} = (F^a)^H = F^{−a}, where (·)^H denotes the conjugate transpose of the operator. In terms of the kernel, this property can be stated as K_a^{−1}(u, u′) = K_a*(u′, u).

Index additivity: F^{a₂} F^{a₁} = F^{a₂+a₁}. In terms of kernels this can be written as K_{a₂+a₁}(u, u′) = ∫ K_{a₂}(u, u″) K_{a₁}(u″, u′) du″.

Commutativity: F^{a₂} F^{a₁} = F^{a₁} F^{a₂}.

Associativity: F^{a₃}(F^{a₂} F^{a₁}) = (F^{a₃} F^{a₂}) F^{a₁}.

Eigenfunctions: F^a[ψ_n(u)] = exp(−ianπ/2) ψ_n(u). Here the ψ_n(u) are the Hermite–Gaussian functions defined in Section 14.2.

Parseval: ∫ f*(u) g(u) du = ∫ f_a*(u) g_a(u) du. This property is equivalent to unitarity. Energy or norm conservation (En[f] = En[f_a], or ‖f‖ = ‖f_a‖) is a special case.

Time reversal: Let P denote the parity operator, P[f(u)] = f(−u). Then

  F^a P = P F^a,   (14.39)
  F^a[f(−u)] = f_a(−u).   (14.40)

Transform of a scaled function: Let M(M) and Q(q) denote the scaling M(M)[f(u)] = |M|^{−1/2} f(u/M) and chirp multiplication Q(q)[f(u)] = e^{iπqu²} f(u) operators, respectively. Here the notation M(M)[f(u)] means that the operator M(M) is applied to the function f(u). Then

  F^a M(M) = Q(cot α (1 − cos²α′/cos²α)) M(sin α/(M sin α′)) F^{a′},   (14.41)

  F^a[|M|^{−1/2} f(u/M)] = √[(1 − i cot α)/(1 − iM² cot α)] e^{iπu² cot α (1 − cos²α′/cos²α)} f_{a′}(Mu sin α′/sin α).   (14.42)

Here α′ = arctan(M^{−2} tan α), and α′ is taken to be in the same quadrant as α. This property is the generalization of the ordinary Fourier transform property stating that the Fourier transform of f(u/M) is |M| F(Mμ). Notice that the fractional Fourier transform of f(u/M) cannot in general be expressed as a scaled version of f_a(u) for the same order a. Rather, the fractional Fourier transform of f(u/M) turns out to be a scaled and chirp-modulated version of f_{a′}(u), where a′ ≠ a is a different order.

Transform of a shifted function: Let SH(u₀) and PH(μ₀) denote the shift SH(u₀)[f(u)] = f(u + u₀) and the phase shift PH(μ₀)[f(u)] = exp(i2πμ₀u) f(u) operators, respectively. Then

  F^a SH(u₀) = e^{iπu₀² sin α cos α} PH(u₀ sin α) SH(u₀ cos α) F^a,   (14.43)
  F^a[f(u + u₀)] = e^{iπu₀² sin α cos α} e^{i2πuu₀ sin α} f_a(u + u₀ cos α).   (14.44)

We see that the SH(u₀) operator, which simply results in a translation in the u domain, corresponds to a translation followed by a phase shift in the ath fractional domain. The amounts of translation and phase shift are given by cosine and sine multipliers, which can be interpreted in terms of "projections" between the axes.

Transform of a phase-shifted function:

  F^a PH(μ₀) = e^{−iπμ₀² sin α cos α} PH(μ₀ cos α) SH(−μ₀ sin α) F^a,   (14.45)
  F^a[exp(i2πμ₀u) f(u)] = e^{−iπμ₀² sin α cos α} e^{i2πuμ₀ cos α} f_a(u − μ₀ sin α).   (14.46)

Similar to the shift operator, the phase-shift operator, which simply results in a phase shift in the u domain, corresponds to a translation followed by a phase shift in the ath fractional domain. Again the amounts of translation and phase shift are given by cosine and sine multipliers.

Transform of a coordinate-multiplied function: Let U and D denote the coordinate multiplication U[f(u)] = uf(u) and differentiation D[f(u)] = (i2π)^{−1} df(u)/du operators, respectively. Then

  F^a U^n = [cos α U − sin α D]^n F^a,   (14.47)
  F^a[u^n f(u)] = [cos α u − sin α (i2π)^{−1} d/du]^n f_a(u).   (14.48)

When a = 1, the transform of a coordinate-multiplied function uf(u) is the derivative of the transform of the original function f(u), a well-known property of the Fourier transform. For arbitrary values of a, we see that the transform of uf(u) is a linear combination of the coordinate-multiplied transform of the original function and the derivative of the transform of the original function. The coefficients in the linear combination are cos α and sin α. As a approaches 0, there is more uf(u) and less df(u)/du in the linear combination. As a approaches 1, there is more df(u)/du and less uf(u).

Transform of the derivative of a function:

  F^a D^n = [sin α U + cos α D]^n F^a,   (14.49)
  F^a[[(i2π)^{−1} d/du]^n f(u)] = [sin α u + cos α (i2π)^{−1} d/du]^n f_a(u).   (14.50)
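At a = 1 the shift property reduces to the ordinary Fourier shift theorem, which can be checked in its discrete (circular) form:

```python
import numpy as np

# Discrete shift theorem: the DFT of x[(n + n0) mod N] equals
# X[k] * exp(2j pi k n0 / N), the a = 1 special case of Equation 14.44.
N, n0 = 64, 5
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
k = np.arange(N)
lhs = np.fft.fft(np.roll(x, -n0))            # transform of the shifted signal
rhs = np.fft.fft(x) * np.exp(2j * np.pi * k * n0 / N)
print(np.allclose(lhs, rhs))                  # True
```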
When a = 1, the transform of the derivative of a function df(u)/du is the coordinate-multiplied transform of the original function. For arbitrary values of a, we see that the transform is again a linear combination of the coordinate-multiplied transform of the original function and the derivative of the transform of the original function.

Transform of a coordinate-divided function:
  F^a[f(u)/u] = −2πi csc α e^{iπu² cot α} ∫_{−∞}^{u} f_a(u′) e^{−iπu′² cot α} du′.   (14.51)
Transform of the integral of a function:

  F^a[∫_{u₀}^{u} f(u′) du′] = sec α e^{−iπu² tan α} ∫_{u₀}^{u} f_a(u′) e^{iπu′² tan α} du′.   (14.52)
A few additional properties are

  F^a[f*(u)] = f*_{−a}(u),   (14.53)
  F^a[(f(u) + f(−u))/2] = (f_a(u) + f_a(−u))/2,   (14.54)
  F^a[(f(u) − f(−u))/2] = (f_a(u) − f_a(−u))/2.   (14.55)
It is also possible to write convolution and multiplication properties for the fractional Fourier transform, though these are not of great simplicity (page 157 of [129] and [9,174]). A function and its ath order fractional Fourier transform satisfy an "uncertainty relation," stating that the product of the spreads of the two functions, as measured by their standard deviations, cannot be less than |sin(aπ/2)|/4π [116]. We may finally note that the transform is continuous in the order a; that is, small changes in the order a correspond to small changes in the transform fa(u). Nevertheless, care is always required in dealing with cases where a approaches an even integer, since in this case the kernel approaches a delta function.
14.6 Dual Operators and Their Fractional Generalizations

The dual of the operator A will be denoted by A^D and satisfies

  A^D = F^{−1} A F.   (14.56)

A^D performs the same action on the frequency-domain representation F(μ) that A performs on the time-domain representation f(u). For instance, if A represents the operation of multiplying with the coordinate variable u, then the dual A^D represents the operation of multiplying F(μ) with μ, which in the time domain corresponds to the operator (i2π)^{−1} d/du.
The fractional operators we deal with in this section perform the same action in a fractional domain:

  A_a = F^{−a} A F^{a}.   (14.57)
This equation generalizes Equation 14.56 and reduces to it when a = 1, with A₁ = A^D. If again A corresponds to the multiplication of f(u) with u, then A_a corresponds to the multiplication of f_a(u_a) with u_a, where u_a denotes the coordinate variable associated with the ath fractional Fourier domain. The effect of A_a in the ordinary time domain can be expressed as cos α uf(u) + sin α (i2π)^{−1} df(u)/du (see "Transform of a coordinate-multiplied function" in Section 14.5). To distinguish the kind of fractional operators discussed in this section from the ath operator power of A, which is denoted by A^a, we denote them by A_a. The FRT is the ath operator power of the ordinary Fourier transform, but the fractional operators here are operators that perform the same action, such as coordinate multiplication, in different fractional Fourier domains. To further emphasize the difference, we note that for a = 0, A₀ = A while A⁰ = I; and for a = 1, A₁ = A^D while A¹ = A. In other words, A_a interpolates between the operator A and its dual A^D, gradually evolving from one member of the dual pair to the other as the fractional order goes from zero to one. On the other hand, A^a interpolates between the identity operator and the operator A.

The first pair of dual operators we will consider are the coordinate multiplication U and differentiation D operators, whose effects in the time domain are to take a function f(u) to uf(u) and (i2π)^{−1} df(u)/du, respectively. The fractional forms of these operators, U_a and D_a, are defined so as to have the same functional effect in the ath domain; they take f_a(u_a) to u_a f_a(u_a) and (i2π)^{−1} df_a(u_a)/du_a, respectively. In the time domain these operations correspond to taking f(u) to cos α uf(u) + sin α (i2π)^{−1} df(u)/du and −sin α uf(u) + cos α (i2π)^{−1} df(u)/du, respectively. (These and similar results are a consequence of the operational properties presented in Section 14.5.)
These relationships can be captured elegantly in the following operator form:

  U_a = cos α U + sin α D,
  D_a = −sin α U + cos α D.   (14.58)
The phase shift operator PH(η) and the translation operator SH(ξ) are also duals, defined in terms of the U and D operators as PH(η) = exp(i2πηU) and SH(ξ) = exp(i2πξD). (Such expressions are meant to be interpreted in terms of their series expansions.) These operators take f(u) to exp(i2πηu)f(u) and f(u + ξ), respectively. The fractional forms of these operators are defined as PH_a(η) = exp(i2πηU_a) and SH_a(ξ) = exp(i2πξD_a) and satisfy

  PH_a(η) = exp(iπη² sin α cos α) PH(η cos α) SH(η sin α),
  SH_a(ξ) = exp(−iπξ² sin α cos α) PH(−ξ sin α) SH(ξ cos α).   (14.59)
The scaling operator M(M) can be defined as M(M) = exp[−iπ(ln M)(UD + DU)], where M > 0. It takes f(u) to √(1/M) f(u/M). This operator is its own dual in the sense that scaling in the time domain corresponds to descaling in the frequency domain: the Fourier transform of √(1/M) f(u/M) is √M F(Mμ). The fractional form is defined as M_a(M) = exp[−iπ(ln M)(U_a D_a + D_a U_a)] and satisfies

  M_a(M) = F^{−a} M(M) F^{a}.   (14.60)
The dual chirp multiplication Q(q) and chirp convolution R(r) operators are defined as Q(q) = exp(iπqU²) and R(r) = exp(iπrD²). In the time domain they take f(u) to exp(iπqu²)f(u) and e^{iπ/4} √(1/r) exp(−iπu²/r) * f(u), respectively. Their fractional forms are defined as Q_a(q) = exp(iπqU_a²) and R_a(r) = exp(iπrD_a²) and satisfy

  Q_a(q) = R(tan α) Q(q cos²α) R(−tan α),
  R_a(r) = Q(tan α) R(r cos²α) Q(−tan α).   (14.61)
We now turn our attention to the final pair of dual operators we will discuss. The discretization DI(Δμ) and periodization PE(Δu) operators can be defined in terms of the phase shift and translation operators: DI(Δμ) = Σ_{k=−∞}^{∞} PH(kΔμ) and PE(Δu) = Σ_{k=−∞}^{∞} SH(kΔu). The parameters Δu > 0 and Δμ > 0 correspond to the periods of replication in the time and frequency domains, respectively. Unlike the other operators defined above, these operators do not in general have inverses. Since sampling in the time domain corresponds to periodic replication in the frequency domain and vice versa, we also define δu = 1/Δμ and δμ = 1/Δu, denoting the sampling intervals in the time and frequency domains, respectively. It is possible to show that the discretization and periodization operators take f(u) to δu Σ_{k=−∞}^{∞} δ(u − kδu) f(kδu) and Σ_{k=−∞}^{∞} f(u − kΔu), respectively. In the time domain, the discretization operator corresponds to multiplication with an impulse train, and the periodization operator corresponds to convolution with an impulse train (and vice versa in the frequency domain). Discretization in the time domain corresponds to periodization in the frequency domain, and periodization in the time domain corresponds to discretization in the frequency domain. This is what is meant by the duality of these two operators. The fractional versions of these operators can be defined as DI_a(Δμ) = Σ_{k=−∞}^{∞} PH_a(kΔμ) and PE_a(Δu) = Σ_{k=−∞}^{∞} SH_a(kΔu) and satisfy

  DI_a(Δμ) = R(tan α) DI(Δμ cos α) R(−tan α),
  PE_a(Δu) = Q(tan α) PE(Δu cos α) Q(−tan α).   (14.62)
Equations 14.58 through 14.62 all express the fractional operators in terms of their non-fractional counterparts. Equations 14.58 through 14.60 are directly related to the corresponding operational properties presented in Section 14.5, and may be considered
abstract ways of expressing them (transform of a coordinate-multiplied or differentiated function, transform of a phase-shifted or shifted function, and transform of a scaled function, respectively). The fractional operators in Equation 14.62 interpolate between periodicity and discreteness, with the smooth transition being governed by the parameter a. However, this is not the only significance of the fractional periodicity and discreteness operators. In practice, one cannot realize infinite periodic replication; any periodic replication must be limited to a finite number of periods. This corresponds to multiplying the infinite periodic replication operator with a window function, and will be referred to as partial periodization. Likewise, one cannot realize discretization with true impulses; any discretization will involve finite-width sampling pulses. This corresponds to convolving a true impulse sampling operator with a window function, and will be referred to as partial discretization. Thus, the partial periodization and discretization operations represent practical real-life replication and sampling operations. It has been shown that fractional periodization and discretization operators can be expressed in terms of partial periodization and discretization operators [128]. Therefore, the fractional periodization and discretization operators are also related to real-life sampling and periodic replication. The subject matter of this section is further discussed in [128,156].
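The periodization/discretization duality has a simple discrete counterpart: summing the two halves of a length-N signal (periodization in time) samples its DFT on every second bin.

```python
import numpy as np

# Periodizing in time discretizes the spectrum: the DFT of the period-N/2
# signal equals the even-indexed bins of the original length-N DFT.
N = 32
rng = np.random.default_rng(1)
x = rng.standard_normal(N)
y = x[:N // 2] + x[N // 2:]         # one period of the periodized signal
lhs = np.fft.fft(y)
rhs = np.fft.fft(x)[::2]            # spectrum sampled on the coarser grid
print(np.allclose(lhs, rhs))        # True
```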
14.7 Time-Order and Space-Order Representations

Interpreting the fractional Fourier transforms f_a(u) of a function f(u) for different values of the order a as a two-dimensional function of u and a leads to the concept of time-order (or space-order) signal representations. Just like other time–frequency and time-scale (or space-frequency and space-scale) signal representations, they constitute an alternative way of displaying the content of a signal. These representations are redundant in that the information of a one-dimensional signal is displayed in two dimensions. There are two variations of the time-order representation: the rectangular time-order representation and the polar time-order representation.

For the rectangular time-order representation, f_a(u) is interpreted as a two-dimensional function, with u the horizontal coordinate and a the vertical coordinate. As such, the representations of the signal f(u) in all fractional domains are displayed simultaneously. Mathematically, the rectangular time-order representation T_f(u, a) of a signal f is defined as

  T_f(u, a) = f_a(u).   (14.63)
Figure 14.4 illustrates the definition of the rectangular time-order representation. Such a display of the fractional Fourier transforms of the rectangle function is shown in Figure 14.1.

FIGURE 14.4 The rectangular time-order representation. (From Ozaktas, H. M. and Kutay M. A., Technical Report BU-CE-0005, Bilkent University, Department of Computer Engineering, Ankara, January 2000; Ozaktas, H. M., Zalevsky, Z., and Kutay, M. A., The Fractional Fourier Transform with Applications in Optics and Signal Processing. John Wiley & Sons, New York, 2001. With permission.)

For the polar time-order representation, f_a(u) = f_{2α/π}(r) is interpreted as a polar two-dimensional function, where r is the radial coordinate and α is the angular coordinate. As such, all the fractional Fourier transforms of f(u) are displayed such that f_a(r) lies along the radial line making angle α = aπ/2 with the horizontal axis. Mathematically, the polar time-order representation T_f(r, α) of a signal f is defined as

  T_f(r, α) = f_{2α/π}(r).   (14.64)

T_f(r, α) is periodic in α with period 2π as a result of the fact that f_a(r) is periodic in a with period 4. T_f(r, α) can be consistently defined for negative values of r as well by using the property f_{a±2}(r) = f_a(−r), from which it also follows that T_f(r, α) = T_f(−r, α − π). Figure 14.5 illustrates the definition of the polar time-order representation.

FIGURE 14.5 The polar time-order representation. (From Ozaktas, H. M. and Kutay M. A., Technical Report BU-CE-0005, Bilkent University, Department of Computer Engineering, Ankara, January 2000; Ozaktas, H. M., Zalevsky, Z., and Kutay, M. A., The Fractional Fourier Transform with Applications in Optics and Signal Processing. John Wiley & Sons, New York, 2001. With permission.)

As a consequence of its definition, there is a direct relation between the polar time-order representation and the concept of fractional Fourier domains. Each fractional Fourier transform f_a(r) of the signal f "lives" in the ath domain, defined by the radial line making angle α = aπ/2 with the u axis. The polar time-order representation can be considered as a time–frequency space since the horizontal and vertical axes correspond to time and frequency. The oblique slices of the polar representation are simply equal to the fractional Fourier transforms. The slice at α = 0 is the time-domain representation f(r), the slice at α = π/2 is the frequency-domain representation F(r), and other slices correspond to fractional transforms of other orders.

We now discuss a number of properties of the polar time-order representation. The original function is obtained from the distribution as

  f(u) = f₀(u) = T_f(u, 0).   (14.65)

The time-order representation of the a₀th fractional Fourier transform of a function is simply a rotated version of the time-order representation of the original function:

  T_{f_{a₀}}(r, α) = T_f(r, α + α₀),   (14.66)

where α₀ = a₀π/2. Since the time-order representation is linear, the representation of any linear combination of functions is the same as the linear combination of their representations.

We now discuss the relationship of time-order representations with the Wigner distribution and the ambiguity function. We had already encountered the Radon transform of the Wigner distribution:

  R_α[W_f(u, μ)](r) = |f_{2α/π}(r)|² = |T_f(r, α)|².   (14.67)

Thus, the Radon transform of the Wigner distribution, interpreted as a polar function, corresponds to the absolute square of the polar time-order representation. We also already encountered the following result, which is a consequence of the projection-slice theorem (page 56 of [129]):

  S_α[A_f(ū, μ̄)](r) = A_f(r cos α, r sin α) = T_f(r, α) * T_f*(−r, α) = f_{2α/π}(r) * f*_{2α/π}(−r),   (14.68)

where * denotes ordinary convolution. The Radon transforms and slices of the Wigner distribution and the ambiguity function are summarized in Table 14.1. For both the Wigner distribution and the ambiguity function, the Radon transform is of product form and the slice is of convolution form. The essential difference between the Wigner distribution and the ambiguity function lies in the scaling of r by 2 or 1/2 on the right-hand side.
TABLE 14.1 Radon Transforms and Slices of the Wigner Distribution and the Ambiguity Function

  RDN_α[W_f(u, μ)](r) = f_{2α/π}(r) f*_{2α/π}(r) = T_f(r, α) T_f*(r, α)
  RDN_α[A_f(ū, μ̄)](r) = f_{2α/π}(r/2) f*_{2α/π}(−r/2) = T_f(r/2, α) T_f*(−r/2, α)
  SLC_α[W_f(u, μ)](r) = 2f_{2α/π}(2r) * 2f*_{2α/π}(2r) = 2T_f(2r, α) * 2T_f*(2r, α)
  SLC_α[A_f(ū, μ̄)](r) = f_{2α/π}(r) * f*_{2α/π}(−r) = T_f(r, α) * T_f*(−r, α)

Sources: From Ozaktas, H. M. and Kutay, M. A., Technical Report BU-CE-0005, Bilkent University, Department of Computer Engineering, Ankara, January 2000; Ozaktas, H. M., et al., The Fractional Fourier Transform with Applications in Optics and Signal Processing. John Wiley & Sons, New York, 2001. With permission.
Note: The upper row can also be expressed as |f_{2α/π}(r)|² = |T_f(r, α)|².
Analogous expressions for the Radon transforms and slices of the polar time-order representation T_f(r, α) and its two-dimensional Fourier transform T̃_f(r̄, ᾱ) are given in Table 14.2. The slice of T_f(r, α) at a certain angle is simply equal to the fractional Fourier transform f_a(r) by definition (with α = aπ/2). The Radon transform of T̃_f(r̄, ᾱ) at an angle φ is given by f_{b+1}(r), or T_f(r, φ + π/2), a π/2-rotated version of T_f(r, α) (with φ = bπ/2). We already know that the time–frequency representation whose projections are equal to |f_a(u)|² is the Wigner distribution. We now see that the time–frequency representation whose projections are equal to f_a(u) is the two-dimensional Fourier transform of the polar time-order representation (within a rotation). Thus in Tables 14.1 and 14.2 we present a total of eight expressions for the Radon transforms and slices of the Wigner distribution and its two-dimensional Fourier transform (the ambiguity function), and the Radon transforms and slices of the polar time-order representation and its two-dimensional Fourier transform.

The polar time-order representation is a linear time–frequency representation, unlike the Wigner distribution and ambiguity function, which are quadratic. Its importance stems from the fact that the Radon transforms (integral projections) and slices of the Wigner distribution and the ambiguity function can be expressed in terms of products or convolutions of various scaled forms of the time-order representation and its two-dimensional Fourier transform. These representations are discussed in greater detail in Chapter 5 of [129].
TABLE 14.2 Radon Transforms and Slices of the Polar Time-Order Representation and Its Two-Dimensional Fourier Transform

\[ \mathrm{RDN}_{\phi}[T_f(r,\alpha)](R) = \int_{-\pi/2}^{\pi/2} f_{2(\phi+\theta)/\pi}(R\sec\theta)\, R\sec^2\theta\, d\theta \]
\[ \mathrm{SLC}_{\phi}[T_f(r,\alpha)](R) = f_{2\phi/\pi}(R) \]
\[ \mathrm{RDN}_{\phi}[\tilde{T}_f(\rho,\alpha)](R) = f_{2\phi/\pi+1}(R) \]
\[ \mathrm{SLC}_{\phi}[\tilde{T}_f(\rho,\alpha)](R) = 2\pi i \int_{-\pi/2}^{\pi/2} f_{2(\phi+\theta)/\pi+1}(R\cos\theta)\sec\theta\, d\theta \]

Sources: Ozaktas, H. M. and Kutay, M. A., Technical Report BU-CE-0005, Bilkent University, Department of Computer Engineering, Ankara, January 2000; Ozaktas, H. M., et al., The Fractional Fourier Transform with Applications in Optics and Signal Processing, John Wiley & Sons, New York, 2001. With permission.

14.8 Linear Canonical Transforms

Linear canonical transforms (LCTs) are a three-parameter family of linear integral transforms. Many important operations and transforms, including the FRT, are special cases of linear canonical transforms. Readers wishing to learn more than we can cover here are referred to [129,164]. The linear canonical transform $f_M(u)$ of $f(u)$ with parameter $M$ is most conveniently defined as

\[ f_M(u) = \int_{-\infty}^{\infty} C_M(u, u')\, f(u')\, du', \qquad C_M(u, u') = \sqrt{\beta}\, e^{-i\pi/4} \exp\!\left[i\pi(\alpha u^2 - 2\beta u u' + \gamma u'^2)\right], \tag{14.69} \]

where $\alpha$, $\beta$, and $\gamma$ are real parameters. The label $M$ represents the three parameters $\alpha$, $\beta$, and $\gamma$, which completely specify the transform. Linear canonical transforms are unitary; that is, the inverse transform kernel is the Hermitian conjugate of the original transform kernel: $C_M^{-1}(u, u') = C_M^*(u', u)$. The composition of any two linear canonical transforms is another linear canonical transform. In other words, the effect of consecutively applying two linear canonical transforms with different parameters is equivalent to applying another linear canonical transform whose parameters are related to those of the first two. (Actually this is strictly true only within a sign factor [129,164].) Such compositions are not in general commutative, but they are associative. Finding the parameters of the composite transform is made easier if we define a 2 × 2 unit-determinant matrix to represent the parameters of the transform. We let the symbol $M$ (which until now denoted the three parameters $\alpha$, $\beta$, $\gamma$) now be defined as a matrix of the form

\[ M = \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} \gamma/\beta & 1/\beta \\ -\beta + \alpha\gamma/\beta & \alpha/\beta \end{bmatrix} = \begin{bmatrix} \alpha/\beta & -1/\beta \\ \beta - \alpha\gamma/\beta & \gamma/\beta \end{bmatrix}^{-1}, \tag{14.70} \]

with determinant $AD - BC = 1$. The three original parameters can be expressed in terms of the matrix elements as $\alpha = D/B$, $\beta = 1/B$, and $\gamma = A/B$, and the definition of linear canonical transforms can be rewritten as

\[ f_M(u) = \int_{-\infty}^{\infty} C_M(u, u')\, f(u')\, du', \qquad C_M(u, u') = \sqrt{1/B}\, e^{-i\pi/4} \exp\!\left[i\pi\!\left(\frac{D}{B}u^2 - \frac{2}{B}uu' + \frac{A}{B}u'^2\right)\right]. \tag{14.71} \]
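The parameter correspondence above lends itself to a quick numerical check. The following NumPy sketch (the particular values of α, β, γ are arbitrary illustrations, not values from the text) builds the matrix of Equation 14.70 and recovers the parameters through α = D/B, β = 1/B, γ = A/B:

```python
import numpy as np

# Hypothetical parameter triple (alpha, beta, gamma); any reals with beta != 0 work.
alpha, beta, gamma = 0.7, 1.3, -0.4

# Matrix form of the LCT parameters, Eq. (14.70).
M = np.array([[gamma / beta,                  1.0 / beta],
              [-beta + alpha * gamma / beta,  alpha / beta]])

A, B, C, D = M.ravel()
det = A * D - B * C                      # should equal 1 (unit determinant)

# Inverse mapping used in Eq. (14.71).
alpha2, beta2, gamma2 = D / B, 1.0 / B, A / B
```

The unit determinant is what makes the phase-space picture of Section 14.8 area preserving.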
Now it is easy to show the following results. The matrix $M_3$ corresponding to the composition of two systems is the matrix product of the matrices $M_2$ and $M_1$ corresponding to the individual systems; that is,

\[ M_3 = M_2 M_1, \tag{14.72} \]

where $M_1$ is the matrix of the transform that is applied first and $M_2$ is the matrix of the transform that is applied next. Furthermore, the matrix corresponding to the inverse of a linear canonical transform is the inverse of the matrix corresponding to the original transform. The set of linear canonical transforms satisfies all the axioms of a noncommutative group (closure, associativity, existence of identity, inverse of each element), just like the set of all unit-determinant 2 × 2 matrices (again within a sign). Certain subsets of the set of linear canonical transforms are groups in themselves and thus are subgroups. Some of them will be discussed below. For example, the fractional Fourier transforms form a subgroup with one real parameter.

The effect of linear canonical transforms on the Wigner distribution of a function can be expressed quite elegantly in terms of the elements of the matrix $M$:

\[ W_{f_M}(u, \mu) = W_f(Du - B\mu,\; -Cu + A\mu), \tag{14.73} \]
\[ W_{f_M}(Au + B\mu,\; Cu + D\mu) = W_f(u, \mu). \tag{14.74} \]

A similar relationship holds for the ambiguity function as well. The above result means that the Wigner distribution of the transformed function is simply a linearly distorted form of the Wigner distribution of the original function, with the value of the Wigner distribution at each time/space–frequency point being mapped to another time/space–frequency point. Since the determinant of $M$ is equal to unity, this pointwise geometrical distortion or deformation is area preserving; it distorts but does not concentrate or deconcentrate the Wigner distribution.

FIGURE 14.6 (a) Rectangular region in the time/space–frequency plane, in which most of the signal energy is assumed to be concentrated. Effect of (b) scaling with M = 2, (c) chirp multiplication with q = 1, (d) chirp convolution with r = 1, (e) Fourier transformation, (f) fractional Fourier transformation with a = 0.5. (From Ozaktas, H. M., Zalevsky, Z., and Kutay, M. A., The Fractional Fourier Transform with Applications in Optics and Signal Processing. John Wiley & Sons, New York, 2001. With permission.)

We now discuss several special cases of linear canonical transforms that correspond to specific forms of the matrix $M$. The last of these special cases will be the fractional Fourier transform, which corresponds to the case where $M$ is the rotation matrix.

The scaling operation takes $f(u)$ to $\sqrt{1/M}\, f(u/M)$. The inverse of a scaling operation with parameter $M > 0$ is a scaling operation with parameter $1/M$. The $M$ matrix is of the form

\[ \begin{bmatrix} M & 0 \\ 0 & 1/M \end{bmatrix} \tag{14.75} \]

and the Wigner distribution of the scaled function is $W_f(u/M, M\mu)$ (Figure 14.6b shows how the Wigner distribution is scaled for M = 2).

Let us now consider chirp multiplication, which takes $f(u)$ to $e^{-i\pi q u^2} f(u)$. The inverse of this operation with parameter $q$ has the same form but with parameter $-q$. Its $M$ matrix is

\[ \begin{bmatrix} 1 & 0 \\ -q & 1 \end{bmatrix} \tag{14.76} \]

and the Wigner distribution of the chirp-multiplied function is $W_f(u, \mu + qu)$ (Figure 14.6c shows this vertical shearing for q = 1).

Now consider chirp convolution, which takes $f(u)$ to $e^{-i\pi/4}\sqrt{1/r}\, \exp(i\pi u^2/r) * f(u)$. The inverse of this operation with parameter $r$ has the same form but with parameter $-r$. Its $M$ matrix is

\[ \begin{bmatrix} 1 & r \\ 0 & 1 \end{bmatrix} \tag{14.77} \]
and the Wigner distribution of the chirp-convolved function is $W_f(u - r\mu, \mu)$ (Figure 14.6d shows this horizontal shearing for r = 1).

The ordinary Fourier transform takes $f(u)$ to $\int_{-\infty}^{\infty} f(u')\, e^{-i2\pi uu'}\, du'$. However, the Fourier transform that is a special case of linear canonical transforms has a slightly modified definition, taking $f(u)$ to $e^{-i\pi/4} \int_{-\infty}^{\infty} f(u')\, e^{-i2\pi uu'}\, du'$. The $M$ matrix is

\[ \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \tag{14.78} \]

and the Wigner distribution of the Fourier-transformed function is $W_f(-\mu, u)$ (Figure 14.6e shows this $\pi/2$ rotation).

Finally, we turn our attention to the fractional Fourier transform, which takes $f(u)$ to $f_a(u)$ as defined in Equation 14.1. The inverse of the $a$th order FRT is the $(-a)$th order FRT. The $M$ matrix is

\[ \begin{bmatrix} \cos(a\pi/2) & \sin(a\pi/2) \\ -\sin(a\pi/2) & \cos(a\pi/2) \end{bmatrix} \tag{14.79} \]

and the Wigner distribution of the fractional Fourier transformed function is

\[ W_f[\cos(a\pi/2)\, u - \sin(a\pi/2)\, \mu,\; \sin(a\pi/2)\, u + \cos(a\pi/2)\, \mu]. \tag{14.80} \]
We have already encountered this expression before in Equation 14.25 (Figure 14.6f shows this rotation by angle $\alpha = a\pi/2$ when a = 0.5). To summarize, we see that fractional Fourier transforms constitute a one-parameter subgroup of linear canonical transforms corresponding to the case where the $M$ matrix is the rotation matrix, and the fractional order parameter corresponds to the angle of rotation. Fractional Fourier transformation corresponds to rotation of the Wigner distribution in the time/space–frequency plane (phase space). The ordinary Fourier transform is a special case of the fractional Fourier transform, which is in turn a special case of linear canonical transforms.

The matrix formalism not only allows one to easily determine the parameters of the concatenation (composition) of several LCTs, it also allows a given LCT to be decomposed into more elementary operations such as scaling, chirp multiplication and convolution, and the fractional Fourier transform. This is often useful for both analytical and numerical purposes. Of the many such possible decompositions we list only a few here (see page 104 of [129]):

\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & (A-1)/C \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ C & 1 \end{bmatrix} \begin{bmatrix} 1 & (D-1)/C \\ 0 & 1 \end{bmatrix} \tag{14.81} \]

\[ \phantom{\begin{bmatrix} A & B \\ C & D \end{bmatrix}} = \begin{bmatrix} 1 & 0 \\ (D-1)/B & 1 \end{bmatrix} \begin{bmatrix} 1 & B \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ (A-1)/B & 1 \end{bmatrix}. \tag{14.82} \]

Such decompositions usually show how an arbitrary LCT can be expressed in terms of its special cases. Specifically, the above two decompositions show how any unit-determinant matrix can be written as the product of lower and upper triangular matrices, which we have seen correspond to chirp multiplication and convolution operations. Another important decomposition is the decomposition of an arbitrary LCT into a fractional Fourier transformation followed by scaling followed by chirp multiplication:

\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -q & 1 \end{bmatrix} \begin{bmatrix} M & 0 \\ 0 & 1/M \end{bmatrix} \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix}, \tag{14.83} \]

where

\[ \alpha = \operatorname{arccot}(A/B), \tag{14.84} \]
\[ M = \operatorname{sgn}(A)\sqrt{A^2 + B^2}, \tag{14.85} \]
\[ q = \frac{A}{B(A^2 + B^2)} - \frac{D}{B}, \tag{14.86} \]

where $\operatorname{sgn}(A)$ is the sign of $A$. The ranges of the square root and the arccotangent both lie in $(-\pi/2, \pi/2]$. Equation 14.83 can be interpreted geometrically as follows: any linear distortion in the time/space–frequency plane can be realized as a rotation followed by scaling followed by shearing. This decomposition is important because it forms the basis of a fast and accurate algorithm for digitally computing arbitrary linear canonical transforms [76,119]. These algorithms compute LCTs with a performance similar to that of the fast Fourier transform (FFT) algorithm in computing the Fourier transform, both in terms of speed and accuracy. Further discussion of decompositions of the type of Equation 14.83 may be found in [4]. Other works on the computation of LCTs include [64,65]. Many of the elementary and operational properties of LCTs are collected in Section 14.9; they can be recognized as generalizations of the corresponding properties of the fractional Fourier transform.
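As a numerical illustration of the matrix formalism (a sketch; the specific matrix entries below are arbitrary choices, not values from the text), the following NumPy fragment checks the composition rule of Equation 14.72 on FRT rotation matrices and rebuilds a unit-determinant matrix from the decomposition of Equations 14.83 through 14.86:

```python
import numpy as np

def rot(alpha):
    """Rotation matrix of Eq. (14.79); FRT of order a corresponds to alpha = a*pi/2."""
    return np.array([[np.cos(alpha),  np.sin(alpha)],
                     [-np.sin(alpha), np.cos(alpha)]])

def chirp_mult(q):            # Eq. (14.76)
    return np.array([[1.0, 0.0], [-q, 1.0]])

def scaling(M):               # Eq. (14.75)
    return np.array([[M, 0.0], [0.0, 1.0 / M]])

# Eq. (14.72): applying M1 first and M2 next equals the single LCT with M2 @ M1.
# For FRT matrices this is additivity of the rotation angles.
lhs = rot(0.4 * np.pi / 2) @ rot(0.9 * np.pi / 2)
rhs = rot((0.4 + 0.9) * np.pi / 2)

# Eqs. (14.83)-(14.86): rebuild a unit-determinant matrix with A, B > 0
# (so the stated branch choices apply).
A, B, C = 1.5, 0.8, 0.3
D = (1 + B * C) / A                          # enforces AD - BC = 1
alpha = np.arctan2(B, A)                     # arccot(A/B) on the stated branch
M = np.sign(A) * np.sqrt(A**2 + B**2)
q = A / (B * (A**2 + B**2)) - D / B
M_rebuilt = chirp_mult(q) @ scaling(M) @ rot(alpha)
M_target = np.array([[A, B], [C, D]])
```

The reconstruction error of `M_rebuilt` against `M_target` is at the level of floating-point roundoff, confirming the decomposition term by term.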
14.9 Basic and Operational Properties of Linear Canonical Transforms

Here we present a list of the more important basic and operational properties of the LCTs. Readers can easily verify that the operational properties reduce to the corresponding property of the fractional Fourier transform when $M$ is the rotation matrix.

Linearity: Let $\mathcal{C}_M$ denote the linear canonical transform operator with parameter matrix $M$. Then $\mathcal{C}_M[\sum_k b_k f_k(u)] = \sum_k b_k [\mathcal{C}_M f_k(u)]$.

Inverse: $(\mathcal{C}_M)^{-1} = \mathcal{C}_{M^{-1}}$.

Unitarity: $(\mathcal{C}_M)^{-1} = (\mathcal{C}_M)^H = \mathcal{C}_{M^{-1}}$, where $(\cdot)^H$ denotes the conjugate transpose of the operator.
Associativity: $(\mathcal{C}_{M_1}\mathcal{C}_{M_2})\,\mathcal{C}_{M_3} = \mathcal{C}_{M_1}(\mathcal{C}_{M_2}\mathcal{C}_{M_3})$.
Eigenfunctions: Eigenfunctions of linear canonical transforms are discussed in [133].

Parseval: $\int f^*(u)\, g(u)\, du = \int f_M^*(u)\, g_M(u)\, du$. This property is equivalent to unitarity. Energy or norm conservation ($\mathrm{En}[f] = \mathrm{En}[f_M]$ or $\|f\| = \|f_M\|$) is a special case.

Time reversal: Let $\mathcal{P}$ denote the parity operator, $\mathcal{P}[f(u)] = f(-u)$; then

\[ \mathcal{C}_M \mathcal{P} = \mathcal{P}\, \mathcal{C}_M, \tag{14.87} \]
\[ \mathcal{C}_M[f(-u)] = f_M(-u). \tag{14.88} \]

Transform of a scaled function:

\[ \mathcal{C}_M[|K|^{-1/2} f(u/K)] = \mathcal{C}_{M'}[f(u)] = f_{M'}(u). \tag{14.89} \]

Here $M'$ is the matrix that corresponds to the parameters $\alpha' = \alpha$, $\beta' = K\beta$, and $\gamma' = K^2\gamma$.

Transform of a shifted function:

\[ \mathcal{C}_M[f(u - u_0)] = \exp[i\pi(2uu_0 C - u_0^2 AC)]\, f_M(u - Au_0). \tag{14.90} \]

Here $u_0$ is real.

Transform of a phase-shifted function:

\[ \mathcal{C}_M[\exp(i2\pi\mu_0 u) f(u)] = \exp[i\pi\mu_0 D(2u - \mu_0 B)]\, f_M(u - B\mu_0). \tag{14.91} \]

Here $\mu_0$ is real.

Transform of a coordinate-multiplied function:

\[ \mathcal{C}_M[u^n f(u)] = [Du - B(i2\pi)^{-1}\, d/du]^n f_M(u). \tag{14.92} \]

Here $n$ is a positive integer.

Transform of the derivative of a function:

\[ \mathcal{C}_M\big[[(i2\pi)^{-1}\, d/du]^n f(u)\big] = [-Cu + A(i2\pi)^{-1}\, d/du]^n f_M(u). \tag{14.93} \]

Here $n$ is a positive integer.

A few additional properties are

\[ \mathcal{C}_M[f^*(u)] = f^*_{M^{-1}}(u), \tag{14.94} \]
\[ \mathcal{C}_M[(f(u) + f(-u))/2] = (f_M(u) + f_M(-u))/2, \tag{14.95} \]
\[ \mathcal{C}_M[(f(u) - f(-u))/2] = (f_M(u) - f_M(-u))/2. \tag{14.96} \]

A function and its linear canonical transform satisfy an "uncertainty relation," stating that the product of the spreads of the two functions, as measured by their standard deviations, cannot be less than $|B|/4\pi$ [129].

14.10 Filtering in Fractional Fourier Domains

Filtering, as conventionally understood, involves taking the Fourier transform of a signal, multiplying it with a Fourier-domain transfer function, and inverse transforming the result (Figure 14.7a). Here, we consider filtering in fractional Fourier domains, where we take the fractional Fourier transform, apply a filter function in the fractional Fourier domain, and inverse transform to the original domain (Figure 14.7b). Formally, the filter output is written as

\[ f_{\mathrm{single}}(u) = [\mathcal{F}^{-a} \Lambda_h \mathcal{F}^{a}]\, f(u) = \mathcal{T}_{\mathrm{single}}\, f(u), \tag{14.97} \]

where $\mathcal{F}^a$ is the $a$th order fractional Fourier transform operator, $\Lambda_h$ denotes the operator corresponding to multiplication by the filter function $h(u)$, and $\mathcal{T}_{\mathrm{single}}$ is the operator representing the overall filtering configuration.

To understand the basic motivation for filtering in fractional Fourier domains, consider Figure 14.8, where the Wigner distributions of a desired signal and an undesired noise term are superimposed. We observe that the signal and noise overlap in both the 0th and 1st domains, but they do not overlap in the 0.5th domain (consider the projections onto the $u_0 = u$, $u_1 = \mu$, and $u_{0.5}$ axes). Although it is not possible to eliminate the noise in the time or frequency domains, we can eliminate it easily by using a simple amplitude mask in the 0.5th domain.

Fractional Fourier domain filtering can be applied to the problem of signal recovery or estimation from observations, where the signal to be recovered has been degraded by a known distortion or blur, and the observations are noisy. The problem is to reduce or eliminate these degradations and noise. The solution of such problems depends on the observation model and the prior knowledge available about the desired signal, degradation process, and noise. A commonly used observation model is

\[ g(u) = \int h_d(u, u')\, f(u')\, du' + n(u), \tag{14.98} \]

where $h_d(u, u')$ is the kernel of the linear system that distorts or blurs the desired signal $f(u)$, and $n(u)$ is an additive noise term. The problem is to find an estimation operator represented by the kernel $h(u, u')$, such that the estimated signal

\[ f_{\mathrm{est}}(u) = \int h(u, u')\, g(u')\, du' \tag{14.99} \]
optimizes some criteria.

FIGURE 14.7 (a) Filtering in the frequency domain; (b) filtering in the ath order fractional Fourier domain; (c) multi-stage (series) filtering; (d) multi-channel (parallel) filtering.

Despite its limitations, one of the most commonly used objectives is to minimize the mean square error $\sigma^2_{\mathrm{err}}$ defined as

\[ \sigma^2_{\mathrm{err}} = \left\langle \int |f_{\mathrm{est}}(u) - f(u)|^2\, du \right\rangle, \tag{14.100} \]
where the angle brackets denote an ensemble average. The estimation or recovery operator minimizing $\sigma^2_{\mathrm{err}}$ is known as the optimal Wiener filter. The kernel $h(u, u')$ of this optimal filter satisfies the following relation [87]:

\[ R_{fg}(u, u') = \int h(u, u'')\, R_{gg}(u'', u')\, du'' \quad \text{for all } u, u', \tag{14.101} \]

where $R_{fg}(u, u')$ is the statistical cross-correlation of $f(u)$ and $g(u)$, and $R_{gg}(u, u')$ is the statistical autocorrelation of $g(u)$.
FIGURE 14.8 Filtering in a fractional Fourier domain as observed in the time- or space-frequency plane; a = 0.5 as drawn. (From Ozaktas, H. M., et al., J. Opt. Soc. Am. A, 11:547–559, 1994. With permission.)
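In discrete form, with vectors in place of continuous signals and sample correlation matrices in place of the statistical correlations, Equation 14.101 becomes $R_{fg} = h\, R_{gg}$, so the optimal kernel is $h = R_{fg} R_{gg}^{-1}$. The following NumPy sketch illustrates this; the signal length, distortion kernel, noise level, and random seed are all arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 5000                                   # signal length, ensemble size

Hd = rng.standard_normal((N, N)) / np.sqrt(N)    # known distortion kernel h_d
f = rng.standard_normal((N, K))                  # ensemble of desired signals
g = Hd @ f + 0.1 * rng.standard_normal((N, K))   # noisy observations, cf. Eq. (14.98)

Rfg = f @ g.T / K                                # sample cross-correlation R_fg(u, u')
Rgg = g @ g.T / K                                # sample autocorrelation  R_gg(u, u')

h = Rfg @ np.linalg.inv(Rgg)                     # kernel satisfying Eq. (14.101)
f_est = h @ g                                    # estimates, cf. Eq. (14.99)
```

By construction `h @ Rgg` reproduces `Rfg` to machine precision, which is the discrete statement of Equation 14.101.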
In the general case $h_d(u, u')$ represents a time-varying system, and there is no fast algorithm for obtaining $f_{\mathrm{est}}(u)$. We can formulate the problem of obtaining an estimate $f_{\mathrm{est}}(u) = f_{\mathrm{single}}(u)$ of $f(u)$ by using the $a$th order fractional Fourier domain filtering configuration (Equation 14.97). As we will see in Section 14.13, the fractional Fourier transform can be efficiently computed with an $N \log N$ algorithm similar to the fast Fourier transform algorithm used to compute the ordinary Fourier transform. Therefore, the fractional Fourier transform can be implemented nearly as efficiently as the ordinary Fourier transform, and the cost of fractional Fourier domain filtering is approximately the same as the cost of ordinary Fourier domain filtering. The optimal multiplicative filter function $h(u)$ for a given order $a$ that minimizes the mean square error defined in Equation 14.100 for the filtering configuration represented by Equation 14.97 is given by [85]:

\[ h(u_a) = \frac{\iint K_a(u_a, u)\, K_a^*(u_a, u')\, R_{fg}(u, u')\, du'\, du}{\iint K_a(u_a, u)\, K_a^*(u_a, u')\, R_{gg}(u, u')\, du'\, du}, \tag{14.102} \]

where the statistical cross-correlation and autocorrelation functions $R_{fg}(u, u')$ and $R_{gg}(u, u')$ can be obtained from the functions $R_{ff}(u, u')$ and $R_{nn}(u, u')$, which are assumed to be known. The corresponding mean square error can be calculated from Equation 14.100 for different values of $a$, and the value of $a$ resulting in the smallest error can be determined.

Generalizations of the $a$th order fractional Fourier domain filtering configuration are the multistage (repeated or serial) and the multichannel (parallel) filtering configurations. These systems consist of M single-domain fractional Fourier filtering stages in series or in parallel (Figure 14.7). M = 1 corresponds to single-domain filtering in both cases. In the multistage system shown in Figure 14.7c, the input is first transformed into the $a_1$th domain, where it is multiplied by a filter $h_1(u)$. The result is then transformed back into the original domain, and the same process is repeated M times consecutively. This amounts to sequentially visiting the domains $a_1, a_2, a_3, \ldots$, and applying a filter in each. On the other hand, the multichannel system consists of M single-domain blocks in parallel (Figure 14.7d). For each channel $k$, the input is transformed to the $a_k$th domain, multiplied with a filter $h_k(u)$, and then transformed back. If these configurations are used to obtain an estimate $f_{\mathrm{ser}}(u)$ or $f_{\mathrm{par}}(u)$ of $f(u)$ in terms of $g(u)$, we have

\[ f_{\mathrm{ser}}(u) = [\mathcal{F}^{-a_M} \Lambda_{h_M} \mathcal{F}^{a_M}] \cdots [\mathcal{F}^{-a_2} \Lambda_{h_2} \mathcal{F}^{a_2}][\mathcal{F}^{-a_1} \Lambda_{h_1} \mathcal{F}^{a_1}]\, g(u) = \mathcal{T}_{\mathrm{ser}}\, g(u), \tag{14.103} \]

\[ f_{\mathrm{par}}(u) = \left[ \sum_{k=1}^{M} \mathcal{F}^{-a_k} \Lambda_{h_k} \mathcal{F}^{a_k} \right] g(u) = \mathcal{T}_{\mathrm{par}}\, g(u), \tag{14.104} \]
where $\mathcal{F}^{a_k}$ represents the $a_k$th order fractional Fourier transform operator, $\Lambda_{h_k}$ denotes the operator corresponding to multiplication by the filter function $h_k(u)$, and $\mathcal{T}_{\mathrm{ser}}$, $\mathcal{T}_{\mathrm{par}}$ are the operators representing the overall filtering configurations. Both of these equations reduce to Equation 14.97 for M = 1.

Multistage and multichannel filtering systems as described above are a subclass of the class of general linear systems whose input–output relation is given in Equation 14.99. Such linear systems have in general $N^2$ degrees of freedom, where $N$ is the time-bandwidth product of the signals. Obtaining the output from the input normally takes $O(N^2)$ time, unless the system kernel $h(u, u')$ has some special structure which can be exploited. Shift-invariant (time- or space-invariant) systems are also a
subclass of general linear systems, whose system kernels $h(u, u')$ can always be expressed in the form $h(u, u') = h(u - u')$. They are a restricted subclass with only $N$ degrees of freedom, but can be implemented in $N \log N$ time in the ordinary Fourier domain. We may think of shift-invariant systems and general linear systems as representing two extremes in a cost–performance trade-off. Shift-invariant systems exhibit low cost and low performance, whereas general linear systems exhibit high cost and high performance. Sometimes use of shift-invariant systems may be inadequate, but at the same time use of general linear systems may be an overkill and prohibitively costly. Multistage and multichannel fractional Fourier domain filtering configurations interpolate between these two extremes, offering greater flexibility in trading off between cost and performance. Both filtering configurations have at most $MN + M$ degrees of freedom. Their digital implementation will take $O(MN \log N)$ time, since the fractional Fourier transform can be implemented in $N \log N$ time. These configurations interpolate between general linear systems and shift-invariant systems both in terms of cost and flexibility. If we choose M to be small, cost and flexibility are both low; M = 1 corresponds to single-stage filtering. If we choose M to be larger, cost and flexibility are both higher; as M approaches N, the number of degrees of freedom approaches that of a general linear system. Increasing M allows us to better approximate a given linear system. For a given value of M, we can approximate this system with a certain degree of accuracy (or error). For instance, a shift-invariant system can be realized with perfect accuracy with M = 1. In general, there will be a finite accuracy for each value of M. As M is increased, the accuracy will usually increase (but never decrease).
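The structure of these configurations can be illustrated with a minimal NumPy sketch. Since the discrete FRT is only defined later (Section 14.12), the sketch restricts the orders to the integer values $a_1 = 0$ and $a_2 = 1$, for which the identity and the ordinary unitary DFT matrix can stand in for the fractional transforms; all numerical values are arbitrary:

```python
import numpy as np

# Multichannel configuration of Eq. (14.104) with M = 2 integer orders:
# a1 = 0 (time-domain mask) and a2 = 1 (frequency-domain mask).
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary DFT (order 1)
I = np.eye(N)                                               # order-0 transform

rng = np.random.default_rng(2)
h1 = rng.standard_normal(N)       # filter applied in the time domain
h2 = rng.standard_normal(N)       # filter applied in the frequency domain

# T_par = sum_k F^{-a_k} Lambda_{h_k} F^{a_k}
T_par = I @ np.diag(h1) @ I + F.conj().T @ np.diag(h2) @ F

g = rng.standard_normal(N)
f_par = T_par @ g

# With M = 1 (first channel only) this reduces to Eq. (14.97):
# plain multiplicative filtering in the chosen single domain.
f_single = h1 * g
```

The frequency-domain channel is exactly FFT-based multiplicative filtering, which is what makes the $O(MN \log N)$ cost figure above plausible.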
In dealing with a specific application, we can seek the minimum value of M which results in the desired accuracy, or the highest accuracy that can be achieved for a given M. Thus these systems give us considerable freedom in trading off efficiency and accuracy, enabling us to seek the best performance for a given cost, or the least cost for a given performance. In a given application, this flexibility may allow us to realize a system which is acceptable in terms of both cost and performance. The cost–accuracy trade-off is illustrated in Figure 14.9, where we have plotted both the cost and the error as functions of the number of filters M for a hypothetical application. The two plots show how the cost increases and the error decreases as we increase M. Eliminating M from these two graphs leads us to a graph of error versus cost.

FIGURE 14.9 (a) Cost versus M; (b) error versus M; (c) error versus cost. (From Kutay, M. A., PhD thesis, Bilkent University, Ankara, 1999; Ozaktas, H. M., et al., The Fractional Fourier Transform with Applications in Optics and Signal Processing. John Wiley & Sons, New York, 2001. With permission.)

The multistage and multichannel configurations may be further extended to generalized filtering configurations, or generalized filter circuits, where we combine the serial and parallel filtering configurations in an arbitrary manner (Figure 14.10).

FIGURE 14.10 Generalized filter circuits; each block is of the form $\mathcal{F}^{-a_k} \Lambda_{h_k} \mathcal{F}^{a_k}$.

Having discussed quite generally the subject of filtering in fractional Fourier domains, we now discuss the closely related concepts of fractional convolution and fractional multiplication [108,117]. The convolution of two signals $h$ and $f$ in the $a$th fractional Fourier domain is defined such that their $a$th order fractional Fourier domain representations $h_a(u_a)$ and $f_a(u_a)$ are convolved to give the corresponding representation of some new signal $g$:

\[ g_a(u_a) = h_a(u_a) * f_a(u_a), \tag{14.105} \]

where $*$ denotes ordinary convolution. Likewise, multiplication of two signals in the $a$th fractional Fourier domain is defined as

\[ g_a(u_a) = h_a(u_a)\, f_a(u_a). \tag{14.106} \]

Of course, convolution (or multiplication) in the a = 0th domain is ordinary convolution (or multiplication), and convolution (or multiplication) in the a = 1st domain is ordinary multiplication (or convolution). More generally, convolution (or multiplication) in the $a$th domain is multiplication (or convolution) in the $(a-1)$th domain (which is orthogonal to the $a$th domain), and convolution (or multiplication) in the $a$th domain is again convolution (or multiplication) in the $(a-2)$th domain (the sign-flipped version of the $a$th domain). Convolution or multiplication in an arbitrary $a$th domain is an operation "interpolating" between the ordinary convolution and multiplication operations [129]. In light of these definitions, filtering in the $a$th fractional Fourier domain corresponds to the multiplication of two signals in the $a$th fractional Fourier domain, or equivalently the convolution of two signals in the $(a-1)$th fractional Fourier domain.

14.11 Fractional Fourier Domain Decompositions

The fractional Fourier domain decomposition (FFDD) [86] is closely related to multichannel filtering and is analogous to the singular-value decomposition (SVD) in linear algebra [68,154]. The SVD of an arbitrary $N_{\mathrm{out}} \times N_{\mathrm{in}}$ complex matrix $\mathbf{H}$ is

\[ \mathbf{H}_{N_{\mathrm{out}} \times N_{\mathrm{in}}} = \mathbf{U}_{N_{\mathrm{out}} \times N_{\mathrm{out}}}\, \boldsymbol{\Sigma}_{N_{\mathrm{out}} \times N_{\mathrm{in}}}\, \mathbf{V}^H_{N_{\mathrm{in}} \times N_{\mathrm{in}}}, \tag{14.107} \]

where $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices whose columns are the eigenvectors of $\mathbf{H}\mathbf{H}^H$ and $\mathbf{H}^H\mathbf{H}$, respectively. The superscript $H$ denotes Hermitian transpose. $\boldsymbol{\Sigma}$ is a diagonal matrix whose elements $\lambda_k$ (the singular values) are the nonnegative square roots of the eigenvalues of $\mathbf{H}\mathbf{H}^H$ and $\mathbf{H}^H\mathbf{H}$. The number of strictly positive singular values is equal to the rank $R$ of $\mathbf{H}$. The SVD can also be written in the form of an outer product (or spectral) expansion

\[ \mathbf{H} = \sum_{k=1}^{R} \lambda_k\, \mathbf{u}_k \mathbf{v}_k^H, \tag{14.108} \]

where $\mathbf{u}_k$ and $\mathbf{v}_k$ are the columns of $\mathbf{U}$ and $\mathbf{V}$. It is common to assume that the $\lambda_k$ are ordered in decreasing value.

Let $\mathbf{F}^a_N$ denote the $N$-point $a$th order discrete fractional Fourier transform matrix. The discrete fractional Fourier transform will be defined in Section 14.12. For the purpose of this section, it will suffice to think of this transform in analogy with the ordinary discrete Fourier transform. The discrete Fourier transform of a discrete signal represented by a vector of length $N$ can be obtained by multiplying the vector by the $N$-point discrete Fourier transform matrix $\mathbf{F}_N$. Likewise, the $a$th order discrete fractional Fourier transform of a vector is obtained by multiplying it by $\mathbf{F}^a_N$. The discrete transforms can be used to approximately compute the continuous transforms. The columns of the inverse discrete fractional Fourier transform matrix $\mathbf{F}^{-a}_N$ constitute an orthonormal basis for the $a$th
domain, just as the columns of the identity matrix constitute a basis for the time domain and the columns of the ordinary inverse DFT matrix constitute a basis for the frequency domain.

Now, let $\mathbf{H}$ be a complex $N_{\mathrm{out}} \times N_{\mathrm{in}}$ matrix and $\{a_1, a_2, \ldots, a_N\}$ a set of $N = \max(N_{\mathrm{out}}, N_{\mathrm{in}})$ distinct real numbers such that $-1 < a_1 < a_2 < \cdots < a_N \le 1$. For instance, the $a_k$ may be chosen uniformly spaced in this interval. We define the FFDD of $\mathbf{H}$ as [86]

\[ \mathbf{H}_{N_{\mathrm{out}} \times N_{\mathrm{in}}} = \sum_{k=1}^{N} \mathbf{F}^{-a_k}_{N_{\mathrm{out}}}\, (\Lambda_{h_k})_{N_{\mathrm{out}} \times N_{\mathrm{in}}}\, \mathbf{F}^{a_k}_{N_{\mathrm{in}}}, \tag{14.109} \]

where the $\Lambda_{h_k}$ are $N_{\mathrm{out}} \times N_{\mathrm{in}}$ diagonal matrices with $N' = \min(N_{\mathrm{out}}, N_{\mathrm{in}})$ complex elements. Starting from the upper left corner, the $l$th diagonal element of $\Lambda_{h_k}$ is denoted $h_{kl}$, $l = 1, 2, \ldots, N'$ (the $l$th element of the column vector $\mathbf{h}_k$). When $\mathbf{H}$ is Hermitian (skew-Hermitian), $\mathbf{h}_k$ is real (imaginary). We also recall that $\mathbf{F}^{-a_k}_{N_{\mathrm{in}}} = (\mathbf{F}^{a_k}_{N_{\mathrm{in}}})^{-1}$. The FFDD always exists and is unique [129].

If we compare one term in the summation on the right-hand side of Equation 14.109 with the right-hand side of Equation 14.107, we see that they are similar in that they both consist of three factors of corresponding dimensionality, the first and third being unitary matrices and the second being a diagonal matrix. Whereas the columns of $\mathbf{U}$ and $\mathbf{V}$ constitute orthonormal bases specific to $\mathbf{H}$, the columns of $\mathbf{F}^{-a_k}_{N_{\mathrm{out}}}$ and $\mathbf{F}^{-a_k}_{N_{\mathrm{in}}}$ constitute orthonormal bases for the $a_k$th fractional Fourier domain. Customization of the FFDD is achieved through the coefficients $h_{kl}$ and/or perhaps also the orders $a_k$. When $\mathbf{H}$ is a square matrix of dimension $N$, the FFDD becomes

\[ \mathbf{H} = \sum_{k=1}^{N} \mathbf{F}^{-a_k}\, \Lambda_{h_k}\, (\mathbf{F}^{-a_k})^H, \tag{14.110} \]
where all matrices are of dimension N. The continuous counterpart of the FFDD is similar to this equation, with the summation being replaced by an integral over a [167]. Equation 14.109 represents a decomposition of a matrix $\mathbf{H}$ into N terms. Each term corresponds to filtering in the $a_k$th fractional Fourier domain (see Equation 14.97). All terms taken together, the FFDD can be interpreted as the decomposition of a matrix into fractional Fourier domain filters of different orders. An arbitrary matrix $\mathbf{H}$ will in general not correspond to multiplicative filtering in the time or frequency domain or in any other single fractional Fourier domain. However, $\mathbf{H}$ can always be expressed as a combination of filtering operations in different fractional domains. A sufficient number of different-ordered fractional Fourier domain filtering operations "span" the space of all linear operations. The fundamental importance of the FFDD is that it shows how an arbitrary linear system can be decomposed into this complete set of domains in the time-frequency plane.
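The SVD expansion of Equation 14.108, on which the FFDD is patterned, can be verified directly with NumPy (the matrix sizes and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))

# Full SVD: H = U Sigma V^H, cf. Eq. (14.107).
U, s, Vh = np.linalg.svd(H)

# Outer-product expansion of Eq. (14.108): H = sum_k lambda_k u_k v_k^H.
# For a random H all min(4, 6) singular values are strictly positive
# (with probability one), so the rank R equals len(s).
H_rebuilt = sum(s[k] * np.outer(U[:, k], Vh[k, :]) for k in range(len(s)))
```

Note that `Vh[k, :]` is already the conjugated row $\mathbf{v}_k^H$, so a plain outer product suffices.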
Truncating some of the singular values in the SVD of $\mathbf{H}$ has many applications [68,154]. Similarly, we can eliminate domains for which the coefficients $h_{k1}, h_{k2}, \ldots, h_{kN'}$ are small. This procedure, which we refer to as pruning the FFDD, is the counterpart of truncating the SVD. An alternative to this procedure will be referred to as sparsening, in which one simply employs a more coarsely spaced set of domains. In any event, the resulting smaller number of domains will be denoted by M < N. The upper limit of the summation in Equation 14.109 is replaced by M and the equality is replaced by approximate equality. The equation $\mathbf{H} = \mathbf{P}\mathbf{h}$ is likewise replaced by $\mathbf{H} \approx \mathbf{P}\mathbf{h}$. If we solve this in the least-squares sense, minimizing $\|\mathbf{H} - \mathbf{P}\mathbf{h}\|$, we can find the filter coefficients resulting in the best M-domain approximation to $\mathbf{H}$. (This procedure amounts to projecting $\mathbf{H}$ onto the subspace spanned by the $MN'$ basis matrices, which now do not span the whole space.) The correspondence between the pruned FFDD and multichannel filtering configurations is evident; it is possible to interpret multichannel filtering configurations as pruned FFDDs. These concepts have found application to image compression [166].
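The least-squares fit behind pruning can be sketched as follows. As a stand-in for the discrete fractional Fourier transform matrices (defined in Section 14.12), the sketch uses integer-order DFT powers; the orders, size, and target matrix are arbitrary assumptions. Each basis matrix $\mathbf{F}^{-a} \mathbf{E}_{ll} \mathbf{F}^{a}$ carries one filter coefficient:

```python
import numpy as np

N = 4
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary DFT

orders = [0, 1]                      # M = 2 retained domains (time and frequency)
rng = np.random.default_rng(4)
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Basis matrices F^{-a} E_ll F^{a}, one per (domain, diagonal position).
basis = []
for a in orders:
    Fa = np.linalg.matrix_power(F, a)
    Fa_inv = Fa.conj().T
    for l in range(N):
        E = np.zeros((N, N))
        E[l, l] = 1.0
        basis.append(Fa_inv @ E @ Fa)

# Least-squares solve: min || vec(H) - P h ||  over the M*N coefficients h.
P = np.stack([b.ravel() for b in basis], axis=1)            # N^2 x (M*N)
h, *_ = np.linalg.lstsq(P, H.ravel(), rcond=None)

H_approx = (P @ h).reshape(N, N)
err = np.linalg.norm(H - H_approx)   # best M-domain approximation error
```

The fitted `H_approx` is the projection of H onto the span of the basis matrices, so the residual is orthogonal to every basis matrix.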
14.12 Discrete Fractional Fourier Transforms

Ideally, a discrete version of a transform should exhibit a high level of analogy with its continuous counterpart. This analogy should include basic structural similarity and analogy of operational properties. Furthermore, it is desirable for the discrete transform to usefully approximate the samples of the continuous transform, so that it can provide a basis for digital computation of the continuous transform. The following can be posed as a minimal set of properties that we would like to see in a definition of the discrete fractional Fourier transform (DFRT):

1. Unitarity
2. Index additivity
3. Reduction to the ordinary discrete Fourier transform (DFT) when a = 1
4. Approximation of the samples of the continuous FRT

Several definitions of the DFRT have been proposed in the literature. Some of these correspond to totally distinct continuous transforms. For example, one proposal was based on the power series expansion of the DFT matrix and employed the Cayley–Hamilton theorem [147]. If we let $\mathbf{F}^a$ be the $N \times N$ matrix representing the discrete fractional Fourier transform, this definition can be stated as follows:

\[ \mathbf{F}^a = \sum_{n=0}^{3} \exp\!\left[\tfrac{3}{4} j\pi (n - a)\right] \frac{\sin[\pi(n-a)]}{4 \sin\!\left[\tfrac{1}{4}\pi(n-a)\right]}\, \mathbf{F}^n, \tag{14.111} \]

where $\mathbf{F}^n$ is the $n$th (integer) power of the DFT matrix. This definition satisfies all the desired properties listed above, except the critical fourth one: it cannot be used to approximate the
samples of the continuous fractional Fourier transform, which is the subject of this chapter. Rather, it corresponds to the continuous fractional Fourier transform based on principal powers of the eigenvalues discussed in Equation 14.9. In the rest of this section, we focus on discrete fractional Fourier transforms that correspond to the continuous FRT defined in this chapter.

The main task is to first find an eigenvector set of the DFT matrix which can serve as discrete versions of the Hermite–Gaussian functions. Such Hermite–Gaussian vectors have been defined in [26] based on [136]. It can be shown [26] that as $h \to 0$ the difference equation

\[ \frac{f(u+h) - 2f(u) + f(u-h)}{h^2} + \frac{2[\cos(2\pi h u) - 1]}{h^2}\, f(u) = \lambda f(u) \tag{14.112} \]

approximates the Hermite–Gaussian generating differential equation

\[ \frac{d^2 f(t)}{dt^2} - 4\pi^2 t^2 f(t) = \lambda f(t). \tag{14.113} \]

When $h = 1/\sqrt{N}$, the difference equation (Equation 14.112) has periodic coefficients. Therefore the solutions of the difference equation are also periodic and can be written as the eigenvectors of the following matrix, denoted by $\mathbf{S}$:

\[ \mathbf{S} = \begin{bmatrix}
2 & 1 & 0 & \cdots & 0 & 1 \\
1 & 2\cos(2\pi/N) & 1 & \cdots & 0 & 0 \\
0 & 1 & 2\cos(2\pi \cdot 2/N) & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 0 & 0 & \cdots & 1 & 2\cos(2\pi(N-1)/N)
\end{bmatrix}. \tag{14.114} \]
In other words, the difference equation can be written as $\mathbf{S}\mathbf{f} = \lambda \mathbf{f}$. It can also be shown that $\mathbf{S}$ commutes with the DFT matrix. Since two commuting matrices share a common eigenvector set [154], the eigenvectors of $\mathbf{S}$ are also eigenvectors of the DFT matrix. Thus the eigenvectors of $\mathbf{S}$ constitute an orthogonal eigenvector set of the DFT matrix which are analogous to, and which approximate, the Hermite–Gaussian functions. Further details, such as the distinctness of the eigenvectors and the enumeration of the eigenvectors with respect to the continuous Hermite–Gaussian functions, are discussed in [26].

Having obtained an appropriate set of eigenvectors, the discrete fractional Fourier transform matrix can now be defined as follows:

\[ \mathbf{F}^a = \begin{cases} \displaystyle\sum_{k=0,\; k\neq N-1}^{N} \mathbf{u}_k\, e^{-j\frac{\pi}{2}ka}\, \mathbf{u}_k^T, & \text{when } N \text{ is even} \\[2ex] \displaystyle\sum_{k=0,\; k\neq N}^{N} \mathbf{u}_k\, e^{-j\frac{\pi}{2}ka}\, \mathbf{u}_k^T, & \text{when } N \text{ is odd} \end{cases} \tag{14.115} \]

where $\mathbf{u}_k$ corresponds to the eigenvector of the $\mathbf{S}$ matrix with $k$ zero-crossings [26]. The necessity of separately writing the summation in Equation 14.115 for even and odd dimensions $N$ is a consequence of the eigenvalue multiplicity of the ordinary DFT matrix [26]. This definition of the fractional DFT satisfies all four of the desirable properties we set out at the beginning. A complementary perspective to this line of development may be found in [13].

A MATLAB routine "dFRT" for the calculation of the discrete fractional Fourier transform matrix defined above is available [25]. The following steps show how to use the routine to compute and plot the samples of the ath order FRT of a continuous function f(u):

1. h = 1/sqrt(N); tsamples = (-N/2*h) : h : (N/2 - 1)*h;
2. f0 = f(tsamples);
3. f0shifted = fftshift(f0);
4. Fa = dFRT(N, a, order); {order can be any number in [2, N-1]}
5. fashifted = Fa*f0shifted;
6. fa = fftshift(fashifted);
7. plot(tsamples, fa);
The "fftshift" operations are needed since the DFRT matrix follows the well-known circular indexing rule of the DFT matrix. Normally the approximation "order" is set to 2; higher values correspond to higher-order approximations to the continuous transform than have been discussed here. The approximation order should not be confused with the fractional Fourier transform order a. Figure 14.11 compares the N = 64 samples calculated with this routine with the continuous fractional Fourier transform of the example function f(u) = sin(2πu) rect(u). This function can be interpreted as the windowed version of a single period of the sine waveform between −0.5 and 0.5. As can be seen, the discrete transform fairly closely approximates the continuous one. A number of other definitions of the discrete FRT which are still compatible with the continuous FRT discussed in this chapter have been proposed. In [138], the authors start with vectors formed by sampling the continuous Hermite–Gaussian functions. These are neither orthogonal nor eigenvectors of the DFT matrix. The authors orthogonalize these through a Gram–Schmidt process involving the S matrix. These orthogonal vectors are then used to define the discrete fractional Fourier transform. We find this method less desirable in that it is based on a numerical rather than an analytical approach. The approach of [101] is similar, but here the eigenvectors of the DFT matrix are constructed by sampling periodically replicated versions of the Hermite–Gaussian functions (which are not orthogonal either). In [12] another finite-dimensional approximation to the Fourier transform, similar to the DFT, is proposed. This transform has strong connections with the Fourier transform within a group-theoretical framework [165]. Furthermore, analytical expressions for the transform can be written in terms of the so-called Kravchuk polynomials, which are known to approximate the Hermite polynomials. A major disadvantage of this approach
[FIGURE 14.11 Approximation of the continuous fractional Fourier transform of f(u) = sin(2πu) rect(u) with the discrete fractional Fourier transform. Four panels plot |f_a(u)| for orders a = 0.1, 0.4, 0.7, and 1.]
is that the discrete FRT thus defined does not reduce to the ordinary DFT when a = 1. In [26,61,135,148] yet other definitions are proposed based on the commuting-matrices approach already discussed in relation to the S matrix. These matrices can be interpreted as higher-order approximation matrices which can be used to obtain increasingly accurate approximations to the continuous transform. A comparison of such matrices is given in [27].
14.13 Digital Computation of the Fractional Fourier Transform

The FRT of a continuous function whose time- or space-bandwidth product is N can be computed in the order of N log N time [115], similar to the ordinary Fourier transform. Therefore, if in some application any improvements can be obtained by using the FRT instead of the ordinary Fourier transform, these improvements come at no additional cost. The following formula allows one to compute the samples of the fractional Fourier transform f_a(u) of a function f(u), in terms of the samples of f(u), in N log N time, where N = Δu², under the assumption that the Wigner distribution of f(u) is approximately confined to a circle of diameter Δu:
$$
f_a\!\left(\frac{k}{2\Delta u}\right) = \frac{A_\alpha}{2\Delta u}\, e^{i\pi(\cot\alpha - \csc\alpha)(k/2\Delta u)^2} \sum_{l=-N}^{N-1} e^{i\pi\csc\alpha\,((k-l)/2\Delta u)^2}\, e^{i\pi(\cot\alpha - \csc\alpha)(l/2\Delta u)^2}\, f\!\left(\frac{l}{2\Delta u}\right). \tag{14.116}
$$

The summation is recognizable as a convolution, which can be computed in N log N time by using the fast Fourier transform (FFT). The result is then obtained by a final chirp multiplication. The overall procedure takes N log N time. A MATLAB code based on this formula may be found in [79]. A broader discussion of computational issues may be found in [115]. Note that this method is distinct from that discussed in Section 14.12. There, the discrete fractional Fourier transform was defined. The samples of the fractional Fourier transform of a function are then found by multiplying the discrete fractional Fourier transform matrix with the sample vector of the function to be transformed. Since a method for calculating this matrix product in N log N time is presently not available, the operation will take N² time. The approach in this section does not involve a definition of the discrete fractional Fourier transform, and can be viewed as a method to numerically compute the fractional Fourier transform integral.
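The chirp-multiply, chirp-convolve, chirp-multiply structure of Equation 14.116 can be sketched as follows. This Python/NumPy sketch is not the cited MATLAB code [79]; the function names `frt_fft` and `frt_direct` are illustrative, and the A_α factor is taken here as e^{−i(π sgn(sin α)/4 − α/2)}/|sin α|^{1/2}, one common convention that should be checked against the chapter's definition.

```python
import numpy as np

def _A(alpha):
    # One common convention for the A_alpha factor (an assumption here).
    return np.exp(-1j * (np.pi * np.sign(np.sin(alpha)) / 4 - alpha / 2)) \
           / np.sqrt(abs(np.sin(alpha)))

def frt_fft(f, a):
    """Eq. (14.116) with the inner sum done as a linear convolution with a
    chirp via FFTs: the N log N path."""
    M = len(f)                       # samples at l = -N, ..., N-1  (M = 2N)
    N = M // 2
    alpha = a * np.pi / 2
    du = np.sqrt(N)                  # Delta u, so that N = du**2
    ct, cs = 1 / np.tan(alpha), 1 / np.sin(alpha)
    k = np.arange(-N, N)
    chirp = np.exp(1j * np.pi * (ct - cs) * (k / (2 * du)) ** 2)
    g = chirp * np.asarray(f, complex)          # first chirp multiplication
    m = np.arange(-(M - 1), M)                  # support of the chirp kernel
    h = np.exp(1j * np.pi * cs * (m / (2 * du)) ** 2)
    L = len(g) + len(h) - 1                     # zero-padded linear convolution
    conv = np.fft.ifft(np.fft.fft(g, L) * np.fft.fft(h, L))
    core = conv[M - 1 : 2 * M - 1]              # entries where k = -N..N-1
    return (_A(alpha) / (2 * du)) * chirp * core  # final chirp multiplication

def frt_direct(f, a):
    """Direct O(N^2) evaluation of Eq. (14.116), for checking frt_fft."""
    M = len(f); N = M // 2
    alpha = a * np.pi / 2
    du = np.sqrt(N)
    ct, cs = 1 / np.tan(alpha), 1 / np.sin(alpha)
    k = np.arange(-N, N)
    out = np.empty(M, complex)
    for i, kk in enumerate(k):
        acc = 0j
        for j, ll in enumerate(k):
            acc += np.exp(1j * np.pi * cs * ((kk - ll) / (2 * du)) ** 2) \
                 * np.exp(1j * np.pi * (ct - cs) * (ll / (2 * du)) ** 2) * f[j]
        out[i] = (_A(alpha) / (2 * du)) \
                 * np.exp(1j * np.pi * (ct - cs) * (kk / (2 * du)) ** 2) * acc
    return out
```

Since the FFT path and the direct double sum evaluate the same expression, they agree to machine precision. For orders a near 0 or 2, sin α → 0 and the formula cannot be evaluated directly; those cases are handled through the identities f_0(u) = f(u) and f_2(u) = f(−u).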
Fractional Fourier Transform
14.14 Applications

The purpose of this section is to highlight some of the applications of the fractional Fourier transform which have received greater interest so far. The reader may consult [23,122,123,129] for further references. The fractional Fourier transform is of potential usefulness in every area in which the ordinary Fourier transform is used. The typical pattern of discovery of a new application is to concentrate on an application where the ordinary Fourier transform is used and ask whether any improvement or generalization might be possible by using the fractional Fourier transform instead. The additional order parameter often allows better performance or greater generality because it provides an additional degree of freedom over which to optimize. Typically, improvements are observed or are greater when dealing with time/space-variant signals or systems. Furthermore, very large degrees of improvement often become possible when signals of a chirped nature, or with nearly linearly increasing frequencies, are in question, since chirp signals are the basis functions associated with the fractional Fourier transform (just as harmonic functions are the basis functions associated with the ordinary Fourier transform). Fractional Fourier transforms are also of special use when dealing with integral transforms whose kernels are of quadratic-exponential type, the diffraction integral being the most common example.
14.14.1 Applications in Signal and Image Processing

The FRT has found widespread application in signal and image processing, some of which are reviewed here (also see [157]). One of the most striking applications is that of filtering in fractional Fourier domains, whose foundations have been discussed in Section 14.10 [117]. In traditional filtering, one takes the Fourier transform of a signal, multiplies it with a Fourier-domain transfer function, and inverse transforms the result. Here, one takes the fractional Fourier transform, applies a filter function in the fractional Fourier domain, and inverse transforms to the original domain. It has been shown that considerable improvement in performance is possible by exploiting the additional degree of freedom coming from the order parameter a. This improvement comes at no additional cost since computing the fractional Fourier transform is not more expensive than computing the ordinary Fourier transform (Section 14.13). The concept has been generalized to multistage and multichannel filtering systems which employ several fractional Fourier domain filters of different orders [81,82]. These schemes provide flexible and cost-efficient means of designing time/space-variant filtering systems to meet desired objectives. Fractional Fourier domain filtering has been useful in optical signal separation [43] and signal and image recovery and restoration in the presence of time/space-varying distortions such as space-varying blurs and nonstationary noise, with application to compensation of nonconstant velocity camera motion and atmospheric turbulence [48,50,83,85].
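The transform–mask–inverse-transform structure of fractional-domain filtering can be sketched as follows, here using a discrete FRT matrix built from the S-matrix eigenvectors of Section 14.12 (a simplified sketch: eigenvectors are ordered by eigenvalue of S rather than by zero-crossing count, and the function names `dfrt` and `fractional_filter` are illustrative):

```python
import numpy as np

def dfrt(N, a):
    """Discrete FRT matrix sketch from the S-matrix eigenvectors
    (eigenvalue ordering; a simplification of Eq. 14.115)."""
    k = np.arange(N)
    S = np.diag(2.0 * np.cos(2 * np.pi * k / N)) + np.eye(N, k=1) + np.eye(N, k=-1)
    S[0, -1] = S[-1, 0] = 1.0
    _, U = np.linalg.eigh(S)
    U = U[:, ::-1]
    return (U * np.exp(-1j * np.pi / 2 * k * a)) @ U.T

def fractional_filter(x, a, mask):
    """Filtering in the a-th fractional Fourier domain: transform,
    multiply by the domain filter, transform back to the original domain."""
    N = len(x)
    Fa = dfrt(N, a)
    Fa_inv = dfrt(N, -a)          # same eigenbasis with negated order
    return Fa_inv @ (mask * (Fa @ x))

# With an all-pass mask the signal is recovered exactly (Fa is unitary):
x = np.random.default_rng(1).standard_normal(32)
y = fractional_filter(x, 0.6, np.ones(32))
print(np.allclose(y, x))   # True
```

In practice the mask is chosen to suppress the region of the a-th domain where a distortion or chirp-like noise term is concentrated, which is where the extra degree of freedom over ordinary Fourier-domain filtering comes from.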
The FRT has also found many applications in pattern recognition and detection. Correlation is the underlying operation in matched filtering, which is used to detect signals. Fractional correlation has been defined in a number of different ways [71,92,103,180]. It has been shown how to control the degree of shift-invariance by adjusting the order a, which in turn allows one to design systems which detect objects within a certain region but reject them otherwise [92]. Joint-transform correlation is a well-known optical correlation technique, whose fractional version has received considerable attention [78,90]. The FRT has been studied as a preprocessing unit for neural network object recognition [14]. Some other applications in the pattern recognition area are face recognition [72] and realization and improvement of navigational tasks [149]. The windowed fractional Fourier transform has been studied in [29,46,104]. The possibility of changing the fractional order as the window is moved and/or choosing different orders in the two dimensions makes this a very flexible tool suited for various pattern recognition tasks, such as fingerprint recognition [171] or detection of targets in specific locations [59]. A review of applications of the FRT to pattern recognition as of 1998 is presented in [105]. The FRT has found a number of applications in radar signal processing. In [2], detection of linear frequency modulated signals is studied. In [69], radar return transients are analyzed in fractional domains. In [35,155], detection of moving targets for airborne radar systems is studied. In [11,10], synthetic aperture radar image reconstruction algorithms have been developed using the fractional Fourier transform.
The transform has found application to interpolation [53,150] and superresolution of multidimensional signals [32,62,151], phase retrieval from two or more intensity measurements [6,7,41,54,55], system and transform synthesis [50], processing of chirplets [22], signal and image compression [117,162,166], watermarking [40,112], speech processing [176], acoustic signal processing [60,173], ultrasound imaging [17], and antenna beamforming [168]. A large number of publications discuss the application of the FRT to encryption; for instance, see [33,63,66,111,161].
14.14.2 Applications in Communications

The FRT has found applications in spread spectrum communications systems [1], multicarrier communications systems [97], the processing of time-varying channels [110], and beamforming for next generation wireless communication systems [75]. The concept of multiplexing in fractional Fourier domains, which generalizes time-domain and frequency-domain multiplexing, has been proposed in [117].
14.14.3 Applications in Optics and Wave Propagation

The fractional Fourier transform has received a great deal of interest in the area of optics and especially optical signal
processing (also known as Fourier optics or information optics) [5,18,91,118,127,129,139,159]. Optical signal processing is an analog signal processing method which relies on the representation of signals by light fields and their manipulation with optical elements such as lenses, prisms, transparencies, holograms, and so forth. Its key component is the optical Fourier transformer, which can be realized using one or two lenses separated by certain distances from the input and output planes. It has been shown that the fractional Fourier transform can be optically implemented as easily as the ordinary Fourier transform [88,124,127,146], allowing a generalization of conventional approaches and results to their more flexible or general fractional counterparts. The fractional Fourier transform has also been shown to be intimately related to wave and beam propagation and diffraction. The process of diffraction of light in free space (or any other disturbance satisfying a similar wave equation) has been shown to be nothing but a process of continual fractional Fourier transformation; the distribution of light becomes fractional Fourier transformed as it propagates, evolving through continuously increasing orders [118,126,127,139]. More generally, it is well known that a rather broad class of optical systems can be modeled as linear canonical transforms, which were discussed in Section 14.8 [15,129]. These include optical systems consisting of arbitrary concatenations of thin lenses and sections of free space, as well as sections of quadratic graded-index media. It has been shown that all such systems can be expressed in the form of a fractional Fourier transform operation followed by appropriate scaling and a residual chirp factor, which can be interpreted as a change in the radius of curvature of the output plane (Equation 14.83) [118,119].
Therefore, all such optical systems can be interpreted as fractional Fourier transformers [16,118,127], and the propagation of light through such systems can be viewed as a process of continual fractional Fourier transformation with the fractional transform order monotonically increasing as light propagates through the system. The case of free-space optical diffraction in the Fresnel approximation, discussed in the previous paragraph, is a special case of this more general result, and rests on expressing the Fresnel integral in terms of the FRT. Similar results hold for other wave and beam propagation modalities governed by a wave equation similar to the optical wave equation, or by an equation similar to that of the quantum-mechanical harmonic oscillator, including electromagnetic and acoustic waves [47]. As noted above, the fractional Fourier transform plays a central role in the study of optical systems consisting of arbitrary sequences of lenses. Also of interest are systems in which thin optical filters (masks) are inserted at various points along the optical axis. Such systems can be modeled as multistage fractional Fourier domain filtering systems with multiplicative filters inserted between fractional Fourier transform stages, which were discussed in Section 14.10. The fractional Fourier transform has also found application in the study of laser resonators and laser beams. The order of the fractional transform has been shown to be proportional to the
Gouy phase shift accumulated during Gaussian beam propagation [49,126] and also to be related to laser resonator stability [126,140,179]. Other laser applications have also been reported [94]. The FRT has also found use in increasing the resolution of low-resolution wave fields [32], optical phase retrieval from two or more intensity measurements [41,54,55], coherent and partially coherent wave field reconstruction using phase-space tomography [99,144,145], optical beam characterization and shaping [3,38,44,172,177], synthesis of mutual intensity functions [52], and the study of partially coherent light [20,24,51,153,160,163]. It has found further use in quantum optics [170], studies of the human eye [141,142], lens design problems [42], diffractive optics [58,158], optical profilometry [181], speckle photography [131] and metrology [73], holographic interferometry [152], holographic data storage [70], digital holography [34,36,178], holographic three-dimensional television [113,114], temporal pulse processing [21,45,89], solitons [39], and fiber Bragg gratings [98].
14.14.4 Other Applications

The fractional Fourier transform has found several other applications not falling under the above categories. We discuss some of these here. The FRT has been employed in quantum mechanics [56,57,93,96]. It has been shown that certain kinds of time-varying second-order differential equations (with nonconstant coefficients) can be solved by exploiting the additional degree of freedom associated with the fractional order parameter a [74,100,109]. Based on the relationship of the fractional Fourier transform to harmonic oscillation (Section 14.2), it may be expected to play an important role in the study of vibrating systems [84]. It has so far received only limited attention in the area of control theory and systems [28], but we believe it has considerable potential for use in this field. The FRT has been shown to be related to perspective projections [169]. The transform has been employed to realize free-space optical interconnection architectures [50].
References

1. O. Akay and G. F. Boudreaux-Bartels. Broadband interference excision in spread spectrum communication systems via fractional Fourier transform. In Proceedings of the 32nd Asilomar Conference on Signals, Systems, and Computers, IEEE, Piscataway, NJ, 1998, pp. 832–837.
2. O. Akay and G. F. Boudreaux-Bartels. Fractional convolution and correlation via operator methods and an application to detection of linear FM signals. IEEE Trans Signal Process, 49:979–993, 2001.
3. T. Alieva and M. J. Bastiaans. Phase-space distributions in quasi-polar coordinates and the fractional Fourier transform. J Opt Soc Am A-Opt Image Sci Vis, 17:2324–2329, 2000.
4. T. Alieva and M. J. Bastiaans. Alternative representation of the linear canonical integral transform. Opt Lett, 30:3302–3304, 2005.
5. T. Alieva, M. J. Bastiaans, and M. L. Calvo. Fractional transforms in optical information processing. EURASIP J Appl Signal Process, 22:1498–1519, 2005.
6. T. Alieva, M. J. Bastiaans, and L. Stankovic. Signal reconstruction from two close fractional Fourier power spectra. IEEE Trans Signal Process, 51:112–123, 2003.
7. T. Alieva and M. L. Calvo. Image reconstruction from amplitude-only and phase-only data in the fractional Fourier domain. Opt Spectrosc, 95:110–113, 2003.
8. L. B. Almeida. The fractional Fourier transform and time-frequency representations. IEEE Trans Signal Process, 42:3084–3091, 1994.
9. L. B. Almeida. Product and convolution theorems for the fractional Fourier transform. IEEE Signal Process Lett, 4:15–17, 1997.
10. A. S. Amein and J. J. Soraghan. Azimuth fractional transformation of the fractional chirp scaling algorithm (FrCSA). IEEE Trans Geosci Remote Sensing, 44:2871–2879, 2006.
11. A. S. Amein and J. J. Soraghan. Fractional chirp scaling algorithm: Mathematical model. IEEE Trans Signal Process, 55:4162–4172, 2007.
12. N. M. Atakishiyev and K. B. Wolf. Fractional Fourier-Kravchuk transform. J Opt Soc Am A-Opt Image Sci Vis, 14:1467–1477, 1997.
13. L. Barker, Ç. Candan, T. Hakioglu, M. A. Kutay, and H. M. Ozaktas. The discrete harmonic oscillator, Harper's equation, and the discrete fractional Fourier transform. J Phys A-Math Gen, 33:2209–2222, 2000.
14. B. Barshan and B. Ayrulu. Fractional Fourier transform pre-processing for neural networks and its application to object recognition. Neural Netw, 15:131–140, 2002.
15. M. J. Bastiaans. Wigner distribution function and its application to first-order optics. J Opt Soc Am A-Opt Image Sci Vis, 69:1710–1716, 1979.
16. M. J. Bastiaans and T. Alieva. First-order optical systems with unimodular eigenvalues. J Opt Soc Am A-Opt Image Sci Vis, 23:1875–1883, 2006.
17. M. J. Bennett, S. McLaughlin, T. Anderson, and N. McDicken. The use of the fractional Fourier transform with coded excitation in ultrasound imaging. IEEE Trans Bio Eng, 53:754–756, 2006.
18. L. M. Bernardo and O. D. D. Soares. Fractional Fourier transforms and imaging. J Opt Soc Am A-Opt Image Sci Vis, 11:2622–2626, 1994.
19. B. Borden. On the fractional wideband and narrowband ambiguity function in radar and sonar. IEEE Signal Process Lett, 13:545–548, 2006.
20. M. Brunel and S. Coetmellec. Fractional-order Fourier formulation of the propagation of partially coherent light pulses. Opt Commun, 230:1–5, 2004.
21. M. Brunel, S. Coetmellec, M. Lelek, and F. Louradour. Fractional-order Fourier analysis for ultrashort pulse characterization. J Opt Soc Am A-Opt Image Sci Vis, 24:1641–1646, 2007.
22. A. Bultan. A four-parameter atomic decomposition of chirplets. IEEE Trans Signal Process, 47:731–745, 1999.
23. A. Bultheel and H. Martínez-Sulbaran. Recent developments in the theory of the fractional Fourier and linear canonical transforms. Bull Belg Math Soc, 13:971–1005, 2006.
24. Y. J. Cai, D. Ge, and Q. Lin. Fractional Fourier transform for partially coherent and partially polarized Gaussian-Schell model beams. J Opt A-Pure Appl Opt, 5:453–459, 2003.
25. Ç. Candan. Matlab code for generating discrete fractional Fourier transform matrix, 1998. http://www.ee.bilkent.edu.tr/haldun/wileybook.html
26. Ç. Candan, M. A. Kutay, and H. M. Ozaktas. The discrete fractional Fourier transform. IEEE Trans Signal Process, 48:1329–1337, 2000.
27. C. Candan. On higher order approximations for Hermite-Gaussian functions and discrete fractional Fourier transforms. IEEE Signal Process Lett, 14:699–702, 2007.
28. U. O. Candogan, H. Özbay, and H. M. Ozaktas. Controller implementation for a class of spatially-varying distributed parameter systems. In Proceedings of the 17th IFAC World Congress, IFAC, Laxenburg, Austria, 2008.
29. C. Capus and K. Brown. Short-time fractional Fourier methods for the time-frequency representation of chirp signals. J Acoust Soc Am, 113:3253–3263, 2003.
30. G. Cariolaro, T. Erseghe, and P. Kraniauskas. The fractional discrete cosine transform. IEEE Trans Signal Process, 50:902–911, 2002.
31. G. Cariolaro, T. Erseghe, P. Kraniauskas, and N. Laurenti. A unified framework for the fractional Fourier transform. IEEE Trans Signal Process, 46:3206–3219, 1998.
32. A. E. Çetin, H. Özaktaş, and H. M. Ozaktas. Resolution enhancement of low resolution wavefields with POCS algorithm. Electron Lett, 39:1808–1810, 2003.
33. L. F. Chen and D. M. Zhao. Optical color image encryption by wavelength multiplexing and lensless Fresnel transform holograms. Opt Express, 14:8552–8560, 2006.
34. L. F. Chen and D. M. Zhao. Color information processing (coding and synthesis) with fractional Fourier transforms and digital holography. Opt Express, 15:16080–16089, 2007.
35. S. Chiu. Application of fractional Fourier transform to moving target indication via along-track interferometry. EURASIP J Appl Signal Process, 20:3293–3303, 2005.
36. S. Coetmellec, D. Lebrun, and C. Ozkul. Application of the two-dimensional fractional-order Fourier transformation to particle field digital holography. J Opt Soc Am A-Opt Image Sci Vis, 19:1537–1546, 2002.
37. L. Cohen. Time-Frequency Analysis. Prentice Hall, Englewood Cliffs, NJ, 1995.
38. W. X. Cong, N. X. Chen, and B. Y. Gu. Beam shaping and its solution with the use of an optimization method. Appl Opt, 37:4500–4503, 1998.
39. S. De Nicola, R. Fedele, M. A. Manko, and V. I. Manko. Quantum tomography, wave packets, and solitons. J Russ Laser Res, 25:1–29, 2004.
40. I. Djurovic, S. Stankovic, and I. Pitas. Digital watermarking in the fractional Fourier transformation domain. J Netw Comput Appl, 24:167–173, 2001.
41. B. Z. Dong, Y. Zhang, B. Y. Gu, and G. Z. Yang. Numerical investigation of phase retrieval in a fractional Fourier transform. J Opt Soc Am A-Opt Image Sci Vis, 14:2709–2714, 1997.
42. R. G. Dorsch and A. W. Lohmann. Fractional Fourier transform used for a lens design problem. Appl Opt, 34:4111–4112, 1995.
43. R. G. Dorsch, A. W. Lohmann, Y. Bitran, D. Mendlovic, and H. M. Ozaktas. Chirp filtering in the fractional Fourier domain. Appl Opt, 33:7599–7602, 1994.
44. D. Dragoman and M. Dragoman. Near and far field optical beam characterization using the fractional Fourier transform. Opt Commun, 141:5–9, 1997.
45. D. Dragoman, M. Dragoman, and K. H. Brenner. Variant fractional Fourier transformer for optical pulses. Opt Lett, 24:933–935, 1999.
46. L. Durak and O. Arikan. Short-time Fourier transform: Two fundamental properties and an optimal implementation. IEEE Trans Signal Process, 51:1231–1242, 2003.
47. N. Engheta. On fractional paradigm and intermediate zones in electromagnetism: I—Planar observation. Microw Opt Technol Lett, 22:236–241, 1999.
48. M. F. Erden, M. A. Kutay, and H. M. Ozaktas. Repeated filtering in consecutive fractional Fourier domains and its application to signal restoration. IEEE Trans Signal Process, 47:1458–1462, 1999.
49. M. F. Erden and H. M. Ozaktas. Accumulated Gouy phase shift in Gaussian beam propagation through first-order optical systems. J Opt Soc Am A-Opt Image Sci Vis, 14:2190–2194, 1997.
50. M. F. Erden and H. M. Ozaktas. Synthesis of general linear systems with repeated filtering in consecutive fractional Fourier domains. J Opt Soc Am A-Opt Image Sci Vis, 15:1647–1657, 1998.
51. M. F. Erden, H. M. Ozaktas, and D. Mendlovic. Propagation of mutual intensity expressed in terms of the fractional Fourier transform. J Opt Soc Am A-Opt Image Sci Vis, 13:1068–1071, 1996.
52. M. F. Erden, H. M. Ozaktas, and D. Mendlovic. Synthesis of mutual intensity distributions using the fractional Fourier transform. Opt Commun, 125:288–301, 1996.
53. T. Erseghe, P. Kraniauskas, and G. Cariolaro. Unified fractional Fourier transform and sampling theorem. IEEE Trans Signal Process, 47:3419–3423, 1999.
54. M. G. Ertosun, H. Atlı, H. M. Ozaktas, and B. Barshan. Complex signal recovery from two fractional Fourier transform intensities: Order and noise dependence. Opt Commun, 244:61–70, 2005.
55. M. G. Ertosun, H. Atlı, H. M. Ozaktas, and B. Barshan. Complex signal recovery from multiple fractional Fourier-transform intensities. Appl Opt, 44:4902–4908, 2005.
56. H. Y. Fan and Y. Fan. EPR entangled states and complex fractional Fourier transformation. Eur Phys J D, 21:233–238, 2002.
57. H. Y. Fan and Y. Fan. Fractional Fourier transformation for quantum mechanical wave functions studied by virtue of IWOP technique. Commun Theor Phys, 39:417–420, 2003.
58. D. Feng, Y. B. Yan, S. Lu, K. H. Tian, and G. F. Jin. Designing diffractive phase plates for beam smoothing in the fractional Fourier domain. J Mod Opt, 49:1125–1133, 2002.
59. J. García, D. Mendlovic, Z. Zalevsky, and A. Lohmann. Space-variant simultaneous detection of several objects by the use of multiple anamorphic fractional-Fourier-transform filters. Appl Opt, 35:3945–3952, 1996.
60. G. Gonon, O. Richoux, and C. Depollier. Acoustic wave propagation in a 1-D lattice: Analysis of nonlinear effects by the fractional Fourier transform method. Signal Process, 83:1269–2480, 2003.
61. F. A. Grunbaum. The eigenvectors of the discrete Fourier transform: A version of the Hermite functions. J Math Anal Appl, 88:355–363, 1982.
62. H. E. Guven, H. M. Ozaktas, A. E. Cetin, and B. Barshan. Signal recovery from partial fractional Fourier domain information and its applications. IET Signal Process, 2:15–25, 2008.
63. B. M. Hennelly and J. T. Sheridan. Image encryption and the fractional Fourier transform. Optik, 114:251–265, 2003.
64. B. M. Hennelly and J. T. Sheridan. Generalizing, optimizing, and inventing numerical algorithms for the fractional Fourier, Fresnel, and linear canonical transforms. J Opt Soc Am A-Opt Image Sci Vis, 22:917–927, 2005.
65. B. M. Hennelly and J. T. Sheridan. Fast numerical algorithm for the linear canonical transform. J Opt Soc Am A-Opt Image Sci Vis, 22:928–937, 2005.
66. B. M. Hennelly and J. T. Sheridan. Optical encryption and the space bandwidth product. Opt Commun, 247:291–305, 2005.
67. F. Hlawatsch and G. F. Boudreaux-Bartels. Linear and quadratic time-frequency signal representations. IEEE Signal Processing Magazine, pp. 21–67, April 1992.
68. A. K. Jain. Fundamentals of Digital Image Processing. Prentice Hall, Englewood Cliffs, NJ, 1989.
69. S. Jang, W. S. Choi, T. K. Sarkar, M. Salazar-Palma, K. Kim, and C. E. Baum. Exploiting early time response using the fractional Fourier transform for analyzing transient radar returns. IEEE Trans Antennas Propag, 52:3109–3121, 2004.
70. S. I. Jin, Y. S. Bae, and S. Y. Lee. Holographic data storage with fractional Fourier transform. Opt Commun, 198:57–63, 2001.
71. S. I. Jin, Y. S. Bae, and S. Y. Lee. Generalized Vander Lugt correlator as an optical pattern classifier and its optimal learning rate. Opt Commun, 206:19–25, 2002.
72. X. Y. Jing, H. S. Wong, and D. Zhang. Face recognition based on discriminant fractional Fourier feature extraction. Pattern Recognit Lett, 27:1465–1471, 2006.
73. D. P. Kelly, J. E. Ward, B. M. Hennelly, U. Gopinathan, F. T. O'Neill, and J. T. Sheridan. Paraxial speckle-based metrology systems with an aperture. J Opt Soc Am A-Opt Image Sci Vis, 23:2861–2870, 2006.
74. F. H. Kerr. Namias' fractional Fourier transforms on L2 and applications to differential equations. J Math Anal Appl, 136:404–418, 1988.
75. R. Khanna, K. Singh, and R. Saxena. Fractional Fourier transform based beamforming for next generation wireless communication systems. IETE Tech Rev, 21:357–366, 2004.
76. A. Koç, H. M. Ozaktas, C. Candan, and M. A. Kutay. Digital computation of linear canonical transforms. IEEE Trans Signal Process, 56:2383–2394, 2008.
77. P. Kraniauskas, G. Cariolaro, and T. Erseghe. Method for defining a class of fractional operations. IEEE Trans Signal Process, 46:2804–2807, 1998.
78. C. J. Kuo and Y. Luo. Generalized joint fractional Fourier transform correlators: A compact approach. Appl Opt, 37:8270–8276, 1998.
79. M. A. Kutay. Matlab code for fast computation of the fractional Fourier transform, 1996. http://www.ee.bilkent.edu.tr/haldun/wileybook.html
80. M. A. Kutay. Generalized filtering configurations with applications in digital and optical signal and image processing. PhD thesis, Bilkent University, Ankara, 1999.
81. M. A. Kutay, M. F. Erden, H. M. Ozaktas, O. Arıkan, Ç. Candan, and Ö. Güleryüz. Cost-efficient approximation of linear systems with repeated and multi-channel filtering configurations. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing, IEEE, Piscataway, NJ, 1998, pp. 3433–3436.
82. M. A. Kutay, M. F. Erden, H. M. Ozaktas, O. Arıkan, Ö. Güleryüz, and Ç. Candan. Space-bandwidth-efficient realizations of linear systems. Opt Lett, 23:1069–1071, 1998.
83. M. A. Kutay and H. M. Ozaktas. Optimal image restoration with the fractional Fourier transform. J Opt Soc Am A-Opt Image Sci Vis, 15:825–833, 1998.
84. M. A. Kutay and H. M. Ozaktas. The fractional Fourier transform and harmonic oscillation. Nonlinear Dyn, 29:157–172, 2002.
85. M. A. Kutay, H. M. Ozaktas, O. Arıkan, and L. Onural. Optimal filtering in fractional Fourier domains. IEEE Trans Signal Process, 45:1129–1143, 1997.
86. M. A. Kutay, H. Özaktaş, H. M. Ozaktas, and O. Arıkan. The fractional Fourier domain decomposition. Signal Process, 77:105–109, 1999.
87. F. L. Lewis. Optimal Estimation. Wiley, New York, 1986.
88. A. W. Lohmann. Image rotation, Wigner rotation, and the fractional order Fourier transform. J Opt Soc Am A-Opt Image Sci Vis, 10:2181–2186, 1993.
89. A. W. Lohmann and D. Mendlovic. Fractional Fourier transform: Photonic implementation. Appl Opt, 33:7661–7664, 1994.
90. A. W. Lohmann and D. Mendlovic. Fractional joint transform correlator. Appl Opt, 36:7402–7407, 1997.
91. A. W. Lohmann, D. Mendlovic, and Z. Zalevsky. Fractional transformations in optics. In Progress in Optics XXXVIII, Elsevier, Amsterdam, the Netherlands, 1998, Chapter IV, pp. 263–342.
92. A. W. Lohmann, Z. Zalevsky, and D. Mendlovic. Synthesis of pattern recognition filters for fractional Fourier processing. Opt Commun, 128:199–204, 1996.
93. H. L. Lu and H. Y. Fan. Two-variable Hermite function as quantum entanglement of harmonic oscillator's wave functions. Commun Theor Phys, 47:1024–1028, 2007.
94. A. A. Malyutin. Use of the fractional Fourier transform in π/2 converters of laser modes. Quantum Electron, 34:165–171, 2004.
95. M. A. Manko. Fractional Fourier transform in information processing, tomography of optical signal, and green function of harmonic oscillator. J Russ Laser Res, 20:226–238, 1999.
96. M. A. Manko. Propagators, tomograms, wavelets and quasidistributions for multipartite quantum systems. Open Syst Inf Dyn, 14:179–188, 2007.
97. M. Martone. A multicarrier system based on the fractional Fourier transform for time-frequency-selective channels. IEEE Trans Commun, 49:1011–1020, 2001.
98. E. Mazzetto, C. G. Someda, J. A. Acebron, and R. Spigler. The fractional Fourier transform in the analysis and synthesis of fiber Bragg gratings. Opt Quantum Electron, 37:755–787, 2005.
99. D. F. McAlister, M. Beck, L. Clarke, A. Meyer, and M. G. Raymer. Optical phase-retrieval by phase-space tomography and fractional-order Fourier transforms. Opt Lett, 20:1181–1183, 1995.
100. A. C. McBride and F. H. Kerr. On Namias's fractional Fourier transforms. IMA J Appl Math, 39:159–175, 1987.
101. M. L. Mehta. Eigenvalues and eigenvectors of the finite Fourier transform. J Math Phys, 28:781–785, 1987.
102. D. Mendlovic and H. M. Ozaktas. Fractional Fourier transforms and their optical implementation: I. J Opt Soc Am A-Opt Image Sci Vis, 10:1875–1881, 1993.
103. D. Mendlovic, H. M. Ozaktas, and A. W. Lohmann. Fractional correlation. Appl Opt, 34:303–309, 1995.
104. D. Mendlovic, Z. Zalevsky, A. W. Lohmann, and R. G. Dorsch. Signal spatial-filtering using the localized fractional Fourier transform. Opt Commun, 126:14–18, 1996.
105. D. Mendlovic, Z. Zalevsky, and H. M. Ozaktas. Applications of the fractional Fourier transform to optical pattern recognition. In Optical Pattern Recognition, Cambridge University Press, Cambridge, U.K., 1998, Chapter 4, pp. 89–125.
106. D. A. Mustard. The fractional Fourier transform and a new uncertainty principle. School of Mathematics Preprint
14-26
107. 108. 109.
110.
111.
112.
113.
114.
115.
116. 117.
118.
119.
120.
121.
122.
AM87=14, The University of New South Wales, Kensington, Australia, 1987. D. Mustard. The fractional Fourier transform and the Wigner distribution. J Aust Math Soc B, 38:209–219, 1996. D. Mustard. Fractional convolution. J Aust Math Soc B, 40:257–265, 1998. V. Namias. The fractional order Fourier transform and its application to quantum mechanics. J Inst Math Appl, 25:241–265, 1980. R. Narasimhan. Adaptive channel partitioning and modulation for linear time-varying channels. IEEE Trans Commun, 51:1313–1324, 2003. N. K. Nishchal, J. Joseph, and K. Singh. Securing information using fractional Fourier transform in digital holography. Opt Commun, 235:253–259, 2004. X. M. Niu and S. H Sun. Robust video watermarking based on discrete fractional Fourier transform. Chin J Electron, 10:428–434, 2001. L. Onural, A. Gotchev, H. M. Ozaktas, and E. Stoykova. A survey of signal processing problems and tools in holographic three-dimensional television. IEEE Trans Circuits Syst Video Technol, 17:1631–1646, 2007. L. Onural and H. M. Ozaktas. Signal processing issues in diffraction and holographic 3DTV. Signal Process Image Commun, 22:169–177, 2007. H. M. Ozaktas, O. Arikan, M. A. Kutay, and G. Bozdagı. Digital computation of the fractional Fourier transform. IEEE Trans Signal Process, 44:2141–2150, 1996. H. M. Ozaktas and O. Aytür. Fractional Fourier domains. Signal Process, 46:119–124, 1995. H. M. Ozaktas, B. Barshan, D. Mendlovic, and L. Onural. Convolution, filtering, and multiplexing in fractional Fourier domains and their relation to chirp and wavelet transforms. J Opt Soc Am A-Opt Image Sci Vis, 11:547–559, 1994. H. M. Ozaktas and M. F. Erden. Relationships among ray optical, Gaussian beam, and fractional Fourier transform descriptions of first-order optical systems. Opt Commun, 143:75–86, 1997. H. M. Ozaktas, A. Koç, I. Sari, and M. A. Kutay. Efficient computation of quadratic-phase integrals in optics. Opt Lett, 31:35–37, 2006. H. M. Ozaktas and M. A. Kutay. 
Time-order signal representations. Technical Report BU-CE-0005, Bilkent University, Department of Computer Engineering, Ankara, January 2000. Also in Proceedings of the First IEEE Balkan Conference on Signal Processing, Communications, Circuits, Systems, Bilkent University, Ankara, 2000. CD-ROM. H. M. Ozaktas and M. A. Kutay. The fractional Fourier transform. In Proceedings of the European Control Conference, European Union Control Association and University of Porto, Porto, Portugal, 2001. H. M. Ozaktas and M. A. Kutay. The fractional Fourier transform with applications in optics and signal processing—
Transforms and Applications Handbook
123.
124.
125.
126.
127. 128.
129.
130.
131.
132.
133. 134.
135.
136. 137.
138.
139. 140.
Supplementary bibliography, 2008. http:==www.ee.bilkent. edu.tr=haldun=wileybook.html H. M. Ozaktas, M. A. Kutay, and D. Mendlovic. Introduction to the fractional Fourier transform and its applications. In Advances in Imaging and Electron Physics 106, Academic Press, San Diego, CA, 1999, pp. 239–291. H. M. Ozaktas and D. Mendlovic. Fourier transforms of fractional order and their optical interpretation. Opt Commun, 101:163–169, 1993. H. M. Ozaktas and D. Mendlovic. Fractional Fourier transforms and their optical implementation: II. J Opt Soc Am A-Opt Image Sci Vis, 10:2522–2531, 1993. H. M. Ozaktas and D. Mendlovic. Fractional Fourier transform as a tool for analyzing beam propagation and spherical mirror resonators. Opt Lett, 19:1678–1680, 1994. H. M. Ozaktas and D. Mendlovic. Fractional Fourier optics. J Opt Soc Am A-Opt Image Sci Vis, 12:743–751, 1995. H. M. Ozaktas and U. Sümbül. Interpolating between periodicity and discreteness through the fractional Fourier transform. IEEE Trans Signal Process, 54:4233–4243, 2006. H. M. Ozaktas, Z. Zalevsky, and M. A. Kutay. The Fractional Fourier Transform with Applications in Optics and Signal Processing. John Wiley & Sons, New York, 2001. L. Durak, A. K. Ozdemir, and O. Arikan. Efficient computation of joint fractional fourier domain signal representation. J Opt Soc Am A-Opt Image Sci Vis, 25:765–772, 2008. R. E. Patten, B. M. Hennelly, D. P. Kelly, F. T. O’Neill, Y. Liu, and J. T. Sheridan. Speckle photography: Mixed domain fractional Fourier motion detection. Opt Lett, 31:32–34, 2006. S. C. Pei and J. J. Ding. Relations between fractional operations and time-frequency distributions, and their applications. IEEE Trans Signal Process, 49:1638–1655, 2001. S. C. Pei and J. J. Ding. Eigenfunctions of linear canonical transform. IEEE Trans Signal Process, 50:11–26, 2002. S. C. Pei and J. J. Ding. Fractional cosine, sine, and Hartley transforms. IEEE Trans Signal Process, 50:1661– 1680, 2002. S. C. Pei, W. L. Hsue, and J. J. 
Ding. Discrete fractional Fourier transform based on new nearly tridiagonal commuting matrices. IEEE Trans Signal Process, 54:3815–3828, 2006. S. C. Pei and M. H. Yeh. Improved discrete fractional Fourier transform. Opt Lett, 22:1047–1049, 1997. S. C. Pei and M. H. Yeh. Discrete fractional Hilbert transform. Trans Circuits Syst II-Analog Digit Signal Process, 47:1307–1311, 2000. S. C. Pei, M. H. Yeh, and C. C. Tseng. Discrete fractional Fourier transform based on orthogonal projections. IEEE Trans Signal Process, 47:1335–1348, 1999. P. Pellat-Finet. Fresnel diffraction and the fractional-order Fourier transform. Opt Lett, 19:1388–1390, 1994. P. Pellat-Finet and E. Fogret. Complex order, fractional Fourier transforms and their use in diffraction theory. Application to optical resonators. Opt Commun, 258:103–113, 2006.
Fractional Fourier Transform
141. J. Perez, D. Mas, C. Illueca, J. J. Miret, C. Vazquez, and C. Hernandez. Complete algorithm for the calculation light patterns inside the ocular media. J Mod Opt, 52:1161–1176, 2005. 142. A. M. Pons, A. Lorente, C. Illueca, D. Mas, and J. M. Artigas. Fresnel diffraction in a theoretical eye: a fractional Fourier transform approach. J Mod Opt, 46:1043–1050, 1999. 143. S. Qazi, A. Georgakis, L. K. Stergioulas, and M. ShikhBahaei. Interference suppression in the Wigner distribution using fractional Fourier transformation and signal synthesis. IEEE Trans Signal Process, 55:3150–3154, 2007. 144. M. G. Raymer, M. Beck, and D. F. McAlister. Complex wave-field reconstruction using phase-space tomography. Phys Rev Lett, 72:1137–1140, 1994. 145. M. G. Raymer, M. Beck, and D. McAlister. Spatial and temporal optical field reconstruction using phase-space tomography. In Quantum Optics VI, Springer, Berlin, Germany, 1994. 146. A. Sahin, H. M. Ozaktas, and D. Mendlovic. Optical implementations of two-dimensional fractional Fourier transforms and linear canonical transforms with arbitrary parameters. Appl Opt, 37:2130–2141, 1998. 147. B. Santhanam and J. H. McClellan. The discrete rotational Fourier transform. IEEE Trans Signal Process, 44:994–998, 1996. 148. B. Santhanam and T. S. Santhanam. Discrete Gauss Hermite functions and eigenvectors of the centered discrete Fourier transform. In Proceedings of the International Conference on Acoust, Speech, Signal Processing, IEEE, Piscataway, NJ, 2007, pp. 418–422. 149. D. Sazbon, Z. Zalevsky, E. Rivlin, and D. Mendlovic. Using Fourier=Mellin-based correlators and their fractional versions in navigational tasks. Pattern Recog, 35:2993–2999, 2002. 150. K. K. Sharma and S. D. Joshi. Signal reconstruction from the undersampled signal samples. Opt Commun, 268:245– 252, 2006. 151. K. K. Sharma and S. D. Joshi. Papoulis-like generalized sampling expansions in fractional Fourier domains and their application to superresolution. 
Opt Commun, 278:52–59, 2007. 152. J. T. Sheridan, and R. Patten. Holographic interferometry and the fractional Fourier transformation. Opt Lett, 25:448–450, 2000. 153. R. Simon and N. Mukunda. Iwasawa decomposition in first-order optics: universal treatment of shape-invariant propagation for coherent and partially coherent beams. J Opt Soc Am A-Opt Image Sci Vis, 15:2146–2155, 1998. 154. G. Strang. Linear Algebra and Its Applications, 3rd edn. Harcourt Brace Jovanovich, New York, 1988. 155. H. Sun, G. S. Liu, H. Gu, and W. M. Su. Application of the fractional Fourier transform to moving target detection in airborne SAR. IEEE Trans Aerosp Electron Syst, 38:1416– 1424, 2002.
14-27
156. U. Sümbül and H. M. Ozaktas. Fractional free space, fractional lenses, and fractional imaging systems. J Opt Soc Am A-Opt Image Sci Vis, 20:2033–2040, 2003. 157. R. Tao, B. Deng, and Y. Wang. Research progress of the fractional Fourier transform in signal processing. Sci China Ser F-Inf Sci, 49:1–25, 2006. 158. M. Testorf. Design of diffractive optical elements for the fractional Fourier transform domain: Phase-space approach. Appl Opt, 45:76–82, 2006. 159. A. Torre. The fractional Fourier transform and some of its applications to optics. Progress Opt, 66:531–596, 2002. 160. C. O. Torres and Y. Torres. The van Cittert-Zernike theorem: A fractional order Fourier transform point of view. Opt Commun, 232:11–14, 2004. 161. G. Unnikrishnan, and K. Singh. Double random fractional Fourier-domain encoding for optical security. Opt Eng, 39:2853–2859, 2000. 162. C. Vijaya, and J. S. Bhat. Signal compression using discrete fractional Fourier transform and set partitioning in hierarchical tree. Signal Process, 86:1976–1983, 2006. 163. F. Wang, and Y. J. Cai. Experimental observation of fractional Fourier transform for a partially coherent optical beam with Gaussian statistics. J Opt Soc Am A-Opt Image Sci Vis, 24:1937–1944, 2007. 164. K. B. Wolf. Integral Transforms in Science and Engineering. Plenum Press, New York, 1979. 165. K. B. Wolf and G. Krötzsch. Geometry and dynamics in the fractional discrete Fourier transform. J Opt Soc Am A-Opt Image Sci Vis, 24:651–658, 2007. _ S¸. Yetik, M. A. Kutay, and H. M. Ozaktas. Image repre166. I. sentation and compression with the fractional Fourier transform. Opt Commun, 197:275–278, 2001. _ S¸. Yetik, M. A. Kutay, H. Özaktas¸, and H.M. Ozaktas. 167. I. Continuous and discrete fractional Fourier domain decomposition. In Proceedings of the IEEE International Conference on Acoustics, Speech, Signal Processing, IEEE, Piscataway, NJ, 2000, pp. I:93–96. _ S¸. Yetik and A. Nehorai. Beamforming using the frac168. I. 
tional Fourier transform. IEEE Trans Signal Process, 51:1663–1668, 2003. _ S¸. Yetik, H. M. Ozaktas, Billur Barshan, and L. Onural. 169. I. Perspective projections in the space-frequency plane and fractional Fourier transforms. J Opt Soc Am A-Opt Image Sci Vis, 17:2382–2390, 2000. 170. Z. R. Yu. A new expression of the fractional Fourier transformation. Commun Theor Phys, 36:399–400, 2001. 171. Z. Zalevsky, D. Mendlovic, and J. H. Caulfield. Localized, partially space-invariant filtering. Appl Opt, 36:1086–1092, 1997. 172. Z. Zalevsky, D. Mendlovic, and R. G. Dorsch. GerchbergSaxton algorithm applied in the fractional Fourier or the Fresnel domain. Opt Lett, 21:842–844, 1996. 173. Z. Zalevsky, D. Mendlovic, M. A. Kutay, H. M. Ozaktas, and J. Solomon. Improved acoustic signals discrimination
14-28
174.
175.
176.
177.
using fractional Fourier transform based phase-space representations. Opt Commun, 190:95–101, 2001. A. I. Zayed. A convolution and product theorem for the fractional Fourier transform. IEEE Signal Process. Lett, 5:101–103, 1998. A. I. Zayed. A class of fractional integral transforms: A generalization of the fractional Fourier transform. IEEE Trans Signal Process, 50:619–627, 2002. F. Zhang, Y. Q. Chen, and G.Bi. Adaptive harmonic fractional Fourier transform. IEEE Signal Process Lett, 6:281– 283, 1999. Y. Zhang, B.-Z. Dong, B.-Y. Gu, and G.-Z. Yang. Beam shaping in the fractional Fourier transform domain. J Opt Soc Am A-Opt Image Sci Vis, 15:1114–1120, 1998.
Transforms and Applications Handbook
178. Y. Zhang, G. Pedrini, W. Osten, and H. J. Tiziani. Applications of fractional transforms to object reconstruction from in-line holograms. Opt Lett, 29:1793–1795, 2004. 179. D. M. Zhao. Multi-element resonators and scaled fractional Fourier transforms. Opt Commun, 168:85-88, 1999. 180. B. H. Zhu and S. T. Liu. Multifractional correlation. Opt Lett, 26:578–580, 2001. 181. B. H. Zhu, S. T. Liu, and L. X. Chen. Fractional profilometry correlator for three dimensional object recognition. Appl Opt, 40:6474–6478, 2001.
15 Lapped Transforms

Ricardo L. de Queiroz
Xerox Corporation

15.1  Introduction ............................................................. 15-1
      Notation · Brief History · Block Transforms · Factorization of Discrete Transforms · Discrete MIMO Linear Systems · Block Transform as a MIMO System
15.2  Lapped Transforms ........................................................ 15-4
      Orthogonal Lapped Transforms · Nonorthogonal Lapped Transforms
15.3  LTs as MIMO Systems ...................................................... 15-8
15.4  Factorization of Lapped Transforms ...................................... 15-9
15.5  Hierarchical Connection of LTs: Introduction ............................ 15-10
      Time–Frequency Diagram · Variable-Length LTs · Tree-Structured Hierarchical Lapped Transforms
15.6  Practical Symmetric LTs .................................................. 15-14
      The Lapped Orthogonal Transform: LOT · The Lapped Biorthogonal Transform: LBT · The Generalized LOT: GenLOT · The General Factorization: GLBT
15.7  The Fast Lapped Transform: FLT .......................................... 15-22
15.8  Modulated LTs ............................................................ 15-23
15.9  Finite-Length Signals .................................................... 15-25
      Overall Transform · Recovering Distorted Samples · Symmetric Extensions
15.10 Conclusions .............................................................. 15-28
References ..................................................................... 15-28
15.1 Introduction

In this chapter, an effort will be made to cover the basic aspects of lapped transforms. The subject has been extensively studied, and a large number of papers and books are available, mostly because of the direct correspondence among lapped transforms, filter banks, wavelets, and time–frequency transformations. Some of those topics are well covered in other chapters of this handbook. In any case, it would certainly be impractical to reference all the contributions in the field; therefore, the presentation will be focused rather than general. We refer the reader to the chapters on wavelet and time–frequency transforms in this handbook, as well as to Refs. [20,44,50,53], for a more detailed treatment of filter banks. We expect the reader to have a background in digital signal processing. The introductory chapter in this handbook on signals and systems, the chapter on Z-transforms, and the chapter on the discrete cosine transform (DCT) are certainly useful.
15.1.1 Notation

In terms of notation, our conventions are as follows. I_n is the n × n identity matrix. O_n is the n × n null matrix, while O_{n×m} stands for the n × m null matrix. J_n is the n × n counter-identity, or exchange, or reversing matrix, illustrated by the following example:

          ⎡ 0  0  1 ⎤
    J_3 = ⎢ 0  1  0 ⎥ .
          ⎣ 1  0  0 ⎦

J reverses the order of the elements of a vector. [·]^T means transposition. [·]^H means transposition combined with conjugation, where this combination is usually called the Hermitian of the vector or matrix. Unidimensional concatenation of matrices and vectors is indicated by a comma. In general, capital boldface letters are reserved for matrices, so that a represents a (column) vector while A represents a matrix.
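As a quick numerical check of the reversing matrix (a sketch in plain NumPy; the variable names are ours, not the chapter's):

```python
import numpy as np

n = 3
J = np.fliplr(np.eye(n))          # counter-identity J_n: ones on the anti-diagonal
a = np.array([1.0, 2.0, 3.0])     # a (column) vector

# J reverses the order of the elements of a vector:
assert np.allclose(J @ a, [3.0, 2.0, 1.0])
# J is symmetric and its own inverse: J J = I
assert np.allclose(J @ J, np.eye(n))
```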
15.1.2 Brief History

In the early 1980s, transform coding was maturing and the DCT [38] was the preferred transformation method. At that time, DCT-based image compression was the state of the art, but researchers were uncomfortable with blocking artifacts, the common (and annoying) artifacts found in images compressed at low bit rates using block transforms. To resolve this problem, the idea of a lapped transform (LT, for short) was developed in the early 1980s at MIT. The idea was to extend the basis functions beyond the block boundaries, creating an overlap, in order to eliminate the blocking effect. This idea was not new, but the new ingredient of overlapping blocks would
be the fact that the number of transform coefficients would be the same as if there were no overlap, and that the transform would maintain orthogonality. Cassereau [3] introduced the lapped orthogonal transform (LOT). However, it was Malvar [12–14] who gave the LOT an elegant design strategy and a fast algorithm, thus making the LOT practical and a serious contender to replace the DCT for image compression. Malvar [16] later pointed out the equivalence between an LOT and a multirate filter bank, which is now a very popular signal processing tool [50]. Based on cosine-modulated filter banks [27], modulated lapped transforms were designed [15,40]. Modulated transforms were later generalized for an arbitrary overlap, creating the class of extended lapped transforms (ELTs) [18–21]. More recently, a new class of LTs with symmetric bases was developed, yielding the class of generalized LOTs (GenLOTs) [29,31,34]. GenLOTs were made to have an arbitrary length (not a multiple of the block size) [46], extended to the nonorthogonal case [49], and even made to have filters of different lengths [48]. As we mentioned, filter banks and LTs are the same, although they were studied independently in the past. Because of this duality, it would be impractical to mention all related work in the field. Nevertheless, Vaidyanathan's book [50] is considered an excellent text on filter banks, while Malvar's book [20] is a good reference to bridge the gap between lapped transforms and filter banks. Here, however, we reserve the term LT for uniform FIR filter banks with fast implementation algorithms based on special factorizations of the basis functions, designed with particular attention to signal (mainly image) coding [10,14,15,20,30,36,46,55].
15.1.3 Block Transforms

We assume a one-dimensional input sequence x(n) which is transformed into several coefficients y_i(n), where y_i(n) belongs to the ith subband. In traditional block-transform processing, such as in image and audio coding, the signal is divided into blocks of M samples, and each block is processed independently [4,9,20,26,37–39]. Let the samples in the mth block be denoted as

    x_m^T = [x_0(m), x_1(m), ..., x_{M−1}(m)],    (15.1)

for x_k(m) = x(mM + k), and let the corresponding transform vector be

    y_m^T = [y_0(m), y_1(m), ..., y_{M−1}(m)].    (15.2)

For a real unitary transform A, A^T = A^{−1}. The forward and inverse transforms for the mth block are

    y_m = A x_m,    (15.3)

    x_m = A^T y_m.    (15.4)

The rows of A, denoted a_n^T (0 ≤ n ≤ M − 1), are called the basis vectors because they form an orthogonal basis for the M-tuples over the real field [39]. The transform coefficients [y_0(m), y_1(m), ..., y_{M−1}(m)] represent the corresponding weights of the vector x_m with respect to this basis. If the input signal is represented by the vector x while the subbands are grouped into blocks in the vector y, we can represent the transform H which operates over the entire signal as a block-diagonal matrix:

    H = diag{..., A, A, A, ...},    (15.5)

where, of course, H is an orthogonal matrix if A is an orthogonal matrix. In summary, a signal is transformed by segmentation into blocks followed by transformation of each block, which amounts to transforming the signal with a sparse matrix. Also, it is well known that the signal energy is preserved under an orthogonal transformation [9,38], assuming stationary signals, i.e.,

    M σ_x^2 = Σ_{i=0}^{M−1} σ_i^2,    (15.6)

where σ_i^2 is the variance of y_i(m) and σ_x^2 is the variance of the input samples.
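As a concrete illustration of Equations 15.3 and 15.4, the sketch below (our own NumPy code, using the orthonormal DCT-II as the block transform A) transforms a blocked signal and inverts it exactly; since A is orthogonal, the total energy is also preserved:

```python
import numpy as np

def dct_matrix(M):
    """Rows are the orthonormal DCT-II basis vectors a_n^T."""
    A = np.zeros((M, M))
    for n in range(M):
        scale = np.sqrt(1.0 / M) if n == 0 else np.sqrt(2.0 / M)
        for k in range(M):
            A[n, k] = scale * np.cos(np.pi * n * (2 * k + 1) / (2 * M))
    return A

M = 8
A = dct_matrix(M)
assert np.allclose(A @ A.T, np.eye(M))         # real unitary: A^T = A^(-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(4 * M)                 # a signal of four blocks
blocks = x.reshape(-1, M)                      # row m holds x_m^T
y = blocks @ A.T                               # y_m = A x_m      (Eq. 15.3)
x_rec = y @ A                                  # x_m = A^T y_m    (Eq. 15.4)
assert np.allclose(x_rec.reshape(-1), x)       # perfect reconstruction
assert np.isclose((y**2).sum(), (x**2).sum())  # energy preserved
```

Note that Equation 15.6 is a statement about variances of stationary signals; the per-block energy equality checked above is its deterministic counterpart for orthogonal A.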
15.1.4 Factorization of Discrete Transforms

For our purposes, discrete transforms of interest are linear and governed by a square matrix with real entries. Square matrices can be factorized into a product of sparse matrices of the same size. Notably, orthogonal matrices can be factorized into a product of plane (Givens) rotations [8]. Let A be an M × M real orthogonal matrix and let Q(i, j, θ_n) be a matrix with entries Q_kl which is identical to the identity matrix I_M except for four entries:

    Q_ii = cos(θ_n),  Q_jj = cos(θ_n),  Q_ij = sin(θ_n),  Q_ji = −sin(θ_n),    (15.7)

i.e., Q(i, j, θ_n) corresponds to a plane rotation along the ith and jth axes by an angle θ_n. Then, A can be factorized as

    A = S ∏_{i=0}^{M−2} ∏_{j=i+1}^{M−1} Q(i, j, θ_n),    (15.8)

where n is increased by one for every matrix and S is a diagonal matrix with entries ±1 to correct for any sign error [8]. This correction is not necessary in most cases and is not required if we apply rotation-reflection variations of the matrix defined in Equation 15.7, such as

    Q_ii = cos(θ_n),  Q_jj = −cos(θ_n),  Q_ij = sin(θ_n),  Q_ji = sin(θ_n).    (15.9)
FIGURE 15.1 Factorization of a 4 × 4 matrix. (a) Orthogonal factorization into Givens rotations. (b) Detail of the rotation element. (c) Factorization of a nonorthogonal matrix through SVD, with the respective factorization of the SVD's orthogonal factors into rotations.
All combinations of pairs of axes shall be used for a complete factorization. Figure 15.1a shows an example of the factorization of a 4 × 4 orthogonal matrix into plane rotations (the order differs from that in Equation 15.8, but the factorization is also complete). If the matrix is not orthogonal, we can always decompose it using the singular value decomposition (SVD) [8]. A is decomposed through SVD as

    A = U Λ V,    (15.10)

where U and V are orthogonal matrices and Λ is a diagonal matrix containing the singular values of A. While Λ is already a sparse matrix, we can further decompose the orthogonal factors using Equation 15.8, i.e.,

    A = S ( ∏_{i=0}^{M−2} ∏_{j=i+1}^{M−1} Q(i, j, θ_n^U) ) Λ ( ∏_{i=0}^{M−2} ∏_{j=i+1}^{M−1} Q(i, j, θ_n^V) ),    (15.11)

where the θ_n^U and the θ_n^V compose the sets of angles for U and V, respectively. Figure 15.1c illustrates the factorization for a 4 × 4 nonorthogonal matrix, where the α_i are the singular values. The factorization is an invaluable tool for the design of block and lapped transforms, as we will explain later. In the orthogonal case, the angles are all the degrees of freedom: an M × M orthogonal matrix has M(M − 1)/2 angles, and by spanning all the angle spaces (0 to 2π for each one) one spans the space of all M × M orthogonal matrices. The idea is to span the angles in order to design orthogonal matrices through unconstrained optimization. In the general case, there are M^2 degrees of freedom, either by utilizing the matrix entries directly or by using the SVD decomposition. However, we are mainly concerned with invertible matrices. Using the SVD-based method, one can design invertible matrices by freely spanning the angles, with only the mild constraint that all singular values be nonzero. The author commonly uses unconstrained nonlinear optimization based on the simplex search provided by MATLAB to span all angles, and possibly the singular values as well.
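The angle parameterization above can be sketched as follows (our own NumPy code, not the author's; S is taken as the identity). It builds an M × M orthogonal matrix from M(M − 1)/2 free angles as in Equation 15.8, so an optimizer could search the angle space without constraints:

```python
import numpy as np

def givens(M, i, j, theta):
    """Plane rotation Q(i, j, theta) along axes i and j (Eq. 15.7)."""
    Q = np.eye(M)
    c, s = np.cos(theta), np.sin(theta)
    Q[i, i] = c
    Q[j, j] = c
    Q[i, j] = s
    Q[j, i] = -s
    return Q

def orthogonal_from_angles(M, angles):
    """Product over all axis pairs, one angle each (Eq. 15.8 with S = I)."""
    A = np.eye(M)
    it = iter(angles)
    for i in range(M - 1):
        for j in range(i + 1, M):
            A = A @ givens(M, i, j, next(it))
    return A

M = 4
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, M * (M - 1) // 2)  # 6 free parameters
A = orthogonal_from_angles(M, angles)
assert np.allclose(A @ A.T, np.eye(M))  # orthogonal by construction
```

Any choice of the six angles yields an orthogonal matrix, which is what makes unconstrained simplex-style optimization over the angles possible.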
FIGURE 15.2 The signal samples are parallelized into polyphase components through a sequence of delays and decimators (↓M means subsampling by a factor of M). Effectively, the signal is "blocked" and each block is transformed by system A into M subband samples (transformed samples). The inverse transform (for orthogonal transforms) is accomplished by system A^T, whose outputs are polyphase components of the reconstructed signal, which are then serialized by a sequence of up-samplers (↑M means upsampling by a factor of M, padding the signal with M − 1 zeros) and delays.

15.1.5 Discrete MIMO Linear Systems

Let a multi-input multi-output (MIMO) [50] discrete linear FIR system have M input and M output sequences with respective Z-transforms X_i(z) and Y_i(z), for 0 ≤ i ≤ M − 1. Then, the X_i(z) and Y_i(z) are related by

    ⎡ Y_0(z)      ⎤   ⎡ E_{0,0}(z)    E_{0,1}(z)    ···  E_{0,M−1}(z)   ⎤ ⎡ X_0(z)      ⎤
    ⎢ Y_1(z)      ⎥ = ⎢ E_{1,0}(z)    E_{1,1}(z)    ···  E_{1,M−1}(z)   ⎥ ⎢ X_1(z)      ⎥    (15.12)
    ⎢    ⋮        ⎥   ⎢    ⋮              ⋮         ⋱       ⋮           ⎥ ⎢    ⋮        ⎥
    ⎣ Y_{M−1}(z)  ⎦   ⎣ E_{M−1,0}(z)  E_{M−1,1}(z)  ···  E_{M−1,M−1}(z) ⎦ ⎣ X_{M−1}(z)  ⎦

where the E_ij(z) are the entries of the given MIMO system E(z). E(z) is called the transfer matrix of the system, and we have chosen it to be square for simplicity. It is a regular matrix whose entries are polynomials. Of relevance to us is the case wherein the entries belong to the field of real-coefficient polynomials of z^{−1}, i.e., the entries represent real-coefficient FIR filters. The degree of E(z) (or the McMillan degree, N_z) is the minimum number of delays necessary to implement the system. The order of E(z) is the
maximum degree among all E_ij(z). In both cases, causal FIR filters are assumed. A special subset of great interest comprises transfer matrices that are normalized paraunitary. In the paraunitary case, E(z) becomes a unitary matrix when evaluated on the unit circle:

    E^H(e^{jω}) E(e^{jω}) = E(e^{jω}) E^H(e^{jω}) = I_M.    (15.13)

Furthermore,

    E^{−1}(z) = E^T(z^{−1}).    (15.14)

For causal inverses of paraunitary systems,

    E′(z) = z^{−n} E^T(z^{−1})    (15.15)

is often used, where n is the order of E(z), since E′(z)E(z) = z^{−n} I_M. For paraunitary systems, the determinant of E(z) is of the form a z^{−N_z}, for a real constant a [50], where we recall that N_z is the McMillan degree of the system. Paraunitary systems with FIR causal entries are also said to be lossless systems [50]. In fact, an orthogonal matrix is one where all E_ij(z) are constant for all z. We are also interested in invertible, although nonparaunitary, transfer matrices. In this case, it is required that the matrix be invertible on the unit circle, i.e., for all z = e^{jω}, ω real. Nonparaunitary systems are also called biorthogonal or perfect reconstruction (PR) systems [50].

15.1.6 Block Transform as a MIMO System

The sequences x_i(m) in Equation 15.1 are called the polyphase components of the input signal x(n). On the other hand, the sequences y_i(m) in Equation 15.2 are the subbands resulting from the transform process. In an alternative view of the transformation process, the signal samples are "blocked," or parallelized into polyphase components, through a sequence of delays and decimators, as shown in Figure 15.2. Each block is transformed by system A into M subband samples (transformed samples). The inverse transform (for orthogonal transforms) is accomplished by system A^T, whose outputs are the polyphase components of the reconstructed signal, which are then serialized by a sequence of upsamplers and delays. In this system, blocks are processed independently. Therefore, the transform can be viewed as a MIMO system of order 0, i.e., E(z) = A, and if A is unitary, so is E(z), which is then also paraunitary. The system matrix relating the polyphase components to the subbands is referred to as the polyphase transfer matrix (PTM).

15.2 Lapped Transforms
The motivation for a transform with overlap, as mentioned in the introduction, was to improve the performance of block (nonoverlapped) transforms for image and signal compression. Compression commonly implies signal losses due to quantization [9]. As the bases of block transforms do not overlap, there may be discontinuities along the boundary regions of the blocks. Different approximations of those boundary regions on each side of a border may cause an artificial "edge" between blocks, the so-called blocking effect. Figure 15.3 shows an example signal which is to be projected onto a number of bases, by segmenting the signal into blocks and projecting each segment onto the desired bases. Alternatively, one can view the process as projecting the whole signal onto several translated bases (one translation per block). Figure 15.3a shows translated versions of the first basis of the DCT, accounting for all the different blocks; Figure 15.3b shows the same diagram for the first basis of a typical short LT. Note that the LT bases overlap spatially. The idea is that the overlap would help decrease, if not eliminate, the blocking effect. Although Figure 15.3 shows just one basis for either the DCT or the LT, there are M of them. An example of the bases for M = 8 is shown in Figure 15.4, for the DCT and for the LOT, which is a particular LT that will be discussed later.
FIGURE 15.3 The example discrete signal on top is to be projected onto a number of bases. (a) Spatially displaced versions of the first DCT basis. (b) Spatially displaced versions of the first basis of a typical short LT.

FIGURE 15.4 Bases for the 8-point DCT (a) and for the LOT (b), with M = 8. The LOT is a particular LT which will be explained later.

The reader may note that not only are the LOT bases longer, but they are also smoother than their DCT counterparts. Figure 15.5a shows an example of an image compressed using the standard JPEG baseline coder [26], where the reader can readily perceive the blocking artifacts at the boundaries of the 8 × 8 pixel blocks. By replacing the DCT with the LOT at the same compression ratio, we obtain the image shown in Figure 15.5b, where blocking is largely reduced. This brief introduction to the motivation behind the development of LTs helps to illustrate the overall problem, without detail on how to apply LTs. In this section we will develop the LT framework.

15.2.1 Orthogonal Lapped Transforms
For lapped transforms [20], the basis vectors can have length L, such that L > M, extending across traditional block boundaries. Thus, the transform matrix is no longer square, and most of the equations valid for block transforms do not apply to an LT. We will concentrate our efforts on orthogonal LTs [20] and consider L = NM, where N is the overlap factor. Note that N, M, and hence L are all integers. As in the case of block transforms, we define the transform matrix as containing the orthonormal basis vectors as its rows. An LT matrix P of dimensions M × L can be divided into square M × M submatrices P_i (i = 0, 1, ..., N − 1) as

    P = [P_0, P_1, ..., P_{N−1}].    (15.16)
FIGURE 15.5 Zoom of an image compressed using JPEG at 0.5 bit per pixel. (a) DCT. (b) LOT.
The orthogonality property does not hold because P is no longer a square matrix and it is replaced by the PR property,20,23 defined by N1l X i¼0
Pi PTiþl ¼
N1l X i¼0
PTiþl Pi ¼ d(l)IM ,
(15:17)
for l = 0, 1, ..., N − 1, where δ(l) is the Kronecker delta, i.e., δ(0) = 1 and δ(l) = 0 for l ≠ 0. As we will see later, Equation 15.17 states the PR conditions and the orthogonality of the transform operating over the entire signal. If we divide the signal into blocks, each of size M, we would have vectors xm and ym as in Equations 15.1 and 15.2. These blocks are not used by LTs in a straightforward manner. The actual vector transformed by the matrix P has L samples and, at block number m, it is composed of the samples of xm plus L − M samples, chosen by picking (L − M)/2 samples on each side of the block xm, as shown in Figure 15.6 for N = 2. However, the number of transform coefficients at each step is M, and, in this respect, there is no change in the way we represent the transform-domain blocks ym. The input vector of length L is denoted vm; it is centered around the block xm and is defined as

v_m^T = \left[ x\left(mM - (N-1)\frac{M}{2}\right), \ldots, x\left(mM + (N+1)\frac{M}{2} - 1\right) \right].    (15.18)

Then, we have

y_m = P v_m.    (15.19)

The inverse transform is not direct as in the case of block transforms, i.e., from the knowledge of ym we do not know the samples in the support region of vm, nor in the support region of xm. We can, however, reconstruct a vector v̂m from ym as

\hat{v}_m = P^T y_m,    (15.20)

where v̂m ≠ vm. To reconstruct the original sequence, it is necessary to accumulate the results of the vectors v̂m, in the sense that a particular sample x(n) is reconstructed from the sum of the contributions it receives from all the v̂m whose corresponding vm included x(n) in its region of support. This additional complication comes from the fact that P is not a square matrix.20 However, the whole analysis–synthesis system (applied to the entire input vector) is orthogonal, assuring the PR property using Equation 15.20. We can also describe the process using a sliding rectangular window applied over the samples of x(n). As an M-sample block ym is computed using vm, ym+1 is computed from vm+1, which is obtained by shifting the window to the right by M samples, as shown in Figure 15.7. As the reader may have noticed, the region of support of all the vectors vm is greater than the region of support of the input vector. Hence, a special treatment has to be given to the

FIGURE 15.6 The signal samples are divided into blocks of M samples. The lapped transform uses neighboring block samples, as in this example for N = 2, i.e., L = 2M, yielding an overlap of (L − M)/2 = M/2 samples on either side of a block.
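The forward and inverse procedure of Equations 15.18 through 15.20 can be sketched numerically. Assumptions in this sketch: NumPy, M = 4, N = 2, and a P assembled from two random orthogonal factors in a butterfly arrangement that happens to satisfy Equation 15.17 (structures of this kind are developed formally in Section 15.4); it is an illustration, not one of the chapter's designed transforms:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                   # block size; L = 2M (N = 2)
# Two random orthogonal factors combined so that Equation 15.17 holds
B0, _ = np.linalg.qr(rng.standard_normal((M, M)))
B1, _ = np.linalg.qr(rng.standard_normal((M, M)))
Eu = np.diag([1.0, 1.0, 0.0, 0.0])      # upper-half selector (M = 4)
El = np.eye(M) - Eu                     # lower-half selector
P0, P1 = B0 @ Eu @ B1, B0 @ El @ B1
P = np.hstack([P0, P1])                 # M x L lapped-transform matrix

# PR/orthogonality conditions of Equation 15.17 (lags l = 0 and l = 1)
assert np.allclose(P0 @ P0.T + P1 @ P1.T, np.eye(M))
assert np.allclose(P0 @ P1.T, 0)

# Forward: a window of L samples slides by M samples (Figure 15.7)
x = rng.standard_normal(20 * M)
Y = [P @ x[m * M:(m + 2) * M] for m in range(len(x) // M - 1)]

# Inverse: accumulate the overlapped contributions P^T y_m (Equation 15.20)
xhat = np.zeros_like(x)
for m, ym in enumerate(Y):
    xhat[m * M:(m + 2) * M] += P.T @ ym

# Interior samples are perfectly reconstructed; borders need special care
assert np.allclose(xhat[M:-M], x[M:-M])
```

Note how the two border blocks are not recovered: they receive only one of the two overlapping contributions, which is exactly the border issue discussed below.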
Lapped Transforms

FIGURE 15.7 Illustration of a lapped transform with N = 2 applied to signal x(n), yielding the transform-domain signal y(n). The input L-tuple vector vm is obtained by a sliding window advancing M samples, generating ym. The same sliding applies to the synthesis side.
transform at the borders. We will discuss this fact later; until then, assume infinite-length signals, or that the length is very large and the borders of the signal are far enough from the region on which we are focusing our attention. If we denote by x the input vector and by y the transform-domain vector, we can be consistent with our notation of transform matrices by defining a matrix H such that y = Hx and x̂ = H^T y. In this case, we have

H = \begin{bmatrix} \ddots & & & & \\ & P & & & \\ & & P & & \\ & & & P & \\ & & & & \ddots \end{bmatrix},    (15.21)

where the displacement of the matrices P obeys the following:

H = \begin{bmatrix} \ddots & \ddots & & & & \\ & P_0 & P_1 & \cdots & P_{N-1} & \\ & & P_0 & P_1 & \cdots & P_{N-1} \\ & & & \ddots & \ddots & \ddots \end{bmatrix},    (15.22)

i.e., each block row is shifted by M columns with respect to the previous one.
H has as many block rows as transform operations over each vector vm. Let the rows of P be denoted by the 1 × L vectors p_i^T (0 ≤ i ≤ M − 1), so that P^T = [p_0, ..., p_{M−1}]. In analogy to the block-transform case, we have

y_i(m) = p_i^T v_m.    (15.23)
The vectors pi are the basis vectors of the lapped transform. They form an orthogonal basis for an M-dimensional subspace (there are only M vectors) of the L-tuples over the real field. As a remark, assuming infinite-length signals, the orthogonality of the basis vectors and the PR property in Equation 15.17 imply that energy is preserved, so that Equation 15.6 is valid.

In order to compute the variance of the subband signals of a block or lapped transform, assume that x(n) is a zero-mean stationary process with a given autocorrelation function. Let its L × L autocorrelation matrix be Rxx. Then, from Equation 15.23,

E[y_i(m)] = p_i^T E[v_m] = p_i^T 0_{L \times 1} = 0,    (15.24)

so that

\sigma_i^2 = E[y_i^2(m)] = p_i^T E[v_m v_m^T] p_i = p_i^T R_{xx} p_i,    (15.25)

i.e., the output variance is easily computed from the input autocorrelation matrix for a given basis P. Assuming that the entire input and output signals are represented by the vectors x and y, respectively, and that the signals have infinite length, then, from Equation 15.21, we have

y = H x    (15.26)

and, if H is orthogonal,

x = H^T y.    (15.27)
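As a worked instance of Equation 15.25 (a sketch using NumPy and the simplest case N = 1, i.e., a block DCT, so that L = M), the subband variances for an AR(1) input with correlation coefficient ρ = 0.95 follow directly from the autocorrelation matrix; the numbers are illustrative, not from the text:

```python
import numpy as np

M, rho = 8, 0.95
n = np.arange(M)
Rxx = rho ** np.abs(np.subtract.outer(n, n))   # AR(1) autocorrelation, unit variance

# Orthonormal DCT-II rows as P (a block transform is an LT with N = 1)
j = np.arange(M)
P = np.array([np.sqrt(2.0 / M) * np.cos((2 * j + 1) * i * np.pi / (2 * M))
              for i in range(M)])
P[0] /= np.sqrt(2.0)

sigma2 = np.array([P[i] @ Rxx @ P[i] for i in range(M)])   # Equation 15.25

# Orthogonality preserves energy: variances sum to trace(Rxx) = M * var(x)
assert np.isclose(sigma2.sum(), np.trace(Rxx))
# For a highly correlated input, the low-pass (DC) basis dominates
assert np.argmax(sigma2) == 0
```

The strong concentration of variance in the first subband is what makes transform coding of correlated sources efficient.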
Note that H is orthogonal if and only if Equation 15.17 is satisfied. Thus, the meaning of Equation 15.17 becomes clear: it forces the transform operating over the entire input and output signals to be orthogonal, and so the LT is called orthogonal. For block transforms, as there is no overlap, it is sufficient to state the orthogonality of A, because H will be a block-diagonal matrix. These formulations for LTs are general; as long as the transform satisfies the PR property described in Equation 15.17, they are independent of the contents of the matrix P. The definition of P with a given N can accommodate any LT whose basis-vector length lies between M and NM. For the case of block transforms, N = 1, i.e., there is no overlap. In fact, block transforms are a special case of lapped transforms whose bases can simply be padded with zeros. Similarly, basis functions can be extended by zero-padding as long as Equation 15.17 is respected.

Causal notation—If one is not concerned with the particular localization of the transform with respect to the origin x(0) of the signal x(n), it is possible to change the notation to a causal representation. In this case, we can represent vm as

v_m^T = \left[ x_{m-N+1}^T, \ldots, x_{m-1}^T, x_m^T \right],    (15.28)

which is identical to the previous representation, except for a shift in the origin to maintain causality. The block ym is found in a similar fashion as

y_m = P v_m = \sum_{i=0}^{N-1} P_{N-1-i} x_{m-i}.    (15.29)
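Equation 15.29 is simply a block convolution; the check below (a sketch with arbitrary, hypothetical submatrices — the identity itself needs no PR property) confirms that transforming the causal window equals the sum of weighted past blocks:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 3                              # L = N*M = 12
Psub = [rng.standard_normal((M, M)) for _ in range(N)]   # P0, P1, P2 (arbitrary)
P = np.hstack(Psub)

x = rng.standard_normal(10 * M)
m = 5                                    # a block index with a full causal history
vm = x[(m - N + 1) * M:(m + 1) * M]      # [x_{m-N+1}; ...; x_m], Equation 15.28
ym = P @ vm

# Equation 15.29: y_m = sum_i P_{N-1-i} x_{m-i}
ym_conv = sum(Psub[N - 1 - i] @ x[(m - i) * M:(m - i + 1) * M] for i in range(N))
assert np.allclose(ym, ym_conv)
```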
Similarly, v̂m can be reconstructed as in Equation 15.20, where the support region for the vector is the same, except that the relation between it and the blocks x̂m is changed accordingly.
15.2.2 Nonorthogonal Lapped Transforms
^vm ¼ QT y m :
(15:31)
We also define another transform matrix as ..
. 6 . 6 Q0 6 H ¼6 4 0
3
..
..
Q1 Q0
0
. QN1 Q1 .. .. . .
QN1
0 7 7 7: 7 5 .. .
(15:32)
The forward and inverse transformation are now: y ¼ HF x, x ¼ HI y:
(15:33)
In the orthonormal case. HF ¼ H and HI ¼ HT. In the general case, it is required that HI ¼ H1 F . With the choice of Q as the 0 inverse LT, then HI ¼ H T , while HF ¼ H. Therefore the perfect reconstruction condition is 0
QTk Pkþm ¼
T
H H ¼ I1 :
y0 (m)
M
y1 (m)
M
As we discussed in Sections 15.1.3 and 15.1.6, the input signal can be decomposed into M polyphase signals xi(m), each sequence having one Mth of the original rate. As there are M subbands yi(m), under some circumstances and since only linear operations are used to transform the signal, there is a MIMO system F(z) that converts the M polyphase signals to the M subband signals. Those transfer matrices are also called PTM (Section 15.1.6). The same is true for the inverse transform (from subbands ^yi(m) to polyphase ^xi(m)). Therefore, we can use the diagram shown in Figure 15.8 to represent the forward and inverse transforms. Note that Figure 15.8 is identical to Figure 15.2 except for the fact that the transforms have memory, i.e., depend not only on the present input vector, but also on past input vectors. One can view the system as a clocked one, in which at every clock, a block is input, transformed, and output. The parallelization and serialization of blocks is performed by the chain of delays, upsamplers and down-samplers as shown in Figure 15.8. If we express the forward and inverse PTM as matrix polynomials:
F(z) ¼
ŷ0 (m) ŷ1 (m)
M M
F(z)
Fi z1 ,
(15:36)
i¼0
xˆ (n) z–1 z–1
G(z)
M
M
M
M
M
N 1 X
M
M
z–1
(15:35)
15.3 LTs as MIMO Systems
z–1
x (n)
k¼0
QTkþm Pk ¼ d(m)IM ,
which establish general necessary and sufficient conditions for the perfect reconstruction of the signal by using P as a forward LT and Q as an inverse LT. Unlike the orthogonal case in Equation 15.17, here both sets are necessary conditions, i.e., a total of 2N 1 matrix equations.
(15:34)
z–1
z–1
N1m X
(15:30)
in the same way as we did for P with the same size. The difference is that Q instead of P is used in the reconstruction process so that Equation 15.20 is replaced by
2
N1m X k¼0
So far, we have discussed orthogonal LTs. In those, a segment of the signal is projected onto the basis functions of P, yielding the coefficients (subband samples). The signal is reconstructed by the overlapped projection of the same bases weighted by the subband samples. In the nonorthogonal case, we define another LT matrix Q as Q ¼ [Q0 , Q1 , . . . , QN1 ],
The reader can check that the above equation can also be expressed in terms of the LTs P and Q as
yM – 1 (m) ŷM – 1 (m)
M
z–1 z–1
FIGURE 15.8 The filter bank represented as a MIMO system is applied to the polyphase components of the signal. The matrices F(z) and G(z) are called polyphase transfer matrices. For a PR system both must be inverses of each other and for paraunitary filter banks they must be paraunitary matrices, i.e., G(z) ¼ F1(z) ¼ FT(z1). For a PR paraunitary causal system of order N, we must choose G(z) ¼ z(N1)FT(z1).
15-9
Lapped Transforms
G(z) ¼
N 1 X
Gi z1 ,
(15:37)
i¼0
then the forward and inverse transforms are given by

y_m = \sum_{i=0}^{N-1} F_i x_{m-i},    (15.38)

\hat{x}_m = \sum_{i=0}^{N-1} G_i \hat{y}_{m-i}.    (15.39)

In the absence of any processing, ŷm = ym and F(z) and G(z) are connected together back-to-back, so that PR is possible if they are inverses of each other. Since the inverse of a causal FIR MIMO system may be noncausal, we can delay the entries of the inverse matrix to make it causal. Since the MIMO system's PTM is assumed to have order N (because N is the overlap factor of the equivalent LT), PR requires that

G(z)F(z) = z^{-N+1} I_M \;\Rightarrow\; G(z) = z^{-N+1} F^{-1}(z).    (15.40)

In this case, x̂m = xm−N+1, i.e., the signal is perfectly reconstructed after a system delay. Because of the delay chains combined with the block delay (the system's order), the reconstructed signal delay is x̂(n) = x(n − NM + 1) = x(n − L + 1). By combining Equations 15.38 through 15.40, we can restate the PR conditions as

\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} G_i F_j z^{-i-j} = z^{-N+1} I_M,    (15.41)

which, by equating the powers of z, can be rewritten as

\sum_{k=0}^{N-1-m} G_k F_{k+m} = \sum_{k=0}^{N-1-m} G_{k+m} F_k = \delta(m) I_M.    (15.42)

The reader should note the striking similarity of the above equation to Equation 15.35. In fact, a simple comparison of the transformation process in space-domain notation (Equation 15.33) against the MIMO-system notation in Equations 15.38 and 15.39 reveals the following relations:

F_k = P_{N-1-k}, \quad G_k = Q_k^T,    (15.43)

for 0 ≤ k < N. In fact, the conditions imposed in Equations 15.34, 15.35, 15.40, and 15.42 are equivalent, and each one of them implies the others. This is a powerful tool in the design of lapped transforms. As an LT, the matrix is nonsquare, but the entries are real numbers. As a MIMO system, the matrix is square, but the entries are polynomials. One form may complement the other, facilitating tasks such as factorization, design, and implementation.

As mentioned earlier, paraunitary (lossless) systems are a class of MIMO systems of particular interest. Let E(z) be a paraunitary PTM, so that E^{-1}(z) = E^T(z^{-1}), and let

F(z) = E(z), \quad G(z) = z^{-(N-1)} E^T(z^{-1}).    (15.44)

As a result, the reader can verify that this implies Pi = Qi and that

\sum_{i=0}^{N-1-l} P_i P_{i+l}^T = \sum_{i=0}^{N-1-l} P_{i+l}^T P_i = \delta(l) I_M,    (15.45)

H H^T = H^T H = I_\infty.    (15.46)

In other words, if the system's PTM is paraunitary, then the LT (H) is orthogonal, and vice versa.

15.4 Factorization of Lapped Transforms

There is an important result for paraunitary PTMs which states that any paraunitary E(z) can be decomposed into a series of orthogonal matrices and delay stages.6,51 In this decomposition there are Nz delay stages and Nz + 1 orthogonal matrices, where Nz is the McMillan degree of E(z) (the degree of the determinant of E(z)). Then,

E(z) = B_0 \prod_{i=1}^{N_z} \left( Y(z) B_i \right),    (15.47)

where Y(z) = diag{z^{-1}, 1, 1, ..., 1} and the Bi are orthogonal matrices.

It is well known that an M × M orthogonal matrix can be expressed as a product of M(M − 1)/2 plane rotations. However, in this case, only B0 is a general orthogonal matrix, while the matrices B1 through BNz have only M − 1 degrees of freedom each.52 This result states that it is possible to implement an orthogonal LT using a sequence of delays and orthogonal matrices. It also defines the total number of degrees of freedom in a lapped transform: by arbitrarily changing the plane rotations composing the orthogonal factors, one spans all possible orthogonal lapped transforms for given values of M and L. It is also possible to prove29 that the (McMillan) degree of E(z) is bounded by Nz ≤ (L − M)/2, with equality for a general structure able to implement all LTs whose bases have length up to L = NM, i.e., E(z) of order N − 1. In fact, Equation 15.47 may be able to implement all lapped transforms (orthogonal or not) whose degree is Nz; for that, it is only required that all the multiplicative factors that compose the PTM be invertible. Let us consider a more particular factorization:
F(z) = \prod_{i=0}^{(N-1)/(K-1)} B_i(z),    (15.48)

where B_i(z) = \sum_{k=0}^{K-1} B_{ik} z^{-k} is a stage of order K − 1. If F(z) is paraunitary, then all Bi(z) must be paraunitary, so that perfect reconstruction is guaranteed if

G(z) = z^{-N+1} F^T(z^{-1}) = \prod_{i=(N-1)/(K-1)}^{0} \left( \sum_{k=0}^{K-1} B_{ik}^T z^{-(K-1-k)} \right).    (15.49)

In case the PTM is not paraunitary, all factors have to be invertible on the unit circle for PR. More strongly put, there have to be factors Ci(z) of order K such that

C_i(z) B_i(z) = z^{-K+1} I_M.    (15.50)

That being the case, the inverse PTM is simply given by

G(z) = \prod_{i=(N-1)/(K-1)}^{0} C_i(z).    (15.51)

With factorization, the design of F(z) is broken down into the design of the Bi(z). Lower-order factors simplify the constraint analysis and facilitate the design of a useful transform, either paraunitary or merely invertible. Even more desirable is to factorize the PTM as

F(z) = B_0 \prod_{i=1}^{N-1} \Lambda(z) B_i,    (15.52)

where the Bi are square matrices and Λ(z) is a paraunitary matrix containing only entries 1 and z^{-1}. In this case, if the PTM is paraunitary,

G(z) = \left( \prod_{i=N-1}^{1} B_i^T \tilde{\Lambda}(z) \right) B_0^T,    (15.53)

where \tilde{\Lambda}(z) = z^{-1} \Lambda(1/z). If the PTM is not paraunitary, then

G(z) = \left( \prod_{i=N-1}^{1} B_i^{-1} \tilde{\Lambda}(z) \right) B_0^{-1},    (15.54)

i.e., the design can be simplified by applying only invertible real matrices Bi. This factorization approach is the basis for most useful LTs. It allows efficient implementation and design. We will discuss some useful LTs later on. For example, for M even, the symmetric delay factorization (SDF) is quite useful. In that case,
FIGURE 15.9 Flow graph for implementing an LT whose F(z) can be factorized using symmetric delays and N stages. Signals x(n) and y(n) are segmented and processed in blocks of M samples, all branches carry M/2 samples, and the blocks Bi are M × M orthogonal or invertible matrices. (a) Forward transform section; (b) inverse transform section.

\Lambda(z) = \begin{bmatrix} z^{-1} I_{M/2} & 0 \\ 0 & I_{M/2} \end{bmatrix}, \quad \tilde{\Lambda}(z) = \begin{bmatrix} I_{M/2} & 0 \\ 0 & z^{-1} I_{M/2} \end{bmatrix}.    (15.55)
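A quick check of Equation 15.55 (a NumPy sketch): the symmetric-delay factor is paraunitary, since Λ(z)Λ̃(z) collapses to the pure delay z^{-1} I_M, which is what makes each stage trivially invertible:

```python
import numpy as np

M = 8
I2 = np.eye(M // 2)
Z = np.zeros((M // 2, M // 2))
# Coefficient matrices: Lambda(z) = Lam[0] + Lam[1] z^-1, same for Ltilde(z)
Lam = [np.block([[Z, Z], [Z, I2]]), np.block([[I2, Z], [Z, Z]])]
Lt  = [np.block([[I2, Z], [Z, Z]]), np.block([[Z, Z], [Z, I2]])]

# Polynomial matrix product Lambda(z) * Ltilde(z), coefficient by coefficient
prod = [sum(Lam[i] @ Lt[k - i] for i in range(k + 1) if i < 2 and k - i < 2)
        for k in range(3)]

# The result is exactly z^-1 * I_M
assert np.allclose(prod[0], 0)
assert np.allclose(prod[1], np.eye(M))
assert np.allclose(prod[2], 0)
```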
The flow graph for implementing an LT which can be parameterized using the SDF is shown in Figure 15.9. If we are given the SDF matrices instead of the basis coefficients, one can easily reconstruct the LT matrix. For this, start with the last stage and recur the structure in Equation 15.52 using Equation 15.55. Let P(i) be the partial reconstruction of P after including up to the ith stage. Then,

P^{(0)} = B_{N-1},    (15.56)

P^{(i)} = B_{N-1-i} \left( \begin{bmatrix} I_{M/2} & 0_{M/2} \\ 0_{M/2} & 0_{M/2} \end{bmatrix} \left[ P^{(i-1)} \;\; 0_M \right] + \begin{bmatrix} 0_{M/2} & 0_{M/2} \\ 0_{M/2} & I_{M/2} \end{bmatrix} \left[ 0_M \;\; P^{(i-1)} \right] \right),    (15.57)

P = P^{(N-1)}.    (15.58)

Similarly, one can find Q from the factors B_i^{-1}.
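The recursion in Equations 15.56 through 15.58 can be exercised directly. In this sketch (NumPy; M = 4, N = 3, with randomly generated orthogonal stages standing in for designed ones), the reconstructed P satisfies the orthogonal-LT conditions of Equation 15.17 for every lag:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 3
B = [np.linalg.qr(rng.standard_normal((M, M)))[0] for _ in range(N)]
Eu = np.diag([1.0] * (M // 2) + [0.0] * (M // 2))   # diag(I_{M/2}, 0)
El = np.eye(M) - Eu                                 # diag(0, I_{M/2})

Pmat = B[N - 1]                          # Equation 15.56
for i in range(1, N):                    # Equation 15.57
    Z = np.zeros((M, M))
    Pmat = B[N - 1 - i] @ (Eu @ np.hstack([Pmat, Z]) + El @ np.hstack([Z, Pmat]))
# Pmat is now P, of size M x NM          # Equation 15.58

Psub = [Pmat[:, k * M:(k + 1) * M] for k in range(N)]
for l in range(N):                       # Equation 15.17 for l = 0, ..., N-1
    acc = sum(Psub[k] @ Psub[k + l].T for k in range(N - l))
    target = np.eye(M) if l == 0 else np.zeros((M, M))
    assert np.allclose(acc, target)
```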
15.5 Hierarchical Connection of LTs: Introduction

So far we have focused on the construction of a single LT resulting in M subband signals. What happens if we cascade LTs by connecting them hierarchically, in such a way that a subband signal is the actual input for another LT? Also, what are the consequences of submitting only part of the subband signals to further stages of LTs? We will try to introduce the answers to those questions. The subject has been intensively studied and a large number of publications are available. Our intent, however, is just to provide a basic introduction, while leaving more detailed analysis to the references. The relation between filter banks and discrete wavelets42,50,52 is well known. Under conditions
that are easily satisfied,50 an infinite cascade of filter banks will generate a set of continuous orthogonal wavelet bases. In general, if only the low-pass subband is connected to another filter bank, for a finite number of stages, we call the resulting filter bank a discrete wavelet transform (DWT).50,52 A free cascading of filter banks, however, is better known as a discrete wavelet packet (DWP).5,28,42,54 As LTs and filter banks are equivalent in most senses, the same relations apply to LTs and wavelets. The system resulting from the hierarchical association of several LTs will be called here a hierarchical lapped transform (HLT).17
15.5.1 Time–Frequency Diagram

The description of the cascaded connection of LTs is better carried out with the aid of simplifying diagrams. The first is the time–frequency (TF) diagram. It is based on the TF plane, which is well known from the fields of spectral and time–frequency analysis.1,2,25 The time–frequency representation of signals is a well-known method (for example, the time-dependent discrete Fourier transform (DFT) and the construction of spectrograms; see Refs. [1,2,25] for details on TF signal representation, and other chapters in this handbook for the DFT). The TF representation is obtained by expressing the signal x(n) with respect to bases that are functions of both frequency and time. For example, the size-r DFT of a sequence extracted from x(n) (from x(n) to x(n + r − 1))25 can be

a(k, n) = \sum_{i=0}^{r-1} x(i + n) \exp\left( \frac{-j 2 \pi k i}{r} \right).    (15.59)
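Equation 15.59 is just the DFT of an r-sample slice of the signal; a one-line sanity check (sketch, NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
r, n = 8, 5                              # window length and position
i, k = np.arange(r), np.arange(r)[:, None]
a = (x[n:n + r] * np.exp(-2j * np.pi * k * i / r)).sum(axis=1)   # Equation 15.59
assert np.allclose(a, np.fft.fft(x[n:n + r]))
```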
Using a sliding window w(m) of length r, which is nonzero only in the interval n ≤ m ≤ n + r − 1 (and which in this case is rectangular), we can rewrite the last equation as

a(k, n) = \sum_{i=-\infty}^{\infty} x(i) w(i) \exp\left( \frac{-j k (i - n) 2 \pi}{r} \right).    (15.60)
For more general bases we may write

a(k, n) = \sum_{i=-\infty}^{\infty} x(i) f(n - i, k),    (15.61)

where f(n, k) represents the bases for the space of the signal, n represents the index where the basis is located in time, and k is the frequency index.

As the signal is assumed to have an infinite number of samples, consider a segment of Nx samples extracted from the signal x(n), which can be extended in any fashion in order to account for the overlap of the window of r samples outside the signal domain. In such a segment we can construct a spectrogram with a resolution of r samples in the frequency axis and Nx samples in the time axis. Assuming maximum frequency resolution, we can have a window with length up to r = Nx. In this case, the diagram for the spectrogram is given in Figure 15.10a. We call such diagrams TF diagrams, because they only indicate the number of samples used in the TF representation of the signal. Assuming an ideal partition of the TF plane (using filters with ideal frequency response and null transition regions), each TF coefficient would represent a distinct region in a TF diagram. Note that in such a representation, the signal is represented by Nx^2 TF coefficients. We are looking for a maximally decimated TF representation, defined as a representation of the signal in which the TF plane diagram is partitioned into Nx regions, i.e., Nx TF coefficients are generated. Also, we require that all Nx samples of x(n) can be reconstructed from the Nx TF coefficients. If we use fewer than Nx samples in the TF plane, we clearly cannot reconstruct all possible combinations of samples in x(n) from the TF coefficients solely using linear relations. Under these assumptions, Figure 15.10b shows the TF diagram for the original signal (resolution in the time axis only) for Nx = 16. Also, for Nx = 16, Figure 15.10c shows a TF diagram with maximum frequency resolution, which could be achieved by transforming the original Nx-sample sequence with an Nx-sample DCT or DFT.

FIGURE 15.10 Examples of rectangular partitions of the time–frequency plane for a signal which has Nx samples. (a) Spectrogram with an Nx-length window, resulting in Nx^2 TF samples; (b) input signal, no processing; (c) a transform such as the DCT or DFT applied to all Nx samples.
15.5.2 Tree-Structured Hierarchical Lapped Transforms

The tree diagram is helpful to describe the hierarchical connection of filter banks. In this diagram we represent an M-band LT by the nodes and branches of an M-ary tree. Figure 15.11a shows an M-band LT, where all the M subband signals have sampling rates M times smaller than that of x(n). Figure 15.11b shows the equivalent notation for the LT in a tree diagram, i.e., a single-stage M-branch tree, which is called here a tree cell. Recalling Figure 15.10, the equivalent TF diagram for an M-band LT is shown in Figure 15.11c, for a 16-sample signal and M = 4. Note that the TF diagram of Figure 15.11c resembles that of Figure 15.10a. This is because for every 4 samples in x(n) there is a corresponding set of 4 transformed coefficients, so the TF representation is maximally decimated. Compared to Figure 15.10b, Figure 15.11c implies an exchange of resolution from the time to the frequency domain, achieved by the LT. As we connect several LTs following the paths of a tree, each new set of branches (each new tree cell) connected to the tree will force the TF diagram to exchange time for frequency resolution. We can achieve a more versatile TF
representation by connecting cells in unbalanced ways. For example, Figure 15.12 shows some examples of HLTs given by their tree diagrams and respective TF diagrams. Figure 15.12a shows the tree diagram for the 3-stage DWT. Note that only the low-pass subband is further processed. Also, as all stages are chosen to be 2-channel LTs, this HLT can be represented by a binary tree. In Figure 15.12b, a more generic hierarchical connection of 2-channel LTs is shown. First the signal is split into low- and high-pass bands; each output branch is then connected to another 2-channel LT. In the third stage only the most low-pass subband signal is connected to another 2-channel LT. Figure 15.12c shows a 2-stage HLT attaining the same TF diagram as Figure 15.12b. Note that the succession of 2-channel LTs was substituted by a single-stage 4-channel LT, i.e., the signal is split into four subbands and, then, one subband is connected to another LT. Figure 15.12d shows the TF diagram corresponding to Figure 15.12a, while Figure 15.12e shows the TF diagram corresponding to Figure 15.12b and c. Note that, as the tree paths are unbalanced, we have irregular partitions of the TF plane. For example, in the DWT, low-frequency TF coefficients have poor time localization and good frequency resolution, while high-frequency ones have poor frequency resolution and better time localization.
FIGURE 15.11 Representation of an M-channel LT as tree nodes and branches. (a) Forward section of an LT, including the blocking device. (b) Equivalent notation for (a) using an M-branch single-stage tree. (c) Equivalent TF diagram for (a) or (b), assuming M = 4 and Nx = 16.
FIGURE 15.12 Tree and TF diagrams. (a) The 3-stage DWT binary-tree diagram, where only the low-pass subband is submitted to further LT stages. (b) A more generic 3-stage tree diagram. (c) A 2-stage tree diagram resulting in the same TF diagram as (b). (d) TF diagram for (a). (e) TF diagram for (b) or (c).
15-13
Lapped Transforms
FIGURE 15.13 Two HLTs and the resulting bases. (a) The 2-channel 16-tap-bases LT, showing the low- and high-frequency bases, f0(n) and f1(n), respectively. (b) Resulting basis functions of a 2-stage HLT based on (a), given by f0(n) through f3(n). Its respective tree diagram is also shown. (c) Resulting HLT, obtained by pruning one high-frequency branch in (b). Note that the two high-frequency basis functions are identical to the high-frequency basis function of (a); instead of having two distinct bases for high frequencies occupying distinct spectral slots, the two bases are now shifted in time. Thus, better time localization is attainable, at the expense of frequency resolution.
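The tree-structured cascade can be made concrete with the simplest 2-channel LT, the Haar block transform (a sketch — the chapter's LTs are longer; this only illustrates the bookkeeping). Three stages applied only to the low-pass branch give a maximally decimated, perfectly reconstructible DWT:

```python
import numpy as np

H2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # 2-channel Haar cell

def split(s):                      # one tree cell: s -> (low-pass, high-pass)
    y = H2 @ s.reshape(-1, 2).T
    return y[0], y[1]

def merge(lo, hi):                 # inverse cell (H2 is orthogonal)
    return (H2.T @ np.vstack([lo, hi])).T.reshape(-1)

rng = np.random.default_rng(4)
x = rng.standard_normal(16)        # Nx = 16
lo, h1 = split(x)                  # stage 1: 8 + 8 coefficients
lo, h2 = split(lo)                 # stage 2: only the low-pass continues
lo, h3 = split(lo)                 # stage 3
coeffs = np.concatenate([lo, h3, h2, h1])
assert coeffs.size == x.size       # maximally decimated: Nx TF coefficients

xr = merge(merge(merge(lo, h3), h2), h1)
assert np.allclose(xr, x)          # perfect reconstruction through the tree
```

The coefficient counts (2, 2, 4, 8) mirror the unequal TF-plane partition of the DWT diagram in Figure 15.12d.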
To better understand how connecting an LT to the tree can achieve the exchange between time and frequency resolution, Figure 15.13 shows the basis functions resulting from two similar tree-structured HLTs. The difference between them is one tree cell, which is either applied or not to a terminal branch of the tree.

15.5.3 Variable-Length LTs

In the tree-structured method to cascade LTs, every time an LT is added to the structure, more subbands are created by further subdividing previous subbands, so that the overall TF diagram of the decomposition is altered. There is a useful alternative to the tree structure in which the number of subbands does not change. We refer to Figure 15.14, where the "blocking" part of the diagram corresponds to the chain of delays and decimators (as in Figure 15.8) that parallelizes the signal into polyphase components. System A(z) of M bases of length N_A M is postprocessed by system B(z) of K bases of length N_B K. Clearly, entries in A(z) have order N_A − 1 and entries in B(z) have order N_B − 1. Without loss of generality, we associate system B(z) with the first K output subbands of A(z). The overall PTM is given by

F(z) = \begin{bmatrix} B(z) & 0 \\ 0 & I_{M-K} \end{bmatrix} A(z),    (15.62)

where F(z) has K bases of order N_A + N_B − 2 and M − K bases of order N_A − 1. As the resulting LT has M channels, the final orders dictate that the first K bases have length (N_A + N_B − 1)M, while the others still have length N_A M. In other words, the effect of cascading A(z) and B(z) is only to modify K bases, so that the length of the modified bases is equal to or larger than the length of the initial bases. An example is shown in Figure 15.15. We start with the bases corresponding to A(z) shown in Figure 15.15a. There are 8 bases of length 16, so that A(z) has order 1. A(z) is postprocessed by B(z), which is a 4 × 4 PTM of order 3 whose

FIGURE 15.14 Cascade of PTMs A(z) of M channels and B(z) of K channels. The total number of subbands does not change; however, K of the A(z) bases are increased in length and order.
FIGURE 15.15 Example of constructing variable-length bases through cascading LTs. (a) The bases corresponding to A(z): an LT with 8 bases of length 16 (order 1). (b) The bases corresponding to B(z): 4 bases of length 16 (order 3). (c) The bases corresponding to F(z): 4 of the 8 bases have order 1, i.e., length 16, while the remaining 4 have order 4, i.e., length 40.
corresponding bases are shown in Figure 15.15b. The resulting LT is shown in Figure 15.15c. There are 4 bases of length 16 and 4 of length 40. The shorter ones are identical to those in Figure 15.15b, while the longer ones have orders which are the sums of the orders of A(z) and B(z), i.e., order 4, and the shape of the longer bases in F(z) is very different from that of the corresponding ones in A(z). Postprocessing a few bases is thus a means to construct a new LT with longer bases from an initial one. In fact, it can be shown that variable-length LTs can be factorized using such a postprocessing stage.47,48
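The order bookkeeping around Equation 15.62 can be verified with a polynomial-matrix product. The sketch below uses NumPy with random stand-in coefficient matrices matching the example in the text (M = 8, K = 4, A(z) of order 1, B(z) of order 3); the matrices are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(5)
M, K, NA, NB = 8, 4, 2, 4               # orders NA-1 = 1 and NB-1 = 3
A = rng.standard_normal((NA, M, M))     # A(z) = A[0] + A[1] z^-1
Bf = np.zeros((NB, M, M))               # diag(B(z), I_{M-K}) embedded in M x M
Bf[:, :K, :K] = rng.standard_normal((NB, K, K))
Bf[0, K:, K:] = np.eye(M - K)

F = np.zeros((NA + NB - 1, M, M))       # F(z) = diag(B(z), I) A(z), Eq. 15.62
for i in range(NB):
    for j in range(NA):
        F[i + j] += Bf[i] @ A[j]

# First K rows reach order NA+NB-2 = 4, i.e., bases of length (NA+NB-1)M = 40
assert np.any(F[-1, :K, :] != 0)
# Remaining M-K rows keep order NA-1 = 1, i.e., bases of length NA*M = 16
assert np.allclose(F[NA:, K:, :], 0)
```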
A general factorization of LTs is shown in Figure 15.16. Assume a variable-length F(z) whose bases are arranged in decreasing length order. Such a PTM can be factorized as

F(z) = \prod_{i=0}^{M-2} \begin{bmatrix} B_i(z) & 0 \\ 0 & I_i \end{bmatrix},    (15.63)
where I0 is understood to be nonexistent and Bi(z) has size (M − i) × (M − i). The factors Bi(z) can have individual orders Ki and can be factorized further into factors Bik(z) for 0 ≤ k < Ki. Hence,

F(z) = \prod_{i=0}^{M-2} \prod_{k=0}^{K_i - 1} \begin{bmatrix} B_{ik}(z) & 0 \\ 0 & I_i \end{bmatrix}.    (15.64)

In a later section we will show a very useful LT which is based on the factorization principles of Equation 15.64.

FIGURE 15.16 General factorization of a variable-length LT.

15.6 Practical Symmetric LTs

We have discussed LTs in a general sense as a function of several parameters such as matrix entries, orthogonal or invertible factors, etc. The design of an LT suitable for a given application is the single most important step in the study of LTs. In order to do that, one may factorize the LT to facilitate optimization techniques. An LT with symmetric bases is commonly used in image processing and compression applications. By symmetric bases we mean that

p_{i,j} = \pm p_{i, L-1-j},    (15.65)

i.e., the bases can be symmetric or antisymmetric. In terms of the PTM, this constraint is given by42,43

F(z) = z^{-(N-1)} S F(z^{-1}) J_M,    (15.66)

where S is a diagonal matrix whose diagonal entries s_ii are ±1, depending on whether the ith basis is symmetric (+1) or antisymmetric (−1). Note that we require that all bases share the same center of symmetry.

15.6.1 The Lapped Orthogonal Transform: LOT

The LOT12–14 was the first useful LT with a well-defined factorization. Malvar developed the fast LOT based on the work by Cassereau3 to provide not only a factorization, but a factorization based on the DCT. The DCT is attractive for many reasons, among them fast implementation and near-optimal performance for block-transform coding.38 Also, since it is a popular transform, it has a reduced cost and is easily available in either software or hardware. The DCT matrix D is defined as having entries

d_{ij} = \sqrt{\frac{2}{M}} \, k_i \cos\left( \frac{(2j+1) i \pi}{2M} \right),    (15.67)

where k_0 = 1/\sqrt{2} and k_i = 1 for 1 ≤ i ≤ M − 1. The LOT as defined by Malvar is orthogonal. Then, according to our notation, P = Q and H^{-1} = H^T. It is also a symmetric LT, with M even. The LT matrix is given by

P_{LOT} = \begin{bmatrix} I_{M/2} & 0 \\ 0 & V_R \end{bmatrix} \frac{1}{2} \begin{bmatrix} D_e - D_o & (D_e - D_o) J_M \\ D_e - D_o & -(D_e - D_o) J_M \end{bmatrix},    (15.68)

where De is the M/2 × M matrix with the even-symmetric basis functions of the DCT and Do is the matrix of the same size with the odd-symmetric ones. In our notation, De also corresponds to the even-numbered rows of D, and Do corresponds to the odd-numbered rows of D. VR is an M/2 × M/2 orthogonal matrix which, according to Refs. [15,21], can be approximated by M/2 − 1 plane rotations as

V_R = \prod_{i=M/2-2}^{0} Q(i, i+1, \theta_i),    (15.69)

where Q is defined in Section 15.1.4. Suggestions of rotation angles which were designed to yield a good transform for image compression are20

M = 4 \rightarrow \theta_0 = 0.1\pi,    (15.70)

M = 8 \rightarrow \{\theta_0, \theta_1, \theta_2\} = \{0.13, 0.16, 0.13\}\pi,    (15.71)

M = 16 \rightarrow \{\theta_0, \ldots, \theta_7\} = \{0.62, 0.53, 0.53, 0.50, 0.44, 0.35, 0.23, 0.11\}\pi.    (15.72)

For M ≥ 16 it is suggested to use

V_R = D_{IV}^T D^T,    (15.73)

where D_IV is the DCT type-IV matrix38 whose entries are

d_{ij}^{IV} = \sqrt{\frac{2}{M}} \cos\left( \frac{(2j+1)(2i+1)\pi}{4M} \right).    (15.74)

A block diagram for the implementation of the LOT is shown in Figure 15.17 for M = 8.

FIGURE 15.17 Implementation of the LOT for M = 8. (a) Forward transform; (b) inverse transform.
15-16
Transforms and Applications Handbook
0
0
0
1
1
2
2 3
2
4
3
6
4
4
5
5
6
6
7
7
DCT
z–1
1/2
z–1
1/2
z–1
1/2
z–1
1/2
√2
0 2 4 6
1/2
1
1
1/2
3 5
3
VR
1/2
5
1/2
7
7
(a) 1/2
0
1/2
2
1/2
4
1/2
6 1 3 5 7
VTR
0
0
2
1
1
4
2
2
3
3
4
4
5
5
5
6
6
7
7
7
6
1/2
z–1
1/2
–1
1
IDCT
√2
1
z
1/2
3
–1
z
1/2
–1
z
0
(b)
FIGURE 15.18 Implementation of the LBT for M ¼ 8. (a) forward transform, (b) inverse transform. Note that there is only one extra multiplication as compared to the LOT.
0 VR
De YDo De YDo
JM=2 (De YDo ) JM=2 (De YDo )
(15:75)
where Ypisffiffiffi the M=2 3 M=2 diagonal matrix given by Y ¼ diag 2, 1, . . . , 1 . Note that it only implies that one of the DCT’s output in multiplied by a constant. The inverse is given by the LT QLBT which is found in an identical manner as in Equation 15.75 pffiffiffi except that the multiplier is inverted, i.e., Y ¼ diag 1= 2, 1, . . . , 1 . The diagram for implementing an LBT for M ¼ 8 is shown in Figure 15.18. Because of the multiplicative factor, the LT is no longer orthogonal. However the factor is very easily inverted. The result
2
1
Basis/filter number
I ¼ M 0
3
3
3 Basis/filter number
PLBT
is a reduction of amplitude of lateral samples of the first bases of the LOT into the new bases of the forward LBT, as it can be seen in Figure 15.19. In Figure 15.19 the reader can note the reduction in the amplitude of the boundary samples of the LBT and an
Basis/filter number
effects. Although there is a very large reduction, blocking is not eliminated. The reason for that lies in the format of the low frequency bases of LOT. In image compression, few bases are used to reconstruct the signal. From Figure 15.4, one can see that the ‘‘tails’’ of the lower frequency bases of the LOT do not exactly decay to zero. For that reason there is some blocking effect (Figure 15.4a) in images compressed using the LOT at lower bit rates. To help resolve this problem, Malvar recently proposed to modify the LOT, creating the lapped biorthogonal transform (LBT).22 (Biorthogonal is a jargon used in the filter banks community to designate transforms and filter banks which are not orthogonal.) In any case, the factorization of the LBT is almost identical to that of the LOT. However:
2
1
0
0
0
4 LOT
7
2
1
0
0
4 LBT
7
0
4 ILBT
7
FIGURE 15.19 Comparison of bases for the LOT (PLOT), inverse LBT (QLBT) and forward LBT (PLBT). The extreme samples of the lower frequency bases of the LOT are larger than those of the inverse LBT. This is an advantage for image compression.
15-17
Lapped Transforms
enlargement of the same samples in the inverse LBT. This simple ‘‘trick’’ improved noticeably the performance of the LOT=LBT for image compression at negligible overhead. Design of the other parameters of the LOT are not changed. It is recommended to use the LBT instead of the LOT whenever a nonorthogonal LT can be used.
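A sketch of the LBT pair and its perfect-reconstruction (biorthogonality) conditions can be checked numerically. For simplicity we assume V_R = I, and Y is applied to the odd-symmetric rows as in Equation 15.75; these simplifications do not affect the conditions being tested.

```python
import numpy as np

def dct_matrix(M):
    i, j = np.meshgrid(np.arange(M), np.arange(M), indexing='ij')
    k = np.where(i == 0, 1/np.sqrt(2), 1.0)
    return k*np.sqrt(2/M)*np.cos((2*j + 1)*i*np.pi/(2*M))

def lbt_pair(M):
    # Forward/inverse LBT (Equation 15.75): one DCT output scaled by sqrt(2)
    # in the forward transform and by 1/sqrt(2) in the inverse; V_R = I assumed.
    D = dct_matrix(M)
    De, Do = D[0::2], D[1::2]
    J = np.fliplr(np.eye(M))
    def build(y0):
        Y = np.diag([y0] + [1.0]*(M//2 - 1))
        A = 0.5*(De - Y @ Do)
        return np.block([[A, A @ J], [A, -A @ J]])
    return build(np.sqrt(2)), build(1/np.sqrt(2))

M = 8
P, Q = lbt_pair(M)
P0, P1, Q0, Q1 = P[:, :M], P[:, M:], Q[:, :M], Q[:, M:]
# Perfect-reconstruction (biorthogonality) conditions for N = 2 lapped transforms:
assert np.allclose(P0 @ Q0.T + P1 @ Q1.T, np.eye(M))
assert np.allclose(P1 @ Q0.T, 0, atol=1e-12)
assert np.allclose(P0 @ Q1.T, 0, atol=1e-12)
```

The three conditions express that the synthesis bases Q are dual to the analysis bases P across overlapping blocks, which is exactly what "biorthogonal" means here.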
15.6.3 The Generalized LOT: GenLOT

The formulation for the LOT [14], which is shown in Equation 15.68, is not the most general there is for this kind of LT. In fact, it can be generalized to become

\[ \mathbf{P} = \begin{bmatrix}\mathbf{U} & \mathbf{0}\\ \mathbf{0} & \mathbf{V}\end{bmatrix}\frac{1}{2}\begin{bmatrix}\mathbf{D}_e-\mathbf{D}_o & (\mathbf{D}_e-\mathbf{D}_o)\mathbf{J}_M\\ \mathbf{D}_e-\mathbf{D}_o & -(\mathbf{D}_e-\mathbf{D}_o)\mathbf{J}_M\end{bmatrix}. \tag{15.76} \]

As long as U and V remain orthogonal matrices, the LT is orthogonal. In terms of the PTM, F(z) can be expressed similarly. Let

\[ \mathbf{W} = \frac{1}{\sqrt{2}}\begin{bmatrix}\mathbf{I}_{M/2} & \mathbf{I}_{M/2}\\ \mathbf{I}_{M/2} & -\mathbf{I}_{M/2}\end{bmatrix}, \tag{15.77} \]

\[ \mathbf{F}_i = \begin{bmatrix}\mathbf{U}_i & \mathbf{0}_{M/2}\\ \mathbf{0}_{M/2} & \mathbf{V}_i\end{bmatrix}, \tag{15.78} \]

\[ \boldsymbol{\Lambda}(z) = \begin{bmatrix}\mathbf{I}_{M/2} & \mathbf{0}_{M/2}\\ \mathbf{0}_{M/2} & z^{-1}\mathbf{I}_{M/2}\end{bmatrix}, \tag{15.79} \]

and let D be the M &#215; M DCT matrix. Then, for the general LOT,

\[ \mathbf{F}(z) = \mathbf{F}_1\mathbf{W}\boldsymbol{\Lambda}(z)\mathbf{W}\mathbf{D} \tag{15.80} \]

where U_1 = U and V_1 = V. Note that the regular LOT is the case where U_1 = I_{M/2} and V_1 = V_R. The implementation diagram for M = 8 is shown in Figure 15.20. From this formulation, along with other results, it was realized [34] that all orthogonal symmetric LTs can be expressed as

\[ \mathbf{F}(z) = \mathbf{K}_{N-1}(z)\mathbf{K}_{N-2}(z)\cdots\mathbf{K}_1(z)\mathbf{K}_0 \tag{15.81} \]

where

\[ \mathbf{K}_i(z) = \mathbf{F}_i\mathbf{W}\boldsymbol{\Lambda}(z)\mathbf{W} \tag{15.82} \]

and where K_0 is any orthogonal symmetric matrix. The inverse is given by

\[ \mathbf{G}(z) = \mathbf{K}_0^{T}\mathbf{K}'_1(z)\mathbf{K}'_2(z)\cdots\mathbf{K}'_{N-1}(z) \tag{15.83} \]

FIGURE 15.20 Implementation of a more general version of the LOT for M = 8. (a) Forward transform; (b) inverse transform.
FIGURE 15.21 Implementation of a GenLOT for even M (M = 8). Forward and inverse transforms are shown along with details of each stage. &#946; = 2^{-(N-1)} accounts for all terms of the form 1/&#8730;2 which make the butterflies (W) orthogonal.
where

\[ \mathbf{K}'_i(z) = z^{-1}\mathbf{W}\boldsymbol{\Lambda}(z^{-1})\mathbf{W}\mathbf{F}_i^{T}. \tag{15.84} \]

From this perspective, the GenLOT is defined as the orthogonal LT as in Equation 15.81 in which K_0 = D, i.e.,

\[ \mathbf{F}(z) = \mathbf{K}_{N-1}(z)\cdots\mathbf{K}_1(z)\mathbf{D}. \tag{15.85} \]

A diagram for implementing a GenLOT for even M is shown in Figure 15.21. In this diagram, the scaling parameters are &#946; = 2^{-(N-1)} and account for the terms 1/&#8730;2 in the definition of W. The degrees of freedom of a GenLOT are the orthogonal matrices U_i and V_i. There are 2(N &#8722; 1) matrices to optimize, each of size M/2 &#215; M/2. From Section 15.1.4 we know that each one can be factorized into M(M &#8722; 2)/8 rotations. Thus, the total number of rotations is (L &#8722; M)(M &#8722; 2)/4, which is less than the initial number of degrees of freedom in a symmetric M &#215; L matrix, LM/2. However, it is still a large number of parameters to design. In general, GenLOTs are designed through nonlinear unconstrained optimization: rotation angles are searched to minimize some cost function. GenLOT examples are given elsewhere [34], and we present two examples for M = 8 in Tables 15.1 and 15.2, which are also plotted in Figure 15.22. In case M is odd, the GenLOT is defined as

\[ \mathbf{F}(z) = \mathbf{K}_{(N-1)/2}(z)\cdots\mathbf{K}_1(z)\mathbf{D} \tag{15.86} \]

where the stages K_i necessarily have order 2, as

\[ \mathbf{K}_i(z) = \mathbf{F}^{o}_{2i}\mathbf{W}_o\boldsymbol{\Lambda}_{o1}(z)\mathbf{W}_o\mathbf{F}^{o}_{2i-1}\mathbf{W}_o\boldsymbol{\Lambda}_{o2}(z)\mathbf{W}_o \tag{15.87} \]

and where

\[ \mathbf{F}^{o}_{2i} = \begin{bmatrix}\mathbf{U}_{2i} & \mathbf{0}\\ \mathbf{0} & \mathbf{V}_{2i}\end{bmatrix}, \tag{15.88} \]

\[ \mathbf{F}^{o}_{2i-1} = \begin{bmatrix}\mathbf{U}_{2i-1} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & 1 & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{V}_{2i-1}\end{bmatrix}, \tag{15.89} \]
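The even-M GenLOT factorization (Equations 15.81, 15.82, and 15.85) can be verified numerically by multiplying the polynomial matrices and checking paraunitarity. The U_i and V_i below are random orthogonal matrices, not optimized designs; this is a structural sketch only.

```python
import numpy as np

def dct_matrix(M):
    i, j = np.meshgrid(np.arange(M), np.arange(M), indexing='ij')
    k = np.where(i == 0, 1/np.sqrt(2), 1.0)
    return k*np.sqrt(2/M)*np.cos((2*j + 1)*i*np.pi/(2*M))

def poly_mul(A, B):
    # Product of polynomial matrices given as coefficient lists [T_0, T_1, ...],
    # where T_d is the coefficient of z^-d.
    C = [np.zeros((A[0].shape[0], B[0].shape[1])) for _ in range(len(A) + len(B) - 1)]
    for a, Ad in enumerate(A):
        for b, Bd in enumerate(B):
            C[a + b] = C[a + b] + Ad @ Bd
    return C

def genlot_ptm(M, N, rng):
    # F(z) = K_{N-1}(z)...K_1(z) D, with K_i(z) = F_i W Lambda(z) W.
    h = M//2
    W = np.block([[np.eye(h), np.eye(h)], [np.eye(h), -np.eye(h)]])/np.sqrt(2)
    L0 = np.diag(np.r_[np.ones(h), np.zeros(h)])   # Lambda(z) = L0 + z^-1 L1
    L1 = np.diag(np.r_[np.zeros(h), np.ones(h)])
    F = [dct_matrix(M)]                            # K_0 = D
    for _ in range(N - 1):
        U, _ = np.linalg.qr(rng.standard_normal((h, h)))
        V, _ = np.linalg.qr(rng.standard_normal((h, h)))
        Fi = np.block([[U, np.zeros((h, h))], [np.zeros((h, h)), V]])
        F = poly_mul([Fi @ W @ L0 @ W, Fi @ W @ L1 @ W], F)
    return F

F = genlot_ptm(8, 3, np.random.default_rng(0))
# Paraunitarity of the PTM: sum_d F_d^T F_{d+k} equals I for k = 0, and 0 otherwise.
for k in range(len(F)):
    S = sum(F[d].T @ F[d + k] for d in range(len(F) - k))
    assert np.allclose(S, np.eye(8) if k == 0 else 0, atol=1e-9)
```

Since each stage K_i(z) is paraunitary and D is orthogonal, the cascade is paraunitary for any choice of orthogonal U_i and V_i, which is what the loop confirms.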
TABLE 15.1 GenLOT Example for N = 4
p3n 0.002945
0.004799
0.004829
0.002915
0.009320
0.000069
0.005744
0.007422
0.001800 0.008083
0.031409
0.035122 0.017066
0.016486 0.031155
0.001423 0.027246
0.030980 0.003473
0.012735
0.053050
0.007163
0.006394 0.011794
0.032408
0.000288
0.005997
0.009604
0.035674
0.018272
0.090207
0.126784 0.261703
0.333730
0.021269
0.054379 0.112040
0.357269 0.383512
0.450401 0.369819
0.370002
0.140761
0.011121
0.043266 0.131531
0.109817 0.123484
0.000813
p5n 0.000109
p6n
0.000483
0.000390
0.001691
0.001454 0.000951
0.004317
0.000232
0.009462
0.001945
0.001342
0.000531
0.010146
0.005262
0.003206
0.007504
0.005715 0.003043
0.006029 0.005418
0.018132
0.000459 0.047646
0.011562
0.046926
0.072761
0.224818 0.032818
0.224522 0.035078
0.126901 0.418643
0.129558 0.419231
0.083325
0.379088
0.478277
0.318691
0.384874
0.316307
p7n
0.000211
0.010439
0.358887
0.292453 0.097014
p4n
0.001326
0.001554 0.000789
0.002826 0.000028
0.003163 0.001661 0.005605
0.000165
0.010084
0.130875
0.089467
0.028641
0.378415
0.339368
0.216652
0.433937
0.146036
0.427668
0.013004
0.136666 0.107446 0.344379 0.045807
0.048534
0.022488 0.147727
0.439129 0.371449
0.043066
0.025219 0.109817 0.317070 0.392556
Note: The even bases are symmetric while the odd ones are antisymmetric, so that only their first half is shown.
TABLE 15.2 GenLOT Example for N = 6
p5n
p6n
p7n
0.000137 0.000222
0.000225 0.000228
0.000234 0.000388
0.000058 0.000471
0.000196 0.000364
0.000253 0.000163
0.000187
0.002439
0.001211
0.000689
0.000029
0.000535
0.002360
0.000017 0.000283
0.000536
0.000853
0.000078 0.000220
0.000056
0.000633
0.000502
0.001855
0.000515
0.006838
0.000834
0.000977
0.001687
0.001429
0.001440
0.001148
0.000698
0.000383
0.000109
0.000056
0.000886
0.001658
0.001778
0.002809
0.003177
0.001429
0.006584
0.001056
0.001893
0.002206
0.005386
0.005220
0.000561
0.000751
0.001165
0.009734 0.005196
0.002899 0.013699
0.018592 0.008359
0.004888 0.021094
0.006600 0.020406
0.018889 0.009059
0.000261 0.012368
0.006713 0.005263
p0n
0.001021
0.000137
p1n
p2n
0.000243
p3n
0.000572
0.001676
0.001344
0.027993
0.028046 0.013289
0.013063
0.002655
0.011238
0.002219
0.033554 0.003214
0.062616 0.019082
0.058899 0.018132
0.031538 0.004219
0.034379
0.055004
0.048827
0.007109
0.020287 0.028214
0.002130
0.006775 0.018286
0.029911 0.004282
0.106776 0.107167
0.133701
0.147804
0.058553
0.026759
0.231898
0.330343
0.318102
0.430439
0.381693
0.368335
0.417648
0.144412
0.002484
0.059401
0.023539
0.026048
0.024169
0.024407
0.056646
0.070612 0.197524
0.052703
0.088796 0.049701
0.051123 0.086462 0.051188
0.048429
0.066383 0.193302
0.144748
0.241758
0.123524
0.026563
0.239193
0.143627
0.366426
0.377886
0.376982
0.365965
0.061832
0.393949
0.312564
0.409688
0.025910
0.125263
0.174852
0.174803
0.314092
0.318912
0.319987
0.411214
0.395534
0.060887
0.000157
0.001673
0.001643
0.002180
0.001404 0.006828 0.009849
0.049853
0.097006 0.104953 0.020370
0.147501
0.000823
0.000792
0.000402
0.006836
0.004060 0.019040 0.021475
0.031732
0.031014 0.006324 0.048085
0.130959
0.332858
0.228016
0.369244
0.384842
0.431705 0.145256
0.317994 0.419936
Note: The even bases are symmetric while the odd ones are antisymmetric, so that only their first half is shown.
In Equation 15.87, the odd-channel butterfly and delay matrices are

\[ \mathbf{W}_o = \begin{bmatrix}\mathbf{I}_{(M-1)/2} & \mathbf{0} & \mathbf{I}_{(M-1)/2}\\ \mathbf{0}_{1\times(M-1)/2} & 1 & \mathbf{0}_{1\times(M-1)/2}\\ \mathbf{I}_{(M-1)/2} & \mathbf{0} & -\mathbf{I}_{(M-1)/2}\end{bmatrix}, \tag{15.90} \]

\[ \boldsymbol{\Lambda}_{o1}(z) = \mathrm{diag}\{\underbrace{1,\ldots,1}_{(M+1)/2\ 1\text{'s}},\ \underbrace{z^{-1},\ldots,z^{-1}}_{(M-1)/2\ z^{-1}\text{'s}}\}, \tag{15.91} \]

\[ \boldsymbol{\Lambda}_{o2}(z) = \mathrm{diag}\{\underbrace{1,\ldots,1}_{(M-1)/2\ 1\text{'s}},\ \underbrace{z^{-1},\ldots,z^{-1}}_{(M+1)/2\ z^{-1}\text{'s}}\}. \tag{15.92} \]

Although it may seem that the formulation of the odd-channel case is more complex than the one for the even-M case, the implementation is very similar in complexity, as shown in Figure 15.23. The main difference is that two stages have to be connected together. The inverse transform is accomplished in the same way as for the even-channel case:

\[ \mathbf{G}(z) = \mathbf{D}^{T}\mathbf{K}'_1(z)\mathbf{K}'_2(z)\cdots\mathbf{K}'_{(N-1)/2}(z) \tag{15.93} \]

FIGURE 15.22 Example of optimized GenLOT bases for M = 8 and for (a) N = 4, and (b) N = 6.
FIGURE 15.23 Implementation of a GenLOT for M odd. Forward and inverse transforms are shown along with details of each stage, and &#946; = 2^{-(N-1)}.
where the inverse factors are

\[ \mathbf{K}'_i(z) = z^{-2}\mathbf{K}_i^{T}(z^{-1}), \tag{15.94} \]

whose structure is evident from Figure 15.23.
15.6.4 The General Factorization: GLBT

The general factorization for all symmetric LTs [49] can be viewed either as an extension of GenLOTs or as a generalization of the LBT. It can be shown that, for M even, all LTs obeying Equation 15.65 or Equation 15.66 can be factorized as in Equation 15.81, where the K_i(z) factors are given in Equation 15.82, with the matrices U_i and V_i (which compose F_i) only required to be general invertible matrices. From Section 15.1.4, each factor can be decomposed as

\[ \mathbf{U}_i = \mathbf{U}_{iB}\mathbf{U}_{id}\mathbf{U}_{iA}, \qquad \mathbf{V}_i = \mathbf{V}_{iB}\mathbf{V}_{id}\mathbf{V}_{iA}, \tag{15.95} \]

where U_{iA}, U_{iB}, V_{iA}, and V_{iB} are general M/2 &#215; M/2 orthogonal matrices, while U_{id} and V_{id} are diagonal matrices with nonzero diagonal entries. The first factor K_0 is given by

\[ \mathbf{K}_0 = \mathbf{F}_0\mathbf{W}, \tag{15.96} \]

where F_0 is given as in Equation 15.78, and the factors U_0 and V_0 are only required to be invertible. The general factorization can be viewed as a generalized LBT (GLBT), and its implementation flow graph for M even is shown in Figure 15.24. The inverse GLBT is similar to the GenLOT case, where

\[ \mathbf{K}'_i(z) = z^{-1}\mathbf{W}\boldsymbol{\Lambda}(z)\mathbf{W}\mathbf{F}_i^{-1} \tag{15.97} \]
FIGURE 15.24 Implementation of the factors of the general factorization (GLBT) for M even. (a) Factor of the forward transform, K_i(z); (b) factor of the inverse transform, K'_i(z).
and

\[ \mathbf{F}_i^{-1} = \begin{bmatrix}\mathbf{U}_i^{-1} & \mathbf{0}_{M/2}\\ \mathbf{0}_{M/2} & \mathbf{V}_i^{-1}\end{bmatrix} = \begin{bmatrix}\mathbf{U}_{iA}^{T}\mathbf{U}_{id}^{-1}\mathbf{U}_{iB}^{T} & \mathbf{0}_{M/2}\\ \mathbf{0}_{M/2} & \mathbf{V}_{iA}^{T}\mathbf{V}_{id}^{-1}\mathbf{V}_{iB}^{T}\end{bmatrix}, \tag{15.98} \]

while

\[ \mathbf{K}_0^{-1} = \mathbf{W}\mathbf{F}_0^{-1}. \tag{15.99} \]

The diagram for the implementation of the inverse stages of the GLBT is shown in Figure 15.24. Examples of bases for the GLBT of particular interest to image compression are given in Tables 15.3 and 15.4. For the odd case, the GLBT can be similarly defined. It follows the GenLOT factorization

\[ \mathbf{F}(z) = \mathbf{K}_{(N-1)/2}(z)\cdots\mathbf{K}_1(z)\mathbf{K}_0 \tag{15.100} \]
where the stages K_i are as in Equation 15.87, with the following differences: (1) all factors U_i and V_i are only required to be invertible; (2) the center element of F_{2i-1} is a nonzero constant u_0 and not 1. Again, K_0 is a symmetric invertible matrix. Forward and inverse stages for the odd-channel case are illustrated in Figure 15.25.

TABLE 15.3 Forward GLBT Bases Example for M = 8 and N = 2
0.21192
0.13962
0.18197
0.19662
0.03387
0.09540
0.23114 0.35832
0.34101 0.46362
0.09360
(15:99)
0.10868
0.46619
0.42906
0.53813
0.22604
p2n 0.00011 0.16037
p3n 0.09426 0.05334
0.17973
0.25598
0.06347
0.01332
0.36293 0.35056
0.39498 0.16415
0.42944
0.36070
0.00731
0.42662
(15:100)
p4n 0.03860 0.09233 0.24358
0.05613
0.42912 0.13163
0.45465 0.32595
p5n 0.03493 0.12468
0.12311
0.10218
0.36084 0.31280
0.07434 0.43222
p6n
p7n
0.04997
0.01956
0.09240
0.03134
0.01067
0.16423
0.01991 0.11627
0.35631 0.47723
0.22434 0.31907
0.40585
0.38322
0.15246
0.39834
Note: The even bases are symmetric while the odd ones are antisymmetric, so that only their first half is shown.
TABLE 15.4 Inverse GLBT Bases Example for M = 8 and N = 2
0.01786
p2n
0.01441
p3n
p4n
p5n
p6n
p7n
0.06132
0.01952
0.05243
0.05341
0.04608
0.08332
0.01681
0.16037
0.12407
0.04888
0.16065
0.06575
0.12462
0.24092
0.20555
0.22148
0.34661
0.12304
0.03560
0.13556
0.02194
0.16256
0.21793
0.09042
0.36530
0.39610
0.27739 0.32711
0.40526 0.33120
0.36617
0.13190
0.05692 0.10665
0.38107
0.35547
0.44324
0.30000
0.32843 0.03939
0.02181
0.12298 0.38507
0.12623 0.38248
0.13397
0.35462 0.08361
0.28191
0.00021
0.02108
0.08432
0.12747
0.30170
0.23278
0.13232
0.41414
0.41231 0.35155
0.45455
0.34133 0.40906
Note: The even bases are symmetric while the odd ones are antisymmetric, so that only their first half is shown.
FIGURE 15.25 Implementation of the factors of the general factorization (GLBT) for M odd. (a) Factor of the forward transform, K_i(z); (b) factor of the inverse transform, K'_i(z).
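A minimal check of the factored invertible matrices of Equation 15.95, and of the closed-form inverse used inside Equation 15.98, with random (hypothetical) factors:

```python
import numpy as np

rng = np.random.default_rng(1)
h = 4  # M/2 for M = 8

def rand_orth(n):
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

# U_i = U_iB U_id U_iA (Equation 15.95): orthogonal-diagonal-orthogonal
# factors make U_i a general invertible matrix.
UiB, UiA = rand_orth(h), rand_orth(h)
Uid = np.diag(rng.uniform(0.5, 2.0, size=h))     # nonzero diagonal entries
Ui = UiB @ Uid @ UiA

# Closed-form inverse as used in Equation 15.98: U_i^{-1} = U_iA^T U_id^{-1} U_iB^T
Ui_inv = UiA.T @ np.diag(1/np.diag(Uid)) @ UiB.T
assert np.allclose(Ui_inv @ Ui, np.eye(h))
assert np.allclose(Ui_inv, np.linalg.inv(Ui))
```

This is why the GLBT inverse costs essentially the same as the forward transform: the inverse of each factor is obtained by transposing the orthogonal parts and inverting a diagonal.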
15.7 The Fast Lapped Transform: FLT

The motivation behind the fast lapped transform (FLT) is to design an LT with the minimum possible complexity compared to a block transform and yet provide some advantage over a block transform. For that, we use the principles of Section 15.5.3 and define the FLT as the LT whose PTM is given by

\[ \mathbf{F}(z) = \begin{bmatrix}\mathbf{E}(z) & \mathbf{0}\\ \mathbf{0} & \mathbf{I}_{M-K}\end{bmatrix}\mathbf{D}_M \tag{15.101} \]

where E(z) is a K &#215; K PTM and D_M is the M &#215; M DCT matrix. The PTM for the inverse LT is given by

\[ \mathbf{G}(z) = \mathbf{D}_M^{T}\begin{bmatrix}\mathbf{E}'(z) & \mathbf{0}\\ \mathbf{0} & \mathbf{I}_{M-K}\end{bmatrix} \tag{15.102} \]

where E'(z) is the inverse of E(z). The design of E(z) can be done in two basic ways. First, one can use direct optimization. Second, one can design E(z) as

\[ \mathbf{E}(z) = \mathbf{C}(z)\mathbf{D}_K^{T} \tag{15.103} \]

where C(z) is a known LT and D_K is the K &#215; K DCT matrix, i.e., we perform an inverse DCT followed by a known LT. For example, if C(z) is the LOT, GenLOT, or LBT of K channels, the first stage (D_K) cancels the inverse DCT. Examples of FLT are
given in Figure 15.26. In that example, for the first case, where K = 2, direct optimization is recommended, for which the values {&#945;00, &#945;01, &#945;10, &#945;11, &#945;20, &#945;21} = {1.9965, 1.3193, 0.4388, 0.7136, 0.9385, 1.2878} yield an excellent FLT for image compression. In Figure 15.26b, the case K = 4 can be optimized by optimizing two invertible matrices. In the case where we use the method in Equation 15.103 and the LBT as the K-channel postprocessing stage, we can see that the LBT's DCT stage is cancelled, yielding a very simple flow graph. The respective bases for forward and inverse transforms for the two FLTs (K = 2 with the given parameters, and K = 4 using the LBT) are shown in Figure 15.27. Both bases are excellent for image coding, virtually eliminating ringing, despite the minimal complexity added to the DCT (which by itself can be implemented in a very fast manner) [38].

FIGURE 15.26 Implementation of examples of the FLT. (a) Case K = 2; (b) case K = 4; (c) case K = 4 where C(z) is the LBT, thus having its DCT stage cancelled.
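Equations 15.101 and 15.102 can be checked in the special (memoryless) case where E(z) is a constant invertible matrix, an assumption made here for brevity; the matrix E below is hypothetical.

```python
import numpy as np

def dct_matrix(M):
    i, j = np.meshgrid(np.arange(M), np.arange(M), indexing='ij')
    k = np.where(i == 0, 1/np.sqrt(2), 1.0)
    return k*np.sqrt(2/M)*np.cos((2*j + 1)*i*np.pi/(2*M))

M, K = 8, 2
DM = dct_matrix(M)
E = np.array([[2.0, 1.0], [0.5, 1.5]])            # hypothetical invertible K x K factor
blk = lambda A: np.block([[A, np.zeros((K, M - K))],
                          [np.zeros((M - K, K)), np.eye(M - K)]])
F = blk(E) @ DM                                   # forward PTM (Eq. 15.101)
G = DM.T @ blk(np.linalg.inv(E))                  # inverse PTM (Eq. 15.102)
assert np.allclose(G @ F, np.eye(M))              # perfect reconstruction
```

The M &#8722; K untouched DCT outputs pass straight through, which is the whole point of the FLT: only K channels carry the extra (lapped) processing.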
15.8 Modulated LTs

Cosine-modulated LTs [50] use a low-pass prototype to modulate a cosine sequence. By a proper choice of the phase of the cosine sequence, Malvar developed the modulated lapped transform (MLT) [15], which led to the so-called ELT [18-21]. The ELT allows several overlapping factors, generating a family of orthogonal cosine-modulated LTs. Both designations (MLT and ELT) are frequently applied to this class of filter banks. Other cosine-modulation approaches have also been developed, and the most significant difference among them is the low-pass prototype choice and the phase of the cosine sequence [11,15,19,20,24,27,40,44,45,49]. In the ELTs, the filters' length L is basically an even multiple of the block size M, as L = NM = 2KM. Thus, K is referred to as the overlap factor of the ELT. The MLT-ELT class is defined by

\[ p_{k,n} = h(n)\cos\left[\left(k+\frac{1}{2}\right)\left(n-\frac{L-1}{2}\right)\frac{\pi}{M} + (N+1)\frac{\pi}{2}\right] \tag{15.104} \]

for k = 0, 1, ..., M &#8722; 1 and n = 0, 1, ..., L &#8722; 1. h(n) is a symmetric window modulating the cosine sequence and the impulse response of a low-pass prototype (with cutoff frequency at &#960;/2M), which is translated in frequency to M different frequency slots in order to construct the LT. A very useful ELT is the one with K = 2, which will be designated as ELT-2, while ELTs with other values of K will be referred to as ELT-K.
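A minimal sketch of an MLT basis (K = 1, L = 2M), using the common MDCT-style phase and the half-sine window. The &#8730;(2/M) normalization and the exact phase convention are assumptions here; they match Equation 15.104 up to per-band sign changes, which do not affect orthogonality.

```python
import numpy as np

def mlt_bases(M):
    # MLT (K = 1): cosine modulation of a half-sine low-pass prototype window.
    # Phase written in the common MDCT form; sqrt(2/M) normalization assumed.
    n = np.arange(2*M)
    h = np.sin((n + 0.5)*np.pi/(2*M))          # modulating window h(n)
    P = np.zeros((M, 2*M))
    for k in range(M):
        P[k] = np.sqrt(2/M)*h*np.cos((np.pi/M)*(k + 0.5)*(n + (M + 1)/2))
    return P

M = 8
P = mlt_bases(M)
P0, P1 = P[:, :M], P[:, M:]
assert np.allclose(P @ P.T, np.eye(M))          # orthonormal bases
assert np.allclose(P0 @ P1.T, 0, atol=1e-12)    # overlap (PR) condition
```

The same two conditions verified for the LOT hold here, so the modulated bases also form an orthogonal lapped transform.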
FIGURE 15.27 Bases of the FLT in the case M = 8 for forward and inverse LTs. (a)-(d) Forward transform bases for the case K = 2, inverse transform bases for the case K = 2, forward transform bases for the case K = 4, and inverse transform bases for the case K = 4, respectively. The remaining bases, not shown, are the regular bases of the DCT and have length 8.
FIGURE 15.28 Flow graph for the direct (a) and inverse (b) ELT. Each branch carries M/2 samples.
The ELTs have as their major plus a fast implementation algorithm. The algorithm is based on a factorization of the PTM into a series of plane-rotation stages and delays, plus a DCT type IV [38] orthogonal transform in the last stage, which also has fast implementation algorithms. The lattice-style algorithm is shown in Figure 15.28 for an ELT with generic overlap factor K. In Figure 15.28, each branch carries M/2 samples, and both analysis (forward transform) and synthesis (inverse transform) flow graphs are shown. The plane-rotation stages are of the form indicated in Figure 15.29 and contain M/2 orthogonal butterflies to implement the M/2 plane rotations. The stages &#920;_i contain the plane rotations and are defined by

\[ \boldsymbol{\Theta}_i = \begin{bmatrix}-\mathbf{C}_i & \mathbf{S}_i\mathbf{J}_{M/2}\\ \mathbf{J}_{M/2}\mathbf{S}_i & \mathbf{J}_{M/2}\mathbf{C}_i\mathbf{J}_{M/2}\end{bmatrix}, \qquad \begin{aligned} \mathbf{C}_i &= \mathrm{diag}\{\cos(\theta_{0,i}), \cos(\theta_{1,i}), \ldots, \cos(\theta_{M/2-1,i})\},\\ \mathbf{S}_i &= \mathrm{diag}\{\sin(\theta_{0,i}), \sin(\theta_{1,i}), \ldots, \sin(\theta_{M/2-1,i})\}. \end{aligned} \tag{15.105} \]

The &#952;_{i,j} are rotation angles. These angles are the free parameters in the design of an ELT because they define the modulating window h(n). Note that there are KM rotation angles.

FIGURE 15.29 Implementation of a plane-rotation stage, showing the displacement of the M/2 butterflies.
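Each rotation stage of Equation 15.105 is orthogonal by construction, since the diagonal cosine/sine matrices commute and C&#178; + S&#178; = I. A quick check (the angles below are arbitrary):

```python
import numpy as np

def elt_stage(thetas):
    # Theta_i of Equation 15.105: M/2 plane rotations arranged as butterflies.
    h = len(thetas)
    C = np.diag(np.cos(thetas))
    S = np.diag(np.sin(thetas))
    J = np.fliplr(np.eye(h))
    return np.block([[-C, S @ J], [J @ S, J @ C @ J]])

T = elt_stage(np.array([0.1, 0.7, 1.2, 0.4]))    # M = 8, so M/2 = 4 angles
assert np.allclose(T @ T.T, np.eye(8))           # each stage is orthogonal
```

Because every stage (and the final DCT-IV) is orthogonal, the whole ELT lattice is orthogonal for any choice of angles, and the inverse is obtained by running the transposed lattice.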
The window h(n) has 2KM samples; however, it is symmetric, which brings the total number of degrees of freedom down to KM. In general, there is no simple relation among the rotation angles and the window. Optimized angles for several values of M and K are presented in extensive tables in Ref. [21]. In the ELT-2 case, however, one can use a parameterized design [19-21]. In this design, we have

\[ \theta_{k,0} = -\frac{\pi}{2} + \mu_{M/2+k} \tag{15.106} \]

\[ \theta_{k,1} = -\frac{\pi}{2} + \mu_{M/2-1-k} \tag{15.107} \]

where

\[ \mu_i = \frac{\pi}{2}\left[(1-\gamma)\frac{2i+1}{2M} + \gamma\right] \tag{15.108} \]

and &#947; is a control parameter, for 0 &#8804; k &#8804; (M/2) &#8722; 1. In general, although suboptimal for individual applications, &#947; = 0.5 provides a balanced trade-off of stopband attenuation and transition range for the equivalent filters (which are the bases of the LT viewed as a filter bank). The equivalent modulating window h(n) is related to the angles as

\[ \begin{aligned} h(n) &= \cos(\theta_{n,0})\cos(\theta_{n,1}), & h(M-1-n) &= \cos(\theta_{n,0})\sin(\theta_{n,1}),\\ h(M+n) &= \sin(\theta_{n,0})\cos(\theta_{n,1}), & h(2M-1-n) &= -\sin(\theta_{n,0})\sin(\theta_{n,1}) \end{aligned} \tag{15.109} \]

for 0 &#8804; n &#8804; (M/2) &#8722; 1. In the case K = 1, some example angles are

\[ \theta_{k,0} = \frac{\pi}{2} - \left(k+\frac{1}{2}\right)\frac{\pi}{2M} \tag{15.110} \]

for 0 &#8804; k &#8804; (M/2) &#8722; 1. The corresponding modulating window is

\[ h(n) = h(2M-1-n) = \cos(\theta_{n,0}), \qquad h(M+n) = h(M-1-n) = \sin(\theta_{n,0}) \tag{15.111} \]

for 0 &#8804; n &#8804; (M/2) &#8722; 1. The bases for the ELT using the suggested angles are shown in Figure 15.30. In this figure, the 8-channel examples are for N = 2 (K = 1) and for N = 4 (K = 2).

FIGURE 15.30 Example of ELT bases for the given angles design method for M = 8. (a) K = 1, N = 2; (b) K = 2, N = 4.

15.9 Finite-Length Signals

Since the LT matrices are not square, in order to obtain n transformed subband samples one has to evaluate more than n samples of the input signal. For the same reason, n subband samples would generate more than n signal samples after inverse transformation. All the analysis so far has assumed infinite-length signals. Processing finite-length signals, however, is not trivial. Without proper consideration there will be a distortion in the reconstruction of the boundary samples of the signal. There are basically three methods to process finite-length signals with LTs:

- Signal extension and windowing of subband coefficients
- Same as above, but using different extensions for different bases
- Using time-varying bases for the boundary regions

We will discuss the first method only. The second is applicable to just a few transforms and filter banks and can be covered elsewhere. The subject of time-varying LTs is very rich and provides solutions to several problems, including the processing of boundary samples. We will not cover it in this chapter; the reader is referred to Refs. [7,28,29,32,41] and their references for further information on time-varying LTs.
15.9.1 Overall Transform

Here we assume the model of extension and windowing described in Figure 15.31 [33]. The input vector x is assumed to have N_x = N_B M samples and is divided into three sections, x^T = [x_l^T, x_c^T, x_r^T], where x_l and x_r contain the first and last &#955; samples of x, respectively. Following the signal extension model, x is extended into x&#771; as

\[ \tilde{\mathbf{x}}^{T} = \left[\mathbf{x}_{e,l}^{T},\,\mathbf{x}^{T},\,\mathbf{x}_{e,r}^{T}\right] = \left[(\mathbf{R}_l\mathbf{x}_l)^{T},\,\mathbf{x}_l^{T},\,\mathbf{x}_c^{T},\,\mathbf{x}_r^{T},\,(\mathbf{R}_r\mathbf{x}_r)^{T}\right]. \tag{15.112} \]

The extended sections are found by a linear transform of the boundary samples of x, as shown in Figure 15.32, i.e.,

\[ \mathbf{x}_{e,l} = \mathbf{R}_l\mathbf{x}_l, \qquad \mathbf{x}_{e,r} = \mathbf{R}_r\mathbf{x}_r \tag{15.113} \]
where R_l and R_r are arbitrary &#955; &#215; &#955; "extension" matrices. For example, R_l = R_r = J_&#955; yields a symmetric extension. The transformation from the N_x + 2&#955; samples in x&#771; to the vector y with N_B M = N_x subband samples is achieved through the block-banded matrix P&#771;, i.e.,

\[ \tilde{\mathbf{P}} = \begin{bmatrix} \mathbf{P}_0 & \mathbf{P}_1 & \cdots & \mathbf{P}_{N-1} & & \\ & \mathbf{P}_0 & \mathbf{P}_1 & \cdots & \mathbf{P}_{N-1} & \\ & & \ddots & \ddots & & \ddots \end{bmatrix}. \tag{15.114} \]

Note that there are N_B block rows and that &#955; = (N &#8722; 1)M/2. The difference between P&#771; and H defined in Equation 15.21 is that H is assumed to be infinite, while P&#771; is assumed to have only N_B block rows. We can use the same notation for Q&#771; with respect to Q_i, so that, again, the difference between Q&#771; and H' defined in Equation 15.32 is that H' is assumed to be infinite and Q&#771; is assumed to have only N_B block rows. The forward and inverse transform systems are given by

\[ \mathbf{y} = \tilde{\mathbf{P}}\tilde{\mathbf{x}}, \qquad \hat{\tilde{\mathbf{x}}} = \tilde{\mathbf{Q}}^{T}\bar{\mathbf{y}}. \tag{15.115} \]

In the absence of quantization or processing of the subband signals, y&#772; = y and

\[ \hat{\tilde{\mathbf{x}}} = \tilde{\mathbf{Q}}^{T}\mathbf{y} = \tilde{\mathbf{Q}}^{T}\tilde{\mathbf{P}}\tilde{\mathbf{x}} = \tilde{\mathbf{T}}\tilde{\mathbf{x}} \tag{15.116} \]

where x&#771;&#770; is the reconstructed vector in the absence of quantization and T&#771; = Q&#771;^T P&#771; is the transform matrix between x&#771; and x&#771;&#770;. Note that T&#771; has size (N_x + 2&#955;) &#215; (N_x + 2&#955;) because it maps two extended signals. From Equation 15.35 we can easily show that the transform matrix is

\[ \tilde{\mathbf{T}} = \tilde{\mathbf{Q}}^{T}\tilde{\mathbf{P}} = \begin{bmatrix}\mathbf{T}_L & & \\ & \mathbf{I}_{N_x-2\lambda} & \\ & & \mathbf{T}_R\end{bmatrix} \tag{15.117} \]

where T_L and T_R are some 2&#955; &#215; 2&#955; matrices. Thus, distortion is incurred only in the &#955; boundary samples on each side of x (2&#955; samples on each side of x&#771;). In another view of the process, regardless of the extension method, there is a transform T such that

\[ \mathbf{y} = \mathbf{T}\mathbf{x}, \qquad \mathbf{x} = \mathbf{T}^{-1}\mathbf{y} \tag{15.118} \]

without resorting to signal extension. The key is to find T and to invert it. If T is made orthogonal, one can easily invert it by applying transposition. This is the concept behind the use of time-varying LTs for correcting boundary distortions. For example, the LT can be changed near the borders to ensure T's orthogonality [32]. We will not use time-varying LTs here, but rather use extended signals and transform matrices.

FIGURE 15.31 Extension and windowing in transformation of a finite-length signal using LTs. (a) Overall forward transform section. (b) Overall inverse transform section.

FIGURE 15.32 Illustration of signal extension of vector x into vector x&#771;. In each border, &#955; = (L &#8722; M)/2 samples outside the initial signal boundaries are found by linear relations applied to the &#955; boundary samples of x, i.e., x_{e,l} = R_l x_l and x_{e,r} = R_r x_r. As only &#955; samples are affected across the signal boundaries, it is not necessary to use an infinite-length extension. Also, x_l and x_r contain the samples possibly affected by the border distortions after the inverse transformation.
15.9.2 Recovering Distorted Samples

Let

\[ [\boldsymbol{\Phi}_l \mid \boldsymbol{\Phi}_r] = \begin{bmatrix} \mathbf{P}_0 & \mathbf{P}_1 & \cdots & \mathbf{P}_{N-1} & & & \mathbf{0}\\ & \mathbf{P}_0 & \mathbf{P}_1 & \cdots & \mathbf{P}_{N-1} & & \\ & & \ddots & & & \ddots & \\ \mathbf{0} & & & \mathbf{P}_0 & \mathbf{P}_1 & \cdots & \mathbf{P}_{N-1} \end{bmatrix}, \tag{15.119} \]

\[ [\boldsymbol{\Psi}_l \mid \boldsymbol{\Psi}_r] = \begin{bmatrix} \mathbf{Q}_0 & \mathbf{Q}_1 & \cdots & \mathbf{Q}_{N-1} & & & \mathbf{0}\\ & \mathbf{Q}_0 & \mathbf{Q}_1 & \cdots & \mathbf{Q}_{N-1} & & \\ & & \ddots & & & \ddots & \\ \mathbf{0} & & & \mathbf{Q}_0 & \mathbf{Q}_1 & \cdots & \mathbf{Q}_{N-1} \end{bmatrix}, \tag{15.120} \]

i.e., the N &#8722; 1 boundary block rows of P&#771; and Q&#771;, with &#934;_l (&#936;_l) and &#934;_r (&#936;_r) denoting their first and last 2&#955; columns, respectively. Hence,

\[ \mathbf{T}_l = \boldsymbol{\Psi}_l^{T}\boldsymbol{\Phi}_l, \qquad \mathbf{T}_r = \boldsymbol{\Psi}_r^{T}\boldsymbol{\Phi}_r. \tag{15.121} \]
If we divide x&#771;&#770; in the same manner as x&#771;,

\[ \hat{\tilde{\mathbf{x}}}^{T} = \left[\hat{\mathbf{x}}_{e,l}^{T},\,\hat{\mathbf{x}}_l^{T},\,\hat{\mathbf{x}}_c^{T},\,\hat{\mathbf{x}}_r^{T},\,\hat{\mathbf{x}}_{e,r}^{T}\right], \tag{15.122} \]

then

\[ \begin{bmatrix}\hat{\mathbf{x}}_{e,l}\\ \hat{\mathbf{x}}_l\end{bmatrix} = \mathbf{T}_l\begin{bmatrix}\mathbf{x}_{e,l}\\ \mathbf{x}_l\end{bmatrix} = \mathbf{T}_l\begin{bmatrix}\mathbf{R}_l\mathbf{x}_l\\ \mathbf{x}_l\end{bmatrix} = \mathbf{T}_l\begin{bmatrix}\mathbf{R}_l\\ \mathbf{I}_\lambda\end{bmatrix}\mathbf{x}_l = \mathbf{G}_l\mathbf{x}_l \tag{15.123} \]

where

\[ \mathbf{G}_l = \mathbf{T}_l\begin{bmatrix}\mathbf{R}_l\\ \mathbf{I}_\lambda\end{bmatrix} \tag{15.124} \]

is a 2&#955; &#215; &#955; matrix. If and only if G_l has rank &#955;, then x_l can be recovered through the pseudo-inverse of G_l as

\[ \mathbf{x}_l = \mathbf{G}_l^{+}\begin{bmatrix}\hat{\mathbf{x}}_{e,l}\\ \hat{\mathbf{x}}_l\end{bmatrix} = \left(\mathbf{G}_l^{T}\mathbf{G}_l\right)^{-1}\mathbf{G}_l^{T}\begin{bmatrix}\hat{\mathbf{x}}_{e,l}\\ \hat{\mathbf{x}}_l\end{bmatrix}. \tag{15.125} \]
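The whole chain of Equations 15.112 through 15.125 can be sketched end to end for a toy orthogonal LT. We use the M = 2 MLT (an assumption for compactness; its phase and window follow the common MDCT convention) with symmetric extension R_l = R_r = J_&#955;.

```python
import numpy as np

# Toy orthogonal LT: M = 2 MLT, L = 4, so lambda = (L - M)/2 = 1.
M, L = 2, 4
lam = (L - M)//2
n = np.arange(L)
h = np.sin((n + 0.5)*np.pi/(2*M))
P = np.array([h*np.cos((np.pi/M)*(k + 0.5)*(n + (M + 1)/2)) for k in range(M)])

NB = 5
Nx = NB*M
rng = np.random.default_rng(7)
x = rng.standard_normal(Nx)
xt = np.concatenate([x[:lam][::-1], x, x[-lam:][::-1]])   # symmetric extension (Eq. 15.112)

Pt = np.zeros((Nx, Nx + 2*lam))        # block-banded P~ (Eq. 15.114), NB block rows
for b in range(NB):
    Pt[b*M:(b+1)*M, b*M:b*M + L] = P
y = Pt @ xt                             # forward transform
xh = Pt.T @ y                           # Q~^T y = T~ x~, with T~ = diag(T_L, I, T_R)

Tt = Pt.T @ Pt
Tl, Tr = Tt[:2*lam, :2*lam], Tt[-2*lam:, -2*lam:]
Gl = Tl @ np.vstack([np.eye(lam)[::-1], np.eye(lam)])     # G_l = T_l [R_l ; I] (Eq. 15.124)
Gr = Tr @ np.vstack([np.eye(lam), np.eye(lam)[::-1]])     # G_r = T_r [I ; R_r] (Eq. 15.127)
assert np.linalg.matrix_rank(Gl) == lam and np.linalg.matrix_rank(Gr) == lam

xl = np.linalg.pinv(Gl) @ xh[:2*lam]                      # Eq. 15.125
xr = np.linalg.pinv(Gr) @ xh[-2*lam:]                     # Eq. 15.126
x_rec = np.concatenate([xl, xh[2*lam:Nx], xr])
assert np.allclose(x_rec, x)                              # boundary distortion removed
```

The interior samples come out exact because T&#771; is the identity away from the borders; only the &#955; samples at each end need the pseudo-inverse step.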
For the other ("right") border, the identical result is trivially found to be

\[ \mathbf{x}_r = \mathbf{G}_r^{+}\begin{bmatrix}\hat{\mathbf{x}}_r\\ \hat{\mathbf{x}}_{e,r}\end{bmatrix} = \left(\mathbf{G}_r^{T}\mathbf{G}_r\right)^{-1}\mathbf{G}_r^{T}\begin{bmatrix}\hat{\mathbf{x}}_r\\ \hat{\mathbf{x}}_{e,r}\end{bmatrix}, \tag{15.126} \]

where

\[ \mathbf{G}_r = \mathbf{T}_r\begin{bmatrix}\mathbf{I}_\lambda\\ \mathbf{R}_r\end{bmatrix} \tag{15.127} \]

is also assumed to have rank &#955;. It is necessary that &#934;_l, &#934;_r, &#936;_l, and &#936;_r have rank &#955;, but this is not sufficient, since rank can be reduced by the matrix products. It is also possible to express the conditions in more detail, but without any useful analytical solution, so that numerical rank checking is the best choice. Summarizing, the steps to achieve PR for given R_l and R_r are:

- Select P and Q and identify their submatrices P_i and Q_i
- Find &#934;_l, &#934;_r, &#936;_l, &#936;_r from Equations 15.119 and 15.120
- Find T_l and T_r from Equation 15.121
- Find G_l and G_r from Equations 15.124 and 15.127
- Test the rank of G_l and G_r
- If the ranks are &#955;, obtain G_l^+ and G_r^+ and reconstruct x_l and x_r

This is an extension of Ref. [33] to nonorthogonal LTs, with the particular concern of testing whether the pseudo-inverses exist. The model in Figure 15.31 and the proposed method are not applicable to some LTs, notably those whose bases have different lengths and different symmetries. Examples are: (1) some two-channel nonorthogonal LTs with odd length; (2) the FLT; (3) other composite systems, i.e., cascaded systems such as those used in Refs. [35,36]. For the first example, it is trivial to use symmetric extensions, but with different symmetries for different bases [44]. The second example has the same reasoning; however, an FLT can be efficiently implemented by applying the method just described to each of the stages of the transformation (i.e., first apply the DCT and then use the method above for the second part). The reason for the problems is that different filters would require different extensions during the forward transformation process; therefore, the model in Figure 15.31 is not applicable. The above method works very well for M-channel filter banks whose filters have the same length. The phase of the filters and the extensions can be arbitrary, and the method has been shown to be consistent for all uniform-length filter banks of interest tested.

15.9.3 Symmetric Extensions

In case the LT is symmetric and obeys Equations 15.65 and 15.66, there is a much simpler method to implement the LT over a finite-length signal of N_B blocks of M samples. In the forward transform section we perform symmetric extension as described, applied to the last &#955; = (L &#8722; M)/2 samples on each border, resulting in a signal x&#771;(n) with N_x + 2&#955; = N_x + L &#8722; M samples, as

\[ x(\lambda-1), \ldots, x(0), x(0), \ldots, x(N_x-1), x(N_x-1), \ldots, x(N_x-\lambda). \tag{15.128} \]

The signal is processed by the PTM F(z) as a clocked system, without concern for border locations. The internal states of the system F(z) can be initialized in any way. So, the N_B + N &#8722; 1 blocks of the extended signal are processed, yielding an equal number of blocks of subband samples. Discard the first N &#8722; 1 output blocks, obtaining N_B transform-domain blocks corresponding to N_B samples of each subband. The general strategy to achieve perfect reconstruction, without a great increase in complexity or change in the implementation algorithm, is to extend the samples in the subbands, generating more blocks to be inverse transformed, in such a way that, after inverse transformation (assuming no processing of the subband signals), the recovered signal is identical to the original at the borders. The extension of the kth subband signal depends on the symmetry of the kth basis. Let p_{kn} = &#969;_k p_{k,L-1-n} for 0 &#8804; k &#8804; M &#8722; 1 and 0 &#8804; n &#8804; L &#8722; 1, i.e., &#969;_k = 1 if p_{kn} is symmetric and &#969;_k = &#8722;1 if p_{kn} is antisymmetric. Before inverse transformation, for each subband signal y_k(m) of N_B samples, fold the borders of y_k(m) (as in the analysis section) in order to find a signal y&#771;_k(m), and invert the sign of the extended samples if p_{kn} is
antisymmetric. For s samples reflected around the borders, the kth subband signal will then have samples

\[ \omega_k\hat{y}_k(s-1), \ldots, \omega_k\hat{y}_k(0), \hat{y}_k(0), \ldots, \hat{y}_k(N_B-1), \omega_k\hat{y}_k(N_B-1), \ldots, \omega_k\hat{y}_k(N_B-s). \]

The inverse transformation can be performed as follows:

- N odd: Reflect s = (N &#8722; 1)/2 samples around each border, thus getting N_B + N &#8722; 1 blocks with subband samples to be processed. To obtain the inverse transformed samples x&#770;(n), initialize the internal states in any way, run the system G(z) over the N_B + N &#8722; 1 blocks, and discard the first N &#8722; 1 reconstructed blocks, retaining the N_x = N_B M remaining samples.
- N even: Reflect s = N/2 samples around each border, thus getting N_B + N blocks to be processed. To obtain the inverse transformed samples x&#770;(n), initialize the internal states in any way and run the system G(z) over the N_B + N blocks. Discard the first N &#8722; 1 reconstructed blocks and the first M/2 samples of the Nth block. Include in the reconstructed signal the last M/2 samples of the Nth block and the subsequent (N_B &#8722; 1)M samples. In the last block, include the first M/2 samples in the reconstructed signal and discard the rest.
This approach assures the perfect reconstruction property, and orthogonality of the overall transformation if the LT is orthogonal [32]. The price paid is to run the algorithm over an extra $N$ or $N - 1$ blocks. As it is common to have $N_B \gg N$, the computational increase is only marginal.
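The border folding described above can be sketched in a few lines. This is our own minimal illustration, assuming a subband signal `y` of $N_B$ samples, `s` reflected samples, and a symmetry flag `v` equal to $+1$ (symmetric basis) or $-1$ (antisymmetric basis):

```python
import numpy as np

def extend_subband(y, s, v):
    """Reflect s samples around each border of subband signal y,
    multiplying the reflected samples by v (+1 for a symmetric basis
    function, -1 for an antisymmetric one)."""
    left = v * y[s - 1::-1]        # v*y[s-1], ..., v*y[0]
    right = v * y[:-s - 1:-1]      # v*y[NB-1], ..., v*y[NB-s]
    return np.concatenate([left, y, right])

y = np.arange(1.0, 6.0)            # NB = 5 samples
ext = extend_subband(y, 2, -1)     # samples: -2, -1, 1, 2, 3, 4, 5, -5, -4
print(ext)
```

The output matches the sample pattern listed above, with the sign inversion applied to the reflected samples when the basis is antisymmetric.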
15.10 Conclusions

We hope this material serves as an introduction to lapped transforms and gives some insight into their nature. This chapter should be viewed as a first step; the references, and the references therein, provide a more detailed treatment of the subject. It was shown how lapped transforms can replace block transforms, allowing overlap of the basis functions. It was also shown that the analysis of MIMO systems, mainly their factorizations, is an invaluable tool for the design of useful lapped transforms. That was the case in the design of transforms such as the LOT, LBT, GenLOT, GLBT, FLT, MLT, and ELT. These practical LTs were presented not only through their general factorizations, but also by plotting bases and describing in detail how to construct at least one good design example, either by printing basis entries or by providing all parameters necessary to construct the bases. Even if the particular examples are not ideal for the application the reader has in mind, they provide an experimental starting point from which one can build, by exploring the references and performing customized optimization. It is worth pointing out that we intentionally avoided viewing the transforms as filter banks, so the bases were not discussed as impulse responses of filters and their frequency responses were not analyzed. Nevertheless, the framework is the same, and so is the analysis of MIMO systems. Therefore, this chapter should give insight into a vast field based on the study of multirate systems. Good luck in the field of lapped transforms!
References

1. Boashash, B., Ed. 1992. Time-Frequency Signal Analysis, New York: John Wiley & Sons.
2. Boudreaux-Bartels, G. F. Mixed Time-Frequency Signal Transformations, Boca Raton, FL: CRC Press.
3. Cassereau, P. 1985. A new class of optimal unitary transforms for image processing, Master's thesis, Mass. Inst. Tech., Cambridge, MA.
4. Clarke, R. J. 1985. Transform Coding of Images, Orlando, FL: Academic Press.
5. Coifman, R., Meyer, Y., Quake, S., and Wickerhauser, V. 1991. Signal processing and compression using wavelet packets, Technical Report, Department of Mathematics, Yale University, New Haven, CT.
6. Doganata, Z., Vaidyanathan, P. P., and Nguyen, T. Q. 1988. General synthesis procedures for FIR lossless transfer matrices, for perfect reconstruction multirate filter banks applications, IEEE Trans. Acoust. Speech Signal Process., 36(10), 1561–1574.
7. Herley, C., Kovacevic, J., Ramchandran, K., and Vetterli, M. 1993. Tilings of the time-frequency plane: Construction of arbitrary orthogonal bases and fast tiling algorithms, IEEE Trans. Signal Process., 41, 3341–3359.
8. Hohn, F. E. 1964. Elementary Matrix Algebra, 2nd edn., New York: Macmillan.
9. Jayant, N. S. and Noll, P. 1984. Digital Coding of Waveforms, Englewood Cliffs, NJ: Prentice-Hall.
10. Jozawa, H. and Watanabe, H. September 4–6, 1991. Intrafield/interfield adaptive lapped transform for compatible HDTV coding, Proceedings of the 4th International Workshop on HDTV and Beyond, Torino, Italy.
11. Koilpillai, R. D. and Vaidyanathan, P. P. 1992. Cosine modulated FIR filter banks satisfying perfect reconstruction, IEEE Trans. Signal Process., 40, 770–783.
12. Malvar, H. S. 1986. Optimal pre- and post-filtering in noisy sampled-data systems, PhD dissertation, Mass. Inst. Tech., Cambridge, MA.
13. Malvar, H. S. 1988. Reduction of blocking effects in image coding with a lapped orthogonal transform, Proceedings of the International Conference on Acoustics, Speech, Signal Processing, Glasgow, Scotland, pp. 781–784.
14. Malvar, H. S. and Staelin, D. H. 1989. The LOT: Transform coding without blocking effects, IEEE Trans. Acoust. Speech Signal Process., ASSP-37, 553–559.
15. Malvar, H. S. 1990. Lapped transforms for efficient transform/subband coding, IEEE Trans. Acoust. Speech Signal Process., ASSP-38, 969–978.
16. Malvar, H. S. 1988. The LOT: A link between block transform coding and multirate filter banks, Proceedings of the International Symposium on Circuits and Systems, Espoo, Finland, pp. 835–838.
17. Malvar, H. S. 1990. Efficient signal coding with hierarchical lapped transforms, Proceedings of the International Conference on Acoustics, Speech, Signal Processing, Albuquerque, NM, pp. 761–764.
18. Malvar, H. S. 1990. Modulated QMF filter banks with perfect reconstruction, Electron. Lett., 26, 906–907.
19. Malvar, H. S. 1991. Extended lapped transform: Fast algorithms and applications, Proceedings of the International Conference on Acoustics, Speech, Signal Processing, Toronto, Canada, pp. 1797–1800.
20. Malvar, H. S. 1992. Signal Processing with Lapped Transforms, Norwood, MA: Artech House.
21. Malvar, H. S. 1992. Extended lapped transforms: Properties, applications and fast algorithms, IEEE Trans. Signal Process., 40, 2703–2714.
22. Malvar, H. S. 1998. Biorthogonal and nonuniform lapped transforms for transform coding with reduced blocking and ringing artifacts, IEEE Trans. Signal Process., 46, 1043–1053.
23. Nayebi, K., Barnwell, T. P., and Smith, M. J. 1992. The time domain filter bank analysis: A new design theory, IEEE Trans. Signal Process., 40, 1412–1429.
24. Nguyen, T. Q. and Koilpillai, R. D. 1996. Theory and design of arbitrary-length cosine-modulated filter banks and wavelets satisfying perfect reconstruction, IEEE Trans. Signal Process., 44, 473–483.
25. Oppenheim, A. V. and Schafer, R. W. 1989. Discrete-Time Signal Processing, Englewood Cliffs, NJ: Prentice-Hall.
26. Pennebaker, W. B. and Mitchell, J. L. 1993. JPEG: Still Image Compression Standard, New York: Van Nostrand Reinhold.
27. Princen, J. P. and Bradley, A. B. 1986. Analysis/synthesis filter bank design based on time domain aliasing cancellation, IEEE Trans. Acoust. Speech Signal Process., ASSP-34, 1153–1161.
28. de Queiroz, R. L. and Rao, K. R. 1993.
Time-varying lapped transforms and wavelet packets, IEEE Trans. Signal Process., 41, 3293–3305.
29. de Queiroz, R. L. 1996. On lapped transforms, PhD dissertation, The University of Texas at Arlington, Arlington, TX.
30. de Queiroz, R. L. and Rao, K. R. 1995. The extended lapped transform for image coding, IEEE Trans. Image Process., 4, 828–832.
31. de Queiroz, R. L., Nguyen, T. Q., and Rao, K. R. January 1994. The generalized lapped orthogonal transforms, Electron. Lett., 30, 107–107.
32. de Queiroz, R. L. and Rao, K. R. 1995. On orthogonal transforms of images using paraunitary filter banks, J. Vis. Commun. Image Rep., 6(2), 142–153.
33. de Queiroz, R. L. and Rao, K. R. 1995. On reconstruction methods for processing finite-length signals with paraunitary filter banks, IEEE Trans. Signal Process., 43, 2407–2410.
34. de Queiroz, R. L., Nguyen, T. Q., and Rao, K. R. 1996. The GenLOT: Generalized linear-phase lapped orthogonal transform, IEEE Trans. Signal Process., 44, 497–507.
35. de Queiroz, R. L. 1997. Uniform filter banks with nonuniform bands: Post-processing design, Proc. Intl. Conf. Acoust. Speech Signal Process., Seattle, WA, Vol. III, pp. 1341–1344.
36. de Queiroz, R. L. and Eschbach, R. 1997. Fast downscaled inverses for images compressed with M-channel lapped transforms, IEEE Trans. Image Process., 6, 794–807.
37. Rabbani, M. and Jones, P. W. 1991. Digital Image Compression Techniques, Bellingham, WA: SPIE Optical Engineering Press.
38. Rao, K. R. and Yip, P. 1990. Discrete Cosine Transform: Algorithms, Advantages, Applications, San Diego, CA: Academic Press.
39. Rao, K. R. (ed.), 1985. Discrete Transforms and Their Applications, New York: Van Nostrand Reinhold.
40. Schiller, H. 1988. Overlapping block transform for image coding preserving equal number of samples and coefficients, Proc. SPIE, Vis. Commun. Image Process., 1001, 834–839.
41. Sodagar, I., Nayebi, K., and Barnwell, T. P. 1993. A class of time-varying wavelet transforms, Proc. Intl. Conf. Acoust. Speech Signal Process., Minneapolis, MN, Vol. III, pp. 201–204.
42. Soman, A. K. and Vaidyanathan, P. P. 1992. Paraunitary filter banks and wavelet packets, Proc. Intl. Conf. Acoust. Speech Signal Process., IV, 397–400.
43. Soman, A. K., Vaidyanathan, P. P., and Nguyen, T. Q. 1993. Linear-phase paraunitary filter banks: Theory, factorizations and applications, IEEE Trans. Signal Process., 41, 3480–3496.
44. Strang, G. and Nguyen, T. 1996. Wavelets and Filter Banks, Wellesley, MA: Wellesley-Cambridge.
45. Temerinac, M. and Edler, B. 1992. A unified approach to lapped orthogonal transforms, IEEE Trans. Image Process., 1, 111–116.
46. Tran, T. D., de Queiroz, R. L., and Nguyen, T. Q. 2000. Linear phase perfect reconstruction filter bank: Lattice structure, design, and application in image coding, IEEE Trans. Signal Process., 48, 133–147. Available at http://image.unb.br/queiroz/papers/fullpaper_glbt.pdf
47. Tran, T. D. 1998. Linear phase perfect reconstruction filter banks: Theory, structure, design, and application in image compression, PhD thesis, University of Wisconsin, Madison, WI.
48. Tran, T. D., de Queiroz, R. L., and Nguyen, T. Q. 1998. The variable-length generalized lapped biorthogonal transform, Proc. Intl. Conf. Image Process., Chicago, IL, Vol. III, pp. 697–701.
49. Tran, T. D., de Queiroz, R. L., and Nguyen, T. Q. 1998. The generalized lapped biorthogonal transform, Proc. Intl. Conf. Acoust. Speech Signal Process., Seattle, WA, Vol. III, pp. 1441–1444.
50. Vaidyanathan, P. P. 1993. Multirate Systems and Filter Banks, Englewood Cliffs, NJ: Prentice-Hall.
51. Vaidyanathan, P. P. and Hoang, P. 1988. Lattice structures for optimal design and robust implementation of 2-channel PR-QMF banks, IEEE Trans. Acoust. Speech Signal Process., ASSP-36, 81–94.
52. Vetterli, M. and Herley, C. 1992. Wavelets and filter banks: Theory and design, IEEE Trans. Signal Process., 40, 2207–2232.
53. Vetterli, M. and Kovacevic, J. 1995. Wavelets and Subband Coding, Englewood Cliffs, NJ: Prentice-Hall.
54. Wickerhauser, M. V. 1992. Acoustical signal compression using wavelet packets, in Wavelets: A Tutorial in Theory and Applications, ed. C. K. Chui, San Diego, CA: Academic Press.
55. Young, R. W. and Kingsbury, N. G. 1993. Frequency domain estimation using a complex lapped transform, IEEE Trans. Image Process., 2, 2–17.
16 Zak Transform

Mark E. Oxley, Air Force Institute of Technology
Bruce W. Suter, Air Force Research Laboratory

16.1 Introduction ................................................................................................................................. 16-1
  Brief History of Zak Transform . Organization of the Chapter
16.2 Preliminary Background ........................................................................................................... 16-1
  Remarks about Notation . Linear Spaces of Functions
16.3 Continuous Zak Transform ..................................................................................................... 16-2
  Definitions . General Properties . Algebraic Properties . Topological Properties . Geometric Properties . Inverse Transform . Relationships to Other Transformations . Extensions of the Continuous Zak Transform
16.4 Discrete Zak Transform .......................................................................................................... 16-14
  Definitions . Properties . Inverse Transform . Extensions of the Discrete Zak Transform
16.5 Finite Zak Transform .............................................................................................................. 16-15
  Definition . Properties . Inverse Transform . Extensions of the Finite Zak Transform
16.6 Applications ............................................................................................................................... 16-17
  Mathematics . Physics . Engineering . Suter–Stevens Fast Fourier Transform Algorithm
16.7 Summary .................................................................................................................................... 16-19
References .............................................................................................................................................. 16-20
16.1 Introduction

The Zak transform inputs a signal and outputs a mixed time–frequency representation of the signal. The signal may be real-valued or complex-valued, defined on a continuum (e.g., the real numbers) or a discrete set (e.g., the integers or a finite subset of the integers). This chapter investigates the various properties and attributes of the Zak transform.
16.1.1 Brief History of Zak Transform

The Zak transform was discovered by several people independently in different fields and, consequently, was called by different names. It was called the ``Gel'fand mapping'' in the Russian literature because I. M. Gel'fand introduced it in his work [15] on eigenfunction expansions associated with Schrödinger operators with periodic potentials. In 1967, the transform was rediscovered by a solid-state physicist, Zak [39–41], who called it the ``k-q representation.'' Zak introduced this representation to construct a quantum mechanical representation for the motion of a Bloch electron in the presence of a magnetic or electric field. In [29], W. Schempp mentions that some properties of another version of the Zak transform, called the ``Weil–Brezin mapping'' (see [10,38]), were known to the mathematician Carl F. Gauss. Since Zak was, indeed, the first to systematically study this transform in a more general setting and recognize its usefulness, the general consent among experts in the field is to call it the Zak transform.
16.1.2 Organization of the Chapter

This chapter begins with some preliminary background material in Section 16.2 that will be used throughout the chapter. The continuous Zak transform, defined on continuum signals, is investigated in Section 16.3. The discrete Zak transform, defined on discrete signals, is investigated in Section 16.4. Although finite signals are a special case of discrete signals, the finite Zak transform warrants its own study in Section 16.5. Applications follow in Section 16.6 with important references.
16.2 Preliminary Background

16.2.1 Remarks about Notation

We will be as consistent as possible with the notation and use the following fonts for certain objects.

Special constants—e, Euler's number; $\pi$, pi; $i = \sqrt{-1}$, the imaginary complex number.
Set of scalars—Roman, uppercase, blackboard bold, e.g., $\mathbb{R}$, real numbers; $\mathbb{C}$, complex numbers.
Vector—Roman, lowercase, boldfaced, e.g., t, m.
Matrix—Roman, uppercase, Sans Serif, e.g., M. Sets—Roman, uppercase, italics, e.g., C, S. Function—Roman, lowercase, italics, e.g., f , g; except for the Dirac delta function, d. Variable—Roman or Greek, lowercase, italics, e.g., t, v. Linear space of functions—Roman, uppercase, calligraphic, e.g., L. Transformation—Roman, uppercase, boldfaced, e.g., Z, F.
16.2.2 Linear Spaces of Functions

We will use various linear spaces of functions and use the following notation.

Function: The complex-valued function $f$ defined on the nonempty set $S$ will be denoted by $f: S \to \mathbb{C}$. Here the domain of definition of the function $f$ is $D(f) = S$. The image of $f$ is the range set $R(f) \subseteq \mathbb{C}$. The symbol $f$ denotes the function (name). The symbol $f(t)$ denotes the unique output of the function $f$ given the input $t$. Hence, $f(t)$ is ``not'' the function, but a complex number.

Set of functions: Define the set of complex-valued functions whose domain of definition is the set $S$ by
$$\mathcal{F}(S, \mathbb{C}) = \{ f : S \to \mathbb{C} : D(f) = S \}.$$

Linear spaces of functions: We will use several linear spaces of functions.

$\mathcal{L}^p(\mathbb{R}, \mathbb{C})$—Define the linear space of Lebesgue measurable functions that are Lebesgue $p$-integrable over the set $\mathbb{R}$ by
$$\mathcal{L}^p(\mathbb{R}, \mathbb{C}) = \left\{ f \in \mathcal{L}(\mathbb{R}, \mathbb{C}) : \int_{\mathbb{R}} |f(t)|^p \, d\mu < \infty \right\},$$
where $d\mu$ represents integration with respect to the Lebesgue measure. The possible values of $p$ are $p \in [1, \infty)$.

$\mathcal{L}^p_{loc}(\mathbb{R}, \mathbb{C})$—Define the linear space of Lebesgue measurable functions that are locally Lebesgue $p$-integrable over the set $\mathbb{R}$ by
$$\mathcal{L}^p_{loc}(\mathbb{R}, \mathbb{C}) = \left\{ f \in \mathcal{L}(\mathbb{R}, \mathbb{C}) : \int_{K} |f(t)|^p \, d\mu < \infty \text{ for every compact subset } K \subset \mathbb{R} \right\}.$$
The importance of this linear space of functions lies in the fact that the behavior at infinity is unimportant.

$\ell^p(S, \mathbb{F})$—Given a field of scalars $\mathbb{F}$ (e.g., $\mathbb{R}$ or $\mathbb{C}$) and $p \in [1, \infty)$, define the linear space of $\mathbb{F}$-valued, $p$-summable sequences defined on the countable set $S$ to be
$$\ell^p(S, \mathbb{F}) = \left\{ f \in \mathcal{F}(S, \mathbb{F}) : \sum_{n \in S} |f(n)|^p < \infty \right\}.$$
The order of summation is arbitrary since the series is absolutely summable.

$\mathcal{C}(\mathbb{R}, \mathbb{C})$—Define the linear space of continuous functions that are defined over the set $\mathbb{R}$ by
$$\mathcal{C}(\mathbb{R}, \mathbb{C}) = \{ f : \mathbb{R} \to \mathbb{C} : f \text{ is continuous at every } t \in \mathbb{R} \}.$$
For $f, g \in \mathcal{C}(\mathbb{R}, \mathbb{C})$ we write $f = g$ to mean point-wise equality, that is, $f(t) = g(t)$ for every $t \in \mathbb{R}$.

$\mathcal{L}(\mathbb{R}, \mathbb{C})$—Define the linear space of Lebesgue measurable functions that are defined over the set $\mathbb{R}$ by
$$\mathcal{L}(\mathbb{R}, \mathbb{C}) = \{ f \in \mathcal{F}(\mathbb{R}, \mathbb{C}) : f \text{ is Lebesgue measurable} \}.$$
16.3 Continuous Zak Transform

16.3.1 Definitions

Definition 16.1: (Version 1) The Zak transform $\mathbf{Z}$ of $f \in \mathcal{L}^1_{loc}(\mathbb{R}, \mathbb{C})$ is defined to be
$$[\mathbf{Z}f](t, v) = \sum_{k \in \mathbb{Z}} f(t+k)\, e^{-i2\pi k v} \quad (16.1)$$
for a.e. $(t, v) \in \mathbb{R}^2$.

Definition 16.2: (Version 2) Let $T > 0$. The Zak transform $\mathbf{Z}_T$ of $f \in \mathcal{L}^1_{loc}(\mathbb{R}, \mathbb{C})$ is defined to be
$$[\mathbf{Z}_T f](t, v) = \sqrt{T} \sum_{k \in \mathbb{Z}} f(t+kT)\, e^{-i2\pi k T v}$$
for a.e. $(t, v) \in [0, T] \times [0, T^{-1}]$.
When $T = 1$ we will see that this definition reduces to Definition 16.1. Another version is the following.

Definition 16.3: (Version 3) The Zak transform $\mathbf{Z}$ of $f \in \mathcal{L}^1_{loc}(\mathbb{R}, \mathbb{C})$ is defined to be
$$[\mathbf{Z}f](t, \nu) = \sum_{k \in \mathbb{Z}} f(t+k)\, e^{-ik\nu} \quad (16.2)$$
for a.e. $(t, \nu) \in \mathbb{R}^2$. Choosing $\nu = 2\pi v$ yields Definition 16.1.

Definition 16.4: (Version 4) [24] Let $a > 0$. The Zak transform $\mathbf{Z}_a$ of $f \in \mathcal{L}^1_{loc}(\mathbb{R}, \mathbb{C})$ is defined to be
$$[\mathbf{Z}_a f](t, v) = \sqrt{a} \sum_{k \in \mathbb{Z}} f(at+ak)\, e^{-i2\pi k v}$$
for a.e. $(t, v) \in \mathbb{R}^2$. Choosing $a = 1$ yields Definition 16.1.

We will use Definition 16.1 throughout this chapter.
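For numerical experiments, Definition 16.1 has a natural finite analogue (the finite Zak transform, studied in Section 16.5): sample a signal of length $TK$ and replace the series by a DFT across the $T$-decimated subsequences. The helper name `zak` and the reshape convention below are our own:

```python
import numpy as np

def zak(x, T):
    """Finite analogue of Definition 16.1 for a length T*K signal:
    Z[t, k] = sum_m x[t + m*T] * exp(-2j*pi*m*k/K),
    i.e., a DFT across the T-decimated subsequences of x."""
    K = len(x) // T
    grid = x.reshape(K, T)               # row m holds x[m*T : (m+1)*T]
    return np.fft.fft(grid, axis=0).T    # shape (T, K)

x = np.arange(12.0)
Z = zak(x, T=3)          # T = 3, K = 4
# Column k = 0 is the sum over each coset of samples.
print(Z[:, 0].real)      # [18. 22. 26.]
```

Because the transform is just an FFT over a reshaped array, it is invertible by `np.fft.ifft` along the same axis, mirroring the inverse transform discussed later in the chapter.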
16.3.2 General Properties The first question to answer is: When does the Zak transform ‘‘make sense?’’ That is, for what collection of functions does the Zak transform exist? The first three theorems give some answers to the convergence of the series.
THEOREM 16.1

The Zak transform is defined for every $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$; thus, $D(\mathbf{Z}) = \mathcal{L}^1(\mathbb{R}, \mathbb{C})$. The corresponding range set of $\mathbf{Z}$ is $R(\mathbf{Z}) \subseteq \mathcal{L}^1([0,1]^2, \mathbb{C})$.

Proof Let $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$ and $m, n \in \mathbb{N}$, then

$$\int_0^1 \int_0^1 \left| \sum_{k=-n}^{m} f(t+k)\, e^{-i2\pi k v} \right| dt\, dv \le \int_0^1 \int_0^1 \sum_{k=-n}^{m} |f(t+k)| \left| e^{-i2\pi k v} \right| dt\, dv = \sum_{k=-n}^{m} \int_0^1 |f(t+k)|\, dt = \int_{-n}^{m+1} |f(t)|\, dt.$$

Now, let $m \to \infty$ and $n \to \infty$ (in any order) to get

$$\lim_{m\to\infty} \lim_{n\to\infty} \int_0^1 \int_0^1 \left| \sum_{k=-n}^{m} f(t+k)\, e^{-i2\pi k v} \right| dt\, dv \le \int_{-\infty}^{\infty} |f(t)|\, dt < \infty \quad (16.3)$$

since $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$. Therefore, the series converges in the $\mathcal{L}^1$ sense. By the Lebesgue dominated convergence theorem (LDCT) [25] we have

$$\lim_{m\to\infty} \lim_{n\to\infty} \int_0^1 \int_0^1 \left| \sum_{k=-n}^{m} f(t+k)\, e^{-i2\pi k v} \right| dt\, dv = \int_0^1 \int_0^1 \left| \sum_{k=-\infty}^{\infty} f(t+k)\, e^{-i2\pi k v} \right| dt\, dv = \int_0^1 \int_0^1 |[\mathbf{Z}f](t, v)|\, dt\, dv,$$

so

$$\int_0^1 \int_0^1 |[\mathbf{Z}f](t, v)|\, dt\, dv \le \int_{-\infty}^{\infty} |f(t)|\, dt < \infty.$$

Therefore, $\mathbf{Z}f \in \mathcal{L}^1([0,1]^2, \mathbb{C})$ for every $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$, so $D(\mathbf{Z}) = \mathcal{L}^1(\mathbb{R}, \mathbb{C})$. Also, this shows that $R(\mathbf{Z}) \subseteq \mathcal{L}^1([0,1]^2, \mathbb{C})$.

This theorem shows the Zak transform $\mathbf{Z}$ maps $\mathcal{L}^1(\mathbb{R}, \mathbb{C})$ into $\mathcal{L}^1([0,1]^2, \mathbb{C})$. We are interested in signals with finite energy, so we also consider the linear space $\mathcal{L}^2(\mathbb{R}, \mathbb{C})$.
THEOREM 16.2

The Zak transform is defined for every $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$, and $\mathbf{Z}f$ is defined for a.e. $(t, v) \in \mathbb{R} \times \mathbb{R}$; thus, $D(\mathbf{Z}) = \mathcal{L}^2(\mathbb{R}, \mathbb{C})$. The corresponding range set of $\mathbf{Z}$ is $R(\mathbf{Z}) \subseteq \mathcal{L}^2([0,1]^2, \mathbb{C})$.

Proof Let $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$ and $m, n \in \mathbb{N}$, then

$$\int_0^1 \int_0^1 \left| \sum_{k=-n}^m f(t+k)\, e^{-i2\pi k v} \right|^2 dt\, dv = \int_0^1 \int_0^1 \left( \sum_{k=-n}^m f(t+k)\, e^{-i2\pi k v} \right) \overline{\left( \sum_{\ell=-n}^m f(t+\ell)\, e^{-i2\pi \ell v} \right)}\, dt\, dv$$
$$= \sum_{k=-n}^m \sum_{\ell=-n}^m \int_0^1 f(t+k)\, \overline{f(t+\ell)} \left( \int_0^1 e^{-i2\pi(k-\ell)v}\, dv \right) dt = \sum_{k=-n}^m \sum_{\ell=-n}^m \int_0^1 f(t+k)\, \overline{f(t+\ell)}\, \delta(k-\ell)\, dt$$
$$= \sum_{k=-n}^m \int_0^1 |f(t+k)|^2\, dt = \int_{-n}^{m+1} |f(t)|^2\, dt.$$

Now, let $m \to \infty$ and $n \to \infty$ (in any order) to get

$$\lim_{m\to\infty} \lim_{n\to\infty} \int_0^1 \int_0^1 \left| \sum_{k=-n}^m f(t+k)\, e^{-i2\pi k v} \right|^2 dt\, dv = \lim_{m\to\infty} \lim_{n\to\infty} \int_{-n}^{m+1} |f(t)|^2\, dt = \int_{-\infty}^{\infty} |f(t)|^2\, dt < \infty$$

since $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$. Therefore, the series converges in the $\mathcal{L}^2$ sense. By the LDCT [25] we have

$$\lim_{m\to\infty} \lim_{n\to\infty} \int_0^1 \int_0^1 \left| \sum_{k=-n}^m f(t+k)\, e^{-i2\pi k v} \right|^2 dt\, dv = \int_0^1 \int_0^1 \left| \sum_{k=-\infty}^{\infty} f(t+k)\, e^{-i2\pi k v} \right|^2 dt\, dv = \int_0^1 \int_0^1 |[\mathbf{Z}f](t, v)|^2\, dt\, dv,$$

so

$$\int_0^1 \int_0^1 |[\mathbf{Z}f](t, v)|^2\, dt\, dv = \int_{-\infty}^{\infty} |f(t)|^2\, dt < \infty.$$

Therefore, $\mathbf{Z}f \in \mathcal{L}^2([0,1]^2, \mathbb{C})$, and so $\mathbf{Z}$ maps $\mathcal{L}^2(\mathbb{R}, \mathbb{C})$ into $\mathcal{L}^2([0,1]^2, \mathbb{C})$.

We are interested in continuous signals as well, so we consider the linear space $\mathcal{C}(\mathbb{R}, \mathbb{C})$.
THEOREM 16.3

If $f \in \mathcal{C}(\mathbb{R}, \mathbb{C})$ is such that
$$|f(t)| \le c\,(1 + |t|)^{-1-\varepsilon} \quad \text{as } |t| \to \infty \quad (16.4)$$
for some constants $c, \varepsilon > 0$, then $\mathbf{Z}f$ is defined and continuous at every $(t, v) \in \mathbb{R} \times \mathbb{R}$.

Condition (16.4) is called a decay condition. Two well-known properties unique to the Zak transform are given in the following theorems.
THEOREM 16.4

(Quasi-periodic) For $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$,
$$[\mathbf{Z}f](t+1, v) = e^{i2\pi v}\, [\mathbf{Z}f](t, v), \quad (16.5)$$
$$[\mathbf{Z}f](t, v+1) = [\mathbf{Z}f](t, v), \quad (16.6)$$
a.e. $(t, v) \in \mathbb{R}^2$.

Proof Property (16.5):
$$[\mathbf{Z}f](t+1, v) = \sum_{k\in\mathbb{Z}} f(t+1+k)\, e^{-i2\pi k v} = \sum_{\ell\in\mathbb{Z}} f(t+\ell)\, e^{-i2\pi(\ell-1)v} \quad \text{where } \ell = 1+k$$
$$= e^{i2\pi v} \sum_{\ell\in\mathbb{Z}} f(t+\ell)\, e^{-i2\pi\ell v} = e^{i2\pi v}\, [\mathbf{Z}f](t, v).$$

Property (16.6):
$$[\mathbf{Z}f](t, v+1) = \sum_{k\in\mathbb{Z}} f(t+k)\, e^{-i2\pi k(v+1)} = \sum_{k\in\mathbb{Z}} f(t+k)\, e^{-i2\pi k v}\, e^{-i2\pi k} = \sum_{k\in\mathbb{Z}} f(t+k)\, e^{-i2\pi k v} \quad \text{since } e^{-i2\pi k} = 1$$
$$= [\mathbf{Z}f](t, v).$$

Property (16.6) implies $\mathbf{Z}f$ is periodic in $v$ with period 1. Therefore, we need only consider an interval of length 1, say, $[0,1]$. Property (16.5) implies $\mathbf{Z}f$ is almost periodic in $t$ with period 1. The scale term $e^{i2\pi v}$ keeps the periodic condition from holding true. This is the non-Abelian nature of the Zak transform [4]. But the scale term $e^{i2\pi v}$ has modulus one for all real values of $v$; consequently, $\mathbf{Z}f$ is said to be quasi-periodic in $t$. Likewise, one need only consider an interval of length 1 for $t$. Define the unit square in $\mathbb{R}^2$ in the first quadrant to be $Q$, that is,
$$Q \equiv \{(t, v) \in \mathbb{R}^2 : 0 \le t \le 1,\ 0 \le v \le 1\} = [0,1] \times [0,1] = [0,1]^2;$$
then $\mathbf{Z}f$ is completely determined on $Q$. The Zak transform is a mapping from $\mathcal{L}^p(\mathbb{R}, \mathbb{C})$ into $\mathcal{L}^p(Q, \mathbb{C})$ for $p \in \{1, 2\}$. More general results are the following properties.

THEOREM 16.5

For every $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$ and $m, n \in \mathbb{Z}$,
$$[\mathbf{Z}f](t+m, v) = e^{i2\pi m v}\, [\mathbf{Z}f](t, v),$$
$$[\mathbf{Z}f](t, v+n) = [\mathbf{Z}f](t, v),$$
a.e. $(t, v) \in \mathbb{R}^2$.

Proof These are simple extensions of Theorem 16.4 above.

Since $\mathbf{Z}f$ is quasi-periodic, $\mathbf{Z}f$ is not $\mathcal{L}^1$ over $\mathbb{R} \times \mathbb{R}$ (nor over $\mathbb{R} \times [0,1]$ nor $[0,1] \times \mathbb{R}$) but is locally integrable.

THEOREM 16.6

For every $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$:
1. (Conjugation) $[\mathbf{Z}\bar{f}](t, v) = \overline{[\mathbf{Z}f](t, -v)}$.
2. (Symmetry) If $f$ is an odd function, then $[\mathbf{Z}f](-t, -v) = -[\mathbf{Z}f](t, v)$.
3. (Symmetry) If $f$ is an even function, then $[\mathbf{Z}f](-t, -v) = [\mathbf{Z}f](t, v)$.

Proof The proofs are straightforward from using Definition 16.1.

Some numerical results from special evaluations are given next.

THEOREM 16.7

For every $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$:
1. $[\mathbf{Z}f](t, 0) = \sum_{k\in\mathbb{Z}} f(t+k)$
2. $[\mathbf{Z}f](t, \frac{1}{2}) = \sum_{k\in\mathbb{Z}} (-1)^k f(t+k)$
3. $[\mathbf{Z}f](0, 0) = \sum_{k\in\mathbb{Z}} f(k)$
4. $[\mathbf{Z}f](0, \frac{1}{2}) = \sum_{k\in\mathbb{Z}} (-1)^k f(k)$
5. $[\mathbf{Z}f](1, \frac{1}{2}) = -\sum_{k\in\mathbb{Z}} (-1)^k f(k)$

such that $\mathbf{Z}f$ exists at these points in $Q$.

Proof For $f \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$:
1. Definition 16.1 and $e^{-i2\pi k \cdot 0} = 1$ for all $k \in \mathbb{Z}$ imply this equation is true.
2. Definition 16.1 and $e^{-i2\pi k(1/2)} = e^{-i\pi k} = (-1)^k$ for all $k \in \mathbb{Z}$ imply this equation is true.
3. From statement 1 with $t = 0$, this equation is true.
4. From statement 2 with $t = 0$, this equation is true.
5. Definition 16.1 evaluated at $(1, \frac{1}{2})$ yields
$$[\mathbf{Z}f]\left(1, \tfrac{1}{2}\right) = \sum_{k\in\mathbb{Z}} f(1+k)\, e^{-i2\pi k(1/2)} = \sum_{\ell\in\mathbb{Z}} f(\ell)\, e^{-i\pi(\ell-1)} \quad \text{where } \ell = 1+k$$
$$= \sum_{\ell\in\mathbb{Z}} (-1)^{\ell-1} f(\ell) = -\sum_{\ell\in\mathbb{Z}} (-1)^{\ell} f(\ell).$$
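The quasi-periodicity relations (16.5) and (16.6) are easy to check numerically with a truncated series. The helper `Zf`, the truncation `kmax`, and the Gaussian test function below are our own; for a rapidly decaying $f$, a modest truncation suffices:

```python
import numpy as np

def Zf(t, w, f, kmax=50):
    """Truncated series for [Zf](t, w) = sum_k f(t+k) * exp(-2j*pi*k*w)."""
    k = np.arange(-kmax, kmax + 1)
    return np.sum(f(t + k) * np.exp(-2j * np.pi * k * w))

gauss = lambda t: np.exp(-np.pi * t**2)

t, w = 0.3, 0.7
lhs = Zf(t + 1, w, gauss)
rhs = np.exp(2j * np.pi * w) * Zf(t, w, gauss)           # Equation (16.5)
print(abs(lhs - rhs) < 1e-9)                              # True
print(abs(Zf(t, w + 1, gauss) - Zf(t, w, gauss)) < 1e-9)  # True, Equation (16.6)
```

The phase factor $e^{i2\pi w}$ in the first check is exactly the "scale term" responsible for quasi-periodicity in $t$.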
THEOREM 16.8

(Zeros) If $f \in \mathcal{C}(\mathbb{R}, \mathbb{C})$ satisfies the decay condition (16.4), then $\mathbf{Z}f$ has at least one zero on the horizontal line segment connecting the points $(0, \frac{1}{2})$ and $(1, \frac{1}{2})$.

Proof Statements 4 and 5 of Theorem 16.7 imply a sign change of the continuous function $[\mathbf{Z}f](\,\cdot\,, \frac{1}{2})$. The intermediate value theorem implies there exists at least one zero on the horizontal line segment connecting the points $(0, \frac{1}{2})$ and $(1, \frac{1}{2})$.
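For the Gaussian $f(t) = e^{-\pi t^2}$, the zero guaranteed by Theorem 16.8 falls at the midpoint $(\frac{1}{2}, \frac{1}{2})$: in the series the terms for $k$ and $-k-1$ cancel in pairs. A quick check with a truncated series (the helper `Zf` and the test function are our own):

```python
import numpy as np

def Zf(t, w, f, kmax=50):
    """Truncated series for [Zf](t, w) = sum_k f(t+k) * exp(-2j*pi*k*w)."""
    k = np.arange(-kmax, kmax + 1)
    return np.sum(f(t + k) * np.exp(-2j * np.pi * k * w))

gauss = lambda t: np.exp(-np.pi * t**2)

# At (1/2, 1/2) the terms k and -k-1 have equal magnitude and opposite
# sign, so the sum vanishes identically.
print(abs(Zf(0.5, 0.5, gauss)))   # ~0 (below 1e-12)
```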
16.3.3 Algebraic Properties

The Zak transform is a linear transformation from $\mathcal{L}^p(\mathbb{R}, \mathbb{C})$ into $\mathcal{L}^p(Q, \mathbb{C})$ for $p \in \{1, 2\}$.

THEOREM 16.9

The Zak transform $\mathbf{Z}$ has the following properties:
1. (Total) $\mathbf{Z}$ is defined on the set $\mathcal{L}^2(\mathbb{R}, \mathbb{C})$, that is, $D(\mathbf{Z}) = \mathcal{L}^2(\mathbb{R}, \mathbb{C})$ and $R(\mathbf{Z}) \subseteq \mathcal{L}^2(Q, \mathbb{C})$.
2. (Linear) $\mathbf{Z}$ is a linear transformation, that is, for any complex scalars $a, b \in \mathbb{C}$ and $f, g \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$,
$$\mathbf{Z}(af + bg) = a\mathbf{Z}(f) + b\mathbf{Z}(g).$$

Definition 16.5: (Convolutions) Let $f, g \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$; the convolution $f * g \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$ is defined to be
$$[f * g](t) = \int_{-\infty}^{\infty} f(s)\, g(t-s)\, ds = \int_{-\infty}^{\infty} g(s)\, f(t-s)\, ds.$$

Let $x, y \in \mathcal{L}^1(Q, \mathbb{C})$; the convolution $x \overset{1}{*} y$ (with respect to the first variable) is defined to be
$$\left[x \overset{1}{*} y\right](t, v) = \int_0^1 x(s, v)\, y(t-s, v)\, ds = \int_0^1 y(s, v)\, x(t-s, v)\, ds.$$

The convolution $x \overset{2}{*} y$ (with respect to the second variable) is defined to be
$$\left[x \overset{2}{*} y\right](t, v) = \int_0^1 x(t, \nu)\, y(t, v-\nu)\, d\nu = \int_0^1 y(t, \nu)\, x(t, v-\nu)\, d\nu.$$

For values of $t, s \in [0,1]$ such that $t - s \notin [0,1]$, we use the periodic extension of the function or, equivalently, modulo 1 arithmetic. The same assumption is made for $\nu, v \in [0,1]$.

THEOREM 16.10

Let $f, g \in \mathcal{L}^1(\mathbb{R}, \mathbb{C})$, then
1. $\mathbf{Z}f \overset{1}{*} \mathbf{Z}g = \mathbf{Z}(f * g)$.
2. $\mathbf{Z}f \overset{2}{*} \mathbf{Z}g = \mathbf{Z}(f \cdot g)$, where $f \cdot g$ is the point-wise function multiplication of $f$ with $g$.

Proof (1)

$$\left[\mathbf{Z}f \overset{1}{*} \mathbf{Z}g\right](t, v) = \int_0^1 [\mathbf{Z}f](s, v)\, [\mathbf{Z}g](t-s, v)\, ds$$
$$= \int_0^1 \left( \sum_{k\in\mathbb{Z}} f(s+k)\, e^{-i2\pi k v} \right) \left( \sum_{\ell\in\mathbb{Z}} g(t-s+\ell)\, e^{-i2\pi \ell v} \right) ds$$
$$= \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} \left( \int_0^1 f(s+k)\, g(t-s+\ell)\, ds \right) e^{-i2\pi(k+\ell)v} \quad \text{by LDCT}$$
$$= \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} \left( \int_0^1 f(s+k)\, g(t + [k+\ell] - [s+k])\, ds \right) e^{-i2\pi(k+\ell)v}$$
$$= \sum_{m\in\mathbb{Z}} \sum_{k\in\mathbb{Z}} \left( \int_k^{k+1} f(\sigma)\, g(t + m - \sigma)\, d\sigma \right) e^{-i2\pi m v} \quad \text{where } \sigma = s+k,\ m = k+\ell$$
$$= \sum_{m\in\mathbb{Z}} \left( \int_{-\infty}^{\infty} f(\sigma)\, g(t + m - \sigma)\, d\sigma \right) e^{-i2\pi m v}$$
$$= \sum_{m\in\mathbb{Z}} [f * g](t+m)\, e^{-i2\pi m v} = [\mathbf{Z}(f * g)](t, v).$$
(2)

$$\left[\mathbf{Z}f \overset{2}{*} \mathbf{Z}g\right](t, v) = \int_0^1 [\mathbf{Z}f](t, \nu)\, [\mathbf{Z}g](t, v-\nu)\, d\nu$$
$$= \int_0^1 \left( \sum_{k\in\mathbb{Z}} f(t+k)\, e^{-i2\pi k \nu} \right) \left( \sum_{\ell\in\mathbb{Z}} g(t+\ell)\, e^{-i2\pi\ell(v-\nu)} \right) d\nu$$
$$= \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} f(t+k)\, g(t+\ell)\, e^{-i2\pi\ell v} \int_0^1 e^{-i2\pi(k-\ell)\nu}\, d\nu \quad \text{by LDCT}$$
$$= \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} f(t+k)\, g(t+\ell)\, e^{-i2\pi\ell v}\, \delta(k-\ell)$$
$$= \sum_{k\in\mathbb{Z}} f(t+k)\, g(t+k)\, e^{-i2\pi k v} = \sum_{k\in\mathbb{Z}} [f \cdot g](t+k)\, e^{-i2\pi k v} = [\mathbf{Z}(f \cdot g)](t, v).$$
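Part 2 of Theorem 16.10 has a discrete analogue that can be verified with the finite Zak transform (our own sketch; with NumPy's unnormalized FFT the integral over $[0,1]$ becomes a circular convolution along the frequency index, carrying a $1/K$ factor):

```python
import numpy as np

def zak(x, T):
    """Finite Zak transform: Z[t, k] = sum_m x[t + m*T] exp(-2j*pi*m*k/K)."""
    K = len(x) // T
    return np.fft.fft(x.reshape(K, T), axis=0).T   # shape (T, K)

rng = np.random.default_rng(1)
T, K = 4, 8
f = rng.standard_normal(T * K)
g = rng.standard_normal(T * K)

Zf, Zg, Zfg = zak(f, T), zak(g, T), zak(f * g, T)
# Discrete analogue of Zf *2 Zg = Z(f.g): circular convolution over the
# frequency index k, with 1/K replacing the integral over [0, 1].
for t in range(T):
    conv = np.array([np.sum(Zf[t] * np.roll(Zg[t][::-1], k + 1))
                     for k in range(K)]) / K
    assert np.allclose(conv, Zfg[t])
print("ok")
```

This is just the DFT convolution theorem applied row by row: the DFT of a point-wise product equals the (scaled) circular convolution of the DFTs.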
16.3.4 Topological Properties

Define the norm $\|\cdot\|_{in}$ on the linear space $\mathcal{L}^2(\mathbb{R}, \mathbb{C})$ to be, for each $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$,
$$\|f\|_{in} = \left( \int_{\mathbb{R}} |f(t)|^2\, dt \right)^{1/2};$$
then $(\mathcal{L}^2(\mathbb{R}, \mathbb{C}), \|\cdot\|_{in})$ is a complete normed linear space (i.e., a Banach space). Define the norm $\|\cdot\|_{out}$ on the linear space $\mathcal{L}^2(Q, \mathbb{C})$ to be, for each $g \in \mathcal{L}^2(Q, \mathbb{C})$,
$$\|g\|_{out} = \left( \int_Q |g(t, v)|^2\, dt\, dv \right)^{1/2};$$
then $(\mathcal{L}^2(Q, \mathbb{C}), \|\cdot\|_{out})$ is a Banach space.

THEOREM 16.11

The Zak transform is a continuous linear transformation from $(\mathcal{L}^2(\mathbb{R}, \mathbb{C}), \|\cdot\|_{in})$ into $(\mathcal{L}^2(Q, \mathbb{C}), \|\cdot\|_{out})$. Also, $\|\mathbf{Z}f\|_{out} = \|f\|_{in}$ for every $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$.

Proof For linear transformations, continuous and bounded are equivalent [25]; therefore, we will show that $\mathbf{Z}$ is bounded. Let $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$, then
$$\|\mathbf{Z}f\|_{out}^2 = \int_Q |[\mathbf{Z}f](t, v)|^2\, dt\, dv = \int_Q \left| \sum_{k\in\mathbb{Z}} f(t+k)\, e^{-i2\pi k v} \right|^2 dt\, dv$$
$$= \int_0^1 \int_0^1 \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} f(t+k)\, \overline{f(t+\ell)}\, e^{-i2\pi(k-\ell)v}\, dt\, dv$$
$$= \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} \int_0^1 f(t+k)\, \overline{f(t+\ell)} \left( \int_0^1 e^{-i2\pi(k-\ell)v}\, dv \right) dt = \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} \int_0^1 f(t+k)\, \overline{f(t+\ell)}\, \delta(k-\ell)\, dt$$
$$= \sum_{k\in\mathbb{Z}} \int_0^1 |f(t+k)|^2\, dt = \int_{-\infty}^{\infty} |f(t)|^2\, dt = \|f\|_{in}^2.$$
Hence,
$$\|\mathbf{Z}\| \equiv \sup\left\{ \frac{\|\mathbf{Z}f\|_{out}}{\|f\|_{in}} : 0 \ne f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C}) \right\} = 1 < \infty.$$
Since $\mathbf{Z}$ is a bounded linear transformation, $\mathbf{Z}$ is a continuous linear transformation. Notice that $\mathbf{Z}$ preserves the energy of the signal, since $\|\mathbf{Z}f\|_{out} = \|f\|_{in}$ for every $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$.
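The energy identity can be observed on the finite Zak transform; the factor $1/\sqrt{K}$ below is our normalization convention, needed because NumPy's forward FFT is unnormalized:

```python
import numpy as np

def zak(x, T):
    """Finite Zak transform: Z[t, k] = sum_m x[t + m*T] exp(-2j*pi*m*k/K)."""
    K = len(x) // T
    return np.fft.fft(x.reshape(K, T), axis=0).T

rng = np.random.default_rng(4)
T, K = 6, 9
x = rng.standard_normal(T * K)
Z = zak(x, T) / np.sqrt(K)   # 1/sqrt(K) makes the finite transform unitary
# Energy in the Zak domain equals energy of the signal.
assert np.isclose(np.sum(np.abs(Z)**2), np.sum(np.abs(x)**2))
print("ok")
```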
16.3.5 Geometric Properties

Define the inner product $\langle\cdot|\cdot\rangle_{in}$ on the linear space $\mathcal{L}^2(\mathbb{R}, \mathbb{C})$ to be, for each $f, g \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$,
$$\langle f | g \rangle_{in} = \int_{-\infty}^{\infty} f(t)\, \overline{g(t)}\, dt;$$
then $(\mathcal{L}^2(\mathbb{R}, \mathbb{C}), \langle\cdot|\cdot\rangle_{in})$ is a complete inner product space (i.e., a Hilbert space). Define the inner product $\langle\cdot|\cdot\rangle_{out}$ on the linear space $\mathcal{L}^2(Q, \mathbb{C})$ to be, for each $x, y \in \mathcal{L}^2(Q, \mathbb{C})$,
$$\langle x | y \rangle_{out} = \int_Q x(t, v)\, \overline{y(t, v)}\, dt\, dv;$$
then $(\mathcal{L}^2(Q, \mathbb{C}), \langle\cdot|\cdot\rangle_{out})$ is a Hilbert space.

THEOREM 16.12

(Unitary) The Zak transform is a unitary transformation, that is, for every $f, g \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$,
$$\langle \mathbf{Z}f | \mathbf{Z}g \rangle_{out} = \langle f | g \rangle_{in}.$$

Proof
$$\langle \mathbf{Z}f | \mathbf{Z}g \rangle_{out} = \int_Q [\mathbf{Z}f](t, v)\, \overline{[\mathbf{Z}g](t, v)}\, dt\, dv$$
$$= \int_Q \left( \sum_{k\in\mathbb{Z}} f(t+k)\, e^{-i2\pi k v} \right) \overline{\left( \sum_{\ell\in\mathbb{Z}} g(t+\ell)\, e^{-i2\pi\ell v} \right)} dt\, dv$$
$$= \int_Q \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} f(t+k)\, \overline{g(t+\ell)}\, e^{-i2\pi(k-\ell)v}\, dt\, dv$$
$$= \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} \int_0^1 f(t+k)\, \overline{g(t+\ell)} \left( \int_0^1 e^{-i2\pi(k-\ell)v}\, dv \right) dt = \sum_{k\in\mathbb{Z}} \sum_{\ell\in\mathbb{Z}} \int_0^1 f(t+k)\, \overline{g(t+\ell)}\, \delta(k-\ell)\, dt$$
$$= \sum_{k\in\mathbb{Z}} \int_0^1 f(t+k)\, \overline{g(t+k)}\, dt = \int_{-\infty}^{\infty} f(t)\, \overline{g(t)}\, dt = \langle f | g \rangle_{in}.$$

This theorem implies that $\mathbf{Z}$ preserves the ``angle'' between signals. The angle between $f$ and $g$ is defined to be
$$\mathrm{Ang}(f, g) \equiv \arccos\left( \frac{\langle f | g \rangle_{in}}{\|f\|_{in}\, \|g\|_{in}} \right).$$
This theorem says $\mathrm{Ang}(\mathbf{Z}f, \mathbf{Z}g) = \mathrm{Ang}(f, g)$ for every pair $(f, g) \in \mathcal{L}^2(\mathbb{R}, \mathbb{C}) \times \mathcal{L}^2(\mathbb{R}, \mathbb{C})$.

THEOREM 16.13

(Adjoint) The adjoint of $\mathbf{Z}$ is $\mathbf{Z}^{-1}$, that is, $\mathbf{Z}^* = \mathbf{Z}^{-1}$.

Proof Since $\mathbf{Z}$ is a unitary transformation,
$$\langle f | g \rangle_{in} = \langle \mathbf{Z}f | \mathbf{Z}g \rangle_{out} = \langle f | \mathbf{Z}^* \mathbf{Z} g \rangle_{in}$$
for all $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$. So
$$\langle f | g - \mathbf{Z}^* \mathbf{Z} g \rangle_{in} = 0$$
for all $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$, which implies $\mathbf{Z}^* \mathbf{Z} g = g$ for all $g \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$. Hence, $\mathbf{Z}^* \mathbf{Z} = \mathbf{I}$, the identity operator, implying $\mathbf{Z}^* = \mathbf{Z}^{-1}$.

16.3.6 Inverse Transform

THEOREM 16.14

The Zak transform $\mathbf{Z}$ has the following properties:
1. $\mathbf{Z}$ is a one-to-one transform.
2. $\mathbf{Z}$ is onto $\mathcal{L}^2(Q, \mathbb{C})$, that is, $R(\mathbf{Z}) = \mathcal{L}^2(Q, \mathbb{C})$.

Proof These follow from Theorem 16.13, since $\mathbf{Z}^{-1}$ exists.

The next theorem gives ideas about the definition of the inverse Zak transform.

THEOREM 16.15

(Marginals) For every $f \in \mathcal{L}^2(\mathbb{R}, \mathbb{C})$ such that $\mathbf{Z}f$ is continuous, then, in fact, $f$ is continuous and $\mathbf{F}f$ (the Fourier transform) is continuous. The marginals are
$$f(t) = \int_0^1 [\mathbf{Z}f](t, v)\, dv \quad \text{a.e. } t \in \mathbb{R},$$
$$[\mathbf{F}f](v) = \int_0^1 [\mathbf{Z}f](t, v)\, e^{-i2\pi v t}\, dt \quad \text{a.e. } v \in \mathbb{R}.$$
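The first marginal has a transparent finite analogue: averaging the finite Zak transform over the frequency index returns the signal samples on the first coset (our own sketch, with the `zak` helper restated so the snippet is self-contained):

```python
import numpy as np

def zak(x, T):
    """Finite Zak transform: Z[t, k] = sum_m x[t + m*T] exp(-2j*pi*m*k/K)."""
    K = len(x) // T
    return np.fft.fft(x.reshape(K, T), axis=0).T

rng = np.random.default_rng(2)
T, K = 5, 8
x = rng.standard_normal(T * K)
Z = zak(x, T)
# Discrete marginal: (1/K) * sum_k Z[t, k] keeps only the m = 0 term,
# mirroring f(t) = integral of [Zf](t, v) dv over [0, 1].
marginal = Z.mean(axis=1).real
assert np.allclose(marginal, x[:T])
print("ok")
```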
Proof For every f 2 L2 (R, C) such that Zf is continuous, then ð1 0
[Zf ](t, v)dv ¼ ¼ ¼
ð1 X 0
k2Z
X
f (t þ k)ei2pkv dv
f (t þ k)
k2Z
X
ð1
so
0
0
The second result will be proved in Section 16.3.7. This theorem demonstrates that the Zak transform is a linear time–frequency representation. The first result of this theorem motivates the definition of the inverse of the Zak transform. But, first we define marginal transformations.
Definition 16.6: (Marginal Transforms) g 2 L2 (Q, C) define the marginal transforms M(1) g (v) ¼
M(2) g (t) ¼
For
0 0
¼ k g k2out :
(2) Since Z is linear, then Z 1 is linear, (see [25]). (3) By (1) Z 1 is bounded, therefore, Z 1 is continuous. (4) Since Z 1 ¼ Z * then (Z 1 ) * ¼ Z * * ¼ Z ¼ (Z 1 ) 1 . So the adjoint of Z 1 is (Z 1 ) 1 ¼ Z, its own inverse. Thus, Z 1 is a unitary transform.
every
16.3.7 Relationships to Other Transformations
ð1
g(t, v)dt,
16.3.7.1 Fourier
ð1
g(t, v)dv,
Definition 16.8:
0
(1D Continuous Fourier Transform) The continuous Fourier transform of f 2 L2 (R, C) is defined to be
0
where the superscript denotes which variable is integrated. [Ff ](v) ¼
These marginal transforms are continuous linear transformations from L2 (Q, C) into L2 ([0, 1], C).
Definition 16.7: (Inverse) The inverse Zak transform is defined, for every g 2 L2 (Q, C), to be [Z1 g](t) ¼
0
ð1 ð1 ð1
1 2
Z g ¼ [Z 1 g](t) 2 dt j g(t, v)j2 dv dt in
¼ f (t):
01 1 11=2 ð ð 1 2 [Z g](t) ¼ g(t, v)dv @ j g(t, v)j dvA , 0
ei2pkv dv
f (t þ k)d(k)
k2Z
Proof (1) Let g 2 L2 (Q, C) then
ð1
g(t, v)dv,
1 ð
f (t)e
i2ptv
dt,
1
a.e. v 2 R, and F : L2 (R, C) ! L2 (R, C). The inverse continuous Fourier transform of g 2 L2 (R, C) is defined to be [F 1 g](t) ¼
(16:7)
0
1 ð
g(v)ei2ptv dv,
1
a.e. t 2 [0, 1]. Notice that Z1 is M(2) .
a.e. t 2 R.
THEOREM 16.16
Definition 16.9: (1D Discrete Fourier Transform) The discrete Fourier transform of f 2 ‘2 (Z, C) is defined to be
The inverse Zak transform Z1 satisfies the following properties: 1. 2. 3. 4.
1
2
2
Z : L (Q, C) ! L ([0, 1], C). Z1 is linear transform. Z1 is continuous transform. Z1 is unitary transform.
[Fd f ](v) ¼
X
f (k)e
i2pkv
,
k2Z
a.e. v 2 [0, 1], and Fd : ‘2 (Z, C) ! L2 ([0, 1], C). The inverse discrete Fourier transform of g 2 L2 (R, C) is defined to be
16-10
Transforms and Applications Handbook
F1 d g (k) ¼
ð1
Proof Let f 2 L2 (R, C) then
g(v)ei2pkv dv
1. For v 2 R
0
for k 2 Z.
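The marginal of Theorem 16.15 (equivalently, the inverse of Definition 16.7) is easy to illustrate numerically. The sketch below is not from the handbook; it truncates the Zak sum to $|k| \le K$ for a Gaussian and averages over a uniform grid of M frequencies, which reproduces $\int_0^1 e^{-i2\pi k\omega}d\omega = \delta(k)$ exactly whenever $|k| \le K < M$. The function name `zak` is illustrative.

```python
import numpy as np

def zak(f, t, omegas, K=8):
    """Truncated continuous Zak transform [Zf](t, w) = sum_k f(t+k) e^{-i2 pi k w},
    with the sum over k limited to |k| <= K (adequate when f decays fast)."""
    k = np.arange(-K, K + 1)
    return np.sum(f(t + k)[None, :] * np.exp(-2j * np.pi * np.outer(omegas, k)), axis=1)

f = lambda t: np.exp(-np.pi * t**2)        # a Gaussian, well inside L^2(R)

M = 64                                     # uniform grid on [0, 1): the grid average
omegas = np.arange(M) / M                  # of e^{-i2 pi k j/M} is exactly delta(k mod M)
for t in (0.0, 0.3, 0.77):
    recovered = np.mean(zak(f, t, omegas)) # approximates the marginal integral over w
    assert abs(recovered - f(t)) < 1e-9
```

With a finite frequency grid the recovery is exact (up to rounding) rather than approximate, because the only surviving term of the k-sum is k = 0.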
Definition 16.10: (2D Fourier Transform)  The continuous Fourier transform of $x \in L^2(Q,\mathbb{C})$ is defined to be

$$[F_2 x](s,\nu) = \int_0^1\!\!\int_0^1 x(t,\omega)\,e^{-i2\pi(st+\nu\omega)}\,dt\,d\omega,$$

a.e. $(s,\nu) \in Q$, and $F_2 : L^2(Q,\mathbb{C}) \to L^2(Q,\mathbb{C})$. The inverse continuous Fourier transform of $y \in L^2(Q,\mathbb{C})$ is defined to be

$$[F_2^{-1} y](t,\omega) = \int_0^1\!\!\int_0^1 y(s,\nu)\,e^{i2\pi(st+\nu\omega)}\,ds\,d\nu,$$

a.e. $(t,\omega) \in Q$.

THEOREM 16.17
For every $f \in L^2(\mathbb{R},\mathbb{C})$,

$$[Zf](0,\omega) = [F_d f](\omega),$$

a.e. $\omega \in [0,1]$.

Proof  Let $f \in L^2(\mathbb{R},\mathbb{C})$. Then

$$[Zf](0,\omega) = \sum_{k\in\mathbb{Z}} f(k)\,e^{-i2\pi k\omega} = [F_d f](\omega),$$

a.e. $\omega \in [0,1]$.

For $g \in L^2(Q,\mathbb{C})$ define the multiplication operator C to be $[Cg](t,\omega) = \chi_{[0,1]}(t)\,g(t,\omega)$, where $\chi_{[0,1]}$ is the characteristic function on the interval $[0,1]$.

THEOREM 16.18
For $f \in L^2(\mathbb{R},\mathbb{C})$:
1. $\int_0^1 [Zf](t,\omega)\,e^{-i2\pi\omega t}\,dt = [FCZf](\omega) = [Ff](\omega)$.
2. $FCZF^{-1}f = f$.
3. $[ZFf](\omega,-t) = e^{-i2\pi t\omega}\,[Zf](t,\omega)$.
4. $[F_2(Zf\,\overline{Zg})](m,0) = [F(f\overline{g})](m)$ for $m \in \mathbb{Z}$, where $\overline{Zg}(t,\omega) = \overline{[Zg](t,\omega)}$.

Proof  Let $f \in L^2(\mathbb{R},\mathbb{C})$.
1. For $\omega \in \mathbb{R}$,

$$\begin{aligned}
[FCZf](\omega) &= \int_{-\infty}^{\infty}\bigl(\chi_{[0,1]}(t)\,[Zf](t,\omega)\bigr)\,e^{-i2\pi\omega t}\,dt = \int_0^1 [Zf](t,\omega)\,e^{-i2\pi\omega t}\,dt\\
&= \int_0^1 \sum_{k\in\mathbb{Z}} f(t+k)\,e^{-i2\pi k\omega}\,e^{-i2\pi\omega t}\,dt = \sum_{k\in\mathbb{Z}} \int_0^1 f(t+k)\,e^{-i2\pi\omega(t+k)}\,dt\\
&= \sum_{k\in\mathbb{Z}} \int_k^{k+1} f(\tau)\,e^{-i2\pi\omega\tau}\,d\tau \qquad \text{where } \tau = t+k \in [k,k+1]\\
&= \int_{-\infty}^{\infty} f(\tau)\,e^{-i2\pi\omega\tau}\,d\tau = [Ff](\omega).
\end{aligned}$$

2. Let $f = F^{-1}g$; then by (1), $FCZf = Ff$ becomes

$$FCZF^{-1}g = FF^{-1}g = g.$$

Rename $g$ to be $f$.
3. For $(t,\omega) \in Q$,

$$\begin{aligned}
[ZFf](\omega,-t) &= \sum_{k\in\mathbb{Z}} [Ff](\omega+k)\,e^{-i2\pi k(-t)}\\
&= \sum_{k\in\mathbb{Z}} \left(\int_{-\infty}^{\infty} f(s)\,e^{-i2\pi s(\omega+k)}\,ds\right)e^{i2\pi kt}\\
&= \int_{-\infty}^{\infty} f(s)\left[\sum_{k\in\mathbb{Z}} e^{-i2\pi k(s-t)}\right]e^{-i2\pi s\omega}\,ds\\
&= \int_{-\infty}^{\infty} f(s)\left[\sum_{k\in\mathbb{Z}} \delta(s-t-k)\right]e^{-i2\pi s\omega}\,ds\\
&= \sum_{k\in\mathbb{Z}} f(k+t)\,e^{-i2\pi(k+t)\omega}\\
&= e^{-i2\pi t\omega}\sum_{k\in\mathbb{Z}} f(k+t)\,e^{-i2\pi k\omega} = e^{-i2\pi t\omega}\,[Zf](t,\omega).
\end{aligned}$$

4. Let $f, g \in L^2(\mathbb{R},\mathbb{C})$; then

$$\begin{aligned}
[F_2(Zf\,\overline{Zg})](m,0) &= \int_0^1\!\!\int_0^1 [Zf](t,\omega)\,\overline{[Zg](t,\omega)}\,e^{-i2\pi mt}\,dt\,d\omega\\
&= \int_0^1\!\!\int_0^1 \sum_{k\in\mathbb{Z}} f(t+k)\,e^{-i2\pi k\omega} \sum_{\ell\in\mathbb{Z}} \overline{g(t+\ell)}\,e^{i2\pi\ell\omega}\,e^{-i2\pi mt}\,dt\,d\omega\\
&= \sum_{k\in\mathbb{Z}}\sum_{\ell\in\mathbb{Z}} \int_0^1 f(t+k)\,\overline{g(t+\ell)} \left(\int_0^1 e^{-i2\pi(k-\ell)\omega}\,d\omega\right) e^{-i2\pi mt}\,dt\\
&= \sum_{k\in\mathbb{Z}}\sum_{\ell\in\mathbb{Z}} \int_0^1 f(t+k)\,\overline{g(t+\ell)}\,\delta(k-\ell)\,e^{-i2\pi mt}\,dt\\
&= \sum_{k\in\mathbb{Z}} \int_0^1 f(t+k)\,\overline{g(t+k)}\,e^{-i2\pi mt}\,dt\\
&= \sum_{k\in\mathbb{Z}} \int_k^{k+1} f(\tau)\,\overline{g(\tau)}\,e^{-i2\pi m(\tau-k)}\,d\tau \qquad \text{where } \tau = t+k\\
&= \sum_{k\in\mathbb{Z}} \int_k^{k+1} f(\tau)\,\overline{g(\tau)}\,e^{-i2\pi m\tau}\,d\tau\; e^{i2\pi mk}\\
&= \int_{-\infty}^{\infty} f(\tau)\,\overline{g(\tau)}\,e^{-i2\pi m\tau}\,d\tau \qquad \text{since } e^{i2\pi mk} = 1\\
&= [F(f\overline{g})](m).
\end{aligned}$$

16.3.7.2 Translations and Modulations

Definition 16.11: (Translation Operators)  For every $a, b \in \mathbb{R}$, define the translation operators:
- $T_a$ acting on a function $f \in L^2(\mathbb{R},\mathbb{C})$ to be $[T_a f](t) = f(t+a)$, a.e. $t \in \mathbb{R}$.
- $T_a^{(1)}$ acting on the first variable of a function $g \in L^2(\mathbb{R}^2,\mathbb{C})$ to be $[T_a^{(1)} g](t,\omega) = g(t+a,\omega)$, a.e. $(t,\omega) \in \mathbb{R}^2$.
- $T_a^{(2)}$ acting on the second variable of a function $g \in L^2(\mathbb{R}^2,\mathbb{C})$ to be $[T_a^{(2)} g](t,\omega) = g(t,\omega+a)$, a.e. $(t,\omega) \in \mathbb{R}^2$.
- $T_{a,b}^{(1,2)}$ acting on both variables of a function $g \in L^2(\mathbb{R}^2,\mathbb{C})$ to be $[T_{a,b}^{(1,2)} g](t,\omega) = g(t+a,\omega+b)$, a.e. $(t,\omega) \in \mathbb{R}^2$.

THEOREM 16.19
For every $a, b \in \mathbb{R}$, the translation operator $T_a$ satisfies the following properties:
1. $T_a$ is a linear operator on $L^2(\mathbb{R},\mathbb{C})$.
2. $T_0 = I$, the identity operator.
3. $T_a T_b = T_b T_a = T_{a+b}$.
4. $T_a^{-1} = T_{-a}$.

The proofs are straightforward. There are similar statements for $T_a^{(1)}$, $T_b^{(2)}$, and $T_{a,b}^{(1,2)}$.

Definition 16.12: (Modulation Operators)  For every $a, b \in \mathbb{R}$, define the modulation operators:
- $E_a$ acting on a function $f \in L^2(\mathbb{R},\mathbb{C})$ to be $[E_a f](t) = e^{-i2\pi at} f(t)$, a.e. $t \in \mathbb{R}$.
- $E_a^{(1)}$ acting on the first variable of a function $g \in L^2(\mathbb{R}^2,\mathbb{C})$ to be $[E_a^{(1)} g](t,\omega) = e^{-i2\pi at}\,g(t,\omega)$, a.e. $(t,\omega) \in \mathbb{R}^2$.
- $E_a^{(2)}$ acting on the second variable of a function $g \in L^2(\mathbb{R}^2,\mathbb{C})$ to be $[E_a^{(2)} g](t,\omega) = e^{-i2\pi a\omega}\,g(t,\omega)$, a.e. $(t,\omega) \in \mathbb{R}^2$.
- $E_{a,b}^{(1,2)}$ acting on both variables of a function $g \in L^2(\mathbb{R}^2,\mathbb{C})$ to be $[E_{a,b}^{(1,2)} g](t,\omega) = e^{-i2\pi(at+b\omega)}\,g(t,\omega)$, a.e. $(t,\omega) \in \mathbb{R}^2$.
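Theorem 16.18(1), the Fourier marginal, can also be checked numerically. The sketch below is not from the handbook; it uses $f(t) = e^{-\pi t^2}$, whose Fourier transform under Definition 16.8 is $[Ff](\omega) = e^{-\pi\omega^2}$, a truncated Zak sum over $|k| \le K$, and a composite trapezoid rule for the t-integral.

```python
import numpy as np

# Numerical sketch of Theorem 16.18(1): int_0^1 [Zf](t,w) e^{-i2 pi w t} dt = [Ff](w),
# for the Gaussian f(t) = exp(-pi t^2) with [Ff](w) = exp(-pi w^2).
f = lambda t: np.exp(-np.pi * t**2)
K, M = 12, 2000
k = np.arange(-K, K + 1)
t = np.linspace(0.0, 1.0, M + 1)                 # trapezoid nodes on [0, 1]

for w in (0.0, 0.4, 1.5):
    Zf = (f(t[:, None] + k) * np.exp(-2j * np.pi * k * w)).sum(axis=1)
    g = Zf * np.exp(-2j * np.pi * w * t)
    lhs = (g[0] / 2 + g[1:-1].sum() + g[-1] / 2) / M   # trapezoid rule, step 1/M
    assert abs(lhs - np.exp(-np.pi * w**2)) < 1e-5
```

Folding the unit-interval integral of the Zak transform reassembles the whole real-line Fourier integral, which is exactly the substitution step in the proof above.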
THEOREM 16.20
For every $a, b \in \mathbb{R}$, the modulation operator $E_a$ satisfies the following properties:
1. $E_a$ is a linear operator on $L^2(\mathbb{R},\mathbb{C})$.
2. $E_0 = I$, the identity operator.
3. $E_a E_b = E_b E_a = E_{a+b}$.
4. $E_a^{-1} = E_{-a}$.

The following properties consider the relationships between translations, modulations, and the Zak transform. We list expressions where Z is the last transform and where Z is the first transform in separate theorems.

THEOREM 16.21
(Z is last) Let $f \in L^2(\mathbb{R},\mathbb{C})$:
1. $Zf = \sum_{k\in\mathbb{Z}} E_k^{(2)} T_k f$.
2. $ZT_m f = E_{-m}^{(2)} Zf$ for $m \in \mathbb{Z}$.
3. $ZT_a f = T_a^{(1)} Zf$ for $a \in \mathbb{R}\setminus\mathbb{Z}$.
4. $ZE_m f = E_m^{(1)} Zf$ for $m \in \mathbb{Z}$.
5. $ZE_a f = E_a^{(1)} T_a^{(2)} Zf$ for $a \in \mathbb{R}\setminus\mathbb{Z}$.
6. $ZE_m^{(2)} T_m f = Zf$ for $m \in \mathbb{Z}$.
7. $ZE_m^{(2)} T_n f = E_{m-n}^{(2)} Zf$ for $m, n \in \mathbb{Z}$.
8. $ZE_a^{(2)} T_b f = E_a^{(2)} T_b^{(1)} Zf$ for $a, b \in \mathbb{R}\setminus\mathbb{Z}$.
9. $ZT_m E_n f = E_{n,-m}^{(1,2)} Zf$ for $m, n \in \mathbb{Z}$.

(In statements 6 through 8, $E_a^{(2)}$ multiplies the argument of Z by $e^{-i2\pi a\omega}$, with Z acting in the t variable for each fixed $\omega$.)

Proof  For a.e. $(t,\omega) \in Q$:
1.
$$[Zf](t,\omega) = \sum_{k\in\mathbb{Z}} f(t+k)\,e^{-i2\pi k\omega} = \sum_{k\in\mathbb{Z}} [T_k f](t)\,e^{-i2\pi k\omega} = \sum_{k\in\mathbb{Z}} \bigl[E_k^{(2)} T_k f\bigr](t,\omega).$$
2. For $m \in \mathbb{Z}$,
$$[ZT_m f](t,\omega) = \sum_{k\in\mathbb{Z}} f(t+m+k)\,e^{-i2\pi k\omega} = \sum_{\ell\in\mathbb{Z}} f(t+\ell)\,e^{-i2\pi(\ell-m)\omega} \quad \text{where } \ell = m+k$$
$$= e^{i2\pi m\omega}\sum_{\ell\in\mathbb{Z}} f(t+\ell)\,e^{-i2\pi\ell\omega} = e^{i2\pi m\omega}\,[Zf](t,\omega) = \bigl[E_{-m}^{(2)} Zf\bigr](t,\omega).$$
3. For $a \in \mathbb{R}\setminus\mathbb{Z}$,
$$[ZT_a f](t,\omega) = \sum_{k\in\mathbb{Z}} f(t+a+k)\,e^{-i2\pi k\omega} = [Zf](t+a,\omega) = \bigl[T_a^{(1)} Zf\bigr](t,\omega).$$
4. For $m \in \mathbb{Z}$,
$$[ZE_m f](t,\omega) = \sum_{k\in\mathbb{Z}} e^{-i2\pi m(t+k)} f(t+k)\,e^{-i2\pi k\omega} = e^{-i2\pi mt}\sum_{k\in\mathbb{Z}} f(t+k)\,e^{-i2\pi k\omega} \quad \text{since } e^{-i2\pi mk} = 1$$
$$= e^{-i2\pi mt}\,[Zf](t,\omega) = \bigl[E_m^{(1)} Zf\bigr](t,\omega).$$
5. For $a \in \mathbb{R}\setminus\mathbb{Z}$,
$$[ZE_a f](t,\omega) = \sum_{k\in\mathbb{Z}} e^{-i2\pi a(t+k)} f(t+k)\,e^{-i2\pi k\omega} = e^{-i2\pi at}\sum_{k\in\mathbb{Z}} f(t+k)\,e^{-i2\pi k(\omega+a)}$$
$$= e^{-i2\pi at}\,[Zf](t,\omega+a) = e^{-i2\pi at}\,\bigl[T_a^{(2)} Zf\bigr](t,\omega) = \bigl[E_a^{(1)} T_a^{(2)} Zf\bigr](t,\omega).$$
6. For $m \in \mathbb{Z}$,
$$\bigl[ZE_m^{(2)} T_m f\bigr](t,\omega) = \sum_{k\in\mathbb{Z}} e^{-i2\pi m\omega} f(t+m+k)\,e^{-i2\pi k\omega} = e^{-i2\pi m\omega}\,e^{i2\pi m\omega}\,[Zf](t,\omega) = [Zf](t,\omega).$$
7. For $m, n \in \mathbb{Z}$,
$$\bigl[ZE_m^{(2)} T_n f\bigr](t,\omega) = \sum_{k\in\mathbb{Z}} e^{-i2\pi m\omega} f(t+n+k)\,e^{-i2\pi k\omega} = \sum_{\ell\in\mathbb{Z}} f(t+\ell)\,e^{-i2\pi(m+\ell-n)\omega} \quad \text{where } \ell = n+k$$
$$= e^{-i2\pi(m-n)\omega}\sum_{\ell\in\mathbb{Z}} f(t+\ell)\,e^{-i2\pi\ell\omega} = e^{-i2\pi(m-n)\omega}\,[Zf](t,\omega) = \bigl[E_{m-n}^{(2)} Zf\bigr](t,\omega).$$
8. For $a, b \in \mathbb{R}\setminus\mathbb{Z}$,
$$\bigl[ZE_a^{(2)} T_b f\bigr](t,\omega) = e^{-i2\pi a\omega}\sum_{k\in\mathbb{Z}} f(t+b+k)\,e^{-i2\pi k\omega} = e^{-i2\pi a\omega}\,[Zf](t+b,\omega) = e^{-i2\pi a\omega}\,\bigl[T_b^{(1)} Zf\bigr](t,\omega) = \bigl[E_a^{(2)} T_b^{(1)} Zf\bigr](t,\omega).$$
9. For $m, n \in \mathbb{Z}$,
$$[ZT_m E_n f](t,\omega) = \sum_{k\in\mathbb{Z}} e^{-i2\pi n(t+m+k)} f(t+m+k)\,e^{-i2\pi k\omega} = e^{-i2\pi nt}\sum_{k\in\mathbb{Z}} f(t+m+k)\,e^{-i2\pi k\omega} \quad \text{since } e^{-i2\pi n(m+k)} = 1$$
$$= e^{-i2\pi nt}\,e^{i2\pi m\omega}\,[Zf](t,\omega) = \bigl[E_n^{(1)} E_{-m}^{(2)} Zf\bigr](t,\omega) = \bigl[E_{n,-m}^{(1,2)} Zf\bigr](t,\omega).$$

THEOREM 16.22
(Z is first) Let $f \in L^2(\mathbb{R},\mathbb{C})$ and $m \in \mathbb{Z}$; then

$$T_m^{(1)} Zf = E_{-m}^{(2)} Zf.$$

Proof  For $m \in \mathbb{Z}$,
$$\bigl[T_m^{(1)} Zf\bigr](t,\omega) = [Zf](t+m,\omega) = \sum_{k\in\mathbb{Z}} f(t+m+k)\,e^{-i2\pi k\omega} = \sum_{\ell\in\mathbb{Z}} f(t+\ell)\,e^{-i2\pi(\ell-m)\omega} \quad \text{where } \ell = m+k$$
$$= e^{i2\pi m\omega}\sum_{\ell\in\mathbb{Z}} f(t+\ell)\,e^{-i2\pi\ell\omega} = e^{i2\pi m\omega}\,[Zf](t,\omega) = \bigl[E_{-m}^{(2)} Zf\bigr](t,\omega).$$

16.3.8 Extensions of the Continuous Zak Transform

There have been several extensions to the Zak transform. We mention, without discussion, the multiplicative Zak transform and refer the reader to the article [17].

16.3.8.1 Multidimensional Continuous Zak Transform

Definition 16.13: The multidimensional continuous Zak transform of $f \in L^2(\mathbb{R}^d,\mathbb{C})$ is defined to be

$$[Zf](t,\omega) = \sum_{k\in\mathbb{Z}^d} f(t+k)\,e^{-i2\pi k\cdot\omega} \tag{16.8}$$

for a.e. $(t,\omega) \in \mathbb{R}^d \times \mathbb{R}^d$.

A quasi-periodic condition also holds true.

THEOREM 16.23
Let $f \in L^2(\mathbb{R}^d,\mathbb{C})$:
1. For $m \in \mathbb{Z}^d$, $[Zf](t+m,\omega) = e^{i2\pi m\cdot\omega}\,[Zf](t,\omega)$.
2. For $n \in \mathbb{Z}^d$, $[Zf](t,\omega+n) = [Zf](t,\omega)$,
for a.e. $(t,\omega) \in \mathbb{R}^d \times \mathbb{R}^d$.

Proof
1. For $m \in \mathbb{Z}^d$,
$$[Zf](t+m,\omega) = \sum_{k\in\mathbb{Z}^d} f(t+m+k)\,e^{-i2\pi k\cdot\omega} = \sum_{\ell\in\mathbb{Z}^d} f(t+\ell)\,e^{-i2\pi(\ell-m)\cdot\omega} \quad \text{where } \ell = m+k$$
$$= e^{i2\pi m\cdot\omega}\sum_{\ell\in\mathbb{Z}^d} f(t+\ell)\,e^{-i2\pi\ell\cdot\omega} = e^{i2\pi m\cdot\omega}\,[Zf](t,\omega).$$
2. For $n \in \mathbb{Z}^d$,
$$[Zf](t,\omega+n) = \sum_{k\in\mathbb{Z}^d} f(t+k)\,e^{-i2\pi k\cdot(\omega+n)} = \sum_{k\in\mathbb{Z}^d} f(t+k)\,e^{-i2\pi k\cdot\omega}\,e^{-i2\pi k\cdot n} = [Zf](t,\omega),$$
since the dot product $k\cdot n \in \mathbb{Z}$, so $e^{-i2\pi k\cdot n} = 1$.
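The commutation rules of Theorem 16.21 are easy to test numerically. The sketch below is not from the handbook; it checks rule 2, $ZT_m f = E_{-m}^{(2)} Zf$, that is $Z(T_m f)(t,\omega) = e^{i2\pi m\omega}[Zf](t,\omega)$, together with the periodicity in $\omega$, for a truncated Zak sum of a Gaussian. The helper name `zak` is illustrative.

```python
import numpy as np

# Numerical check of Theorem 16.21(2): Z(T_m f)(t, w) = e^{+i2 pi m w} [Zf](t, w).
f = lambda t: np.exp(-np.pi * t**2)
K = 20
k = np.arange(-K, K + 1)
zak = lambda g, t, w: np.sum(g(t + k) * np.exp(-2j * np.pi * k * w))

m, t, w = 3, 0.37, 0.81
Tmf = lambda s: f(s + m)                     # [T_m f](t) = f(t + m)
lhs = zak(Tmf, t, w)
rhs = np.exp(2j * np.pi * m * w) * zak(f, t, w)
assert abs(lhs - rhs) < 1e-10
assert abs(zak(f, t, w + 1) - zak(f, t, w)) < 1e-10   # periodicity in the frequency
```

The truncation error is negligible here because the Gaussian tails beyond $|k| \approx 17$ are far below machine precision.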
THEOREM 16.24
The multidimensional continuous Zak transform Z:
1. Is defined for all $f \in L^2(\mathbb{R}^d,\mathbb{C})$, i.e., $D(Z) = L^2(\mathbb{R}^d,\mathbb{C})$.
2. Has range $R(Z) = L^2(Q_d,\mathbb{C})$, where $Q_d = [0,1]^d \times [0,1]^d$, so $Z : L^2(\mathbb{R}^d,\mathbb{C}) \to L^2(Q_d,\mathbb{C})$.
3. Is a linear transform.
4. Is a continuous transform.
5. Is a unitary transform.
6. Is an invertible transform, with inverse defined by

$$[Z^{-1}g](t) = \int_{[0,1]^d} g(t,\omega)\,d\omega.$$

Proof  All these results are simple extensions of the previous one-dimensional theorems.

16.3.8.2 Multidimensional Weighted Continuous Zak Transform

Definition 16.14: The multidimensional weighted continuous Zak transform of $f \in L^2(\mathbb{R}^d,\mathbb{C})$ is, for weight matrix $M \in \mathbb{R}^{d\times d}$, defined to be

$$[Zf](t,\omega) = \sum_{k\in\mathbb{Z}^d} f(t+Mk)\,e^{-i2\pi k\cdot\omega} \tag{16.9}$$

a.e. $(t,\omega) \in \mathbb{R}^d \times \mathbb{R}^d$.

16.3.8.3 Multidimensional Windowed, Weighted Continuous Zak Transform

Definition 16.15: The multidimensional windowed, weighted continuous Zak transform of $f \in L^2(\mathbb{R}^d,\mathbb{C})$ is, for weight matrix $M \in \mathbb{R}^{d\times d}$ and window function $w \in L(\mathbb{R}^d,\mathbb{R}^+)$ with compact support, defined to be

$$[Zf](t,\omega) = \sum_{k\in\mathbb{Z}^d} f(t+Mk)\,w(k)\,e^{-i2\pi k\cdot\omega} \tag{16.10}$$

a.e. $(t,\omega) \in \mathbb{R}^d \times \mathbb{R}^d$.

16.4 Discrete Zak Transform

The discrete Zak transform is analogous to the continuous Zak transform. The term "discrete" describes the input function: the input is a countable function, that is, a sequence. The output function is discrete in t and continuous in $\omega$.

16.4.1 Definitions

Definition 16.16: The discrete Zak transform Z of $f \in \ell^2(\mathbb{Z},\mathbb{C})$ is defined to be

$$[Zf](m,\omega) = \sum_{k\in\mathbb{Z}} f(m+k)\,e^{-i2\pi k\omega}$$

for $(m,\omega) \in \mathbb{Z} \times \mathbb{R}$. When $f \in L^2(\mathbb{R},\mathbb{C})$ is defined at every integer, the continuous Zak transform of f evaluated at the discrete times $m \in \mathbb{Z}$ yields the same results as this discrete Zak transform.

16.4.2 Properties

The discrete Zak transform satisfies quasi-periodic conditions.

THEOREM 16.25
The discrete Zak transform $Z : \ell^2(\mathbb{Z},\mathbb{C}) \to \ell^2(\mathbb{Z}\times\mathbb{R},\mathbb{C})$ is
1. A total transform, that is, $D(Z) = \ell^2(\mathbb{Z},\mathbb{C})$.
2. A linear transform, that is, for all $a, b \in \mathbb{C}$ and $f, g \in \ell^2(\mathbb{Z},\mathbb{C})$,
$$Z(af + bg) = aZf + bZg.$$
3. A continuous (equivalently, bounded) transform, and in particular $\|Zf\|_{\text{out}} = \|f\|_{\text{in}}$, where
$$\|f\|_{\text{in}} = \left(\sum_{k\in\mathbb{Z}} |f(k)|^2\right)^{1/2}, \qquad \|x\|_{\text{out}} = \left(\sum_{k\in\mathbb{Z}} \int_0^1 |x(k,\omega)|^2\,d\omega\right)^{1/2}.$$
4. A unitary transform, that is, $\langle Zf \mid Zg\rangle_{\text{out}} = \langle f \mid g\rangle_{\text{in}}$ for all $f, g \in \ell^2(\mathbb{Z},\mathbb{C})$, where
$$\langle f \mid g\rangle_{\text{in}} = \sum_{k\in\mathbb{Z}} f(k)\,\overline{g(k)}, \qquad \langle x \mid y\rangle_{\text{out}} = \sum_{k\in\mathbb{Z}} \int_0^1 x(k,\omega)\,\overline{y(k,\omega)}\,d\omega.$$

16.4.3 Inverse Transform

The unitary property of Z implies that the inverse transform does exist. Observe the marginal property that motivates the inverse transform.
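This marginal property can be illustrated numerically. The sketch below is not from the handbook; it takes a finitely supported random sequence, truncates the discrete Zak sum to $|k| \le K$, and averages over a uniform frequency grid of M points, which evaluates $\int_0^1 e^{-i2\pi k\omega}d\omega$ exactly whenever $|k| \le K < M$. The names `dzak` and `fval` are illustrative.

```python
import numpy as np

# Discrete Zak transform of a finitely supported sequence, and the marginal
# f(m) = int_0^1 [Zf](m, w) dw evaluated exactly on a uniform grid in w.
rng = np.random.default_rng(0)
f = dict(enumerate(rng.standard_normal(5)))    # f supported on {0,...,4}
fval = lambda n: f.get(n, 0.0)

def dzak(m, w, K=16):
    return sum(fval(m + k) * np.exp(-2j * np.pi * k * w) for k in range(-K, K + 1))

M = 64                                         # grid average of e^{-i2 pi k j/M}
for m in range(5):                             # vanishes unless k = 0 (mod M)
    avg = np.mean([dzak(m, j / M) for j in range(M)])
    assert abs(avg - fval(m)) < 1e-12
```

The only surviving term of the k-sum after averaging is k = 0, which returns f(m), exactly as in the theorem that follows.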
THEOREM 16.26
For every $f \in \ell^2(\mathbb{Z},\mathbb{C})$,

$$f(m) = \int_0^1 [Zf](m,\omega)\,d\omega$$

for $m \in \mathbb{Z}$.

Proof  The proof is the same as for the inverse of the continuous Zak transform, with t replaced by m.

Definition 16.17: The inverse discrete Zak transform of $g \in \ell^2(\mathbb{Z}\times\mathbb{R},\mathbb{C})$ is defined to be

$$[Z^{-1}g](m) = \int_0^1 g(m,\omega)\,d\omega$$

for $m \in \mathbb{Z}$.

16.4.4 Extensions of the Discrete Zak Transform

The multidimensional discrete Zak transform takes as its input a complex-valued function f whose input is a multidimensional discrete set; that is, $f \in \ell^2(\mathbb{Z}^d,\mathbb{C})$, where $d \in \mathbb{N}$ is the dimension of the input set.

Definition 16.18: The multidimensional discrete Zak transform of $f \in \ell^2(\mathbb{Z}^d,\mathbb{C})$ is defined to be

$$[Zf](m,\omega) = \sum_{k\in\mathbb{Z}^d} f(m+k)\,e^{-i2\pi k\cdot\omega}$$

for $(m,\omega) \in \mathbb{Z}^d \times \mathbb{R}^d$.

Definition 16.19: Let matrix $M \in \mathbb{Z}^{d\times d}$ generate a lattice of points in $\mathbb{Z}^d$; that is, for every $k \in \mathbb{Z}^d$, $Mk \in \mathbb{Z}^d$. The multidimensional weighted discrete Zak transform of $f \in \ell^2(\mathbb{Z}^d,\mathbb{C})$ is defined to be

$$[Zf](m,\omega) = \sum_{k\in\mathbb{Z}^d} f(m+Mk)\,e^{-i2\pi k^T M^T \omega}$$

for $(m,\omega) \in \mathbb{Z}^d \times \mathbb{R}^d$.

Definition 16.20: The multidimensional windowed, weighted discrete Zak transform of $f \in \ell^2(\mathbb{Z}^d,\mathbb{C})$ is, for weight matrix $M \in \mathbb{Z}^{d\times d}$ and window function $w \in \ell^2(\mathbb{Z}^d,\mathbb{R}^+)$ with finite support, defined to be

$$[Z_w f](m,\omega) = \sum_{k\in\mathbb{Z}^d} f(m+Mk)\,w(k)\,e^{-i2\pi k^T M^T \omega} \tag{16.11}$$

for $(m,\omega) \in \mathbb{Z}^d \times \mathbb{R}^d$.

Consider the short-time discrete Fourier transform $F_w$ of $f \in \ell^2(\mathbb{Z},\mathbb{C})$ with window function $w \in \ell^2(\mathbb{Z},\mathbb{R}^+)$:

$$[F_w f](m,\omega) = \sum_{k\in\mathbb{Z}} f(k)\,w(k-m)\,e^{-i2\pi k\omega}.$$

Let $\ell = k - m$; then $k = \ell + m$ and

$$[F_w f](m,\omega) = \sum_{\ell\in\mathbb{Z}} f(\ell+m)\,w(\ell)\,e^{-i2\pi(\ell+m)\omega} = e^{-i2\pi m\omega}\sum_{\ell\in\mathbb{Z}} f(\ell+m)\,w(\ell)\,e^{-i2\pi\ell\omega} = e^{-i2\pi m\omega}\,[Z_w f](m,\omega),$$

where $Z_w$ is the windowed discrete Zak transform of Definition 16.20 with $M = I$.

THEOREM 16.27
For every $f \in \ell^2(\mathbb{Z},\mathbb{C})$,

$$[F_w f](m,\omega) = e^{-i2\pi m\omega}\,[Z_w f](m,\omega)$$

for $(m,\omega) \in \mathbb{Z} \times \mathbb{R}$.

The multidimensional versions of this theorem follow by simple extensions.

16.5 Finite Zak Transform

16.5.1 Definition

The finite Zak transform is a special case of the discrete Zak transform where the input sequence is always a finite-length sequence. Assume the length is some $L \in \mathbb{N}$, and define the finite set $S = \{0, 1, \ldots, L-1\}$. Let $\mathcal{F}(S,\mathbb{C})$ be the set of functions $f : S \to \mathbb{C}$; then f is, in fact, a finite sequence. Actually, $\mathcal{F}(S,\mathbb{C}) = \mathbb{C}^L$, the complex vector space, and f is, essentially, a complex vector of length L.

Definition 16.21: The finite Zak transform of $f \in \mathcal{F}(S,\mathbb{C})$ is defined to be

$$[Zf](m,n) = \sum_{k\in S} f(m \oplus_L k)\,e^{-i2\pi nk/L} = \sum_{k=0}^{L-1} f(m \oplus_L k)\,e^{-i2\pi nk/L}$$

for $(m,n) \in S \times S$. Here, $\oplus_L$ means modulo-L addition in S. Equivalently, one can use the periodic extension of f for $m + k \notin S$.
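Because each row of the finite Zak transform is an L-point DFT of a circular shift of f, it can be computed entirely with the FFT. The sketch below is not from the handbook; the function name `finite_zak` is illustrative, and the check at the end anticipates the inverse formula of Section 16.5.3.

```python
import numpy as np

# Finite Zak transform on S = {0,...,L-1} via the FFT:
# [Zf](m, n) = sum_k f((m+k) mod L) e^{-i2 pi n k / L}, the DFT (k -> n)
# of the circularly shifted sequence k -> f((m+k) mod L).
def finite_zak(f):
    L = len(f)
    rows = np.stack([np.roll(f, -m) for m in range(L)])   # rows[m, k] = f((m+k) mod L)
    return np.fft.fft(rows, axis=1)                        # DFT along k

rng = np.random.default_rng(1)
L = 8
f = rng.standard_normal(L) + 1j * rng.standard_normal(L)
Z = finite_zak(f)

# Periodicity in both variables (Theorem 16.28) is automatic, since both
# indices live on Z/LZ. The inverse (Equation 16.12): f(m) = (1/L) sum_n [Zf](m, n).
assert np.allclose(Z.mean(axis=1), f)
```

The `axis=1` FFT matches the sign convention $e^{-i2\pi nk/L}$ of Definition 16.21, which is also NumPy's forward-transform convention.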
16.5.2 Properties

The finite Zak transform satisfies a quasi-periodic condition, in fact, a periodic condition.

THEOREM 16.28
(Quasi-periodic) For every $f \in \mathcal{F}(S,\mathbb{C})$:
1. $[Zf](m+L, n) = [Zf](m,n)$ for $(m,n) \in S \times S$.
2. $[Zf](m, n+L) = [Zf](m,n)$ for $(m,n) \in S \times S$.

Proof  For $f \in \mathcal{F}(S,\mathbb{C})$:
1. For $(m,n) \in S \times S$,
$$[Zf](m+L, n) = \sum_{k\in S} f\bigl((m+L) \oplus_L k\bigr)\,e^{-i2\pi nk/L} = \sum_{k\in S} f(m \oplus_L k)\,e^{-i2\pi nk/L} = [Zf](m,n),$$
since $m + L \equiv m \pmod L$.
2. For $(m,n) \in S \times S$,
$$[Zf](m, n+L) = \sum_{k\in S} f(m \oplus_L k)\,e^{-i2\pi(n+L)k/L} = \sum_{k\in S} f(m \oplus_L k)\,e^{-i2\pi nk/L} = [Zf](m,n).$$
Notice that the addition $n + L$ in the exponent can be ordinary addition or $\oplus_L$, since both give the same answer; that is,
$$e^{-i2\pi(n+L)k/L} = e^{-i2\pi nk/L}\,e^{-i2\pi k} = e^{-i2\pi nk/L}.$$

THEOREM 16.29
The finite Zak transform Z is
1. A total transform, that is, $D(Z) = \mathcal{F}(S,\mathbb{C})$.
2. A linear transform from $\mathcal{F}(S,\mathbb{C})$ into $\mathcal{F}(S^2,\mathbb{C})$.
3. A continuous (equivalently, bounded) transform, and in particular $\|Zf\|_{\text{out}} = \|f\|_{\text{in}}$, where
$$\|f\|_{\text{in}} = \left(\sum_{k\in S} |f(k)|^2\right)^{1/2}, \qquad \|x\|_{\text{out}} = \left(\frac{1}{L^2}\sum_{k\in S}\sum_{\ell\in S} |x(k,\ell)|^2\right)^{1/2}.$$
4. A unitary transform, that is, $\langle Zf \mid Zg\rangle_{\text{out}} = \langle f \mid g\rangle_{\text{in}}$ for all $f, g \in \mathcal{F}(S,\mathbb{C})$, where
$$\langle f \mid g\rangle_{\text{in}} = \sum_{k\in S} f(k)\,\overline{g(k)}, \qquad \langle x \mid y\rangle_{\text{out}} = \frac{1}{L^2}\sum_{k\in S}\sum_{\ell\in S} x(k,\ell)\,\overline{y(k,\ell)}.$$
(The factor $1/L^2$ in the output norm compensates for the L-fold redundancy of the finite Zak transform and the L-fold Parseval factor of the unnormalized DFT.)

16.5.3 Inverse Transform

Since Z is unitary, its inverse exists. We expect its inverse to be a marginal operation.

Definition 16.22: The inverse finite Zak transform is, for every $x \in \mathcal{F}(S^2,\mathbb{C})$, defined to be

$$[Z^{-1}x](m) = \frac{1}{L}\sum_{n=0}^{L-1} x(m,n) \tag{16.12}$$

for $m \in S$.

Just to check that this is correct, we determine $Z^{-1}Zf$ for any $f \in \mathcal{F}(S,\mathbb{C})$:

$$\begin{aligned}
[Z^{-1}Zf](m) &= \frac{1}{L}\sum_{n=0}^{L-1} [Zf](m,n) = \frac{1}{L}\sum_{n=0}^{L-1}\sum_{k=0}^{L-1} f(m \oplus_L k)\,e^{-i2\pi nk/L}\\
&= \frac{1}{L}\sum_{n=0}^{L-1}\sum_{\ell=0}^{L-1} f(\ell)\,e^{-i2\pi n(\ell-m)/L} \qquad \text{where } \ell = m \oplus_L k\\
&= \frac{1}{L}\sum_{\ell=0}^{L-1} f(\ell)\left(\sum_{n=0}^{L-1} e^{-i2\pi n(\ell-m)/L}\right).
\end{aligned}$$

Observe that when $\ell - m = 0$, then $e^{-i2\pi n(\ell-m)/L} = 1$, so

$$\sum_{n=0}^{L-1} e^{-i2\pi n(\ell-m)/L} = L;$$

but when $\ell - m \ne 0$, the geometric series gives

$$\sum_{n=0}^{L-1} e^{-i2\pi n(\ell-m)/L} = \frac{1 - e^{-i2\pi L(\ell-m)/L}}{1 - e^{-i2\pi(\ell-m)/L}} = \frac{1 - e^{-i2\pi(\ell-m)}}{1 - e^{-i2\pi(\ell-m)/L}} = 0.$$

Therefore,

$$\frac{1}{L}\sum_{\ell=0}^{L-1} f(\ell)\left(\sum_{n=0}^{L-1} e^{-i2\pi n(\ell-m)/L}\right) = \frac{1}{L}\sum_{\ell=0}^{L-1} f(\ell)\,L\,\delta(\ell-m) = f(m),$$

and so $Z^{-1}Zf = f$. Hence, Equation 16.12 is the definition for the inverse finite Zak transform.
16.5.4 Extensions of the Finite Zak Transform

Definition 16.23: [35] The weighted finite Zak transform, with scalar weights $M, N \in \mathbb{N}$, of $f \in \mathcal{F}(S,\mathbb{C})$ is defined to be

$$[Zf](m,n) = \sum_{k\in S} f(m \oplus_L Mk)\,e^{-i2\pi nNk/L}$$

for $(m,n) \in S \times S$.

Definition 16.24: Let matrix $M \in S^{d\times d}$ generate a lattice of points in $S^d$. The multidimensional weighted finite Zak transform of $f \in \ell^2(S^d,\mathbb{C})$ is defined to be

$$[Zf](m,n) = \sum_{k\in S^d} f(m \oplus_L Mk)\,e^{-i2\pi k^T M n / L}$$

for $(m,n) \in S^d \times S^d$.

16.6 Applications

This section discusses some application areas where the Zak transform is used. At the same time, we insert some important references.

16.6.1 Mathematics

16.6.1.1 Jacobi Theta Functions

Jacobi's third theta function, $\Theta_3$, is defined to be [1]

$$\Theta_3(z,q) = 1 + 2\sum_{k=1}^{\infty} q^{k^2}\cos(2kz) = \sum_{k=-\infty}^{\infty} q^{k^2} e^{i2kz}$$

for $z \in \mathbb{C}$ and nome q (a special function). "Theta functions are important because every one of the Jacobi elliptic functions can be expressed as the ratio of two theta functions" [1]. If $f(t) = \exp(-ct^2)$ for constant $c > 0$, then Zf is related to $\Theta_3$ by

$$\begin{aligned}
[Zf](t,\omega) &= \sum_{k\in\mathbb{Z}} e^{-c(t+k)^2}\,e^{-i2\pi k\omega} = \sum_{k\in\mathbb{Z}} e^{-c(t^2+2kt+k^2)}\,e^{-i2\pi k\omega}\\
&= e^{-ct^2}\sum_{k\in\mathbb{Z}} e^{-ck^2}\,e^{-i2k(\pi\omega - ict)} = e^{-ct^2}\,\Theta_3(\pi\omega - ict,\, e^{-c}).
\end{aligned}$$

Articles of interest are [4,13], and the book by Igusa [22].

16.6.1.2 Gabor Expansions

In 1946, Gabor [14] (born Gábor Dénes) suggested the expansion of a signal into a discrete set of shifted and modulated Gaussian signals, that is, an expansion of the form

$$\sum_{m\in\mathbb{Z}}\sum_{n\in\mathbb{Z}} c_{m,n}\,e^{-(t+m)^2}\,e^{-i2\pi nt}$$

for some set of scalars $\{c_{m,n}\}$. His idea has various mathematical implications as well as signal analysis applications. "The Zak transform can be helpful in determining the window function that corresponds to a given elementary signal and how it can be used to compute Gabor's expansion coefficients" [7]. Papers of special interest are by Gel'fand [15], Weil [38], Auslander and Tolimieri [4], Brezin [10], Heil and Walnut [19], Hernandez and Weiss [20], Piao et al. [28], Gröchenig [18], Benedetto and Walnut [8], and Benedetto et al. [9].

16.6.2 Physics

The physics community has used the Zak transform to investigate solid-state physics. Zak introduced this representation to construct a quantum mechanical representation for the motion of a Bloch electron in the presence of a magnetic or electric field. Articles of interest are by Zak [39–41] and Janssen [23].

16.6.3 Engineering

16.6.3.1 Time–Frequency Analysis

The time–frequency representation of a signal will reveal more information than a time representation or frequency representation alone. In the articles by Auslander et al. [2,3], they presented an algorithm to compute the coefficients of the finite double-sum expansion of time-varying nonstationary signals and to synthesize them from a finite number of expansion coefficients. The algorithms are based on the computation of the discrete Zak transform. Brodzik and Tolimieri [11] proposed a new, time–frequency formulation of the Gerchberg–Papoulis algorithm for extrapolation of band-limited signals. The new formulation is obtained by translating the constituent operations of the Gerchberg–Papoulis procedure, the truncation and the Fourier transform, into the language of the finite Zak transform. They showed that the use of the Zak transform results in a significant reduction of the computational complexity of the Gerchberg–Papoulis procedure and in an increased flexibility of the algorithm. Other articles of interest are by Janssen [23,24]. There are a few books of interest in the time–frequency community, in particular, Cohen [12], Suter [31], and Tolimieri and An [37].
[FIGURE 16.1 about here. The figure contrasts the old multirate implementation methodology (the degenerate case, decimation factor = 1: short-time Fourier transform, spectrogram, weighted spectrogram, time–frequency distributions) with the new multirate implementation methodology (the generalized case, decimation factor > 1: Zak transform, windowed Zak transform, Zak spectrogram, weighted Zak spectrogram, decimated time–frequency distributions).]

FIGURE 16.1 Interrelations of logical elements in O'Hair and Suter paper [26].

In [26], O'Hair and Suter define the Zak spectrogram and show the connection between the short-time Fourier transform (STFT) and the Zak transform. Figure 16.1 shows the relationship between the old multirate implementation and the new multirate implementation. The point of interest is how the Zak transform is key to the new implementation.

16.6.3.2 Weighted Fast Fourier Transforms

The main engine of almost every signal processing algorithm is the finite Fourier transform. The signal processing engineer has two goals when working with finite Fourier transforms: (1) make them fast, hence the fast Fourier transform (FFT); and (2) make them big (i.e., large N). The chip designer wishes to make the chip (1) physically small, with (2) small memory needs and (3) low power consumption. So, designing a chip that will compute the FFT has some conflicting goals. In the mid-1960s, the problem of computing the FFT of a vector that was too large to fit into main memory was addressed by Gentleman and Sande [16]. In the late 1980s the approach was rediscovered by Bailey [5] for multiprocessor applications of the FFT algorithm running on hypercubes. The work done by Gentleman and Bailey, and even the work of Suter and Stevens [32], emphasized the connection to FFTs. As such, the Zak transform was not mentioned explicitly. In all of these papers, derivations were done in terms of FFTs, without realizing their intimate link to the Zak transform. In contrast, in the O'Hair and Suter paper [26], the Zak transform was explicitly mentioned. Also, the goal of the Suter–Stevens research (e.g., see [21]) was to "rediscover" the Zak transform approach as a mathematical abstraction to support the design of a low-power, high-performance asynchronous FFT architecture. Their research produced a patent that we discuss next.
16.6.3.2.1 Suter–Stevens Patent

The patent by Suter and Stevens [33] uses the weighted finite Zak transform to perform the finite Fourier transform. We give the theory to demonstrate the application of the Zak transform. Recall the finite Fourier transform of $x \in \mathbb{R}^N$:

$$X(m) = [Fx](m) = \sum_{n=0}^{N-1} x(n)\,e^{-i2\pi mn/N} \qquad \text{for } m = 0, 1, \ldots, N-1.$$

Assume $N = N_1 N_2$ for $N_1, N_2 \in \mathbb{N}$. Let $m = m_1 + N_1 m_2$ and $n = N_2 n_1 + n_2$, where $m_1, n_1 = 0, 1, \ldots, N_1-1$ and $m_2, n_2 = 0, 1, \ldots, N_2-1$. The polyphase components of x are $x_{n_2}$, where, for each $n_2 = 0, 1, \ldots, N_2-1$,

$$x_{n_2}(n_1) = x(N_2 n_1 + n_2) \qquad \text{for } n_1 = 0, 1, \ldots, N_1-1.$$

The finite Fourier transform then yields, with $X_{m_1}(m_2) = X(m_1 + N_1 m_2)$,

$$\begin{aligned}
X_{m_1}(m_2) &= \sum_{n_2=0}^{N_2-1}\sum_{n_1=0}^{N_1-1} x_{n_2}(n_1)\exp\!\left(-i2\pi\,\frac{(m_1+N_1m_2)(N_2n_1+n_2)}{N_1N_2}\right)\\
&= \sum_{n_2=0}^{N_2-1}\sum_{n_1=0}^{N_1-1} x_{n_2}(n_1)\exp\!\left(-i2\pi\!\left(m_2n_1 + \frac{m_1n_1}{N_1} + \frac{m_2n_2}{N_2} + \frac{m_1n_2}{N_1N_2}\right)\right)\\
&= \sum_{n_2=0}^{N_2-1}\sum_{n_1=0}^{N_1-1} x_{n_2}(n_1)\,e^{-i2\pi m_2n_1}\,e^{-i2\pi\frac{m_1n_1}{N_1}}\,e^{-i2\pi\frac{m_2n_2}{N_2}}\,e^{-i2\pi\frac{m_1n_2}{N}}\\
&= \sum_{n_2=0}^{N_2-1}\left[e^{-i2\pi\frac{m_1n_2}{N}}\left(\sum_{n_1=0}^{N_1-1} x_{n_2}(n_1)\,e^{-i2\pi\frac{m_1n_1}{N_1}}\right)\right]e^{-i2\pi\frac{m_2n_2}{N_2}} \qquad \text{since } e^{-i2\pi m_2n_1} = 1\\
&= \sum_{n_2=0}^{N_2-1}\left[e^{-i2\pi\frac{m_1n_2}{N}}\left(\sum_{n_1=0}^{N_1-1} x(n_1N_2+n_2)\,e^{-i2\pi\frac{m_1n_1}{N_1}}\right)\right]e^{-i2\pi\frac{m_2n_2}{N_2}}\\
&= X(m_2N_1 + m_1).
\end{aligned}$$

Observe that

$$\sum_{n_1=0}^{N_1-1} x(n_1N_2+n_2)\,e^{-i2\pi m_1n_1/N_1}$$

is the weighted finite Zak transform $[Z_{N_2}x](n_2, m_1)$ with weight $N_2$. This yields the algorithm generated in the patent.

Suter–Stevens Fast Fourier Transform Algorithm [33]
Given: $N = N_1N_2$ for some $N_1, N_2 \in \mathbb{N}$, and f(n) for $n = 0, 1, \ldots, N-1$.
1. Compute the $N_2$-weighted finite Zak transform

$$[Z_{N_2}f](n_2, m_1) = \sum_{n_1=0}^{N_1-1} f(n_2 + n_1N_2)\,e^{-i2\pi m_1n_1/N_1}$$

for $n_2 = 0, 1, \ldots, N_2-1$ and $m_1 = 0, 1, \ldots, N_1-1$.
2. Scale the answer from step 1:

$$g(m_1 + n_2N_1) = e^{-i2\pi\frac{n_2m_1}{N}}\,[Z_{N_2}f](n_2, m_1).$$

3. Compute the $N_1$-weighted finite Zak transform of g:

$$[Z_{N_1}g](m_1, m_2) = \sum_{n_2=0}^{N_2-1} g(m_1 + n_2N_1)\,e^{-i2\pi m_2n_2/N_2}$$

for $m_1 = 0, 1, \ldots, N_1-1$ and $m_2 = 0, 1, \ldots, N_2-1$.

The FFT of f is the vector F with

$$F(m) = F(m_1 + m_2N_1) = [Z_{N_1}g](m_1, m_2)$$

for $m_1 = 0, 1, \ldots, N_1-1$ and $m_2 = 0, 1, \ldots, N_2-1$.

The novelty of this patent is the hardware realization. Other articles related to this patent are [6,30,34,36].

16.7 Summary

In this section we summarize the properties of the continuous Zak transform. The functions f, g are complex-valued unless stated otherwise. The pair $(t,\omega) \in Q = [0,1] \times [0,1]$ unless stated otherwise.

1. $[Z\bar f](t,\omega) = \overline{[Zf](t,-\omega)}$.
2. $[Zf](t,-\omega) = \overline{[Zf](t,\omega)}$ for f real-valued.
3. $[Zf](t,\omega) = [Zf](-t,-\omega)$ for f even.
4. $[Zf](t,\omega) = -[Zf](-t,-\omega)$ for f odd.
5. $Z(af + bg) = aZ(f) + bZ(g)$.
6. $[Zf](t+1,\omega) = e^{i2\pi\omega}\,[Zf](t,\omega)$.
7. $[Zf](t,\omega+1) = [Zf](t,\omega)$.
8. $[Zf](t+m,\omega) = e^{i2\pi m\omega}\,[Zf](t,\omega)$ for $m \in \mathbb{Z}$.
9. $[Zf](t,\omega+n) = [Zf](t,\omega)$ for $n \in \mathbb{Z}$.
10. $Zf *_1 Zg = Z(f * g)$.
11. $Zf *_2 Zg = Z(fg)$.
12. $\|Zf\|_{\text{out}} = \|f\|_{\text{in}}$.
13. $\langle Zf \mid Zg\rangle_{\text{out}} = \langle f \mid g\rangle_{\text{in}}$.
14. $[Z^{-1}g](t) = \int_0^1 g(t,\omega)\,d\omega$.
15. $\int_0^1 [Zf](t,\omega)\,e^{-i2\pi\omega t}\,dt = [Ff](\omega)$.
16. $[ZFf](\omega,-t) = e^{-i2\pi t\omega}\,[Zf](t,\omega)$.
17. $[F_2(Zf\,\overline{Zg})](m,0) = [F(f\overline{g})](m)$ for $m \in \mathbb{Z}$, where $\overline{Zg}(t,\omega) = \overline{[Zg](t,\omega)}$.
18. $Zf = \sum_{k\in\mathbb{Z}} E_k^{(2)} T_k f$.
19. $ZT_m f = E_{-m}^{(2)} Zf$ for $m \in \mathbb{Z}$.
20. $ZT_a f = T_a^{(1)} Zf$ for $a \in \mathbb{R}\setminus\mathbb{Z}$.
21. $ZE_m f = E_m^{(1)} Zf$ for $m \in \mathbb{Z}$.
22. $ZE_a f = E_a^{(1)} T_a^{(2)} Zf$ for $a \in \mathbb{R}\setminus\mathbb{Z}$.
23. $ZE_m^{(2)} T_m f = Zf$ for $m \in \mathbb{Z}$.
24. $ZE_m^{(2)} T_n f = E_{m-n}^{(2)} Zf$ for $m, n \in \mathbb{Z}$.
25. $ZE_a^{(2)} T_b f = E_a^{(2)} T_b^{(1)} Zf$ for $a, b \in \mathbb{R}\setminus\mathbb{Z}$.
26. $ZT_m E_n f = E_{n,-m}^{(1,2)} Zf$ for $m, n \in \mathbb{Z}$.
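The Suter–Stevens factorization of Section 16.6.3.2.1 is short enough to prototype directly. The sketch below is not from the patent; the function name `fft_via_zak` is illustrative, and the result is compared against NumPy's FFT, which uses the same $e^{-i2\pi mn/N}$ convention.

```python
import numpy as np

# Sketch of the Suter-Stevens factorization [33]: an N = N1*N2 point DFT via
# two weighted finite Zak transforms and a twiddle-factor scaling.
def fft_via_zak(x, N1, N2):
    N = N1 * N2
    assert len(x) == N
    # Step 1: N2-weighted finite Zak transform of the polyphase components.
    A = np.array([[sum(x[n2 + n1 * N2] * np.exp(-2j * np.pi * m1 * n1 / N1)
                       for n1 in range(N1))
                   for m1 in range(N1)] for n2 in range(N2)])   # A[n2, m1]
    # Step 2: scale by the twiddle factors.
    g = np.array([[np.exp(-2j * np.pi * n2 * m1 / N) * A[n2, m1]
                   for m1 in range(N1)] for n2 in range(N2)])
    # Step 3: N1-weighted finite Zak transform of g, giving F(m1 + m2*N1).
    F = np.empty(N, dtype=complex)
    for m1 in range(N1):
        for m2 in range(N2):
            F[m1 + m2 * N1] = sum(g[n2, m1] * np.exp(-2j * np.pi * m2 * n2 / N2)
                                  for n2 in range(N2))
    return F

x = np.random.default_rng(2).standard_normal(12)
assert np.allclose(fft_via_zak(x, N1=3, N2=4), np.fft.fft(x))
```

This is the classic Cooley–Tukey index splitting $m = m_1 + N_1 m_2$, $n = N_2 n_1 + n_2$; the Zak-transform reading of steps 1 and 3 is what motivated the hardware realization of the patent.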
References
1. Abramowitz, M. and Stegun, I., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Washington, DC: U.S. Government Printing Office, 1964.
2. Auslander, L., Gertner, I., and Tolimieri, R., The discrete Zak transform application to time-frequency analysis and synthesis of nonstationary signals, IEEE Trans. Signal Process., 39:825–835 (April 1991).
3. Auslander, L., Gertner, I., and Tolimieri, R., The finite Zak transform and the finite Fourier transform, Institute for Mathematics and Its Applications, 39:21–36 (1992).
4. Auslander, L. and Tolimieri, R., Abelian Harmonic Analysis, Theta Functions and Function Algebra on Nilmanifolds. New York: Springer-Verlag, 1975.
5. Bailey, D., FFTs in external or hierarchical memory, J. Supercomput., 4:23–35 (1990).
6. Barnhart, D., Duggan, P., Suter, B., Brothers, C., and Stevens, K., Total ionizing dose characterization of a commercially fabricated asynchronous FFT for space applications, Government Microcircuit Application Conference (GOMAC-2000), Anaheim, CA, March 2000.
7. Bastiaans, M., Gabor's expansion and the Zak transform for continuous-time and discrete-time signals. In Signal and Image Representation in Combined Spaces, Vol. 7, Wavelet Analysis and Its Applications, R. R. Coifman and Y. Y. Zeevi (eds.), Chapter 2, pp. 23–70. San Diego, CA: Academic Press, 1998.
8. Benedetto, J. and Walnut, D., Gabor frames for L² and related spaces. In Wavelets: Mathematics and Applications, Chapter 3, pp. 97–162. Boca Raton, FL: CRC Press, 1994.
9. Benedetto, J. J., Czaja, W., Gadzinski, P., and Powell, A. M., The Balian–Low theorem and regularity of Gabor systems, J. Geom. Anal., 13(2):217–232 (2003).
10. Brezin, J., Function theory on metabelian solvmanifolds, J. Funct. Anal., 10:33–51 (1972).
11. Brodzik, A. K. and Tolimieri, R., Gerchberg–Papoulis algorithm and the finite Zak transform. In Proceedings of SPIE, Wavelet Applications in Signal and Image Processing VIII, Vol. 4119, A. Aldroubi et al. (eds.), pp. 1084–1093, San Diego, CA, December 2000.
12. Cohen, L., Time Frequency Analysis: Theory and Applications. Signal Processing Series. Englewood Cliffs, NJ: Prentice-Hall, 1995.
13. Foth, T. and Neretin, Y. A., Zak transform, Weil representation, and integral operators with theta-kernels, International Mathematics Research Notices, pp. 2305–2327, Oxford, 2004.
14. Gabor, D., Theory of communication, Proc. Inst. Electr. Eng., 93(III):429–457 (1946).
15. Gel'fand, I., Eigenfunction expansions for an equation with periodic coefficients, Dokl. Akad. Nauk. SSR, 76:1117–1120 (1950) (in Russian).
16. Gentleman, W. M. and Sande, G., Fast Fourier transforms for fun and profit. In 1966 Fall Joint Computer Conference, Vol. 29, pp. 563–578. Washington, DC: Spartan, 1966.
17. Gertner, I. and Tolimieri, R., Multiplicative Zak transform, J. Vis. Commun. Image Represent., 6(1):89–95 (March 1996).
18. Gröchenig, K., Foundations of Time–Frequency Analysis. Boston, MA: Birkhäuser, 2001.
19. Heil, C. and Walnut, D., Continuous and discrete wavelet transforms, SIAM Rev., 31(4):628–666 (December 1989).
20. Hernandez, E. and Weiss, G., A First Course on Wavelets. Boca Raton, FL: CRC Press, 1996.
21. Hunt, B. W., Stevens, K. S., Suter, B. W., and Gelosh, D. S., A single chip low power asynchronous implementation of an FFT algorithm for space applications. In Proceedings of the 4th International Symposium on Advanced Research in Asynchronous Circuits and Systems (ASYNC-98), pp. 216–223, San Diego, CA, 1998.
22. Igusa, J., Theta Functions. Berlin, Germany/New York: Springer-Verlag, 1972.
23. Janssen, A., Bargmann transform, Zak transform, and coherent states, J. Math. Phys., 23:720–731 (1982).
24. Janssen, A., The Zak transform: A signal transform for sampled time-continuous signals, Philips J. Res., 43(1):23–69 (1988).
25. Naylor, A. W. and Sell, G. R., Linear Operator Theory in Engineering and Science, Vol. 40, Applied Mathematical Sciences. New York: Springer-Verlag, 1982.
26. O'Hair, J. R. and Suter, B. W., The Zak transform and decimated time-frequency distributions, IEEE Trans. Signal Process., 44(5):1099–1110 (May 1996).
27. Pei, S.-C. and Yeh, M.-H., Time and frequency split Zak transform for finite Gabor expansion, Signal Process., 52(3):323–341 (1996).
28. Piao, Z., Zibulski, M., and Zeevi, Y., The multidimensional multi-window nonrectangular discrete Gabor schemes. In International Symposium on Time-Frequency and Time-Scale Analysis, pp. 41–43, IEEE, Philadelphia, PA, October 1998.
29. Schempp, W., Radar ambiguity functions, the Heisenberg group and holomorphic theta series, Proc. Am. Math. Soc., 92:103–110 (1984).
30. Stevens, K. S. and Suter, B. W., A mathematical approach to a low power FFT architecture. In International Symposium on Circuits and Systems (ISCAS-98), Vol. II, pp. 21–24, Monterey, CA, June 1998.
31. Suter, B. W., Multirate and Wavelet Signal Processing, Vol. 8, Wavelet Analysis and Its Applications. San Diego, CA: Academic Press, 1998.
32. Suter, B. W. and Stevens, K. S., Low power, high performance FFT design. In IMACS World Congress on Scientific Computation, Modeling, and Applied Mathematics, Vol. 1, pp. 99–104, July 1997.
33. Suter, B. W. and Stevens, K. S., Low energy consumption, high performance fast Fourier transform. U.S. Patent 5,831,883, November 1998.
34. Suter, B. W. and Stevens, K. S., A low power, high performance approach for time-frequency/time-scale computations. In Proceedings of SPIE Conference on Advanced Signal Processing Algorithms, Architectures and Implementations VIII, Vol. 3461, pp. 86–90, San Diego, CA, July 1998.
35. Suter, B. W. and Stevens, K., A mathematical approach to a low power FFT architecture. In IEEE International Symposium on Circuits and Systems (ISCAS-98), pp. 21–24, IEEE, Monterey, CA, 1998.
36. Suter, B. W., Stevens, K. S., Velazquez, S. R., and Nguyen, T., Multirate as a hardware paradigm. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP-99), Vol. IV, pp. 1885–1888, Phoenix, AZ, March 1999.
37. Tolimieri, R. and An, M., Time-Frequency Representations, 2nd edn. Boston, MA: Birkhäuser, 1998.
38. Weil, A., Sur certains groupes d'opérateurs unitaires, Acta Math., 111:143–211 (1964).
39. Zak, J., Finite translation in solid state physics, Phys. Rev. Lett., 19:1385–1397 (1967).
40. Zak, J., Dynamics of electrons in solids in external fields, Phys. Rev., 168:686–695 (1968).
41. Zak, J., The kq-representation in the dynamics of electrons in solids, Solid State Phys., 27:1–62 (1972).
17
Discrete Time and Discrete Fourier Transforms

Alexander D. Poularikas
University of Alabama in Huntsville

17.1 Introduction ........................................................................ 17-1
17.2 Discrete-Time Fourier Transforms ................................. 17-1
     Definitions of Discrete-Time Fourier Transform (DTFT) · DTFT Properties · Finite Sequences · Frequency Response of LTI Discrete Systems · Approximation to Continuous-Time Fourier Transforms
17.3 The Discrete Fourier Transform ..................................... 17-5
     Definitions of the Discrete Fourier Transform · Properties of the DFT
References ................................................................................. 17-12

17.1 Introduction

In this chapter we shall present both the discrete-time and the discrete Fourier transforms (DFT). We shall also present their important properties and introduce examples to elucidate the mathematical relationships.

17.2 Discrete-Time Fourier Transforms

17.2.1 Definitions of Discrete-Time Fourier Transform (DTFT)

The DTFT pair is

    X(e^{j\omega}) \equiv \mathcal{F}_{DT}\{x(n)\} = \sum_{n=-\infty}^{\infty} x(n) e^{-j\omega n}      (17.1)

    x(n) \equiv \mathcal{F}_{DT}^{-1}\{X(e^{j\omega})\} = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega}) e^{j\omega n}\, d\omega      (17.2)

where X(e^{jω}) is a periodic function with period 2π. This implies that all the spectral information contained in the fundamental interval is necessary for the complete description of the signal. If x(n) and X(e^{jω}) are complex, they have the form

    x(n) = x_r(n) + j x_i(n)      (17.3)

    X(e^{j\omega}) = X_r(e^{j\omega}) + j X_i(e^{j\omega})      (17.4)

Using the above equations, Equations 17.1 and 17.2 become

    X_r(e^{j\omega}) = \sum_{n=-\infty}^{\infty} \left( x_r(n)\cos\omega n + x_i(n)\sin\omega n \right)      (17.5)

    X_i(e^{j\omega}) = -\sum_{n=-\infty}^{\infty} \left( x_r(n)\sin\omega n - x_i(n)\cos\omega n \right)      (17.6)

    x_r(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[ X_r(e^{j\omega})\cos\omega n - X_i(e^{j\omega})\sin\omega n \right] d\omega      (17.7)

    x_i(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[ X_r(e^{j\omega})\sin\omega n + X_i(e^{j\omega})\cos\omega n \right] d\omega      (17.8)

Based on the above equations, Table 17.1 can be easily completed.

Example
Find the DTFT of the function

    x(n) = \begin{cases} n e^{-an} & 0 \le n \le N-1 \\ 0 & \text{otherwise} \end{cases}      (17.9)

Solution
Using Equation 17.1 we obtain

    X(e^{j\omega}) = \sum_{n=0}^{N-1} n e^{-an} e^{-j\omega n}
                   = \frac{e^{-a}e^{-j\omega}\left[ 1 - N e^{-a(N-1)} e^{-j\omega(N-1)} + (N-1) e^{-aN} e^{-j\omega N} \right]}{\left( 1 - e^{-a} e^{-j\omega} \right)^2}      (17.10)

The signal x(n) and |X(e^{jω})| are shown in Figure 17.1 with a = 0.5.
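Because the sum in Equation 17.10 is finite, the closed form is easy to verify numerically. The following NumPy sketch (the small `dtft` helper is our own illustration, not a library routine) evaluates Equation 17.1 directly and compares it with Equation 17.10:

```python
import numpy as np

def dtft(x, w):
    """Evaluate the DTFT of a finite sequence x (supported on n = 0..len(x)-1)
    at the frequencies w (rad/sample)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

# Example 17.9: x(n) = n * exp(-a n) for 0 <= n <= N-1
a, N = 0.5, 32
n = np.arange(N)
x = n * np.exp(-a * n)

w = np.linspace(-np.pi, np.pi, 201)
X = dtft(x, w)

# Closed form (17.10) with r = e^{-a} e^{-jw}:
# X = r (1 - N r^(N-1) + (N-1) r^N) / (1 - r)^2
r = np.exp(-a) * np.exp(-1j * w)
X_closed = r * (1 - N * r ** (N - 1) + (N - 1) * r ** N) / (1 - r) ** 2

assert np.allclose(X, X_closed)
```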
TABLE 17.1 Symmetry Properties of the Discrete-Time Fourier Transform

Sequence                                       DTFT
Complex signals
  x(n)                                         X(e^{jω})
  x*(n)                                        X*(e^{−jω})
  x*(−n)                                       X*(e^{jω})
  x_r(n)                                       X_e(e^{jω}) = ½[X(e^{jω}) + X*(e^{−jω})]
  j x_i(n)                                     X_o(e^{jω}) = ½[X(e^{jω}) − X*(e^{−jω})]
  x_e(n) = ½[x(n) + x*(−n)]                    X_r(e^{jω})
  x_o(n) = ½[x(n) − x*(−n)]                    j X_i(e^{jω})
Real signals
  x(n)                                         X(e^{jω}) = X*(e^{−jω})
                                               X_r(e^{jω}) = X_r(e^{−jω})
                                               X_i(e^{jω}) = −X_i(e^{−jω})
                                               |X(e^{jω})| = |X(e^{−jω})|
                                               angle X(e^{jω}) = −angle X(e^{−jω})
  x_e(n) = ½[x(n) + x(−n)]  (real and even)    X_r(e^{jω})  (real and even)
  x_o(n) = ½[x(n) − x(−n)]  (real and odd)     j X_i(e^{jω})  (imaginary and odd)

FIGURE 17.1 The signal x(n) of Equation 17.9 and the magnitude |X(e^{jω})| for a = 0.5.

17.2.2 DTFT Properties

Linearity:

    \mathcal{F}_{DT}\{x_1(n) + x_2(n)\} = \mathcal{F}_{DT}\{x_1(n)\} + \mathcal{F}_{DT}\{x_2(n)\} = X_1(e^{j\omega}) + X_2(e^{j\omega})      (17.11)

Time shifting: If F_DT{x(n)} = X(e^{jω}), then

    \mathcal{F}_{DT}\{x(n - n_0)\} = e^{-j\omega n_0} X(e^{j\omega})      (17.12)

Proof:

    \sum_{n=-\infty}^{\infty} x(n - n_0) e^{-j\omega n} = \sum_{m=-\infty}^{\infty} x(m) e^{-j\omega(m + n_0)} = e^{-j\omega n_0} \sum_{m=-\infty}^{\infty} x(m) e^{-j\omega m} = e^{-j\omega n_0} X(e^{j\omega})

Time reversal: If F_DT{x(n)} = X(e^{jω}), then

    \mathcal{F}_{DT}\{x(-n)\} = X(e^{-j\omega})      (17.13)

Convolution: If F_DT{x(n)} = X(e^{jω}) and F_DT{y(n)} = Y(e^{jω}), then

    G(e^{j\omega}) = \mathcal{F}_{DT}\{x(n) * y(n)\} = X(e^{j\omega}) Y(e^{j\omega})      (17.14)

Proof:

    G(e^{j\omega}) = \sum_{n=-\infty}^{\infty} \left[ \sum_{m=-\infty}^{\infty} x(m) y(n-m) \right] e^{-jn\omega}
                   = \sum_{m=-\infty}^{\infty} x(m) \sum_{n=-\infty}^{\infty} y(n-m) e^{-jn\omega}
                   = \sum_{m=-\infty}^{\infty} x(m) e^{-j\omega m} \sum_{r=-\infty}^{\infty} y(r) e^{-jr\omega}
                   = X(e^{j\omega}) Y(e^{j\omega})

Frequency shifting: If F_DT{x(n)} = X(e^{jω}), then

    \mathcal{F}_{DT}\{e^{j\omega_0 n} x(n)\} = X(e^{j(\omega - \omega_0)})      (17.15)

For proof, see Chapter 6.

Modulation: If F_DT{x(n)} = X(e^{jω}), then

    \mathcal{F}_{DT}\{x(n)\cos\omega_0 n\} = \frac{1}{2}\sum_{n=-\infty}^{\infty} x(n) e^{-j(\omega+\omega_0)n} + \frac{1}{2}\sum_{n=-\infty}^{\infty} x(n) e^{-j(\omega-\omega_0)n}
                                           = \frac{1}{2} X(e^{j(\omega+\omega_0)}) + \frac{1}{2} X(e^{j(\omega-\omega_0)})      (17.16)

Time multiplication: If F_DT{x(n)} = X(e^{jω}), then

    \mathcal{F}_{DT}\{n x(n)\} = -z \left. \frac{dX(z)}{dz} \right|_{z = e^{j\omega}}      (17.17)
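Properties such as time shifting are easy to confirm numerically for finite-support sequences. A small NumPy check of Equation 17.12 (the `dtft` helper is again our own illustration):

```python
import numpy as np

def dtft(x, n, w):
    """DTFT of the sequence x defined on the integer support n,
    evaluated at the frequencies w (rad/sample)."""
    return np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
n = np.arange(16)
w = np.linspace(-np.pi, np.pi, 101)
n0 = 3

X = dtft(x, n, w)
# x(n - n0) has the same values on the shifted support n + n0
X_shift = dtft(x, n + n0, w)

# Time-shifting property (17.12): F{x(n - n0)} = exp(-j w n0) X(e^{jw})
assert np.allclose(X_shift, np.exp(-1j * w * n0) * X)
```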
Correlation: If F_DT{x(n)} = X(e^{jω}) and F_DT{y(n)} = Y(e^{jω}), then

    \mathcal{F}_{DT}\{x(n) \star y(n)\} = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} x(m) y(m-n) e^{-j\omega n}
        = \sum_{m=-\infty}^{\infty} x(m) \sum_{n=-\infty}^{\infty} y(m-n) e^{-j\omega n}
        = \sum_{m=-\infty}^{\infty} x(m) e^{-j\omega m} \sum_{r=-\infty}^{\infty} y(r) e^{j\omega r}
        = X(e^{j\omega}) Y(e^{-j\omega})      (17.18)

Example
Find the autocorrelation of the signal x(n) = aⁿu(n) for |a| < 1.

Solution
Since

    \mathcal{F}_{DT}\{x(n)\} = \sum_{n=0}^{\infty} a^n e^{-j\omega n} = \sum_{n=0}^{\infty} (a e^{-j\omega})^n = \frac{1}{1 - a e^{-j\omega}},

then from Equation 17.18

    \mathcal{F}_{DT}\{x(n) \star x(n)\} = X(e^{j\omega}) X(e^{-j\omega}) = \frac{1}{1 - a e^{-j\omega}} \cdot \frac{1}{1 - a e^{j\omega}} = \frac{1}{1 - 2a\cos\omega + a^2}

Parseval's theorem: If F_DT{x(n)} = X(e^{jω}) and F_DT{y(n)} = Y(e^{jω}), then

    \sum_{n=-\infty}^{\infty} x(n) y^*(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega}) Y^*(e^{j\omega})\, d\omega      (17.19)

Proof:

    \sum_{n=-\infty}^{\infty} x(n) y^*(n) = \sum_{n=-\infty}^{\infty} x(n) \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} Y^*(e^{j\omega}) e^{-j\omega n}\, d\omega \right]
        = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ \sum_{n=-\infty}^{\infty} x(n) e^{-jn\omega} \right] Y^*(e^{j\omega})\, d\omega

For the case x(n) = y(n),

    \sum_{n=-\infty}^{\infty} |x(n)|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |X(e^{j\omega})|^2\, d\omega      (17.20)

Multiplication of sequences: If F_DT{x(n)} = X(e^{jω}) and F_DT{y(n)} = Y(e^{jω}), then

    \mathcal{F}_{DT}\{x(n) y(n)\} = \sum_{n=-\infty}^{\infty} x(n) y(n) e^{-j\omega n}
        = \sum_{n=-\infty}^{\infty} x(n) \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} Y(e^{j\lambda}) e^{j\lambda n}\, d\lambda \right] e^{-j\omega n}
        = \frac{1}{2\pi} \int_{-\pi}^{\pi} Y(e^{j\lambda}) \sum_{n=-\infty}^{\infty} x(n) e^{-j(\omega-\lambda)n}\, d\lambda
        = \frac{1}{2\pi} \int_{-\pi}^{\pi} Y(e^{j\lambda}) X(e^{j(\omega-\lambda)})\, d\lambda
        = \frac{1}{2\pi} Y(e^{j\omega}) \circledast X(e^{j\omega})      (17.21)

Differentiation in the frequency domain: If F_DT{x(n)} = X(e^{jω}), then

    \frac{dX(e^{j\omega})}{d\omega} = \frac{d}{d\omega} \left[ \sum_{n=-\infty}^{\infty} x(n) e^{-j\omega n} \right] = -j \sum_{n=-\infty}^{\infty} n x(n) e^{-j\omega n} = -j\,\mathcal{F}_{DT}\{n x(n)\}

so that

    \mathcal{F}_{DT}\{n x(n)\} = j\, \frac{dX(e^{j\omega})}{d\omega}      (17.22)

Table 17.2 presents the DTFT properties and Table 17.3 presents the DTFTs of some typical signals.

TABLE 17.2 Properties of the Fourier Transform for Discrete-Time Signals

Property                        Time Domain x(n), y(n)    Frequency Domain X(e^{jω}), Y(e^{jω})
Linearity                       a x(n) + b y(n)           a X(e^{jω}) + b Y(e^{jω})
Time shifting                   x(n − n₀)                 e^{−jωn₀} X(e^{jω})
Time reversal                   x(−n)                     X(e^{−jω})
Convolution                     x(n) * y(n)               X(e^{jω}) Y(e^{jω})
Correlation                     x(n) ⋆ y(n)               X(e^{jω}) Y(e^{−jω})  (= X(e^{jω}) Y*(e^{jω}) for real y(n))
Frequency shifting              e^{jω₀n} x(n)             X(e^{j(ω−ω₀)})
Modulation                      x(n) cos ω₀n              ½ X(e^{j(ω+ω₀)}) + ½ X(e^{j(ω−ω₀)})
Multiplication                  x(n) y(n)                 (1/2π) ∫_{−π}^{π} X(e^{jλ}) Y(e^{j(ω−λ)}) dλ
Differentiation in frequency    n x(n)                    j dX(e^{jω})/dω
Conjugation                     x*(n)                     X*(e^{−jω})
Parseval's theorem              Σ_{n=−∞}^{∞} x(n) y*(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) Y*(e^{jω}) dω
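The autocorrelation example can be checked numerically: truncating the geometric sequence far enough makes the finite sum indistinguishable from the closed forms. A NumPy sketch:

```python
import numpy as np

a = 0.8
w = np.linspace(-np.pi, np.pi, 101)

# DTFT of x(n) = a^n u(n), |a| < 1, truncated where the tail is negligible
n = np.arange(400)            # 0.8**400 is far below machine precision
x = a ** n
X = np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

# Closed form 1/(1 - a e^{-jw}) and the autocorrelation spectrum from (17.18)
X_closed = 1.0 / (1.0 - a * np.exp(-1j * w))
S = 1.0 / (1.0 - 2 * a * np.cos(w) + a ** 2)

assert np.allclose(X, X_closed)
assert np.allclose(np.abs(X) ** 2, S)
```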
17.2.3 Finite Sequences

Practical considerations usually dictate that we deal with truncated series. We must, therefore, consider the effect of the missing data if, for example, the time series x(n) is defined in the whole interval 0 ≤ n < ∞. The truncated Fourier transform is defined by

    X_N(e^{j\omega}) = \sum_{n=0}^{N-1} x(n) e^{-j\omega n}      (17.23)

We introduce the Fourier transform of x(n) in this expression so that

    X_N(e^{j\omega}) = \sum_{n=0}^{N-1} \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega'}) e^{j\omega' n}\, d\omega' \right] e^{-j\omega n}
        = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega'}) \sum_{n=0}^{N-1} e^{-j(\omega - \omega')n}\, d\omega'
        = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega'}) W(e^{j(\omega - \omega')})\, d\omega' = \frac{1}{2\pi} X(e^{j\omega}) \circledast W(e^{j\omega})      (17.24)

where

    W(e^{j\omega}) = \sum_{n=0}^{N-1} e^{-j\omega n} = e^{-j\omega(N-1)/2} \frac{\sin(\omega N/2)}{\sin(\omega/2)}      (17.25)

The transform function W(e^{jω}) is the rectangular window transform. We observe that with a finite data sequence, a convolution operation appears. From Equation 17.24 we observe that to recover X(e^{jω}) we would require W(e^{jω}) to be a delta function in the interval −π ≤ ω ≤ π. The amplitude |W(e^{jω})| = |sin(ωN/2)/sin(ω/2)| has the properties of a delta function and approaches one as N → ∞. Thus, the longer the time-data sequence that we observe, the less distortion will occur in the spectrum of X(e^{jω}). Because F_DT^{−1}{πδ(ω − ω₀) + πδ(ω + ω₀)} = cos ω₀n, |ω| < π, the truncated Fourier transform of cos ω₀n is given by

    X_N(e^{j\omega}) = e^{-j(\omega - \omega_0)(N-1)/2} \frac{\sin[(\omega - \omega_0)N/2]}{2\sin[(\omega - \omega_0)/2]} + e^{-j(\omega + \omega_0)(N-1)/2} \frac{\sin[(\omega + \omega_0)N/2]}{2\sin[(\omega + \omega_0)/2]}      (17.26)

This indicates that instead of delta functions at ω = ±ω₀, two sine functions appear. This phenomenon is known as the smearing effect.

TABLE 17.3 DTFTs of Some Typical Discrete-Time Signals

1. x(n) = δ(n):
       X(e^{jω}) = 1
2. x(n) = u(n) − u(n − N):
       X(e^{jω}) = e^{−j(ω/2)(N−1)} sin(ωN/2)/sin(ω/2)
3. x(n) = sin(ω₀n)/(πn), |ω₀| < π:
       X(e^{jω}) = 1 for |ω| < ω₀;  0 for ω₀ ≤ |ω| ≤ π
4. x(n) = aⁿu(n):
       X(e^{jω}) = 1/(1 − a e^{−jω})
5. x(n) = aⁿ cos(nω₀) u(n), |ω₀| < π:
       X(e^{jω}) = ½ · 1/(1 − a e^{−j(ω−ω₀)}) + ½ · 1/(1 − a e^{−j(ω+ω₀)})

(In the original table each entry is accompanied by plots of x(n) and the magnitude |X(e^{jω})|; the plots use a = 0.9 and ω₀ = 0.2π.)
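Equation 17.25 is a finite geometric sum, so it can be verified directly. A NumPy sketch (the removable singularity at ω = 0 is simply skipped):

```python
import numpy as np

N = 16
n = np.arange(N)
w = np.linspace(-np.pi, np.pi, 257)
w = w[np.abs(np.sin(w / 2)) > 1e-9]   # avoid the removable singularity at w = 0

# Direct DTFT of the length-N rectangular window
W = np.array([np.sum(np.exp(-1j * wi * n)) for wi in w])

# Closed form (17.25): e^{-jw(N-1)/2} * sin(wN/2) / sin(w/2)
W_closed = np.exp(-1j * w * (N - 1) / 2) * np.sin(w * N / 2) / np.sin(w / 2)

assert np.allclose(W, W_closed)
```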
17.2.4 Frequency Response of LTI Discrete Systems

The Nth-order discrete system is characterized by the difference equation

    y(n) + a_1 y(n-1) + \cdots + a_{N-1} y(n-N+1) = b_0 x(n) + b_1 x(n-1) + \cdots + b_{N-1} x(n-N+1)      (17.27)

Assuming that the transforms of y(n), x(n), and the system impulse response h(n) exist, the above equation becomes

    H(e^{j\omega}) = \frac{Y(e^{j\omega})}{X(e^{j\omega})} = \frac{\sum_{i=0}^{N-1} b_i e^{-j\omega i}}{1 + \sum_{i=1}^{N-1} a_i e^{-j\omega i}}      (17.28)

where H(e^{jω}) is its transfer function, a periodic function with period 2π.

17.2.5 Approximation to Continuous-Time Fourier Transforms

The Fourier transform of a continuous-time function is given by

    X(\Omega) = \int_{-\infty}^{\infty} x(t) e^{-j\Omega t}\, dt      (17.29)

where Ω is used to designate the frequency of a continuous-time function. We can approximate the above integral in the form

    X(\Omega) = \sum_{n=-\infty}^{\infty} \int_{nT}^{nT+T} x(t) e^{-j\Omega t}\, dt \cong \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-j\Omega T n}      (17.30)

By comparing this equation with Equation 17.1, we observe the following correspondence:

    x(n) = T\, x(nT)      (17.31)

    \omega = \Omega T      (17.32)

Therefore, the following steps can be taken to approximate the continuous-time Fourier transform:
1. Select the time interval T such that X(Ω) ≅ 0 for all |Ω| > π/T.
2. Sample x(t) at times nT to obtain x(nT).
3. Compute the discrete-time Fourier transform using the sequence {T x(nT)}.
4. The resulting approximation is then X(Ω) ≅ X(e^{jω})|_{ω=ΩT} for −π/T < Ω < π/T.

17.3 The Discrete Fourier Transform

17.3.1 Definitions of the Discrete Fourier Transform

The discrete Fourier transform pair is defined by

    X(k) \equiv \mathcal{F}_D\{x(n)\} = \sum_{n=0}^{N-1} x(n) e^{-j2\pi kn/N}, \quad k = 0, 1, 2, \ldots, N-1      (17.33)

    x(n) \equiv \mathcal{F}_D^{-1}\{X(k)\} = \frac{1}{N} \sum_{k=0}^{N-1} X(k) e^{j2\pi kn/N}, \quad n = 0, 1, 2, \ldots, N-1      (17.34)

where X(k) ≡ X(2πk/N). If we substitute Equation 17.33 in Equation 17.34, we obtain

    \frac{1}{N} \sum_{k=0}^{N-1} \left[ \sum_{m=0}^{N-1} x(m) e^{-j2\pi km/N} \right] e^{j2\pi kn/N} = \frac{1}{N} \sum_{m=0}^{N-1} x(m) \sum_{k=0}^{N-1} e^{-j2\pi(m-n)k/N}

But the last summation is equal to zero for m ≠ n and equal to N for m = n; thus, the last expression becomes x(n)N/N = x(n), which proves that Equations 17.33 and 17.34 are the DFT pair.

17.3.1.1 DFT as a Linear Transformation

Equations 17.33 and 17.34 can be considered as linear transformations of the sequences x(n) and X(k). Let x_N be an N-point vector of the signal sequence and X_N an N-point vector of the frequency samples. We can write Equation 17.33 in the form

    X_N = W_N x_N      (17.35)

where

    X_N = [X(0), X(1), \ldots, X(N-1)]^T, \quad x_N = [x(0), x(1), \ldots, x(N-1)]^T,

    W_N = \begin{bmatrix}
    1 & 1 & 1 & \cdots & 1 \\
    1 & e^{-j2\pi/N} & (e^{-j2\pi/N})^2 & \cdots & (e^{-j2\pi/N})^{N-1} \\
    \vdots & \vdots & \vdots & & \vdots \\
    1 & (e^{-j2\pi/N})^{N-1} & (e^{-j2\pi/N})^{2(N-1)} & \cdots & (e^{-j2\pi/N})^{(N-1)(N-1)}
    \end{bmatrix}      (17.36)

If the inverse of W_N exists, then Equation 17.35 gives

    x_N = W_N^{-1} X_N      (17.37)

which is the inverse discrete Fourier transform (IDFT). The matrix W_N is symmetric and has the properties

    W_N^{-1} = \frac{1}{N} W_N^*, \quad W_N W_N^* = N I_N      (17.38)

where I_N is an N × N identity matrix; up to the scale factor, W_N is an orthogonal (unitary) matrix.

17.3.2 Properties of the DFT

Periodicity: If x(n) and X(k) are a DFT pair, then

    x(n + N) = x(n) \quad \text{for all } n      (17.39)

    X(k + N) = X(k) \quad \text{for all } k      (17.40)

which can easily be proved from Equations 17.33 and 17.34.

Linearity: If F_D{x(n)} = X(k) and F_D{y(n)} = Y(k), then

    \mathcal{F}_D\{a x(n) + b y(n)\} = a X(k) + b Y(k)      (17.41)

This is easily proved using Equations 17.33 and 17.34 and applying the linearity property of the summation operator.

Circular symmetries: The DFT of a finite-duration sequence x(n), 0 ≤ n ≤ N − 1, is equivalent to the N-point DFT of the periodic sequence

    x_p(n) = \sum_{\ell=-\infty}^{\infty} x(n - \ell N)      (17.42)

The shifted form of this equation is

    x_{sp}(n) = x_p(n - k) = \sum_{\ell=-\infty}^{\infty} x(n - k - \ell N)      (17.43)

and from this we can define the new finite-duration sequence

    x_s(n) = \begin{cases} x_{sp}(n) & 0 \le n \le N-1 \\ 0 & \text{otherwise} \end{cases}      (17.44)

which is related to the original sequence x(n) by a circular shift. Hence, Equation 17.44 can be written in the form

    x_s(n) = x(n - k, \text{modulo } N) \equiv x((n - k))_N      (17.45)

For the circular shift of x_s(n) with k = 2 and N = 4, we obtain x_s(n) = x((n − 2))₄ and, thus,

    x_s(0) = x((-2))_4 = x(2)
    x_s(1) = x((-1))_4 = x(3)
    x_s(2) = x((0))_4 = x(0)
    x_s(3) = x((1))_4 = x(1)

If an N-point sequence is folded, the modulo-N operation on the argument of x(−n) is defined by

    x((-n))_N = \begin{cases} x(0) & n = 0 \\ x(N - n) & 1 \le n \le N-1 \end{cases}      (17.46)

and is called circular folding. The sequence x(n) is folded counterclockwise and the points n = 0 and n = N overlap. The DFT is given by

    \mathcal{F}_D\{x((-n))_N\} = X((-k))_N = \begin{cases} X(0) & k = 0 \\ X(N - k) & 1 \le k \le N-1 \end{cases}      (17.47)

Circularly even:

    x(N - n) = x(n), \quad 1 \le n \le N-1      (17.48)

Circularly odd:

    x(N - n) = -x(n), \quad 1 \le n \le N-1      (17.49)

Circularly even periodic:

    x_p(n) = x_p(-n) = x_p(N - n)      (17.50)

Circularly odd periodic:

    x_p(n) = -x_p(-n) = -x_p(N - n)      (17.51)

Conjugate even periodic:

    x_p(n) = x_p^*(N - n)      (17.52)

Conjugate odd periodic:

    x_p(n) = -x_p^*(N - n)      (17.53)

The above suggest the following relationships:

    x_p(n) = x_{pe}(n) + x_{po}(n)      (17.54)

    x_{pe}(n) = \frac{1}{2}\left[ x_p(n) + x_p^*(N - n) \right], \quad x_{po}(n) = \frac{1}{2}\left[ x_p(n) - x_p^*(N - n) \right]      (17.55)

17.3.2.1 Symmetry Properties of the DFT

Let an N-point sequence {x(n)} and its DFT be complex valued. Hence, we can express them in the form

    x(n) = x_r(n) + j x_i(n), \quad 0 \le n \le N-1      (17.56a)

    X(k) = X_r(k) + j X_i(k), \quad 0 \le k \le N-1      (17.56b)

Introducing these relationships into Equations 17.33 and 17.34, we obtain

    X_r(k) = \sum_{n=0}^{N-1} \left[ x_r(n)\cos\frac{2\pi kn}{N} + x_i(n)\sin\frac{2\pi kn}{N} \right]      (17.57a)

    X_i(k) = -\sum_{n=0}^{N-1} \left[ x_r(n)\sin\frac{2\pi kn}{N} - x_i(n)\cos\frac{2\pi kn}{N} \right]      (17.57b)
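Equations 17.57a and 17.57b can be confirmed against a standard FFT routine. A NumPy sketch using broadcasting to form the cosine and sine kernels:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
xr, xi = x.real, x.imag

n = np.arange(N)
k = np.arange(N)[:, None]                 # broadcast k over n
c = np.cos(2 * np.pi * k * n / N)
s = np.sin(2 * np.pi * k * n / N)

# Equations 17.57a,b: real and imaginary parts of the DFT
Xr = np.sum(xr * c + xi * s, axis=1)
Xi = -np.sum(xr * s - xi * c, axis=1)

assert np.allclose(Xr + 1j * Xi, np.fft.fft(x))
```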
Similarly, from the inverse transform,

    x_r(n) = \frac{1}{N} \sum_{k=0}^{N-1} \left[ X_r(k)\cos\frac{2\pi kn}{N} - X_i(k)\sin\frac{2\pi kn}{N} \right]      (17.58a)

    x_i(n) = \frac{1}{N} \sum_{k=0}^{N-1} \left[ X_r(k)\sin\frac{2\pi kn}{N} + X_i(k)\cos\frac{2\pi kn}{N} \right]      (17.58b)

17.3.2.2 Real-Valued Sequences

If the sequence x(n) is real, it follows directly from the DFT pair that

    X(N - k) = X^*(k) = X(-k)      (17.59)

As a result of the above equation, we obtain

    |X(N - k)| = |X(k)|, \quad \text{angle}\{X(N - k)\} = -\text{angle}\{X(k)\}      (17.60)

Since x_i(n) = 0, x(n) can be determined from Equation 17.58a.

17.3.2.3 Real and Even Sequences

If x(n) is real and even, that is,

    x(n) = x(N - n), \quad 0 \le n \le N-1      (17.61)

then Equation 17.57b yields X_i(k) = 0 and, hence, the DFT becomes

    X(k) = \sum_{n=0}^{N-1} x(n)\cos\frac{2\pi kn}{N}, \quad 0 \le k \le N-1      (17.62)

which is real and even. In addition, since X_i(k) = 0, the IDFT reduces to

    x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)\cos\frac{2\pi kn}{N}, \quad 0 \le n \le N-1      (17.63)

17.3.2.4 Real and Odd Sequences

If x(n) is real and odd (x_i(n) = 0), that is,

    x(n) = -x(N - n)      (17.64)

then Equation 17.57a yields X_r(k) = 0 and, hence, the DFT becomes (see also Equation 17.56b)

    X(k) = -j \sum_{n=0}^{N-1} x(n)\sin\frac{2\pi kn}{N}, \quad 0 \le k \le N-1      (17.65)

which is purely imaginary and odd. Since X_r(k) = 0, the IDFT reduces to

    x(n) = j\,\frac{1}{N} \sum_{k=0}^{N-1} X(k)\sin\frac{2\pi kn}{N}, \quad 0 \le n \le N-1      (17.66)

17.3.2.5 Imaginary Sequences

If x(n) = j x_i(n), Equations 17.57a and 17.57b reduce to

    X_r(k) = \sum_{n=0}^{N-1} x_i(n)\sin\frac{2\pi kn}{N}      (17.67a)

    X_i(k) = \sum_{n=0}^{N-1} x_i(n)\cos\frac{2\pi kn}{N}      (17.67b)

X_r(k) is odd and X_i(k) is even. If x_i(n) is odd, then X_i(k) = 0 and, hence, X(k) is purely real. If x_i(n) is even, then X_r(k) = 0 and, hence, X(k) is purely imaginary. The symmetry properties are summarized in Table 17.4.

TABLE 17.4 Symmetries of the DFT

N-Point Sequence x(n), 0 ≤ n ≤ N − 1         DFT X(k), 0 ≤ k ≤ N − 1
Complex signals
  x(n)                                       X(k)
  x*(n)                                      X*(N − k)
  x*(N − n)                                  X*(k)
  x_r(n)                                     X_e(k) = ½[X(k) + X*(N − k)]
  j x_i(n)                                   X_o(k) = ½[X(k) − X*(N − k)]
  x_e(n) = ½[x(n) + x*(N − n)]               X_r(k)
  x_o(n) = ½[x(n) − x*(N − n)]               j X_i(k)
Real signals
  x(n)                                       X(k) = X*(N − k)
                                             X_r(k) = X_r(N − k)
                                             X_i(k) = −X_i(N − k)
                                             |X(k)| = |X(N − k)|
                                             angle{X(k)} = −angle{X(N − k)}
  x_e(n) = ½[x(n) + x(N − n)]                X_r(k)
  x_o(n) = ½[x(n) − x(N − n)]                j X_i(k)

17.3.2.6 Circular Convolution

Let x₁(n), x₂(n), and x₃(n) be three sequences of length N. Then, if we take the inverse transform of the DFT product X₁(k)X₂(k), we obtain

    x_3(n) = \frac{1}{N} \sum_{k=0}^{N-1} X_1(k) X_2(k) e^{j2\pi nk/N}, \quad 0 \le n \le N-1      (17.68)

Substituting the inverse transforms of X₁(k) and X₂(k) in Equation 17.68, we obtain the relation

    x_3(m) = \sum_{n=0}^{N-1} x_1(n)\, x_2((m - n))_N, \quad 0 \le m \le N-1      (17.69)
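The symmetry relations for real sequences are easy to confirm with an FFT. A NumPy sketch checking Equations 17.59 through 17.62 (the circularly even part is built by averaging x(n) with x((N − n) mod N)):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 9
x = rng.standard_normal(N)                 # a real N-point sequence
X = np.fft.fft(x)
n = np.arange(N)
k = np.arange(1, N)

# Real sequences (17.59): X(N-k) = X*(k); hence equal magnitudes (17.60)
assert np.allclose(X[N - k], np.conj(X[k]))
assert np.allclose(np.abs(X[N - k]), np.abs(X[k]))

# Real and circularly even, xe(n) = xe(N-n) (17.61): its DFT (17.62) is real
xe = (x + x[(N - n) % N]) / 2
assert np.allclose(np.fft.fft(xe).imag, 0.0)
```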
Example
The circular convolution of x₁(n) = {1, 2, 1, 4} and x₂(n) = {1, 2, 3, 4} is found as follows:

    x_3(m) = \sum_{n=0}^{N-1} x_1(n)\, x_2((m - n))_N,
    x_3(0) = \sum_{n=0}^{3} x_1(n)\, x_2((-n))_4 = x_1(0)x_2(0) + x_1(1)x_2(4-1) + x_1(2)x_2(4-2) + x_1(3)x_2(4-3)
           = 1\cdot1 + 2\cdot4 + 1\cdot3 + 4\cdot2 = 20, \quad \text{etc.}

Therefore, we write

    \mathcal{F}_D\{x_1(n) \circledast x_2(n)\} = X_1(k) X_2(k), \quad 0 \le k \le N-1      (17.70)

17.3.2.7 Time Reversal

If F_D{x(n)} = X(k), then

    \mathcal{F}_D\{x((-n))_N\} = \mathcal{F}_D\{x(N - n)\} = X((-k))_N = X(N - k), \quad 0 \le k \le N-1      (17.71)

Proof:

    \mathcal{F}_D\{x(N - n)\} = \sum_{n=0}^{N-1} x(N - n) e^{-j2\pi kn/N} = \sum_{m=0}^{N-1} x(m) e^{-j2\pi k(N - m)/N}
        = \sum_{m=0}^{N-1} x(m) e^{j2\pi km/N} = \sum_{m=0}^{N-1} x(m) e^{-j2\pi m(N - k)/N} = X(N - k)

17.3.2.8 Circular Time Shift

If F_D{x(n)} = X(k), then

    \mathcal{F}_D\{x((n - \ell))_N\} = X(k) e^{-j2\pi k\ell/N}      (17.72)

Proof:

    \mathcal{F}_D\{x((n - \ell))_N\} = \sum_{n=0}^{\ell-1} x((n - \ell))_N\, e^{-j2\pi kn/N} + \sum_{n=\ell}^{N-1} x(n - \ell) e^{-j2\pi kn/N}

But x((n − ℓ))_N = x(N − ℓ + n), so

    \sum_{n=0}^{\ell-1} x((n - \ell))_N\, e^{-j2\pi kn/N} = \sum_{n=0}^{\ell-1} x(N - \ell + n) e^{-j2\pi kn/N} = \sum_{m=N-\ell}^{N-1} x(m) e^{-j2\pi k(m+\ell)/N}

and

    \sum_{n=\ell}^{N-1} x(n - \ell) e^{-j2\pi kn/N} = \sum_{m=0}^{N-1-\ell} x(m) e^{-j2\pi k(m+\ell)/N}

Hence,

    \mathcal{F}_D\{x((n - \ell))_N\} = \sum_{m=0}^{N-1} x(m) e^{-j2\pi k(m+\ell)/N} = X(k) e^{-j2\pi k\ell/N}

17.3.2.9 Circular Frequency Shift

If F_D{x(n)} = X(k), then

    \mathcal{F}_D\{x(n) e^{j2\pi\ell n/N}\} = X((k - \ell))_N      (17.73)

17.3.2.10 Complex Conjugate

If F_D{x(n)} = X(k), then

    \mathcal{F}_D\{x^*(n)\} = X^*((-k))_N = X^*(N - k)      (17.74)

and since

    \frac{1}{N} \sum_{k=0}^{N-1} X^*(k) e^{j2\pi kn/N} = \left[ \frac{1}{N} \sum_{k=0}^{N-1} X(k) e^{j2\pi k(N-n)/N} \right]^*

it follows that

    x^*((-n))_N = x^*(N - n) = \mathcal{F}_D^{-1}\{X^*(k)\}      (17.75)

17.3.2.11 Circular Correlation

If F_D{x(n)} = X(k) and F_D{h(n)} = H(k), then the DFT of the circular cross-correlation is

    \mathcal{F}_D\left\{ \sum_{n=0}^{N-1} x(n)\, h^*((n - \ell))_N \right\} = X(k) H^*(k)      (17.76)

Proof: The correlation of Equation 17.76 can be represented as a circular convolution. Hence,

    \mathcal{F}_D\{x(m) \circledast h^*(-m)\} = X(k) H^*(k)      (17.77)

where Equation 17.70 was used. Furthermore, if x(n) = h(n), an autocorrelation case, then

    \mathcal{F}_D\{x(m) \circledast x^*(-m)\} = |X(k)|^2      (17.78)

17.3.2.12 Product

If F_D{x(n)} = X(k) and F_D{h(n)} = H(k), then

    \mathcal{F}_D\{x(n) h(n)\} = \frac{1}{N} X(k) \circledast H(k)      (17.79)

By interchanging the roles of time and frequency in the expression for the circular convolution, Equation 17.79 results.
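The circular convolution theorem (Equations 17.69 and 17.70) can be checked with the same numbers as the worked example. A NumPy sketch:

```python
import numpy as np

x1 = np.array([1, 2, 1, 4])
x2 = np.array([1, 2, 3, 4])
N = 4

# Direct circular convolution (17.69): x3(m) = sum_n x1(n) x2((m-n))_N
m = np.arange(N)[:, None]
n = np.arange(N)
x3 = np.sum(x1[n] * x2[(m - n) % N], axis=1)

# Via the DFT product (17.68/17.70)
x3_dft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real

assert np.allclose(x3, x3_dft)
assert x3[0] == 20            # matches the worked example
```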
17.3.2.13 Parseval's Theorem

If F_D{x(n)} = X(k) and F_D{h(n)} = H(k), then

    \sum_{n=0}^{N-1} x(n) h^*(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k) H^*(k)      (17.80)

TABLE 17.5 Properties of the DFT

Property                    Time Function x(n), h(n)        DFT
Linearity                   a x(n) + b h(n)                 a X(k) + b H(k)
Periodicity                 x(n + N) = x(n)                 X(k + N) = X(k)
Time reversal               x(N − n)                        X(N − k)
Circular time shift         x((n − ℓ))_N                    X(k) e^{−j2πkℓ/N}
Circular frequency shift    x(n) e^{j2πℓn/N}                X((k − ℓ))_N
Complex conjugate           x*(n)                           X*(N − k)
Circular convolution        x(n) ⊛ h(n)                     X(k) H(k)
Circular correlation        x(n) ⊛ h*(−n)                   X(k) H*(k)
Multiplication              x(n) h(n)                       (1/N) X(k) ⊛ H(k)
Symmetry (duality)          (1/N) X(n)                      x((−k))_N
Parseval's theorem          Σ_{n=0}^{N−1} x(n) h*(n) = (1/N) Σ_{k=0}^{N−1} X(k) H*(k)

TABLE 17.6 DFTs of Some Functions

3. f(n) = δ(n − n₀), 0 ≤ n ≤ N − 1:
       F(k) = e^{−j2πn₀k/N},  0 ≤ k ≤ N − 1
4. f(n) = cos(2πk₀n/N), k₀ an integer, 0 < k₀ < N − 1, 0 ≤ n ≤ N − 1:
       F(k) = (N/2) δ(k − k₀) + (N/2) δ(k − (N − k₀))
7. f(n) = n/N, 0 ≤ n ≤ N − 1:
       F(0) = (N − 1)/2;  F(k) = −1/2 + (j/2) cot(πk/N) for 1 ≤ k ≤ N − 1
8. f(n) = 1 for 0 ≤ n ≤ m, 0 for m < n ≤ N − 1 (m < N):
       F(k) = e^{−jπkm/N} \frac{\sin[\pi k(m+1)/N]}{\sin(\pi k/N)}

(In the original table each entry is accompanied by plots of f(n), |F(k)|, Re[F(k)], and Im[F(k)]; the plots use a = 0.5, n₀ = 5, k₀ = 5, and m = 5.)
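Parseval's theorem for the DFT (Equation 17.80) is a one-line numerical check:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X, H = np.fft.fft(x), np.fft.fft(h)

# Parseval's theorem (17.80): sum_n x(n) h*(n) = (1/N) sum_k X(k) H*(k)
assert np.allclose(np.sum(x * np.conj(h)), np.sum(X * np.conj(H)) / N)
```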
Discrete Chirp-Fourier Transform

    |\tilde{X}_c(k, l)| \begin{cases} > \sqrt{N}(1 - \xi), & \text{if } l = l_0 \text{ and } k = k_0 \\ < 1 + \sqrt{N}\,\xi, & \text{if } l \ne l_0 \\ < \sqrt{N}\,\xi, & \text{if } l = l_0 \text{ and } k \ne k_0 \end{cases}      (18.22)

where ξ is defined in Equation 18.21. From this theorem, one can see that, as long as the chirp rate error level ε and the constant frequency error level η are low enough (i.e., l̃₀ and k̃₀ are close enough to the integers l₀ and k₀, i.e., ε, η ≈ 0, which can be achieved when the sampling rate is fast enough), the DCFT of a linear chirp signal still has the peak property as in Theorem 18.1. We shall see some numerical examples in Section 18.5.
18.3 DCFT Properties for Multiple Component Chirp Signals
We next consider a multiple component chirp signal x(n) of the form

    x(n) = \sum_{i=1}^{I} A_i W_N^{l_i n^2 + k_i n} + z(n),      (18.23)

where z(n) is an additive i.i.d. noise with mean 0 and variance σ², |A_i|² > 0 is the signal power of the ith chirp component, and (k_{i_1}, l_{i_1}) ≠ (k_{i_2}, l_{i_2}) for i₁ ≠ i₂. For i = 1, 2, ..., I, let

    x_i(n) = A_i W_N^{l_i n^2 + k_i n}.      (18.24)

Then, the DCFT X_c(k, l) of x(n) is

    X_c(k, l) = \sum_{i=1}^{I} X_c^{(i)}(k, l) + Z_c(k, l),

where X_c^{(i)}(k, l) is the DCFT of the ith chirp component x_i(n) and Z_c(k, l) is the DCFT of the noise z(n). From the study in Section 18.2, we know that each X_c^{(i)}(k, l) has a peak at (k_i, l_i) with peak value |A_i|√N and the maximal off-peak value is |A_i|. What we are interested in here is whether there is a peak of X_c(k, l) at each (k_i, l_i), 1 ≤ i ≤ I. If there is a peak at (k_i, l_i), then a chirp component with constant frequency k_i and chirp rate l_i is detected. To study this question, let us calculate the mean magnitude of X_c(k, l). We first calculate the mean |X_c(k, l)| at (k_i, l_i). For i = 1, 2, ..., I,

    E|X_c(k_i, l_i)| \ge |X_c^{(i)}(k_i, l_i)| - \sum_{t \ne i} |X_c^{(t)}(k_i, l_i)| - E|Z_c(k_i, l_i)|
                     \ge \sqrt{N}\,|A_i| - \sum_{t \ne i} |A_t| - \left( E|Z_c(k_i, l_i)|^2 \right)^{1/2},      (18.25)

where the last step uses E|Z_c(k_i, l_i)| ≤ (E|Z_c(k_i, l_i)|²)^{1/2}, from the Schwarz inequality with respect to the expectation E. Thus, to estimate the lower bound of the mean magnitude |X_c(k_i, l_i)| we need to estimate the mean power of the DCFT of the noise z(n). Since, for any fixed l, Z_c(k, l) is the DFT of z(n)W_N^{ln²}, the energy of Z_c(k, l) in terms of the frequency variable k is the same as that of z(n)W_N^{ln²}, i.e., that of z(n). This proves

    E|Z_c(k, l)|^2 = \sigma^2.      (18.26)

Therefore, for i = 1, 2, ..., I, by Equations 18.25 and 18.26 we have

    E|X_c(k_i, l_i)| \ge \sqrt{N}\,|A_i| - \sum_{t \ne i} |A_t| - \sigma.      (18.27)

Furthermore, for (k, l) ≠ (k_i, l_i) for i = 1, 2, ..., I,

    E|X_c(k, l)| \le \sum_{i=1}^{I} |X_c^{(i)}(k, l)| + \left( E|Z_c(k, l)|^2 \right)^{1/2} \le \sum_{i=1}^{I} |A_i| + \sigma.      (18.28)

By comparing Equations 18.27 and 18.28, there are peaks at (k_i, l_i) in the DCFT domain if

    \sqrt{N}\,|A_i| - \sum_{t \ne i} |A_t| - \sigma > \sum_{i=1}^{I} |A_i| + \sigma.

Or,

    |A_i| > \frac{2}{\sqrt{N} - 1}\left( \sum_{t \ne i} |A_t| + \sigma \right).      (18.29)

This concludes the following theorem.

THEOREM 18.4

Consider a multiple component chirp signal x(n) in Equation 18.23 with components at different constant frequency and chirp rate pairs (k_i, l_i) of power |A_i|² for i = 1, 2, ..., I. Its DCFT magnitudes at (k_i, l_i) are lower bounded by

    \sqrt{N}\,|A_i| - \sum_{t \ne i} |A_t| - \sigma, \quad i = 1, 2, \ldots, I,      (18.30)

and its DCFT magnitudes at other (k, l) are upper bounded by

    \sum_{i=1}^{I} |A_i| + \sigma, \quad (k, l) \ne (k_i, l_i), \quad i = 1, 2, \ldots, I.      (18.31)

For each i with 1 ≤ i ≤ I, a peak in the DCFT domain appears at (k_i, l_i) if the inequality 18.29 holds. From (18.29), one can see that, when the number I of multiple chirp components is fixed, all the peaks at (k_i, l_i) for i = 1, 2, ..., I will appear in the DCFT domain as long as the signal length N, a prime, is sufficiently large. In other words, when the signal is sufficiently long, all the chirp components can be detected by using the DCFT.
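The detection claim of Theorem 18.4 can be illustrated directly. The sketch below assumes the DCFT normalization used earlier in the chapter, X_c(k, l) = (1/√N) Σ_n x(n) W_N^{ln²+kn} with W_N = e^{−j2π/N}, which gives the √N peak values quoted above; the `dcft` helper is our own illustration:

```python
import numpy as np

def dcft(x):
    """Assumed DCFT: X_c(k, l) = (1/sqrt(N)) * sum_n x(n) W_N^{l n^2 + k n},
    W_N = exp(-2j*pi/N).  For each fixed l this is an ordinary DFT of
    x(n) * W_N^{l n^2}, so every row can be computed with np.fft.fft."""
    N = len(x)
    n = np.arange(N)
    rows = [np.fft.fft(x * np.exp(-2j * np.pi * l * n ** 2 / N)) for l in range(N)]
    return np.array(rows) / np.sqrt(N)      # rows indexed by l, columns by k

N = 67                                       # a prime length, as the theory requires
n = np.arange(N)
comps = [(42, 15), (45, 44)]                 # (k_i, l_i), the pairs used in Figure 18.1
# Noise-free instance of Equation 18.23 with all A_i = 1
x = sum(np.exp(2j * np.pi * (l * n ** 2 + k * n) / N) for k, l in comps)

mag = np.abs(dcft(x))
for k, l in comps:
    # Each component produces a peak of height about sqrt(N) at (k_i, l_i)
    assert mag[l, k] > np.sqrt(N) - 2
```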
We next consider the special case when all the signal powers |A_i|² of the different chirp components are the same, i.e.,

    |A_i| = A, \quad \text{for } i = 1, 2, \ldots, I.

In this case, (18.29) becomes

    1 > \frac{2}{\sqrt{N} - 1}\left( I - 1 + \frac{\sigma}{A} \right) = \frac{2}{\sqrt{N} - 1}\left( I - 1 + \frac{1}{\sqrt{\gamma}} \right),

where γ is the signal-to-noise ratio (SNR)

    \gamma = \frac{A^2}{\sigma^2}.      (18.32)

In other words, given the SNR γ, all peaks at (k_i, l_i) for i = 1, 2, ..., I appear in the DCFT domain if the number of chirp components satisfies

    I < \frac{\sqrt{N} + 1}{2} - \frac{1}{\sqrt{\gamma}}.      (18.33)

This gives us the following corollary.

COROLLARY 18.1

Let x(n) be of the form (18.23) with all equal powers |A_i|² = A² and the SNR γ defined in Equation 18.32. Then, there are peaks at (k_i, l_i) for i = 1, 2, ..., I if the number I of the chirp components satisfies the upper bound (18.33).

The above corollary basically says that, in the case when all signal powers of the multiple chirp components are the same, the chirp components can be detected using the DCFT if their number is less than √N/2 when the signal length N is sufficiently large. From the simulation results in Section 18.5, one will see that the upper bound in Equation 18.33 is already optimal, i.e., tight. Similar to the single chirp DCFT performance analysis in Theorem 18.3, when the chirp rate and the constant frequency parameters l_i and k_i are not integers, the above results for the multiple chirp DCFT can be generalized. Some numerical examples will be presented in Section 18.5.
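The bound in Equation 18.33 is simple enough to evaluate; the values below reproduce the detectable-component counts used in the simulations of Section 18.5 (N = 67, γ = 0 dB and 6 dB). The helper name is our own:

```python
import math

def max_components(N, gamma):
    """Largest integer I satisfying the equal-power peak condition (18.33):
    I < (sqrt(N) + 1)/2 - 1/sqrt(gamma)."""
    bound = (math.sqrt(N) + 1) / 2 - 1 / math.sqrt(gamma)
    return math.ceil(bound) - 1       # largest integer strictly below the bound

# Values used in Section 18.5 with N = 67 (a prime)
assert max_components(67, 1.0) == 3   # gamma = 1  (0 dB)
assert max_components(67, 4.0) == 4   # gamma = 4  (6 dB)
```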
18.4 Connection to the Analog Chirp-Fourier Transform

In this section, we want to see the relationship of the DCFT and the ACFT. Let us first see the ACFT. For an analog signal x_a(t), its ACFT is

    X_{c,a}(a, b) = \int_{-\infty}^{\infty} x_a(t) \exp(-j(bt^2 + at))\, dt,      (18.34)

where a and b are real. When x_a(t) is a linear chirp, i.e., x_a(t) = exp(j(b₀t² + a₀t)), the ACFT is

    X_{c,a}(a, b) = \int_{-\infty}^{\infty} \exp(j[(b_0 - b)t^2 + (a_0 - a)t])\, dt
        = 2\int_{0}^{\infty} \cos((b_0 - b)t^2)\cos((a_0 - a)t)\, dt + 2j\int_{0}^{\infty} \sin((b_0 - b)t^2)\cos((a_0 - a)t)\, dt      (18.35)
        = \sqrt{\frac{\pi}{2}}\; \frac{1 + j\,\mathrm{sign}(b_0 - b)}{|b_0 - b|^{1/2}} \exp\!\left( -j\,\frac{(a_0 - a)^2}{4(b_0 - b)} \right),      (18.36)

where Equation 18.36 is from [19]. Clearly, when the constant frequency a₀ and the chirp rate b₀ are both matched, i.e., when a = a₀ and b = b₀, the ACFT X_{c,a}(a, b) = ∞, and otherwise X_{c,a}(a, b) is a finite value, i.e., X_{c,a}(a, b) < ∞ for b ≠ b₀ or a ≠ a₀. To consider the connection with the DCFT, let us consider the following samplings for the above analog parameters t, a, and b:

    t := \frac{n}{N^{1/3}}, \quad a := \frac{2\pi k}{N^{2/3}}, \quad b := \frac{2\pi l}{N^{1/3}},

where N is a positive integer. The reason for this sampling method is to obtain the DCFT form studied in Sections 18.2 and 18.3, and the difference between the samplings of the chirp rate b and the constant frequency a is due to the power difference between the chirp term t² and the constant frequency term t. Truncate x_a(t) such that it is zero for t ∉ [0, N^{2/3}]. Sample x_a(t) into x(n) = x_a(n/N^{1/3}) for n = 0, 1, 2, ..., N − 1. In this case, the integral in Equation 18.34 can be discretized:

    X_{c,a}\!\left( \frac{2\pi k}{N^{2/3}}, \frac{2\pi l}{N^{1/3}} \right) \approx \frac{1}{N^{1/3}} \sum_{n=0}^{N-1} x(n) W_N^{ln^2 + kn} = \frac{N^{1/2}}{N^{1/3}}\, X_c(k, l).      (18.37)

In other words,

    X_c(k, l) \approx N^{-1/6}\, X_{c,a}\!\left( \frac{2\pi k}{N^{2/3}}, \frac{2\pi l}{N^{1/3}} \right),      (18.38)

which gives a connection between the DCFT and the ACFT.
Discrete Chirp-Fourier Transform
Equation 18.33 for the number I of the detectable chirp components is 4, i.e., I I2 ¼ 4. In the following, three different numbers I ¼ 2, 3, 4 of chirp components are simulated, where the constant frequencies ki and the chirp rates li for i ¼ 1, 2, . . . , I are arbitrarily chosen. The corresponding amplitudes Ai are set to be all 1. Figures 18.1 and 18.2 show the DCFTs of signals with two chirp components at (ki, li) ¼ (42, 15), (45, 44), and the SNRs g ¼ g1 ¼ 0 dB and g ¼ g2 ¼ 6 dB in Equation 18.32, respectively.
18.5 Numerical Simulations In this section, we want to see some simple numerical simulations. Two signal lengths are considered: N ¼ 67 and N ¼ 66. We first see some examples when N ¼ 67. Two different SNRs g in Equation 18.32 are considered, which are g1 ¼ 1 (0 dB) and g2 ¼ 4 (6 dB). For the first SNR g1, the upper bound in Equation 18.33 for the number I of the detectable chirp components is 3, i.e., I I1 ¼ 3. For the second SNR g2, the upper bound in
N = 67, I = 2, γ = 0 dB
Image of |DCFT|2, N = 67, I = 2, γ = 0 dB 0
70
10
60 20
40
Chirp rate, l0
|DCFT|2
50 30 20 10 0 80
30 40 50
60 40 20 0
Chirp rate, l0
0
50 40 30 20 10 Constant frequency, k0
60
70 60 0
(a)
10
20
FIGURE 18.1
30
40
50
60
Constant frequency, k0
(b)
The DCFT of two chirp components with additive SNR g ¼ 0 dB: (a) three-dimensional plot and (b) image.
Image of |DCFT|2, N = 67, I = 2, γ = 6 dB
N = 67, I = 2, γ = 6 dB 0 10
70
20
50
Chirp rate, l0
|DCFT|2
60 40 30 20
30 40
10 0 80
50 60 40 20 0
(a)
Chirp rate, l0
0
10
20
30
40
50
60
70 60 0
Constant frequency, k0
(b)
10
20
30
40
Constant frequency, k0
FIGURE 18.2 The DCFT of two chirp components with additive SNR g ¼ 6 dB: (a) three-dimensional plot and (b) image.
50
60
18-8
Transforms and Applications Handbook
Image of |DCFT|2, N = 67, I = 3, γ = 0 dB
N = 67, I = 3, γ = 0 dB
0
20 Chirp rate, l0
|DCFT|2
10 80 70 60 50 40 30 20 10 0 80
40 50
60 40 20 0
Chirp rate, l0
(a)
30
0
10
20
30
40
50
60
70
60 0
Constant frequency, k0
10
(b)
20 30 40 Constant frequency, k0
50
60
FIGURE 18.3 The DCFT of three chirp components with additive SNR g ¼ 0 dB: (a) three-dimensional plot and (b) image.
Image of |DCFT|2, N = 67, I = 3, γ = 6 dB
N = 67, I = 3, γ = 6 dB
0
20 Chirp rate, l0
|DCFT|2
10 80 70 60 50 40 30 20 10 0 80
40 50
60 40 20 0 (a)
30
Chirp rate, l0
FIGURE 18.4
0
10
20
30
40
50
60
70
Constant frequency, k0
60 0 (b)
10
20
30
40
50
60
Constant frequency, k0
The DCFT of three chirp components with additive SNR g ¼ 6 dB: (a) three-dimensional plot and (b) image.
Figures 18.3 and 18.4 show the DCFTs of signals with three chirp components at (ki, li) ¼ (12, 2), (49, 35), (18, 24), and the SNRs g ¼ g1 ¼ 0 dB and g ¼ g2 ¼ 6 dB in Equation 18.32, respectively. Figures 18.5 and 18.6 show the DCFTs of signals with four chirp components at (ki, li) ¼ (44, 57), (38, 65), (53, 10), (55, 12), and the SNRs g ¼ g1 ¼ 0 dB and g ¼ g2 ¼ 6 dB in Equation 18.32, respectively. One can see from Figure 18.5 that, although the
upper bound for I is 3 when the SNR g ¼ g1 ¼ 0 dB, the four peaks can be seen in the DCFT domain. This is, however, not always true from the following examples. Figures 18.7 and 18.8 show the DCFTs of another set of two signals with four chirp components at (ki, li) ¼ (64, 55), (21, 39), (8, 17), (53, 44), and the SNRs g ¼ g1 ¼ 0 dB and g ¼ g2 ¼ 6 dB in Equation 18.32, respectively. One can see from Figures 18.7 that the four peaks
18-9
Discrete Chirp-Fourier Transform
N = 67, I = 4, γ = 0 dB
Image of |DCFT|2, N = 67, I = 4, γ = 0 dB
0 10
80 20 Chirp rate, l0
|DCFT|2
70 60 50
30
40 30 20 10
40
0 80
50 60 40 20 0
Chirp rate, l0
(a)
0
10
20
30
40
50
70
60
60 0
Constant frequency, k0
10
20
30
40
50
60
Constant frequency, k0
(b)
FIGURE 18.5 The DCFT of four chirp components with additive SNR g ¼ 0 dB: (a) three-dimensional plot and (b) image.
N = 67, I = 4, γ = 6 dB
10
70 60
20 Chirp rate, l0
50 |DCFT|2
Image of |DCFT|2, N = 67, I = 4, γ = 6 dB
0
40 30 20
30 40
10 0 80
50 60 40 20 0
(a)
Chirp rate, l0
FIGURE 18.6
0
10
20
30
40
50
60
70 60 0
Constant frequency, k0
(b)
10
20 30 40 Constant frequency, k0
50
60
The DCFT of four chirp components with additive SNR g ¼ 6 dB: (a) three-dimensional plot and (b) image.
(I ¼ 4) are not clear, which is because the upper bound for I in Equation 18.33 is 3 when g ¼ g1 ¼ 0 dB. The four peaks in Figure 18.8 are, however, clear because the upper bound for I in Equation 18.33 is 4 when g ¼ g2 ¼ 6 dB. When N ¼ 66, we consider the two chirp components (ki, li) ¼ (42, 15), (45, 44) in Figure 18.2 with the SNR g ¼ g2 ¼ 6 dB. Its
DCFT is shown in Figure 18.9. Clearly, it fails to show the two peaks, which illustrates the difference of the DCFT with respect to having prime and nonprime length. We next want to see some examples when the chirp rate and the constant frequency parameters li and ki are not but close to integers, i.e., e, h 0. The parameter errors are randomly added
18-10
Transforms and Applications Handbook
N = 67, I = 4, γ = 0 dB
Image of |DCFT|2, N = 67, I = 4, γ = 0 dB
0 10
100 20 Chirp rate, l0
|DCFT|2
80 60 40
30 40
20 50
0 80 60 40 20 Chirp rate, l0
(a)
0
0
10
20
30
40
50
60
70
60 0
Constant frequency, k0
10
(b)
20 30 40 Constant frequency, k0
50
60
FIGURE 18.7 The DCFT of another set of four chirp components with additive SNR g ¼ 0 dB: (a) three-dimensional plot and (b) image.
FIGURE 18.8 The DCFT of another set of four chirp components with additive SNR γ = 6 dB: (a) three-dimensional plot and (b) image.
with Gaussian distributions. Figures 18.10 and 18.11 show the DCFTs of the two chirp components (ki, li) = (41.9897, 15.0180), (45.0037, 43.9968) that are distorted from the chirp components in Figures 18.1 and 18.2. Figures 18.12 and 18.13 show the DCFTs of the three chirp components (ki, li) = (12.0050, 1.9883), (48.9875, 35.0063), (17.9825, 24.0004) that are distorted from the chirp components in Figures 18.3 and 18.4. Figures 18.14 and 18.15 show the DCFTs of the four chirp components (ki, li) = (43.9977, 56.9989), (38.0013, 64.9920), (52.9976, 9.9991), (54.9898, 12.0094) that are distorted from the chirp components
FIGURE 18.9 The DCFT of two chirp components with additive SNR γ = 6 dB and signal length N = 66: (a) three-dimensional plot and (b) image.
FIGURE 18.10 The DCFT of two chirp components (41.9897, 15.0180), (45.0037, 43.9968) with additive SNR γ = 6 dB: (a) three-dimensional plot and (b) image.
in Figures 18.5 and 18.6. One can see that, unlike in Figures 18.5 and 18.6, in Figures 18.14 and 18.15 the four peaks are not all shown well, which is due to the additional distortions of the integer chirp rate and constant frequency parameters li and ki, as studied in Theorem 18.3.

18.6 Conclusion

In this chapter, we studied the DCFT for discrete linear chirp signals. The approach is analogous to that of the DFT. We showed that, when the signal length N is a prime, all the sidelobes
FIGURE 18.11 The DCFT of two chirp components (41.9897, 15.0180), (45.0037, 43.9968) with additive SNR γ = 0 dB: (a) three-dimensional plot and (b) image.
FIGURE 18.12 The DCFT of three chirp components (12.0050, 1.9883), (48.9875, 35.0063), (17.9825, 24.0004) with additive SNR γ = 6 dB: (a) three-dimensional plot and (b) image.
(i.e., when the chirp rates or the constant frequencies are not matched) of the DCFT are not above 1, while the mainlobe (i.e., when the chirp rates and the constant frequencies are matched simultaneously) of the DCFT is √N. We showed that this is optimal, i.e., when N is not a prime, the maximal sidelobe magnitude of the DCFT is greater than 1 (in fact, we showed that the maximal sidelobe magnitude of the DCFT is greater than √2). We also presented an upper bound, in terms of the signal length N and the SNR, for the number of detectable chirp components using the DCFT. Simulations were presented to illustrate the theory. A connection of the DCFT with the ACFT was also presented.
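The mainlobe and sidelobe behavior for a prime length can be checked numerically. The sketch below assumes the DCFT normalization X(k, l) = N^{−1/2} Σ_n x(n) W^{ln² + kn} with W = exp(−2πj/N), as used in this chapter, and a single unit-amplitude chirp with integer parameters; the values N = 67 and (k0, l0) = (21, 39) are arbitrary choices.

```python
import numpy as np

N = 67                     # prime signal length
k0, l0 = 21, 39            # constant frequency and chirp rate of the chirp
n = np.arange(N)
x = np.exp(2j * np.pi * (l0 * n**2 + k0 * n) / N)    # single chirp signal

W = np.exp(-2j * np.pi / N)
# Direct computation of the DCFT with 1/sqrt(N) normalization.
dcft = np.array([[(x * W ** ((l * n**2 + k * n) % N)).sum()
                  for k in range(N)] for l in range(N)]) / np.sqrt(N)

mag = np.abs(dcft)
assert np.isclose(mag[l0, k0], np.sqrt(N))   # mainlobe equals sqrt(N)
mag[l0, k0] = 0.0
assert mag.max() <= 1.0 + 1e-8               # all sidelobes are at most 1
```

For this prime N, the mismatched chirp-rate bins are Gauss sums of magnitude exactly √N before normalization, so every sidelobe magnitude is 1 or 0, in agreement with the statement above.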
FIGURE 18.13 The DCFT of three chirp components (12.0050, 1.9883), (48.9875, 35.0063), (17.9825, 24.0004) with additive SNR γ = 0 dB: (a) three-dimensional plot and (b) image.
FIGURE 18.14 The DCFT of four chirp components (43.9977, 56.9989), (38.0013, 64.9920), (52.9976, 9.9991), (54.9898, 12.0094) with additive SNR γ = 6 dB: (a) three-dimensional plot and (b) image.
Although the DCFT was defined for linear chirps, which are quite common in radar applications, it is not hard to generalize it to higher-order chirps. Notice that the DCFT for higher-order chirps may not have the precise sidelobe values obtained in Sections 18.2 through 18.5 for linear chirps, but only roughly low sidelobe values. However, it might be possible, although more tedious, to calculate the values of the sidelobes of the DCFT for higher-order chirps when higher-order powers of P(k, l) in Equation 18.9 in the proof of Lemma 18.1 are used. Another comment we would like to make here is that, similar to spectrum estimation, when the chirp rate and the constant frequency are not integers, other high-resolution techniques may exist and are certainly of interest.
FIGURE 18.15 The DCFT of four chirp components (43.9977, 56.9989), (38.0013, 64.9920), (52.9976, 9.9991), (54.9898, 12.0094) with additive SNR γ = 0 dB: (a) three-dimensional plot and (b) image.
References

1. D. R. Wehner, High-Resolution Radar, 2nd edn., Artech House, Boston, MA, 1995.
2. V. C. Chen and H. Ling, Time-Frequency Transforms for Radar Imaging and Signal Analysis, Artech House, Boston, MA, 2002.
3. X.-G. Xia, Discrete chirp-Fourier transform and its application to chirp rate estimation, IEEE Trans. Signal Process., 48, 3122–3133, Nov. 2000.
4. S. Peleg and B. Porat, Estimation and classification of signals with polynomial phase, IEEE Trans. Inform. Theory, 37, 422–430, 1991.
5. B. Porat, Digital Processing of Random Signals: Theory and Methods, Prentice-Hall, Englewood Cliffs, NJ, 1994.
6. S. Peleg and B. Friedlander, The discrete polynomial-phase transform, IEEE Trans. Signal Process., 43, 1901–1914, Aug. 1995.
7. S. Qian, D. Chen, and Q. Yin, Adaptive chirplet based signal approximation, Proceedings of ICASSP'98, Seattle, WA, May 12–15, 1998.
8. R. Kumaresan and S. Verma, On estimating the parameters of chirp signals using rank reduction techniques, Proceedings of the 21st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, pp. 555–558, 1987.
9. P. M. Djuric and S. M. Kay, Parameter estimation of chirp signals, IEEE Trans. Acoust. Speech Signal Process., 38, 2118–2126, Dec. 1990.
10. M. Z. Ikram, K. Abed-Meraim, and Y. Hua, Estimating the parameters of chirp signals: An iterative approach, IEEE Trans. Signal Process., 46, 3436–3441, Dec. 1998.
11. T. J. Abatzoglou, Fast maximum likelihood joint estimation of frequency and frequency rate, IEEE Trans. Aerosp. Electron. Syst., 22, 708–715, Nov. 1986.
12. S. Peleg and B. Porat, The Cramer–Rao lower bound for signals with constant amplitude and polynomial phase, IEEE Trans. Signal Process., 39, 749–752, Mar. 1991.
13. L. R. Rabiner, R. W. Schafer, and C. M. Rader, The chirp z-transform algorithm and its applications, Bell Syst. Tech. J., 48, 1249–1292, May–June 1969.
14. V. Namias, The fractional order Fourier transform and its application to quantum mechanics, J. Inst. Math. Appl., 25, 241–265, 1980.
15. A. C. McBride and F. H. Kerr, On Namias' fractional Fourier transforms, IMA J. Appl. Math., 39, 150–175, 1987.
16. L. B. Almeida, The fractional Fourier transform and time-frequency representations, IEEE Trans. Signal Process., 42, 3084–3091, Nov. 1994.
17. A. W. Lohmann, Image rotation, Wigner rotation and the fractional Fourier transform, J. Opt. Soc. Am. A, 10, 2181–2186, 1993.
18. D. Mendlovic and H. M. Ozaktas, Fractional Fourier transformations and their optical implementation: I, J. Opt. Soc. Am. A, 10, 1875–1881, 1993.
19. I. N. Bronshtein and K. A. Semendyayev, Handbook of Mathematics, Van Nostrand Reinhold Company, New York, 1985.
19
Multidimensional Discrete Unitary Transforms

Artyom M. Grigoryan
The University of Texas

19.1 Introduction ................................................................................. 19-1
    Row–Column Algorithm · Vector Radix Algorithm · Method of the Polynomial Transforms
19.2 Nontraditional Forms of Representation .................................. 19-5
19.3 Partitioning of Multidimensional Transforms ......................... 19-7
    Tensor Representation · Covering with Cyclic Groups
19.4 Fourier Transform Tensor Representation ............................. 19-10
    2-D Directional Images
19.5 Tensor Algorithm of the 2-D DFT .......................................... 19-16
    N Is a Prime · N Is a Power of Two · Modified Tensor Algorithms · Recursive Tensor Algorithm · n-Dimensional DFT
19.6 Discrete Hartley Transforms .................................................... 19-26
    3-D DHT Tensor Representation · n-Dimensional DHT
19.7 2-D Shifted DFT ........................................................................ 19-32
    2^r × 2^r-Point SDFT · L^r × L^r-Point SDFT · L1L2 × L1L2-Point SDFT
19.8 2-D DCT ..................................................................................... 19-35
    Modified Algorithms of the 2-D DCT
19.9 3-D Paired Representation ....................................................... 19-41
    2D-to-3D Paired Transform · N Is a Power of Two · N Is a Power of Odd Prime · Set-Frequency Characteristics
19.10 2-D DFT on the Hexagonal Lattice ....................................... 19-55
    Paired Representation of the DHFT
19.11 Paired Transform–Based Algorithms .................................... 19-62
    Calculation of the 2-D DHT · 2-D Discrete Cosine Transform · 2-D Discrete Hadamard Transform
19.12 Conclusion ................................................................................ 19-67
References .......................................................................................... 19-67
19.1 Introduction

The use of fast discrete unitary transforms has become a powerful technique in multidimensional signal processing, and in particular in image processing. Image processing in the frequency domain is widely used in image filtration, restoration, enhancement, compression, image reconstruction from projections, and other areas [1–4]. Among the unitary transforms, one should note the Fourier, Hartley, Hadamard, and cosine transforms. The theory of the Fourier transform is well developed, and effective methods (or fast algorithms) of the discrete Fourier transform (DFT) are used for solving many problems in different areas of data processing, such as signal and image processing, speech analysis, and communication. We also observe considerable interest in many applications of the discrete Hartley transform (DHT) together with the DFT, since the DHT relates closely to the DFT and was created as an alternative form of the complex DFT, to eliminate the necessity of using complex operations. Another transform is the discrete Hadamard transform (DHdT), which is real, binary, and computationally advantageous over the fast Fourier transform (FFT). The Hadamard functions can be used for a series expansion of the signal, being orthogonal and taking values ±1 at each point. This transform has found useful applications in signal and image processing, communication systems, image coding, image enhancement, pattern recognition, and general two-dimensional filtering. The discrete cosine transform (DCT) is used in speech and image processing, especially in image compression and transform coding in telecommunication [5–9].

The application of a multidimensional transform involves the calculation of the transform, manipulation of the transform coefficients, and then calculation of the inverse transform. For signals of large sizes, this process requires a great number of operations when performing multidimensional transforms. It is
desired to maximally reduce this number, and different methods of effective calculation of multidimensional unitary transforms have been developed. For instance, in most cases the calculation of the 2-D transform is reduced by partitioning the entire image into one-dimensional (1-D) or two-dimensional (2-D) blocks and calculating the transforms of these blocks. We first consider the traditional "row–column" algorithm, in which 1-D transforms over all rows and then all columns are calculated, as well as the "vector–radix" algorithm, in which the image is divided successively into four blocks of equal size. The method of polynomial transforms developed by Nussbaumer is also considered. Then we describe, in detail, the partitionings of multidimensional transforms that are based on the concepts of the tensor and paired representations of multidimensional signals, including 2-D and three-dimensional (3-D) images. In these new forms of representation, multidimensional signals are described by sets of 1-D signals which carry the spectral information of the multidimensional signals in different subsets of frequency-points. The processing of multidimensional signals is thus reduced to processing 1-D signals, which we call splitting-signals, since they represent the multidimensional signals and split the transforms of these signals. The splitting-signals are described for the 2-D and multidimensional Fourier, Hartley, Hadamard, and cosine transforms.
19.1.1 Row–Column Algorithm

Many of the multidimensional transformations are separable, meaning that these transforms of multidimensional signals can be performed by calculating 1-D transforms consecutively along all dimensions of the signal. For instance, for a separable 2-D transformation T, the transform of a 2-D signal or image f = {f_{n,m}} of size (N × N), N > 1, can be obtained by first calculating the 1-D transforms over all columns of the image and then calculating the 1-D transforms over the rows of the obtained 2-D data, as shown in Figure 19.1. In matrix form, the transform of f can be written as

[2-D T][f] = [1-D T][f][1-D T]^t,

where "t" denotes the matrix operation of transposition, and square brackets [·] denote the matrices of the transformation T and of the image f.

FIGURE 19.1 Block diagram of calculation of the 2-D discrete transform (DT) (separable).

As an example, we consider the 2-D DFT of the image f_{n,m}, which is defined by

F_{p,s} = (F_{N,N} f)_{p,s} = Σ_{n=0}^{N−1} Σ_{m=0}^{N−1} f_{n,m} W^{np+ms},  p, s = 0:(N−1),   (19.1)

where W = W_N = exp(−2πj/N) is the kernel of the transformation and j² = −1. The designation p = 0:(N−1) denotes p as an integer that runs from 0 to (N−1). The kernel is separable, W^{np+ms} = W^{np} W^{ms}, and the transform can thus be written as

F_{p,s} = Σ_{n=0}^{N−1} [ Σ_{m=0}^{N−1} f_{n,m} W^{ms} ] W^{np} = Σ_{n=0}^{N−1} F_n(s) W^{np},  p, s = 0:(N−1),   (19.2)

where F_n(s) is the value of the 1-D DFT of row number n at point s. To calculate the 2-D DFT, 2N 1-D DFTs are used in the row–column algorithm. This algorithm is simple, but requires many operations of multiplication and addition. All twiddle coefficients W^t, t = 0:(N−1), lie on N equidistant points of the unit circle, and many of them are irrational numbers.

We now consider a transformation whose kernel lies only on the two points ±1 of the unit circle. The 2-D separable DHdT of order N × N, where N = 2^r, r > 1, is defined as

A_{p,s} = (A_{N,N} f)_{p,s} = Σ_{n=0}^{N−1} Σ_{m=0}^{N−1} f_{n,m} a(p; n) a(s; m) = Σ_{n=0}^{N−1} [ Σ_{m=0}^{N−1} f_{n,m} a(s; m) ] a(p; n).   (19.3)

The kernel of the transformation is defined by the binary function

a(p; n) = (−1)^{n_0 p_0 + n_1 p_1 + ··· + n_{r−1} p_{r−1}},   (19.4)

where (n_0, n_1, ..., n_{r−1}) and (p_0, p_1, ..., p_{r−1}) are the binary representations of the numbers n and p, respectively.

As an example, Figure 19.2 shows an image (512 × 512) in part (a), along with the 2-D discrete Fourier and Hadamard transforms of the image in (b) and (c), respectively. The realization of the Hadamard transformation requires only operations of addition (and subtraction). From the computational point of view, the 1-D Hadamard transform is faster than the complex Fourier transform. These two different transforms can share the same fast algorithm: for instance, the FFT by paired transforms can also be used for the fast Hadamard transform, by setting all twiddle coefficients W^t equal to 1 [11].
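Both separable transforms above can be sketched in a few lines of numpy (the image here is random test data; np.fft applies the same kernel W = exp(−2πj/N) row- and column-wise, and the Hadamard matrix is built from the binary kernel a(p; n) of Equation 19.4):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal((N, N))

# Row-column 2-D DFT (Equations 19.1 and 19.2): 1-D DFTs over the
# rows (inner sum over m) followed by 1-D DFTs over the columns.
F = np.fft.fft(np.fft.fft(f, axis=1), axis=0)
assert np.allclose(F, np.fft.fft2(f))

# 2-D Hadamard transform (Equation 19.3) with the binary kernel
# a(p; n) = (-1)^(n0*p0 + n1*p1 + ... + n_{r-1}*p_{r-1}).
r = 4                                              # N = 2**r
bits = (np.arange(N)[:, None] >> np.arange(r)) & 1
H = (-1.0) ** (bits @ bits.T)                      # matrix of a(p; n)
A2 = H @ f @ H.T                                   # separable 2-D DHdT
assert np.allclose(H @ H.T, N * np.eye(N))         # rows of H are orthogonal
```

The second transform uses only additions and subtractions of image samples, which is the computational advantage noted above.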
19.1.2 Vector Radix Algorithm

The "row–column" method of calculation of a separable 2-D DT requires the transposition of the 2-D data table obtained after performing all 1-D transforms over the rows. For tables of large sizes, the transposition slows down the process of calculation of the transform, and therefore other methods of fast calculation have been developed to avoid it. That can be done by partitioning the square period (N × N) of the transform by sets other than rows and columns. We mention here the idea of generalization of the 2-D "butterfly" operation from the
FIGURE 19.2 (a) Original image of size 512 × 512, (b) 2-D DFT (in absolute mode), and (c) 2-D DHdT.
1-D Cooley–Tukey algorithm [10] to the four-dimensional (4-D) operation, when dividing the transform into four parts of size (N/2 × N/2) each. The vector–radix algorithm for the 2-D DFT uses the "butterfly" 2 × 2, which is defined as the following Kronecker product of two 2-point butterflies:

[ 1   W^{s1}   W^{p1}   W^{p1+s1} ]
[ 1  −W^{s1}   W^{p1}  −W^{p1+s1} ]
[ 1   W^{s1}  −W^{p1}  −W^{p1+s1} ]   (19.5)
[ 1  −W^{s1}  −W^{p1}   W^{p1+s1} ]

The image f_{n,m} is reorganized into four parts of size (N/2 × N/2), each of which contains only even–even, odd–even, even–odd, or odd–odd coordinates,

{f_{n,m}} → { a_{n1,m1} = f_{2n1,2m1},  b_{n1,m1} = f_{2n1+1,2m1},  c_{n1,m1} = f_{2n1,2m1+1},  d_{n1,m1} = f_{2n1+1,2m1+1} },  n1, m1 = 0:(N/2−1).   (19.6)

Then, the N/2 × N/2-point DFT of each part is calculated,

A_{p1,s1} = (F_{N/2,N/2} a)_{p1,s1} = Σ_{n1=0}^{N/2−1} Σ_{m1=0}^{N/2−1} a_{n1,m1} W_{N/2}^{n1 p1 + m1 s1},
B_{p1,s1} = (F_{N/2,N/2} b)_{p1,s1},
C_{p1,s1} = (F_{N/2,N/2} c)_{p1,s1},
D_{p1,s1} = (F_{N/2,N/2} d)_{p1,s1},

where p1, s1 = 0:(N/2−1). The block diagram of the vector–radix algorithm is given in Figure 19.3.

FIGURE 19.3 Diagram of the 2-D N × N-point DFT by using the "butterfly" 2 × 2.
where p1, s1 = 0:(N/2−1). The 2-D DFT of the image f_{n,m} can be composed from these four 2-D DFTs by using the butterfly operation (Equation 19.5) as follows:

F_{p1,s1} = A_{p1,s1} + W^{p1} B_{p1,s1} + W^{s1} C_{p1,s1} + W^{p1+s1} D_{p1,s1},
F_{p1+N/2,s1} = A_{p1,s1} − W^{p1} B_{p1,s1} + W^{s1} C_{p1,s1} − W^{p1+s1} D_{p1,s1},
F_{p1,s1+N/2} = A_{p1,s1} + W^{p1} B_{p1,s1} − W^{s1} C_{p1,s1} − W^{p1+s1} D_{p1,s1},
F_{p1+N/2,s1+N/2} = A_{p1,s1} − W^{p1} B_{p1,s1} − W^{s1} C_{p1,s1} + W^{p1+s1} D_{p1,s1}.
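One stage of this decomposition can be verified directly with numpy (a sketch; the 2-D FFTs of the four subsampled parts play the role of A, B, C, and D):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
f = rng.standard_normal((N, N))

A = np.fft.fft2(f[0::2, 0::2])   # even-even samples a_{n1,m1}
B = np.fft.fft2(f[1::2, 0::2])   # odd-even samples  b_{n1,m1}
C = np.fft.fft2(f[0::2, 1::2])   # even-odd samples  c_{n1,m1}
D = np.fft.fft2(f[1::2, 1::2])   # odd-odd samples   d_{n1,m1}

W = np.exp(-2j * np.pi / N)
p = np.arange(N // 2).reshape(-1, 1)   # p1 along rows
s = np.arange(N // 2).reshape(1, -1)   # s1 along columns
Wp, Ws = W**p, W**s

# Compose the four quadrants of the N x N spectrum with the butterfly.
F = np.empty((N, N), dtype=complex)
F[:N//2, :N//2] = A + Wp*B + Ws*C + Wp*Ws*D
F[N//2:, :N//2] = A - Wp*B + Ws*C - Wp*Ws*D
F[:N//2, N//2:] = A + Wp*B - Ws*C - Wp*Ws*D
F[N//2:, N//2:] = A - Wp*B - Ws*C + Wp*Ws*D

assert np.allclose(F, np.fft.fft2(f))
```

No transposition of the intermediate data is needed, which is the point of the vector–radix scheme.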
The same method of partitioning can be applied to each of the parts (N/2 × N/2), and then to each of the sixteen obtained (N/4 × N/4) parts, and so on, until the parts are of size (2 × 2). The vector–radix algorithms reduce the number of arithmetic operations. For instance, the number of multiplications can be estimated by the recurrent formula

m_{N,N} = 4m_{N/2,N/2} + 3(N/2)^2 = 4[4m_{N/4,N/4} + 3(N/4)^2] + 3(N/2)^2 = ··· = (3/4)N^2(log_2 N − 2).   (19.7)

The actual number of multiplications is smaller than m_{N,N}, since trivial twiddle coefficients are not counted. For large N, the vector–radix algorithm reduces the number of multiplications by almost 25% when compared with the row–column algorithm. Indeed, the number of multiplications in the row–column algorithm can be estimated by

m_{N,N} = 2Nm_N = 2N[N/2(log_2 N − 3) + 2] = N^2(log_2 N − 3) + 4N.   (19.8)

Here we use the estimation m_N = N/2(log_2 N − 3) + 2, N > 8, for the number of multiplications used in the N-point DFT by the fast paired transforms [11,12]. There are many modifications of the vector–radix technique to decompose the image into many smaller 2-D transforms. Another powerful method of calculation of the DFT is based on using polynomial transforms, which was developed by Nussbaumer [13,14].

19.1.3 Method of the Polynomial Transforms

The Nussbaumer algorithm uses the polynomial expansion over the field of rational and complex numbers, where the Fourier transformation exists. Such an expansion represents a domain of polynomials in which the operations of addition and multiplication of polynomials are considered modulo a given polynomial. We briefly describe the cases of most interest, which correspond to the 2-D DFT of equal orders N × N, when N is a prime and when N is a power of two.

Let us write the N × N-point DFT of the sequence f_{n1,n2} in the separable form

F_{p1,p2} = Σ_{n1=0}^{N−1} ( Σ_{n2=0}^{N−1} f_{n1,n2} W^{n2p2} ) W^{n1p1},  p1, p2 = 0:(N−1).   (19.9)

The polynomial transforms represent the polynomial expansion of the 1-D DFTs, when the complex exponents W^{p2} are replaced by the variable z in the complex plane. Thus, we consider the following transformation into polynomials:

Σ_{n2=0}^{N−1} f_{n1,n2} W^{p2n2} → F_{n1}(z) = Σ_{n2=0}^{N−1} f_{n1,n2} z^{n2},  z ∈ C,

and then write the definition in Equation 19.9 as

F_{p1,p2} = ( Σ_{n1=0}^{N−1} F_{n1}(z) W^{n1p1} )|_{z=W^{p2}} = ( Σ_{n1=0}^{N−1} ( Σ_{n2=0}^{N−1} f_{n1,n2} z^{n2} ) W^{n1p1} )|_{z=W^{p2}},   (19.10)

or shortly

F_{p1,p2} = Σ_{n1=0}^{N−1} F_{n1}(z) W^{n1p1} mod (z − W^{p2}).   (19.11)

All twiddle coefficients W^{p2}, p2 = 0:(N−1), are the Nth roots of unity, that is, the roots of the polynomial z^N − 1. We consider the sum of the above equation modulo z^N − 1 and denote it by

G_{p1}(z) = Σ_{n1=0}^{N−1} F_{n1}(z) W^{n1p1} mod (z^N − 1).   (19.12)
Thus, the 2-D DFT can be written in the form

F_{p1,p2} = G_{p1}(z) mod (z − W^{p2}),  p2 = 0:(N−1).   (19.13)

The polynomial z^N − 1 can be represented as the product of the cyclotomic polynomials (i.e., indivisible polynomials with rational coefficients), z^N − 1 = P_1(z)P_2(z)···P_m(z), m = m(N) ≤ N, and the residue of the division of G_{p1}(z) by the polynomial z^N − 1 in Equation 19.13 can be reduced to residues by such indivisible polynomials. For example, we consider the case when N is a prime number > 2. The following decomposition of the polynomial holds:

z^N − 1 = (z − 1)P_2(z) = (z − 1)(z^{N−1} + z^{N−2} + ··· + 1).   (19.14)
Since N is prime and p2 ≠ 0, the transformation p1 → (p1 p2) mod N maps the set of integers p1 = 0:(N−1) onto itself. The coefficients W^{p2} are roots of the polynomial P_2(z). Therefore, Equation 19.13 takes the form

F_{p1p2, p2} = Σ_{n1=0}^{N−1} F_{n1}(z) W^{n1 p1 p2} mod (z^N − 1) = Σ_{n1=0}^{N−1} F_{n1}(z) z^{n1 p1} mod P_2(z),  z = W^{p2},  p2 = 1:(N−1),

where p1 = 0:(N−1). The compact form of this equation is

F_{p1p2, p2} = ( Σ_{n1=0}^{N−1} F_{n1}(z) z^{n1 p1} mod P_2(z) ) mod (z − W^{p2}),  p1 = 0:(N−1), p2 = 1:(N−1).   (19.15)

Thus, the 2-D DFT at the frequency-points {(p1 p2, p2); p2 = 0:(N−1)} can be defined as

F_{p1p2, p2} = G_{p1}(z) mod (z − W^{p2}),  p2 = 0:(N−1),   (19.16)

where

G_{p1}(z) = Σ_{n1=0}^{N−1} F_{n1}(z) z^{n1 p1} mod P_2(z),  p1 = 0:(N−1).   (19.17)

When p2 = 0, the formula for the 2-D DFT takes the simple form of the 1-D DFT,

F_{p1,0} = Σ_{n1=0}^{N−1} Σ_{n2=0}^{N−1} f_{n1,n2} W^{n1 p1} = Σ_{n1=0}^{N−1} F_{n1}(1) W^{n1 p1},  p1 = 0:(N−1).   (19.18)

The polynomial transform modulo P_2(z) in Equation 19.17 does not depend on p2 and is calculated without operations of multiplication. Multiplications are required to calculate the N-point DFT in Equation 19.18 and the N-point DFT in Equation 19.16 for each p1 = 0:(N−1). Thus, the N × N-point DFT is calculated by means of the polynomial transforms modulo P_2(z) and (N + 1) N-point DFTs.

In the case when N equals a power of two, N = 2^r, the following decomposition of the polynomial holds:

z^N − 1 = (z^{N/2} − 1)(z^{N/2} + 1),

and all exponents z = W^{p2} with odd powers p2 are roots of the polynomial z^{N/2} + 1. Therefore, it follows directly from Equations 19.10 through 19.13 that all spectral components F_{p1,p2} at frequency-points (p1, p2) whose coordinates are not both even can be calculated by means of the polynomial transforms modulo z^{N/2} + 1, 3N/2 reduced N-point DFTs, and one N/2 × N/2-point DFT for calculating all components of the spectrum at frequency-points (p1, p2) with even p1 and p2. The N/2 × N/2-point DFT can in turn be reduced, by the polynomial transforms modulo z^{N/4} + 1, to 3N/4 reduced N/4-point DFTs and one N/4 × N/4-point DFT. The sequential application of the polynomial transforms modulo z^{N/2^k} + 1, k = 1:(r − m), when m = 1:(r − 2), yields the decomposition of the N × N-point DFT by 3N/2 N/2-point DFTs, 3N/4 N/4-point DFTs, etc. In comparison with the row–column algorithm, the method of polynomial transforms uses approximately two times fewer operations of multiplication and a small number of additions.

We now present the tensor approach and its improvement for dividing the calculation of the 2-D DFT into the minimal number of short 1-D transforms. The approach is universal, because it can be implemented to calculate other discrete unitary transforms, such as the Hadamard, cosine, and Hartley transforms [15,16], and transforms of higher dimensions.
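Two facts used in this prime-length analysis can be checked numerically with numpy (a sketch; np.fft uses the same kernel W = exp(−2πj/N)): the p2 = 0 column of the 2-D DFT reduces to a 1-D DFT of the row sums (Equation 19.18), and for prime N the map p1 → (p1·p2) mod N is a permutation of 0:(N−1).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 7                                   # a prime transform length
f = rng.standard_normal((N, N))
F = np.fft.fft2(f)

# Equation 19.18: the p2 = 0 column of the 2-D DFT is the 1-D DFT
# of F_{n1}(1), i.e., of the row sums of f over n2.
assert np.allclose(F[:, 0], np.fft.fft(f.sum(axis=1)))

# For prime N and fixed p2 != 0, p1 -> (p1 * p2) mod N permutes 0:(N-1),
# so the frequency-points (p1*p2 mod N, p2) cover the whole column p2.
p2 = 3
assert sorted(p1 * p2 % N for p1 in range(N)) == list(range(N))
```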
19.2 Nontraditional Forms of Representation

When processing a multidimensional signal f_{n1,n2,...,nm}, m ≥ 2, in the frequency domain by a specific unitary transformation, for instance the m-dimensional DFT, the signal can be represented in a form that splits the structure of both the signal and the transform in a way that yields an effective method of calculation of the transform, with subsequent signal processing. Such forms are not necessarily of matrix form, but may be other multidimensional figures. The work presented here does not rely on traditional methods of processing multidimensional transforms and signals, but on more effective methods, which are based on a discovery that can be briefly formulated as follows. Multidimensional spectra are split by appointed trajectories (such as orbits), and the movement of a spectral point along each such trajectory is of great interest in the process of formation of the spectra, as well as in processing the spectra. Trajectories do not intersect, and it is possible to extract the spectral information from such trajectories, or to change and put desired information into trajectories. Vast horizons lie before us in such an approach, allowing new effective methods of processing multidimensional spectra to be discovered and applied in practice.

We present a theory of fast multidimensional transforms based on the concept of partitioning that reveals the transforms. We stand in detail on the 2-D and 3-D cases; the application of the discussed methods to high-dimensional signals is straightforward. At the same time, we present our vision of developing and applying new methods of multidimensional signal processing, such as image processing. We propose to use new forms of image and transform representation that simplify not only the calculation of 2-D (or 3-D) transforms, but also lead to effective solutions of different problems in image processing, such as image enhancement, computerized tomography, image filtration, and compression. We describe the theory of the so-called tensor and paired forms of representation. Their main task is to
FIGURE 19.4 Block diagram of image processing by 1-D signals. DSP, digital signal processing.
represent the image uniquely in the form of a set of 1-D signals which can be processed separately and then transferred back to the image, as shown in the diagram of Figure 19.4 (with or without block 2). The calculation of the 2-D DFT is thus reduced to the calculation of 1-D DFTs, and the processing of the 2-D image to the processing of all or a few 1-D signals. The mathematical structure of the 2-D DFT and other unitary transforms possesses such representations.
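The idea can be checked numerically for one of the simplest such 1-D signals. Summing the image along the diagonals n + m ≡ t (mod N) gives a 1-D signal whose N-point DFT equals the 2-D DFT at the diagonal frequency-points (k, k); this is only a sketch of the splitting-signal principle for a single direction (the general construction, one signal per direction, is developed in the following sections).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
f = rng.standard_normal((N, N))

# 1-D signal for the direction (1, 1): sums along n + m = t (mod N).
t = np.add.outer(np.arange(N), np.arange(N)) % N
v = np.array([f[t == k].sum() for k in range(N)])

F = np.fft.fft2(f)
# Its 1-D DFT carries the 2-D spectrum at the frequency-points (k, k).
assert np.allclose(np.fft.fft(v), np.diag(F))
```

Processing v (e.g., filtering its 1-D spectrum) therefore modifies the 2-D spectrum exactly on that diagonal set of frequency-points.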
FIGURE 19.5 Fast transform-based method of image enhancement. (a) The original image, (b) the splitting-signal, (c) the amplitude spectrum of the signal, (d) factors, (e) the processed signal, (f) marked locations of 256 frequency-points, and (g) the image enhanced by one signal.
As an example, we consider image enhancement by one such signal, instead of calculating the 2-D DFT of the image and manipulating its coefficients. Figure 19.5 shows the original image of size 256 × 256 in part (a), along with a 1-D signal derived from the image in (b), the 1-D DFT of the signal (in absolute scale and shifted to the center) in (c), the coefficients to be multiplied pointwise by the 1-D DFT in (d), the new 1-D signal in (e), the 256 frequency-points at which the spectral information of the new signal will be recorded in the 2-D DFT of the processed image in (f), and the enhanced image in (g). The enhanced image can be obtained by the inverse 2-D DFT (2-D IDFT), as well as directly from the new signal (of (e)) by using the 1-D DFT [18]. Thus the problem of 2-D image enhancement can be reduced to processing one 1-D signal (or a few such signals), bypassing the calculation of the 2-D DFT of the original image as well as the inverse 2-D DFT for the enhanced image. We now describe methods of deriving such 1-D signals, which lead to effective calculation of the 2-D DFT, as well as of other unitary transforms.
19.3 Partitioning of Multidimensional Transforms

Let a sequence g = {g_0, g_1, ..., g_{N−1}} of length N > 1 be linearly and uniformly expressed by a sequence f = {f_0, f_1, ..., f_{N−1}},

g_p = Σ_{n=0}^{N−1} w_p(n) f_n,  p = 0:(N−1).   (19.19)
The transformation of f into g by this formula is called a linear transformation, which we denote by A. The coefficients a_{p,n} = w_p(n) form a square (N × N) matrix A = ||a_{p,n}||, which is called the matrix of the transformation. The linear transformation can be written in matrix representation as [g] = A[f], where [g] and [f] denote the vector-columns of the sequences g and f, respectively. Every linear 1-D transformation determines uniquely a 2-D matrix A, and vice versa, every matrix A determines a linear 1-D transformation. Similarly, each 2n-dimensional (n > 1) matrix determines a certain linear n-dimensional transformation A of n-dimensional sequences,
    A: f = {f_{n_1, n_2, ..., n_n}} → g = {g_{p_1, p_2, ..., p_n}},    (19.20)

where n_k, p_k = 0:(N_k - 1) and N_k > 1, k = 1:n. The numbers N_1, ..., N_n are called the orders of the transformation A; g = A ∘ f is called the n-dimensional transform of f. For simplicity of future calculations, we will consider mainly the case of equal orders, when N_k = N, k = 1:n. The transform of f is described by the following relation:

    g_{p_1,...,p_n} = \sum_{n_1=0}^{N_1-1} \cdots \sum_{n_n=0}^{N_n-1} w_{p_1,...,p_n}(n_1, ..., n_n) f_{n_1,...,n_n},    (19.21)

where a_{p_1,...,p_n, n_1,...,n_n} = w_{p_1,...,p_n}(n_1, ..., n_n) are the coefficients of the matrix of the transformation A. Elements (p_1, ..., p_n) ∈ X are referred to as frequency-points. We assume that the sequences f and the transforms A ∘ f are defined on the same n-dimensional rectangular integer lattice of size N_1 × ··· × N_n,

    X = X_{N_1,...,N_n} = {(n_1, ..., n_n); n_k = 0:(N_k - 1), k = 1:n}.    (19.22)

This set X is called the fundamental period of the transformation A.

Example 19.1 (3-D Fourier Transformation)

Let f be a 3-D N × N × N-point sequence f = {f_{n_1,n_2,n_3}}. The N × N × N-point Fourier transform of the sequence f is defined by

    F_{p_1,p_2,p_3} = \sum_{n_1=0}^{N-1} \sum_{n_2=0}^{N-1} \sum_{n_3=0}^{N-1} W^{n_1 p_1 + n_2 p_2 + n_3 p_3} f_{n_1,n_2,n_3},    p_1, p_2, p_3 = 0:(N-1),    (19.23)

where W = exp(-2πj/N). The arithmetic action f_{n_1,n_2,n_3} → F_{p_1,p_2,p_3} is called a 3-D N × N × N-point Fourier transformation, which we shall denote by F_{N,N,N}. The order of the transformation F_{N,N,N} equals N × N × N, and its matrix is the six-dimensional matrix

    [F_{N,N,N}] = ||a_{p_1,p_2,p_3, n_1,n_2,n_3}|| = ||W^{n_1 p_1 + n_2 p_2 + n_3 p_3}||,    n_k, p_k = 0:(N-1), k = 1:3.

An n-dimensional DT A is called unitary if the matrix of the transformation is unitary, i.e., AA* = I, where I is the identity matrix and A* is the conjugate transpose of A, defined by A* = ||\bar{a}_{n_1,...,n_n, p_1,...,p_n}||, where the bar denotes the transition to the complex conjugate value, that is, \bar{z} = x - jy if z = x + jy. A real unitary transformation is called orthogonal. For a fixed (p_1, ..., p_n), the function w_{p_1,...,p_n}(·, ..., ·) is said to be the (p_1, ..., p_n)-th basis function of the transformation A, and the collection of such functions {w} = {w_{p_1,...,p_n}(·, ..., ·)} is called a basis, or kernel, of A. The unitary property of the transformation is also expressed by the following condition:

    \sum_{(n_1,...,n_n) \in X} w_{p_1,...,p_n}(n_1, ..., n_n) \overline{w_{s_1,...,s_n}(n_1, ..., n_n)} = \prod_{k=1}^{n} \delta(p_k, s_k),    (p_1, ..., p_n), (s_1, ..., s_n) \in X,    (19.24)
19-8
Transforms and Applications Handbook
where δ is the Kronecker delta-function: δ(n, k) = 1 if n = k, and δ(n, k) = 0 otherwise. If the collection of functions {w} satisfies this condition, then {w} is said to be a complete and orthonormal set of functions (or orthogonal, if a factor different from 1 appears on the right side of Equation 19.24 before the product sign) in the space of n-dimensional sequences defined on X. For the unitary transformation A, the collection of functions is a complete and orthonormal set of basis functions.
Example 19.2 (1-D Fourier Transformation)

Let f be a 1-D sequence f = {f_0, f_1, ..., f_{N-1}}. The N-point Fourier transform of the sequence f is defined by

    F_p = (F_N ∘ f)_p = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} W^{np} f_n,    p = 0:(N-1).    (19.25)

The basis functions w_p(n) are composed of the pairs (1/\sqrt{N}) cos(ω_p n) and (1/\sqrt{N}) sin(ω_p n) of discrete-time cosine and sine waves with frequencies ω_p = (2π/N)p. The waves are orthonormal, since

    \sum_{n=0}^{N-1} w_p(n) \bar{w}_s(n) = \frac{1}{N} \sum_{n=0}^{N-1} W^{n(p-s)} = \frac{1}{N} \sum_{n=0}^{N-1} \left[ \cos\left(\frac{2π(p-s)}{N} n\right) - j \sin\left(\frac{2π(p-s)}{N} n\right) \right] = 1 for p = s, and 0 for p ≠ s,

and the transformation is unitary. The matrices of the transformation and its conjugate are the symmetric matrices

    [F_N] = \frac{1}{\sqrt{N}} \| e^{-j\frac{2π}{N} np} \|_{n,p=0:(N-1)},    [F_N]* = \frac{1}{\sqrt{N}} \| e^{j\frac{2π}{N} np} \|_{n,p=0:(N-1)},

and [F_N][F_N]* = I. The conjugate matrix is thus the matrix of the inverse 1-D DFT.
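The unitarity condition [F_N][F_N]* = I is easy to confirm numerically; the following is a minimal NumPy sketch, with F built from the normalized kernel of Equation 19.25:

```python
import numpy as np

N = 8
n = np.arange(N)
# Normalized N-point DFT matrix of Equation 19.25: entries W^{np} / sqrt(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# Unitarity: [F_N][F_N]* = I, so the conjugate matrix inverts the DFT
print(np.allclose(F @ F.conj().T, np.eye(N)))  # True
```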
19.3.1 Tensor Representation

In this section, we describe a concept of covering that reveals the mathematical structure of many multidimensional transforms [19–23]. The covering is considered to be composed of cyclic groups of frequency-points of the period of the transformations. This covering leads to the tensor, or vectorial, representation of multidimensional signals. In the general multidimensional case, the covering revealing the transform is described in the following way. Suppose σ = {T} is an irreducible covering of an n-dimensional lattice X, n ≥ 2. This means that the set-theoretic union of all subsets T coincides with X, and no smaller family of subsets T from σ covers X. We use card to denote the cardinality of a set. If a discrete n-dimensional unitary transformation with the fundamental period X can be split into a set of card σ one-dimensional card T-point unitary transformations A, then we say that the considered multidimensional transformation is revealed by the covering σ, or that the covering σ reveals the transformation. Let f be an N_1 × ··· × N_n sequence.
Definition 19.1: An N_1 × ··· × N_n transformation P is said to be revealed by the covering σ of X if, for each set T ∈ σ, there exists a 1-D orthogonal transformation A = A(T) and a sequence f_T such that the restriction of the transform P ∘ f to the set of frequency-points T equals the transform A ∘ f_T, i.e.,

    (P ∘ f)|_T = A ∘ f_T.    (19.26)

This condition is briefly written as P|_T ∼ A. The set of 1-D transforms {A(T); T ∈ σ} is called a σ-splitting of the n-dimensional transformation P by the covering σ and is denoted by R(P; σ). The set of 1-D sequences {f_T; T ∈ σ} is the σ-representation of f with respect to the transformation P. In the above definition, each 1-D transformation A is determined by the corresponding subset T, not by f. It should also be noted that the covering σ results not only in the splitting of the n-dimensional transformation P into the 1-D transformations A but also determines the corresponding representation of the n-dimensional sequence f as a set of 1-D sequences f_T. In other words, two representations are defined, one for the given sequence and another for its transform:

    f → {f_T; T ∈ σ}    and    P ∘ f → {A ∘ f_T; T ∈ σ}.    (19.27)
19.3.2 Covering with Cyclic Groups

We consider a class of n-dimensional discrete unitary transformations that are revealed by an irreducible covering σ composed only of additive cyclic groups

    σ = σ_J = {T_{p_1,...,p_n}}_{(p_1,...,p_n) ∈ J}    (19.28)

with generators (p_1, ..., p_n) from a certain subset J ⊂ X = X_{N_1,...,N_n}. The cyclic group T with a generator (p_1, ..., p_n) is defined as the set of frequency-points that are integer multiples of the generator,

    T = T_{p_1,...,p_n} = {(kp_1, ..., kp_n); k = 0:(card T - 1)},    (19.29)

where we use the short notation kp_i = (kp_i) mod N_i for i = 1:n.
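A small helper makes Equation 19.29 concrete; the following is a Python sketch (the function name `cyclic_group` is ours, not the book's):

```python
def cyclic_group(p1, p2, N1, N2):
    """Cyclic group T_{p1,p2} of Equation 19.29: the integer multiples
    (k*p1 mod N1, k*p2 mod N2) of the generator, k = 0:(card T - 1)."""
    T, k = [], 0
    while True:
        pt = ((k * p1) % N1, (k * p2) % N2)
        if pt in T:          # the multiples are periodic; stop at the repeat
            return T
        T.append(pt)
        k += 1

# The group of Example 19.3 below, on the 5 x 5 lattice:
print(cyclic_group(2, 1, 5, 5))  # [(0, 0), (2, 1), (4, 2), (1, 3), (3, 4)]
```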
Example 19.3

Let X be the lattice 5 × 5, which corresponds to the case N_1 = N_2 = 5. The group T_{p_1,p_2} with a generator (p_1, p_2) ∈ X = X_{5,5} is

    T_{p_1,p_2} = {(0, 0), (p_1, p_2), (2p_1, 2p_2), (3p_1, 3p_2), (4p_1, 4p_2)}.
19-9
Multidimensional Discrete Unitary Transforms
FIGURE 19.6 Arrangement of frequency-points of the groups T_{p_1,p_2} covering the lattice 5 × 5.
There are six groups T which compose an irreducible covering σ = σ_J of X, namely

    T_{0,1} = {(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)}
    T_{1,1} = {(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)}
    T_{2,1} = {(0, 0), (2, 1), (4, 2), (1, 3), (3, 4)}
    T_{3,1} = {(0, 0), (3, 1), (1, 2), (4, 3), (2, 4)}    (19.30)
    T_{4,1} = {(0, 0), (4, 1), (3, 2), (2, 3), (1, 4)}
    T_{1,0} = {(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)}

and the set of generators is

    J = J_{5,5} = {(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (1, 0)}.    (19.31)

Figure 19.6 shows the location of all frequency-points of these six groups. The groups intersect only at the point (0, 0). If the covering composed of these groups reveals a 5 × 5-point transformation, then that transformation can be split into six 5-point 1-D transformations. It will be shown that such transformations are the Fourier, Hartley, and Hadamard transformations. To calculate, for instance, the 5 × 5-point 2-D DFT, only six 1-D 5-point transforms are required, instead of the 10 transforms of the "row–column" method.

In the general N × N case, the elements of the group T_{p_1,p_2} lie on parallel lines at the angle θ = tan⁻¹(p_2/p_1) to the horizontal axis. The number l of such lines is determined as follows. If p_1 = 0 or p_2 = 0, then l = 1. Otherwise, let k_1 and k_2 be the least integers satisfying the relations k_1 p_1 = k_2 p_2 = N - 1. Then l = k_1/k_2 when k_1 ≥ k_2, and l = k_2/k_1 when k_1 < k_2. For instance, when N = 5 and (p_1, p_2) = (2, 1), we obtain 2p_1 = 4p_2 = 4 and l = 4/2 = 2. The frequency-points of the group T_{2,1} lie on two parallel lines at the angle θ = tan⁻¹(1/2) = 26.5651° to the horizontal axis (see Figure 19.6). The points of this group can also be considered as lying on three parallel lines at the angle θ_1 = θ - 90° = -63.4349° to the horizontal axis. It should be noted that if we splice the opposite sides of the lattice bounds, then the lattice is represented as a net traced on the surface of a three-dimensional torus, and the mentioned l lines compose a closed spiral on the torus, which passes through those points of the net that correspond to the points (0, 0) and (p_1, p_2). All elements of the cyclic group are points of intersection of the spiral with the net. As an example, Figure 19.7 shows the points of the lattice X_{20,20} on the torus and two spirals with the frequency-points of the groups T_{1,1} and T_{1,2}, which intersect at the knot (0, 0).
The irreducible covering σ of the domain X composed of the groups (Equation 19.29) is unique. To illustrate this property, we consider the 2-D case with the square lattice X_{5,5}. The irreducible covering σ of X_{5,5} is the family of six groups given in Equation 19.30: σ = (T_{0,1}, T_{1,1}, T_{2,1}, T_{3,1}, T_{4,1}, T_{1,0}). The cyclic group T_{p_1,p_2} with any other generator (p_1, p_2) ≠ (0, 0), different from the generators (0, 1), (1, 1), (2, 1), (3, 1), (4, 1), and (1, 0), coincides with one of the groups of the covering σ. For instance,

    T_{1,2} = {(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)} = T_{3,1},
    T_{3,2} = {(0, 0), (3, 2), (1, 4), (4, 1), (2, 3)} = T_{4,1},
    T_{2,2} = {(0, 0), (2, 2), (4, 4), (1, 1), (3, 3)} = T_{1,1}.

If an n-dimensional transformation P is revealed by the covering σ composed of cyclic groups (Equation 19.29), then the σ_J-representation of an n-dimensional sequence f by P, i.e., the totality of 1-D signals {f_T; T ∈ σ}, is called a tensor, or vector, representation of f with respect to the transformation P [12]. The 1-D signals f_T are called the splitting-signals of f.
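These statements are easy to verify by brute force; the following is a short Python sketch under the conventions above:

```python
def group(p1, p2, N=5):
    """Set of multiples of the generator (p1, p2) modulo N (Equation 19.29)."""
    return {((k * p1) % N, (k * p2) % N) for k in range(N)}

# The six groups of Equation 19.30 cover the whole lattice X_{5,5} ...
J = [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (1, 0)]
covered = set().union(*(group(p, s) for p, s in J))
print(covered == {(i, j) for i in range(5) for j in range(5)})  # True

# ... and other nonzero generators reproduce groups of the covering:
print(group(1, 2) == group(3, 1), group(3, 2) == group(4, 1))  # True True
```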
FIGURE 19.7 Torus of the lattice 20 × 20 with two spirals corresponding to the groups T_{1,1} and T_{1,2}.
19.4 Fourier Transform Tensor Representation

In this section, we discuss in detail the construction and properties of the tensor representation of multidimensional signals with respect to the DFT. We first consider the 2-D case as the simplest among the multidimensional ones. Let f = {f_{n_1,n_2}} be a sequence of size N_1 × N_2, and let N_0 = g.c.d.(N_1, N_2) > 1, i.e., N_1 = N_0 N_1′ and N_2 = N_0 N_2′. Let σ = σ_{N_1,N_2} be the irreducible covering of the rectangular lattice X = X_{N_1,N_2} defined in Equation 19.28. We denote by F = F_{N_1,N_2} the N_1 × N_2-point 2-D DFT. The 2-D DFT of the sequence f, accurate to the normalizing factor 1/(N_1 N_2), is defined by the following relation:

    F_{p_1,p_2} = (F ∘ f)_{p_1,p_2} = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} f_{n_1,n_2} W_{N_1}^{n_1 p_1} W_{N_2}^{n_2 p_2},    (p_1, p_2) ∈ X,    (19.32)

where W_{N_k} = exp(-2πj/N_k), k = 1, 2. For an arbitrary frequency-point (p_1, p_2), we determine in the period X the sets of points

    V_{p_1,p_2,t} = {(n_1, n_2); N_2′ n_1 p_1 + N_1′ n_2 p_2 = t mod N},    t = 0:(N - 1),    (19.33)

where N = N_1 N_2 / N_0. On these sets of points, we consider the sums of the sequence f, i.e., the following N quantities:

    f_{p_1,p_2,t} = \sum f_{n_1,n_2};    (n_1, n_2) ∈ V_{p_1,p_2,t},    t = 0:(N - 1).    (19.34)

For the spectral component F_{p_1,p_2}, the following calculations hold:

    F_{p_1,p_2} = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} f_{n_1,n_2} W_N^{N_2′ n_1 p_1 + N_1′ n_2 p_2} = \sum_{t=0}^{N-1} f_{p_1,p_2,t} W^t,    (19.35)

where W = W_N = e^{-j2π/N}. The general formula is also valid:

    F_{kp_1,kp_2} = \sum_{t=0}^{N-1} f_{p_1,p_2,t} W^{kt},    k = 0:(N - 1).    (19.36)

In other words, the N components F_{kp_1,kp_2} of the 2-D DFT of f can be represented by the 1-D sequence of length N

    f_{T_{p_1,p_2}} = {f_{p_1,p_2,0}, f_{p_1,p_2,1}, ..., f_{p_1,p_2,N-1}}.    (19.37)

The sequence f_{T_{p_1,p_2}} determines the spectral information of the image f at the frequency-points of the set T_{p_1,p_2}. We call such a sequence the splitting-signal, or the image-signal if f is an image. The components of the splitting-signal are numbered by the triplet (p_1, p_2, t), where two components represent the frequency (p_1, p_2) and t is referred to as the time. Thus the splitting-signal is a (2-D frequency)–(1-D time) representation of the 2-D sequence f, which completely determines the 2-D DFT of f at the frequency-points of the set T_{p_1,p_2}.

Example 19.4 (5 × 5-point DFT)

Let N = 5 and let f be the following image of size 5 × 5:

    f = [ 1 2 1 3 1
          2 0 1 1 2
          1 3 2 2 1
          4 1 0 1 3
          2 4 1 2 1 ].

The entry in the top-left corner (underlined in the original) marks the location of the zero point (0, 0). We consider the frequency-point (p_1, p_2) = (2, 1). All values of t in the equations n_1 p_1 + n_2 p_2 = t mod 5 can be written in the form of the following matrix:

    ||t|| = ||(2n_1 + n_2) mod 5||_{n_1,n_2=0:4} = [ 0 1 2 3 4
                                                     2 3 4 0 1
                                                     4 0 1 2 3
                                                     1 2 3 4 0
                                                     3 4 0 1 2 ].

Therefore, the components of the splitting-signal f_{T_{2,1}} of f are defined as follows:

    f_{2,1,0} = f_{0,0} + f_{1,3} + f_{2,1} + f_{3,4} + f_{4,2} = 1 + 1 + 3 + 3 + 1 = 9
    f_{2,1,1} = f_{0,1} + f_{1,4} + f_{2,2} + f_{3,0} + f_{4,3} = 2 + 2 + 2 + 4 + 2 = 12
    f_{2,1,2} = f_{0,2} + f_{1,0} + f_{2,3} + f_{3,1} + f_{4,4} = 1 + 2 + 2 + 1 + 1 = 7    (19.38)
    f_{2,1,3} = f_{0,3} + f_{1,1} + f_{2,4} + f_{3,2} + f_{4,0} = 3 + 0 + 1 + 0 + 2 = 6
    f_{2,1,4} = f_{0,4} + f_{1,2} + f_{2,0} + f_{3,3} + f_{4,1} = 1 + 1 + 1 + 1 + 4 = 8

and f_{T_{2,1}} = {9, 12, 7, 6, 8}. The power of this signal equals the power of the image f, i.e.,

    \sum_{t=0}^{4} f_{2,1,t} = 9 + 12 + 7 + 6 + 8 = 42 = \sum_{n_1=0}^{4} \sum_{n_2=0}^{4} f_{n_1,n_2}.

The 5-point DFT of the splitting-signal equals

    F_5 ∘ f_{T_{2,1}} = {42, 4.6631 - 4.3920j, -3.1631 - 1.4001j, -3.1631 + 1.4001j, 4.6631 + 4.3920j}.

This transform coincides with the 2-D DFT of the image f at the frequency-points of the group T_{2,1} = {(0, 0), (2, 1), (4, 2), (1, 3), (3, 4)}, as shown in the following table:

    [ 42   0                  0                  0                  0
      0    0                  0                  -3.1631 + 1.4001j  0
      0    4.6631 - 4.3920j   0                  0                  0                    (19.39)
      0    0                  0                  0                  4.6631 + 4.3920j
      0    0                  -3.1631 - 1.4001j  0                  0 ]

We can fill in the rest of the values of the 2-D DFT of the image by using the other splitting-signals. For instance, for the signal corresponding to the generator (p_1, p_2) = (3, 1), we have the following table of time-points:

    ||t|| = ||(3n_1 + n_2) mod 5||_{n_1,n_2=0:4} = [ 0 1 2 3 4
                                                     3 4 0 1 2
                                                     1 2 3 4 0
                                                     4 0 1 2 3
                                                     2 3 4 0 1 ].

The components of the splitting-signal f_{T_{3,1}} are thus calculated as follows:

    f_{3,1,0} = f_{0,0} + f_{1,2} + f_{2,4} + f_{3,1} + f_{4,3} = 1 + 1 + 1 + 1 + 2 = 6
    f_{3,1,1} = f_{0,1} + f_{1,3} + f_{2,0} + f_{3,2} + f_{4,4} = 2 + 1 + 1 + 0 + 1 = 5
    f_{3,1,2} = f_{0,2} + f_{1,4} + f_{2,1} + f_{3,3} + f_{4,0} = 1 + 2 + 3 + 1 + 2 = 9    (19.40)
    f_{3,1,3} = f_{0,3} + f_{1,0} + f_{2,2} + f_{3,4} + f_{4,1} = 3 + 2 + 2 + 3 + 4 = 14
    f_{3,1,4} = f_{0,4} + f_{1,1} + f_{2,3} + f_{3,0} + f_{4,2} = 1 + 0 + 2 + 4 + 1 = 8

and f_{T_{3,1}} = {6, 5, 9, 14, 8}. The 5-point DFT of this splitting-signal equals

    F_5 ∘ f_{T_{3,1}} = {42, -8.5902 + 5.7921j, 2.5902 - 2.9919j, 2.5902 + 2.9919j, -8.5902 - 5.7921j}.

This transform defines the 2-D DFT of the image at the frequency-points of the group T_{3,1} = {(0, 0), (3, 1), (1, 2), (4, 3), (2, 4)}. At this stage, we fill the 2-D DFT as shown in the following table:

    [ 42   0                  0                  0                  0
      0    0                  2.5902 - 2.9919j   -3.1631 + 1.4001j  0
      0    4.6631 - 4.3920j   0                  0                  -8.5902 - 5.7921j
      0    -8.5902 + 5.7921j  0                  0                  4.6631 + 4.3920j
      0    0                  -3.1631 - 1.4001j  2.5902 + 2.9919j   0 ].

In a similar way, the 1-D DFTs of the splitting-signals f_{T_{0,1}}, f_{T_{1,1}}, f_{T_{4,1}}, and f_{T_{1,0}} fill the rest of the table of the 2-D DFT of the image f.

To illustrate the tensor representation in the general case, we consider the case when N = 256 and the generator is (p_1, p_2) = (16, 1). Figure 19.8 shows the clock-and-moon image of size 256 × 256 in part (a), along with the image-signal f_{T_{16,1}} of length 256 in (b), the 1-D DFT of the image-signal (in absolute scale) in (c), and the 256 samples of this 1-D DFT at the frequency-points of the subset T_{16,1} of X_{256,256}, at which the 2-D DFT of the image is filled by the 1-D DFT, in (d). The value of the 1-D DFT at point 0, which is the power 43,885 of the image, has been truncated in parts (c) and (d). Figure 19.9 shows another image-signal, f_{T_{5,1}}, in part (a), along with the 256-point DFT of this signal in (b), and the arrangement of the values of the 1-D DFT in the 2-D DFT of the image in (c).

FIGURE 19.8 (a) The clock-and-moon image 256 × 256, (b) image-signal corresponding to the generator (p_1, p_2) = (16, 1), (c) absolute spectrum of the image-signal (with the truncated zero component), and (d) the arrangement of values of the 1-D DFT in the 2-D DFT of the image (in the 3-D view).

FIGURE 19.9 (a) Image-signal corresponding to the generator (5, 1), (b) absolute spectrum of the image-signal, and (c) the arrangement of values of the 1-D DFT in the 2-D DFT of the image.

19.4.1 2-D Directional Images

The components f_{p_1,p_2,t}, t = 0:(N - 1), of the splitting-signals of a 2-D sequence, or image, f_{n_1,n_2} are defined as sums of the image at points lying on the corresponding set V_{p_1,p_2,t} defined in Equation 19.33. To describe these sets, we first consider the case when N_1 = N_2 = N. Given a sample (p_1, p_2) ∈ X and a nonnegative integer t < N, the set

    V_{p_1,p_2,t} = {(n_1, n_2); n_1 p_1 + n_2 p_2 = t mod N, n_1, n_2 = 0:(N - 1)},

if it is not empty, is the set of points (n_1, n_2) along a family of parallel straight lines at the angle ψ = arctan(p_2/p_1) to the horizontal axis. The equations of these lines are

    x p_1 + y p_2 = t,    x p_1 + y p_2 = t + N,    ...,    x p_1 + y p_2 = t + kN,    (19.41)

where k < p_1 + p_2. We denote this family of lines by L_{p_1,p_2,t}. For different values t_1 ≠ t_2 < N, the families of lines L_{p_1,p_2,t_1} and L_{p_1,p_2,t_2} do not intersect. Altogether, the sets V_{p_1,p_2,t}, t = 0:(N - 1), compose a partition of the period X. It is interesting to note that the direction of the parallel lines of L_{p_1,p_2,t} is perpendicular to the direction of the frequency-points of the cyclic group T_{p_1,p_2}.

Example 19.5

On the lattice X_{8,8}, we consider two sets of parallel lines, L_{2,1,1} and L_{2,1,2}. Each family contains three parallel lines. For the family L_{2,1,1}, the parallel lines are

    y_1: 2x + y = 1,    y_9: 2x + y = 9,    y_17: 2x + y = 17.

One point (0, 1) of the set V_{2,1,1} lies on the first line of L_{2,1,1}, four points (1, 7), (2, 5), (3, 3), (4, 1) lie on the second line, and three points (5, 7), (6, 5), (7, 3) lie on the third one. Therefore, f_{2,1,1} = (x_{0,1}) + (x_{1,7} + x_{2,5} + x_{3,3} + x_{4,1}) + (x_{5,7} + x_{6,5} + x_{7,3}). The parallel lines of the family L_{2,1,2} are defined by

    y_2: 2x + y = 2,    y_10: 2x + y = 10,    y_18: 2x + y = 18,

and the component f_{2,1,2} = (x_{0,2} + x_{1,0}) + (x_{2,6} + x_{3,4} + x_{4,2} + x_{5,0}) + (x_{6,6} + x_{7,4}). The disposition of the points lying on the parallel lines of these sets is given in Figure 19.10. The location of the frequency-points of the group T_{2,1} is also shown in this figure. Two parallel lines pass through these frequency-points; they are defined in the frequency plane (ω_1, ω_2) as l_1: 2ω_2 - ω_1 = 0 and l_2: 2ω_2 - ω_1 = 8. The parallel lines l_1 and l_2 are perpendicular to the parallel lines y_n of L_{2,1,1} and L_{2,1,2}, as well as to those of all other families L_{2,1,t}, t = 0, 3:7. The disposition of the points of all eight disjoint sets V_{2,1,t}, t = 0:7, is given in Figure 19.11. We can again identify the opposite sides of the boundaries of the square Y = [0, N] × [0, N] and consider Y as a torus and the 2-D lattice X as a net traced on the torus in 3-D space.
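The arithmetic of Example 19.4 and the splitting property of Equation 19.36 can be verified directly; the following NumPy sketch relies on `np.fft` using the same kernel W = e^{-2πj/N} as the text:

```python
import numpy as np

# The 5 x 5 image of Example 19.4
f = np.array([[1, 2, 1, 3, 1],
              [2, 0, 1, 1, 2],
              [1, 3, 2, 2, 1],
              [4, 1, 0, 1, 3],
              [2, 4, 1, 2, 1]], dtype=float)
N, (p1, p2) = 5, (2, 1)

# Splitting-signal of Equation 19.34: sums of f over the sets V_{2,1,t}
fT = np.zeros(N)
for n1 in range(N):
    for n2 in range(N):
        fT[(n1 * p1 + n2 * p2) % N] += f[n1, n2]
print(fT.tolist())  # [9.0, 12.0, 7.0, 6.0, 8.0]

# Equation 19.36: the 1-D DFT of fT equals the 2-D DFT on the group T_{2,1}
F2D, F1D = np.fft.fft2(f), np.fft.fft(fT)
ok = all(np.isclose(F1D[k], F2D[(k * p1) % N, (k * p2) % N]) for k in range(N))
print(ok)  # True
```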
FIGURE 19.10 The locations of the points of sets V2,1,1 and V2,1,2 and frequency-points of the group T2,1.
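The point disposition of Figure 19.10 can be reproduced in a few lines; this Python sketch checks Example 19.5 on the 8 × 8 lattice:

```python
# The points of V_{2,1,1} fall on the three parallel lines 2x + y = 1, 9, 17.
N, p1, p2, t = 8, 2, 1, 1
V = [(n1, n2) for n1 in range(N) for n2 in range(N)
     if (n1 * p1 + n2 * p2) % N == t]
print(sorted({2 * n1 + n2 for n1, n2 in V}))             # [1, 9, 17]
print(sorted(pt for pt in V if 2 * pt[0] + pt[1] == 9))  # [(1, 7), (2, 5), (3, 3), (4, 1)]
```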
FIGURE 19.11 The disposition of the eight sets of points V_{2,1,t}, t = 0:7 (shown by filled circles).
Then the straight lines of L_{p_1,p_2,t} compose a closed spiral S_t on the torus. The sums f_{p_1,p_2,t}, calculated on the N parallel spirals S_t, t = 0:(N - 1), represent the image f_{n_1,n_2} in the group of frequency-points T_{p_1,p_2}, which are also situated on a spiral, one that passes through the initial point (0, 0) of the net and makes an angle π/2 with the spirals S_t. The image-signals are the discrete integrals (or image projections) along the parallel lines of Equation 19.41. Therefore, any processing of the image-signal f_T yields a change in the Fourier spectrum at the frequency-points of the corresponding cyclic group T. After performing the 2-D IDFT, the corresponding change can be observed in the image along the parallel lines of the sets V_{p_1,p_2,t}, t = 0:(N - 1). As an example, Figure 19.12 shows the tree image of size 256 × 256 in part (a), along with the results of
amplifying only one image-signal, f_{T_{2,1}}, by a factor of 4 in (b), and the signal f_{T_{5,1}} by a factor of 6 in (c). The directions of the parallel lines of the corresponding families L_{2,1,t} and L_{5,1,t}, t = 0:255, can easily be observed on the images.

19.4.1.1 Superposition of Directions

The images of Figure 19.12 illustrate well that an image f = {f_{n_1,n_2}} can be composed of a specific collection of directional images. To show this, we consider the tensor representation of f,

    {f_{n_1,n_2}} → {f_T; T = T_{p_1,p_2} ∈ σ_J},

where σ is the irreducible covering of the period X, whose size we assume to be equal, N × N.
FIGURE 19.12 (a) Tree image before and after processing by the image-signals (b) f_{T_{2,1}} and (c) f_{T_{5,1}}.
Given a generator (p, s) ∈ J, we define the complex data A = A(p, s) of size N × N by

    A_{p_1,p_2} = A(p, s)_{p_1,p_2} = { F_{kp,ks}, if (p_1, p_2) = (kp, ks), k = 0:(N - 1);  0, otherwise, }    (19.42)

where p_1, p_2 = 0:(N - 1). The data A represent an incomplete 2-D DFT of the image, which is zero at all frequency-points except those of the group T_{p,s}. Examples of such incomplete 2-D DFTs have been shown in Equation 19.39 for the N = 5 case, as well as in the 3-D view for the image 256 × 256 in Figures 19.8d and 19.9c, for the groups T_{16,1} and T_{5,1}, respectively.

We define the "directional image" d_{n_1,n_2} as the 2-D IDFT of the data A,

    d_{n_1,n_2}^{(p,s)} = (F_{N,N}^{-1} ∘ A)_{n_1,n_2} = \frac{1}{N^2} \sum_{p_1=0}^{N-1} \sum_{p_2=0}^{N-1} A_{p_1,p_2} W^{-(n_1 p_1 + n_2 p_2)},    n_1, n_2 = 0:(N - 1).    (19.43)

Since the splitting-signal f_{T_{p,s}} defines the 2-D DFT of the image at the frequency-points of the group T = T_{p,s},

    (F_N ∘ f_T)_k = F_{kp,ks},    k = 0:(N - 1),

the following calculations hold:

    d_{n_1,n_2}^{(p,s)} = \frac{1}{N^2} \sum_{p_1=0}^{N-1} \sum_{p_2=0}^{N-1} A_{p_1,p_2} W^{-(n_1 p_1 + n_2 p_2)} = \frac{1}{N^2} \sum_{k=0}^{N-1} F_{kp,ks} W^{-(n_1(kp) + n_2(ks))} = \frac{1}{N} \left[ \frac{1}{N} \sum_{k=0}^{N-1} F_{kp,ks} W^{-k(n_1 p + n_2 s)} \right] = \frac{1}{N} f_{p,s,(n_1 p + n_2 s) mod N}.    (19.44)

Thus, the N values of the splitting-signal are spread over all N² points of the square lattice X_{N,N}: each value f_{p,s,t} is placed at the points situated along the parallel lines of the corresponding family L_{p,s,t}. As an example, we consider an image of size 257 × 257, i.e., when N = 257. The first 10 directional images d_{n_1,n_2}^{(p,s)}, for (p, s) = (0, 1), (1, 1), ..., (9, 1), are shown in Figure 19.13 for the tree image (whose original size 256 × 256 has been extended to 257 × 257).

When N is a prime, N + 1 directional images are required to compose the image f. Indeed, the covering σ = (T_{p,s}; (p, s) ∈ J) consists of (N + 1) groups T_{p,s}. The set J of generators (p, s) can be taken as

    J = {(0, 1), (1, 1), (2, 1), (3, 1), ..., (N - 1, 1), (1, 0)}.

The cyclic groups T of σ_J intersect only at the point (0, 0). Therefore, for a given frequency-point (p_1, p_2), the following holds:

    \sum_{T_{p,s} ∈ σ} A(p, s)_{p_1,p_2} = F_{p_1,p_2} + N s_0 δ_{p_1,p_2},

where s_0 = F_{0,0} is the power of the image and the delta equals 1 only at (p_1, p_2) = (0, 0). Taking the 2-D IDFT of the sum of all the incomplete 2-D DFTs, we obtain the following:

    F_{N,N}^{-1} ∘ \left[ \sum_{(p,s)∈J} A(p, s) \right]_{n_1,n_2} = f_{n_1,n_2} + \frac{s_0}{N}.    (19.45)

To simplify our calculations, we assume that the image is centered, f_{n_1,n_2} → f_{n_1,n_2} - s_0/N². Then it directly follows from Equation 19.45 that
FIGURE 19.13 (a)–(j) The first 10 directional images of the tree image 257 × 257. (All images have been scaled.)
    \sum_{(p,s)∈J} d_{n_1,n_2}^{(p,s)} = \sum_{(p,s)∈J} \left( F_{N,N}^{-1} ∘ A(p, s) \right)_{n_1,n_2} = f_{n_1,n_2}.    (19.46)

The sum of the directional images equals the image f_{n_1,n_2}. Each directional image can be determined from the corresponding splitting-signal, as shown in Equation 19.44. Therefore, the whole image f_{n_1,n_2} can be reconstructed from the (N + 1) splitting-signals as

    f_{n_1,n_2} = \sum_{(p,s)∈J} d_{n_1,n_2}^{(p,s)} = \frac{1}{N} \sum_{(p,s)∈J} f_{p,s,(n_1 p + n_2 s) mod N}.    (19.47)

If the image is not centered, the reconstruction formula is

    f_{n_1,n_2} = \sum_{(p,s)∈J} d_{n_1,n_2}^{(p,s)} - \frac{s_0}{N} = \frac{1}{N} \sum_{(p,s)∈J} f_{p,s,(n_1 p + n_2 s) mod N} - \frac{s_0}{N}.    (19.48)
Each directional image d^{(p,s)} completes the original image with details: parallel straight lines of different brightness in the gray scale. The direction of these lines is defined by the generator (p, s). Formula 19.48 describes the principle of the superposition of directional images in image formation. Each directional image is determined by the corresponding splitting-signal, which is calculated from the discrete linear integrals (sums) of the image f_{n_1,n_2} along the parallel lines of L_{p,s,t}, t = 0:(N - 1). These integrals can be considered as the projection data along the angle defined by the generator (p, s). Thus we obtain a simple formula for reconstructing the image from its projection data, by using the splitting-signals of the tensor representation of the image with respect to the Fourier transform. The number of projections equals the number of generators, i.e., (N + 1) when N is prime. This is the required number of projections for the exact reconstruction of an image N × N. As an example, Figure 19.14 shows the tree image after removing the projection data f_{p,s,(n_1 p + n_2 s) mod N} corresponding to the generator (p, s) = (1, 0) in part (a), (p, s) = (1, 1) in (b), and (p, s) = (0, 1) in (c). The angles of the projections required for the reconstruction of an image of size 257 × 257 by the splitting-signals (or directional images) compose the following set:

    Φ_{257,257} = {arcctg(p); p = 0:257} ∪ {π/2}.

Figure 19.15 illustrates all the central angles of this set on the unit circle. One can see that the points on the circle are not uniformly
FIGURE 19.14 Images reconstructed by 256 projections by splitting-signals, when the projection data of one generator (p, s) have been removed, for (p, s) equals (a) (1, 0) (b) (1, 1), and (c) (0, 1).
FIGURE 19.15 Central angles of the 258 projections required for reconstructing an image 257 × 257.
distributed, and the increment of the angles is not constant. The main part of the projections is taken along small angles. For instance, there are no projections for angles inside the intervals (45°, 90°) and (26.57°, 45°), as well as (18.43°, 26.57°).
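The projection-style reconstruction of Equation 19.48 can be demonstrated end to end; the following NumPy sketch uses a small prime N and a random, uncentered image:

```python
import numpy as np

N = 7
rng = np.random.default_rng(1)
f = rng.integers(0, 9, (N, N)).astype(float)
s0 = f.sum()                               # power of the image, s0 = F_{0,0}

J = [(p, 1) for p in range(N)] + [(1, 0)]  # the N + 1 generators
signals = {}                               # splitting-signals (projections)
for p, s in J:
    fT = np.zeros(N)
    for n1 in range(N):
        for n2 in range(N):
            fT[(n1 * p + n2 * s) % N] += f[n1, n2]
    signals[p, s] = fT

# Equation 19.48: f = (1/N) * sum of fT[(n1*p + n2*s) mod N] - s0/N
rec = np.zeros((N, N))
for n1 in range(N):
    for n2 in range(N):
        tot = sum(signals[p, s][(n1 * p + n2 * s) % N] for p, s in J)
        rec[n1, n2] = tot / N - s0 / N
print(np.allclose(rec, f))  # True
```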
19.5 Tensor Algorithm of the 2-D DFT

In this section, we consider the tensor algorithm for calculating the 2-D DFT of order N_1 × N_2. Let σ = σ_J be an irreducible covering of the period X = X_{N_1,N_2} consisting of cyclic groups T. The algorithm for calculating the 2-D DFT F_{N_1,N_2} ∘ f of a 2-D sequence f_{n_1,n_2} is performed by the following steps.

Step 1. Calculate the 1-D splitting-signals f_T of the tensor representation of the image, i.e., calculate the transform

    X_σ: f → {f_T; T ∈ σ},    (19.49)

which we call the tensor transform of the image.

Step 2. Calculate the 1-D DFTs of the obtained splitting-signals, F_{N(T)} ∘ f_T, T ∈ σ.

Step 3. Allocate the 1-D DFTs in the 2-D data by the cyclic groups T ∈ σ,

    [F_N ∘ f_T] → {F_{p,s}; (p, s) ∈ T}.    (19.50)

Each splitting-signal f_T defines the 2-D DFT at the frequency-points of the cyclic group T, F_{kp_1,kp_2} = (F_N ∘ f_T)_k, k = 0:(N - 1), where the number N = N_1 N_2 / g.c.d.(N_1, N_2). The number of 1-D transforms required for calculating the 2-D DFT coincides with the cardinality, card σ, of the covering σ, or the cardinality of the set J of the generators of its groups,

    σ = σ_J = {T_{p_1,p_2}}_{(p_1,p_2)∈J}.    (19.51)

We now separately consider the set of generators for the cases of most interest, when N_1 = N_2 = N and N is a general prime or a product of two primes, and then we describe the case when N_1 and N_2 are arbitrary unequal integers.

19.5.1 N Is a Prime

Let N > 1 be a general prime. The irreducible covering σ_J of the set X_{N,N} has the cardinality N + 1, i.e.,

    card σ_J = N + 1.    (19.52)

In other words, the minimum number of cyclic groups T_{p,s} which together cover the period X equals (N + 1). Indeed, the irreducible covering σ_J is determined by the following set of generators:

    J = J_{N,N} = {(0, 1), (1, 1), (2, 1), ..., (N - 1, 1), (1, 0)}.    (19.53)

Other sets of (N + 1) generators can also be taken, for instance, J = {(1, 0), (1, 1), (1, 2), ..., (1, N - 1), (0, 1)}. Therefore, to calculate the N × N-point 2-D DFT, it is sufficient to fulfill (N + 1) 1-D N-point DFTs.
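The three steps above can be sketched in a few lines for a prime N; the following is a NumPy sketch (the function name `tensor_dft2` is ours), compared against a direct 2-D FFT:

```python
import numpy as np

def tensor_dft2(f):
    """2-D DFT of an N x N image (N prime) via N + 1 1-D N-point DFTs,
    following Steps 1-3 with the generator set J of Equation 19.53."""
    N = f.shape[0]
    J = [(p, 1) for p in range(N)] + [(1, 0)]
    F = np.zeros((N, N), dtype=complex)
    for p, s in J:
        fT = np.zeros(N)                    # Step 1: splitting-signal
        for n1 in range(N):
            for n2 in range(N):
                fT[(n1 * p + n2 * s) % N] += f[n1, n2]
        FT = np.fft.fft(fT)                 # Step 2: 1-D N-point DFT
        for k in range(N):                  # Step 3: allocate on T_{p,s}
            F[(k * p) % N, (k * s) % N] = FT[k]
    return F

f = np.arange(25.0).reshape(5, 5)
print(np.allclose(tensor_dft2(f), np.fft.fft2(f)))  # True
```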
Example 19.6

We consider the 3 × 3-point DFT of the following 2-D sequence, or image:

    f = {f_{n_1,n_2}} = [ 1 2 1
                          2 4 2
                          1 2 1 ].

The square lattice X_{3,3} = {(n_1, n_2); n_1, n_2 = 0, 1, 2} is covered by the totality of sets σ = (T_{0,1}, T_{1,1}, T_{2,1}, T_{1,0}). The tensor representation of f defines four splitting-signals,

    X_σ: {f_{n_1,n_2}} → {{f_{0,1,t}}, {f_{1,1,t}}, {f_{2,1,t}}, {f_{1,0,t}}},    t = 0, 1, 2.    (19.54)
Step 1: We denote by f the vector-column composed from the rows of the sequence f_{n_1,n_2}, i.e., f = (1, 2, 1, 2, 4, 2, 1, 2, 1)′. The first splitting-signal {f_{0,1,t}} is calculated by

    [ f_{0,1,0}     [ 1 0 0 1 0 0 1 0 0
      f_{0,1,1}  =    0 1 0 0 1 0 0 1 0   f = [4, 8, 4],    (19.55)
      f_{0,1,2} ]     0 0 1 0 0 1 0 0 1 ]

and the next three splitting-signals are calculated as follows:

    [ f_{1,1,0}     [ 1 0 0 0 0 1 0 1 0
      f_{1,1,1}  =    0 1 0 1 0 0 0 0 1   f = [5, 5, 6],    (19.56)
      f_{1,1,2} ]     0 0 1 0 1 0 1 0 0 ]

    [ f_{2,1,0}     [ 1 0 0 0 1 0 0 0 1
      f_{2,1,1}  =    0 1 0 0 0 1 1 0 0   f = [6, 5, 5],    (19.57)
      f_{2,1,2} ]     0 0 1 1 0 0 0 1 0 ]

    [ f_{1,0,0}     [ 1 1 1 0 0 0 0 0 0
      f_{1,0,1}  =    0 0 0 1 1 1 0 0 0   f = [4, 8, 4].    (19.58)
      f_{1,0,2} ]     0 0 0 0 0 0 1 1 1 ]

Step 2: The three-point DFT of the first signal {f_{0,1,0}, f_{0,1,1}, f_{0,1,2}} = [4, 8, 4] is calculated by

    [ A_0     [ 1 1   1      [ 4     [ 16
      A_1  =    1 W   W²       8  =    -2.0 - j3.4641
      A_2 ]     1 W²  W  ]     4 ]     -2.0 + j3.4641 ],
where W ¼ exp(j2p=3). The power of each splitting-signal equals 16, and there is no need to calculate the three-point DFTs of other signals at zero. Therefore, we can perform three incomplete three-point DFTs of these splitting-signals: "
B1
"
C1
B2
C2 D1 D2
# #
¼
"
¼
"
¼
2 3 " # # 5 0:5 þ j0:8660 W 6 7 , 455 ¼ 0:5 j0:8660 W 6 2 3 " # # 6 1 W2 6 7 , 455 ¼ 1 W 5 2 3 4 2:0 j3:4641 W2 4 5 8 ¼ : 2:0 þ j3:4641 W 4
The splitting-signals and the energy Fourier spectra of these signals are shown in the first and second rows of Figure 19.16, respectively. If we unite the four binary matrices 3 × 9 in Equations 19.55 through 19.58, we obtain the following matrix of the tensor transformation in Equation 19.54:

          [1 0 0 1 0 0 1 0 0]
          [0 1 0 0 1 0 0 1 0]
          [0 0 1 0 0 1 0 0 1]
          [1 0 0 0 0 1 0 1 0]
          [0 1 0 1 0 0 0 0 1]
  [X^s] = [0 0 1 0 1 0 1 0 0]    (19.59)
          [1 0 0 0 1 0 0 0 1]
          [0 1 0 0 0 1 1 0 0]
          [0 0 1 1 0 0 0 1 0]
          [1 1 1 0 0 0 0 0 0]
          [0 0 0 1 1 1 0 0 0]
          [0 0 0 0 0 0 1 1 1]

Step 3: The location of the frequency-points of the four cyclic groups T ∈ σ in the lattice 3 × 3, where the 1-D DFTs of the splitting-signals are placed, is shown in the last row of the figure. As a result, we obtain the following matrix expression for the 3 × 3-point DFT of the given sequence f:

  [F_{0,0} F_{0,1} F_{0,2}]   [A0 A1 A2]   [ 16            −2 − j3.4641    −2 + j3.4641 ]
  [F_{1,0} F_{1,1} F_{1,2}] = [D1 B1 C2] = [−2 − j3.4641   −0.5 + j0.8660       1       ]
  [F_{2,0} F_{2,1} F_{2,2}]   [D2 C1 B2]   [−2 + j3.4641        1         −0.5 − j0.8660]

FIGURE 19.16 Four splitting-signals (the first row), the 3-point DFTs (in absolute scale) of the splitting-signals (the second row), and the location of the frequency-points of the cyclic groups T ∈ σ (the third row).
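The assembly of the 3 × 3 spectrum from the four 1-D DFTs can be checked numerically. A hedged sketch (helper names are ours) that fills the table along the cyclic groups and compares against the directly computed 2-D DFT:

```python
import cmath

# Assemble the 3x3 DFT from the 1-D DFTs of the four splitting-signals
# and compare with the direct 2-D DFT of the same sequence.
N = 3
W = cmath.exp(-2j * cmath.pi / N)
f = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]

def split(p, s):
    return [sum(f[n1][n2] for n1 in range(N) for n2 in range(N)
                if (n1 * p + n2 * s) % N == t) for t in range(N)]

F = [[0j] * N for _ in range(N)]
for (p, s) in [(0, 1), (1, 1), (2, 1), (1, 0)]:
    sig = split(p, s)
    for k in range(N):
        # component k of the 1-D DFT lands at frequency-point (kp, ks) mod N
        F[(k * p) % N][(k * s) % N] = sum(sig[t] * W ** (k * t) for t in range(N))

direct = [[sum(f[n1][n2] * W ** (n1 * p1 + n2 * p2)
               for n1 in range(N) for n2 in range(N))
           for p2 in range(N)] for p1 in range(N)]
err = max(abs(F[a][b] - direct[a][b]) for a in range(N) for b in range(N))
print(err < 1e-9)  # True
```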
In matrix form, the described tensor algorithm of the 3 × 3-point DFT can be written as

  [F_{0,0}]   [1 1  1  .  .  .  .  .  .  .  .  .]
  [F_{0,1}]   [1 W  W² .  .  .  .  .  .  .  .  .]
  [F_{0,2}]   [1 W² W  .  .  .  .  .  .  .  .  .]
  [F_{1,1}]   [.  .  . 1  W  W² .  .  .  .  .  .]
  [F_{2,2}] = [.  .  . 1  W² W  .  .  .  .  .  .] [X^s] f,
  [F_{2,1}]   [.  .  .  .  .  . 1  W  W² .  .  .]
  [F_{1,2}]   [.  .  .  .  .  . 1  W² W  .  .  .]
  [F_{1,0}]   [.  .  .  .  .  .  .  .  . 1  W  W²]
  [F_{2,0}]   [.  .  .  .  .  .  .  .  . 1  W² W ]

where the dots denote zeros; the 9 × 12 matrix consists of one full and three incomplete three-point DFT matrices placed along the diagonal.
The tensor algorithm of the 3 × 3-point DFT uses the following number of arithmetical operations: 4 real multiplications and 38 real additions, if the sequence f is real. Indeed, since W² = −1 − W, we have the following:

  [F1]   [1  W   W²] [x]   [x + Wy + W²z]   [x − z + W(y − z)]
  [F2] = [1  W²  W ] [y] = [x + W²y + Wz] = [x − y − W(y − z)].    (19.60)
                     [z]

The incomplete 3-point Fourier transform can thus be calculated by one complex multiplication. Moreover, since W = −(1 + j√3)/2 and the division by 2 is the elementary operation of shifting, the multiplication by W is equivalent to one real multiplication (by √3) and shifting. The 3-point DFT of real data uses one operation of real multiplication, five additions, and one shifting. Three additions are required to calculate the incomplete 3-point DFT, since F2 = F̄1 for real inputs. For complex data, the 3-point DFT uses two multiplications, two shiftings, and 16 operations of real addition, and the incomplete DFT requires 12 additions. Further, the direct calculation of the product of the matrix [X^s] of order 12 × 9 with f is fulfilled in the given example via 24 operations of real addition and subtraction. Therefore, the 3 × 3-point 2-D DFT requires (5 + 3 × 3) + 24 = 38 additions for the real sequence f. We note for comparison that the polynomial algorithm of the 3 × 3-point DFT also uses four 3-point DFTs, together with polynomial transforms, reductions, and Chinese remainder operations, which require 25 additions (against 24 for the tensor transform). In the row–column algorithm, six 3-point DFTs are used, namely, three transforms with real inputs and three transforms with complex inputs. Therefore, that algorithm uses 3 + 2(3) = 9 real multiplications and 3 × 5 + 3 × 16 = 63 additions when the data are real. Thus, with the tensor and polynomial algorithms, we gain an advantage of 3 times in the number of multiplications, and 1.6 times in additions. In the general case, for a prime number N > 3, the traditional row–column algorithm uses 2N one-dimensional N-point DFTs. Therefore, the tensor algorithm decreases the number of multiplications by 2N/(N + 1) times, i.e., almost by 2 times for large N. The tensor transform X^s requires N³ − N additions, and the polynomial transforms, reductions, and Chinese remainder operations require N³ + N² − 5N + 4 additions [14].
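Equation 19.60 is the key saving: both odd-indexed outputs of the 3-point DFT share the single complex product W·(y − z). A minimal sketch of this identity (function name ours):

```python
import cmath

# Equation 19.60: the two nonzero-frequency outputs of the 3-point DFT
# need only one complex product, W*(y - z), since W^2 = -1 - W.
W = cmath.exp(-2j * cmath.pi / 3)

def incomplete_dft3(x, y, z):
    wyz = W * (y - z)           # the single complex multiplication
    return x - z + wyz, x - y - wyz

x, y, z = 5.0, 5.0, 6.0
F1, F2 = incomplete_dft3(x, y, z)
F1_direct = x + W * y + W ** 2 * z
F2_direct = x + W ** 2 * y + W * z
print(abs(F1 - F1_direct) < 1e-12, abs(F2 - F2_direct) < 1e-12)  # True True
```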
19.5.2 N Is a Power of Two

When N = 2^r, r > 1, the irreducible covering σ of the lattice X = X_{2^r,2^r} can be taken as the following family of 3N/2 cyclic groups:

  σ_J = { {T_{p1,1}}_{p1=0:(2^r−1)}, {T_{1,2p2}}_{p2=0:(2^{r−1}−1)} }.    (19.61)

Thus, to calculate the 2^r × 2^r-point 2-D DFT, 3·2^{r−1} 1-D DFTs are used in the tensor algorithm, which is described by

  F_{kp,ks} = (F_{2^r,2^r} f)_{kp,ks} = Σ_{n1=0}^{2^r−1} Σ_{n2=0}^{2^r−1} f_{n1,n2} W^{n1·kp + n2·ks}
            = (F_{2^r} f_T)_k = Σ_{t=0}^{2^r−1} f_{p,s,t} W^{kt},  k = 0:(2^r − 1),    (19.62)

where the generators (p, s) are taken from the set

  J = {(0, 1), (1, 1), (2, 1), . . . , (2^r − 1, 1)} ∪ {(1, 0), (1, 2), (1, 4), . . . , (1, 2^r − 2)}.    (19.63)

The components of the splitting-signals f_T are calculated by the characteristic functions of the sets V_{p,s,t}:

  f_{p,s,t} = Σ_{n1=0}^{2^r−1} Σ_{n2=0}^{2^r−1} X_{p,s,t}(n1, n2) f_{n1,n2} = Σ_{(n1,n2)∈V_{p,s,t}} f_{n1,n2}.    (19.64)

These binary functions determine the tensor transform X^s and are defined as

  X_{p,s,t}(n1, n2) = { 1, if n1·p + n2·s = t mod 2^r;  0, otherwise },  (n1, n2) ∈ X.    (19.65)

All ones of these functions lie on parallel lines passing through the knots of the corresponding sets V_{p,s,t}.
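The covering of Equation 19.61 can be verified numerically for a small power of two. A brief sketch (helper name ours) checking that the 3N/2 groups indeed cover the whole lattice for N = 8:

```python
# For N = 2^r, the 3N/2 generators (p1,1), p1 = 0..N-1, and (1, 2*p2),
# p2 = 0..N/2-1, give cyclic groups covering the N x N lattice (Eq. 19.61).
def cyclic_group(p, s, N):
    return {((k * p) % N, (k * s) % N) for k in range(N)}

N = 8  # r = 3
J = [(p1, 1) for p1 in range(N)] + [(1, 2 * p2) for p2 in range(N // 2)]
covered = set().union(*(cyclic_group(p, s, N) for (p, s) in J))
print(len(J), len(covered))  # 12 64 -- 3N/2 groups cover all N*N points
```

Unlike the prime case, these groups overlap at many points besides (0, 0), which is exactly the redundancy addressed in Section 19.5.3.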
Example 19.7

Consider the following 2-D sequence f of size 4 × 4:

      [f_{0,0} f_{0,1} f_{0,2} f_{0,3}]   [1 2 1 3]
  f = [f_{1,0} f_{1,1} f_{1,2} f_{1,3}] = [2 0 1 1]
      [f_{2,0} f_{2,1} f_{2,2} f_{2,3}]   [1 3 2 2]
      [f_{3,0} f_{3,1} f_{3,2} f_{3,3}]   [2 4 1 2].
The set of generators of the cyclic groups T_{p,s} of the covering σ_J equals

  J = {(0, 1), (1, 1), (2, 1), (3, 1)} ∪ {(1, 0), (1, 2)}.    (19.66)

We first describe the splitting-signal corresponding to the frequency-point (p, s) = (0, 1). For that, we write all values t in the equations n·p + m·s = t mod 4 in the form of the following matrix:

                                          [0 1 2 3]
  ||t|| = ||(n·0 + m·1) mod 4||_{n,m=0:3} = [0 1 2 3]
                                          [0 1 2 3]
                                          [0 1 2 3].
The components of this splitting-signal are calculated as follows:

              [1 0 0 0]
  f_{0,1,0} = [1 0 0 0] ∘ f = 1 + 2 + 1 + 2 = 6,    (19.67)
              [1 0 0 0]
              [1 0 0 0]

              [0 1 0 0]
  f_{0,1,1} = [0 1 0 0] ∘ f = 2 + 0 + 3 + 4 = 9,    (19.68)
              [0 1 0 0]
              [0 1 0 0]

              [0 0 1 0]
  f_{0,1,2} = [0 0 1 0] ∘ f = 1 + 1 + 2 + 1 = 5,    (19.69)
              [0 0 1 0]
              [0 0 1 0]

              [0 0 0 1]
  f_{0,1,3} = [0 0 0 1] ∘ f = 3 + 1 + 2 + 2 = 8,    (19.70)
              [0 0 0 1]
              [0 0 0 1]

where each binary matrix is the characteristic function X_{0,1,t} and "∘" denotes the sum of the values of f at the positions of the ones.
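The same components, and those of the remaining five splitting-signals of Example 19.7, follow from the congruence n·p + m·s = t mod 4. A short sketch (function name ours):

```python
# Splitting-signals of the 4x4 sequence of Example 19.7, computed from
# the sets V_{p,s,t} = {(n,m): n*p + m*s = t (mod 4)}.
f = [[1, 2, 1, 3],
     [2, 0, 1, 1],
     [1, 3, 2, 2],
     [2, 4, 1, 2]]
N = 4

def split(p, s):
    return [sum(f[n][m] for n in range(N) for m in range(N)
                if (n * p + m * s) % N == t) for t in range(N)]

print(split(0, 1))  # [6, 9, 5, 8]
print(split(1, 1))  # [8, 7, 4, 9]
```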
The splitting-signal f_{T0,1} = {6, 9, 5, 8}. The four-point DFT of this signal equals (F0, F1, F2, F3) = (28, 1 − j, −6, 1 + j), which can be written in the table of the 2-D DFT of f at the frequency-points (0, 0), (0, 1), (0, 2), and (0, 3) as follows:

  [F0 F1 F2 F3]   [28  1 − j  −6  1 + j]
  [0  0  0  0 ] = [0     0     0    0  ]
  [0  0  0  0 ]   [0     0     0    0  ]
  [0  0  0  0 ]   [0     0     0    0  ].

We also consider the splitting-signal corresponding to the next generator (p, s) = (1, 1). For this generator, the equations n·p + m·s = t mod 4 result in the following matrix:

                                          [0 1 2 3]
  ||t|| = ||(n·1 + m·1) mod 4||_{n,m=0:3} = [1 2 3 0]
                                          [2 3 0 1]
                                          [3 0 1 2].

Therefore, the first component of this splitting-signal is calculated by

              [1 0 0 0]
  f_{1,1,0} = [0 0 0 1] ∘ f = 1 + 1 + 2 + 4 = 8,    (19.71)
              [0 0 1 0]
              [0 1 0 0]

and similarly we obtain the next three components:

  f_{1,1,1} = 2 + 2 + 2 + 1 = 7,    (19.72)
  f_{1,1,2} = 1 + 0 + 1 + 2 = 4,
  f_{1,1,3} = 3 + 1 + 3 + 2 = 9.

Thus, the splitting-signal f_{T1,1} = {8, 7, 4, 9}. The four-point DFT of this signal equals (A0, A1, A2, A3) = (28, 4 + 2j, −4, 4 − 2j), which defines the 2-D DFT of f at the frequency-points (0, 0), (1, 1), (2, 2), and (3, 3). At this step, we can record the other three values of the 2-D DFT as follows:

  [F0 F1 F2 F3]   [28  1 − j   −6  1 + j ]
  [0  A1 0  0 ] = [0   4 + 2j   0    0   ]
  [0  0  A2 0 ]   [0     0     −4    0   ]
  [0  0  0  A3]   [0     0      0  4 − 2j].
The first component A0 = F0 = 28 and could be omitted from the calculations, to avoid the redundancy. The redundancy of calculation takes place for other splitting-signals not only at the frequency-point (0, 0), but at all frequency-points with even coordinates, i.e., at a quarter of all frequency-points, as can be seen from Figure 19.17. For instance, the four-point DFT of the splitting-signal f_{T2,1} = {4, 8, 7, 9} equals (B0, B1, B2, B3) = (28, −3 + j, −6, −3 − j). These values define the 2-D DFT at the frequency-points (0, 0), (2, 1), (0, 2), and (2, 3). At this step, we can record the other two values of the 2-D DFT as follows:

  [F0 F1 F2 F3]   [28   1 − j  −6  1 + j ]
  [0  A1 0  0 ] = [0   4 + 2j   0    0   ]
  [0  B1 A2 B3]   [0  −3 + j   −4  −3 − j]
  [0  0  0  A3]   [0     0      0  4 − 2j].
It is clear that the incomplete four-point DFT is required here, to avoid calculations for the components B0 and B2, which have already been calculated.
FIGURE 19.17 The disposition of frequency-points of six groups T of the covering sJ of X4,4.
We continue the calculation of the 2-D DFT. The 4-point DFT of the splitting-signal f_{T3,1} = {5, 7, 7, 9} equals (C0, C1, C2, C3) = (28, −2 + 2j, −4, −2 − 2j). It defines the 2-D DFT of f at the frequency-points (0, 0), (3, 1), (2, 2), and (1, 3). At this step, we can record two new values of the 2-D DFT as follows:

  [F0 F1 F2 F3]   [28   1 − j   −6   1 + j ]
  [0  A1 0  C3] = [0   4 + 2j    0  −2 − 2j]
  [0  B1 A2 B3]   [0  −3 + j    −4  −3 − j ]
  [0  C1 0  A3]   [0  −2 + 2j    0   4 − 2j].
For this signal, the incomplete four-point DFT is required to avoid calculations for the components C0 and C2, which have already been determined at the second step of our calculations (when (p, s) was (1, 1)). The remaining five values of the 2-D DFT will be calculated by the splitting-signals corresponding to the generators (1, 0) and (1, 2). The splitting-signal f_{T1,0} = {7, 4, 8, 9}. The 4-point DFT of this signal equals (D0, D1, D2, D3) = (28, −1 + 5j, 2, −1 − 5j) and defines the 2-D DFT of f at the frequency-points (0, 0), (1, 0), (2, 0), and (3, 0). At this step, we can record three new values of the 2-D DFT as follows:

  [F0 F1 F2 F3]   [28       1 − j   −6   1 + j ]
  [D1 A1 0  C3] = [−1 + 5j  4 + 2j   0  −2 − 2j]
  [D2 B1 A2 B3]   [2       −3 + j   −4  −3 − j ]
  [D3 C1 0  A3]   [−1 − 5j −2 + 2j   0   4 − 2j].
The redundancy of calculation at this step is only at (0, 0). At the last step, the four-point DFT of the splitting-signal f_{T1,2} = {7, 9, 8, 4} is calculated. It equals (E0, E1, E2, E3) = (28, −1 − 5j, 2, −1 + 5j) and defines the 2-D DFT of f at the frequency-points (0, 0), (1, 2), (2, 0), and (3, 2). We fill the 2-D DFT with the values of E1 and E3:

  [F0 F1 F2 F3]   [28       1 − j   −6       1 + j ]   [F_{0,0} F_{0,1} F_{0,2} F_{0,3}]
  [D1 A1 E1 C3] = [−1 + 5j  4 + 2j  −1 − 5j −2 − 2j] = [F_{1,0} F_{1,1} F_{1,2} F_{1,3}]
  [D2 B1 A2 B3]   [2       −3 + j   −4      −3 − j ]   [F_{2,0} F_{2,1} F_{2,2} F_{2,3}]
  [D3 C1 E3 A3]   [−1 − 5j −2 + 2j  −1 + 5j  4 − 2j]   [F_{3,0} F_{3,1} F_{3,2} F_{3,3}].

For this signal, it is sufficient to perform an incomplete 4-point DFT, to avoid the redundancy of calculations at two points.
Thus, the 4 × 4-point 2-D DFT is calculated in the tensor algorithm by six 4-point DFTs; namely, one full 4-point DFT and five incomplete 4-point DFTs are required in this algorithm.
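The whole of Example 19.7 can be replayed in a few lines. A hedged sketch (helper names ours) that computes the six 4-point DFTs of the splitting-signals, places them along the cyclic groups, and compares the result with the direct 4 × 4 DFT:

```python
import cmath

# Tensor algorithm for the 4x4 DFT of Example 19.7: six 1-D DFTs of
# splitting-signals, placed along the cyclic groups T_{p,s}.
N = 4
W = cmath.exp(-2j * cmath.pi / N)
f = [[1, 2, 1, 3], [2, 0, 1, 1], [1, 3, 2, 2], [2, 4, 1, 2]]

def split(p, s):
    return [sum(f[n][m] for n in range(N) for m in range(N)
                if (n * p + m * s) % N == t) for t in range(N)]

F = [[0j] * N for _ in range(N)]
for (p, s) in [(0, 1), (1, 1), (2, 1), (3, 1), (1, 0), (1, 2)]:
    sig = split(p, s)
    for k in range(N):
        F[(k * p) % N][(k * s) % N] = sum(sig[t] * W ** (k * t) for t in range(N))

direct = [[sum(f[n][m] * W ** (n * a + m * b) for n in range(N) for m in range(N))
           for b in range(N)] for a in range(N)]
ok = max(abs(F[a][b] - direct[a][b]) for a in range(N) for b in range(N)) < 1e-9
print(ok)  # True
```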
19.5.3 Modified Tensor Algorithms

When N is a power of two, many cyclic groups of the irreducible covering σ intersect at frequency-points (p1, p2) ≠ (0, 0), which leads to a redundancy of the calculation. Such a redundancy in the N = 4 case has been shown in Example 19.7, where the repeated calculations occurred at all frequency-points with even coordinates. In the general N = 2^r case, when r > 1, the following intersections hold:

  T_{p1,1} ∩ T_{p1+2^{r−1},1} = T_{2p1,2},  T_{1,2p2} ∩ T_{1,2p2+2^{r−1}} = T_{2,4p2},    (19.73)

when p1 = 0:(2^{r−1} − 1) and p2 = 0:(2^{r−2} − 1). For large 2^r, we can also consider other intersections:

  T_{p1,1} ∩ T_{p1+2^{r−k},1} = T_{2^k(p1,1)},  T_{1,2p2} ∩ T_{1,2p2+2^{r−k}} = T_{2^k(1,2p2)},    (19.74)

when k = 2:(r − 1), p1 = 0:(2^{r−k} − 1), and p2 = 0:(2^{r−k−1} − 1). Therefore, in the tensor algorithm, the calculation of many 2^r-point DFTs of splitting-signals can be reduced to the calculation of incomplete DFTs, such as the 2^{r−1}-point DFT, to remove all repeated calculations of spectral components. As a result, we can achieve the effective calculation of the 2^r × 2^r-point DFT with the number of multiplications estimated as

  m_{2^r,2^r} ≈ (2^r + 1)·m_{2^r} = (2^r + 1)[2^{r−1}(r − 3) + 2],    (19.75)

where m_{2^r} denotes the number of multiplications required for the 2^r-point DFT. We use the known estimate m_{2^r} = 2^{r−1}(r − 3) + 2 for the Cooley–Tukey algorithm and the fast paired transforms [10,11]. In each group T_{p1,1} or T_{1,2p2} of the covering σ, we consider the set complement of the intersection T_{2^k(p1,1)} or T_{2^k(1,2p2)}, respectively. In other words, we define the following subsets

  T^k_{p1,1} = T_{p1,1} \ T_{2^k(p1,1)}  and  T^k_{1,2p2} = T_{1,2p2} \ T_{2^k(1,2p2)},    (19.76)
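The intersection identity of Equation 19.73 is easy to verify for a small r. A quick sketch, assuming r = 3 (N = 8), with the helper name `T` ours:

```python
# Equation 19.73, first identity, for N = 8: the groups T_{p1,1} and
# T_{p1+4,1} meet exactly in the smaller cyclic group T_{2*p1,2}.
def T(p, s, N):
    return {((k * p) % N, (k * s) % N) for k in range(N)}

N = 8
for p1 in range(N // 2):
    assert T(p1, 1, N) & T(p1 + N // 2, 1, N) == T(2 * p1, 2, N)
print("ok")
```

Each such intersection contains N/2 points whose spectral components would otherwise be computed twice, which is exactly what the modified algorithm removes.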
with 2^r − 2^{r−k} points each. When k = 1, the totality of sets

  σ^(1) = { T^1_{p1,1}, T_{p1+2^{r−1},1} }_{p1=0:(2^{r−1}−1)} ∪ { T^1_{1,2p2}, T_{1,2p2+2^{r−1}} }_{p2=0:(2^{r−2}−1)}

is a covering of the period X_{2^r,2^r}. The calculation of the 2-D DFT at frequency-points of each set T^1 can be reduced to the 2^{r−1}-point DFT. For that, we can use, for instance, the FFT algorithm with decimation in frequency. Indeed, after the first iteration of this algorithm, the 2^r-point DFT is reduced to two 2^{r−1}-point DFTs; one of them defines the 2^r-point DFT at all even frequencies and the other at all odd frequencies. Therefore, we can determine the components of the 2-D DFT at frequency-points of the subsets T^1_{p1,1} and T^1_{1,2p2} by fulfilling about half of the operations of multiplication that are used for calculating the 2-D DFT at frequency-points of the corresponding groups T_{p1,1} and T_{1,2p2}. Consequently, the number of multiplications in the tensor algorithm can be reduced to

  m_{2^r,2^r} = 3·2^{r−2}(m_{2^r} + m_{2^{r−1}} + 2^{r−1} − 2) ≈ (9/8)·2^r·m_{2^r}.

Continuing similar considerations, we can eliminate the intersections T_{2^k(p1,1)} and T_{2^k(1,2p2)} in the other groups of the covering σ^(1), for k = 2:(r − 1). As a result, we achieve a good estimate of the number of multiplications, m_{2^r,2^r} ≈ ((2^r + 1)/2^r)·2^r·m_{2^r} = (2^r + 1)·m_{2^r}. To demonstrate the described improvement of the tensor algorithm, we consider in detail the N = 8 example.

Example 19.8

Let f = {f_{n1,n2}; n1, n2 = 0:7} be a two-dimensional sequence and let F_{8,8} be the 8 × 8-point DFT. The tensor transform

  f → { f_{Tp,s}; T_{p,s} ∈ σ_J }

defines 12 splitting-signals with generators from the set

  J = {(p1, 1); p1 = 0:7} ∪ {(1, 2p2); p2 = 0:3}.

We consider all groups of the irreducible covering σ_J:

  T_{0,1} = {(0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), (0,7)}
  T_{1,1} = {(0,0), (1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7)}
  T_{2,1} = {(0,0), (2,1), (4,2), (6,3), (0,4), (2,5), (4,6), (6,7)}
  T_{3,1} = {(0,0), (3,1), (6,2), (1,3), (4,4), (7,5), (2,6), (5,7)}
  T_{4,1} = {(0,0), (4,1), (0,2), (4,3), (0,4), (4,5), (0,6), (4,7)}
  T_{5,1} = {(0,0), (5,1), (2,2), (7,3), (4,4), (1,5), (6,6), (3,7)}
  T_{6,1} = {(0,0), (6,1), (4,2), (2,3), (0,4), (6,5), (4,6), (2,7)}
  T_{7,1} = {(0,0), (7,1), (6,2), (5,3), (4,4), (3,5), (2,6), (1,7)}
  T_{1,0} = {(0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0)}
  T_{1,2} = {(0,0), (1,2), (2,4), (3,6), (4,0), (5,2), (6,4), (7,6)}
  T_{1,4} = {(0,0), (1,4), (2,0), (3,4), (4,0), (5,4), (6,0), (7,4)}
  T_{1,6} = {(0,0), (1,6), (2,4), (3,2), (4,0), (5,6), (6,4), (7,2)}    (19.77)

(many of these frequency-points lie in the intersections of the groups). It directly follows from this covering that, to calculate the 8 × 8-point DFT, it is sufficient to fulfill the following full and incomplete DFTs.

Step 1. The 8-point DFT, F_8, over the sequence f_{T0,1}, to determine the 2-D DFT at frequency-points of the group T_{0,1}:

  F_{0,k} = (F_8 f_{T0,1})_k,  k = 0:7.

Step 2. Two incomplete DFTs over the splitting-signals f_{T1,1} and f_{T1,0}, to determine the 2-D DFT at frequency-points of the groups T_{1,1} and T_{1,0}, except the zero point (0, 0):

  F_{k,k} = (F_8 f_{T1,1})_k,  F_{k,0} = (F_8 f_{T1,0})_k,  k = 1:7.

Step 3. Three incomplete DFTs over the splitting-signals f_{T2,1}, f_{T3,1}, and f_{T1,2}, to determine the 2-D DFT at frequency-points of the corresponding groups T_{2,1}, T_{3,1}, and T_{1,2}, except the frequency-points whose coordinates are integer multiples of four, i.e., the frequency-points (0, 0), (4, 0), (0, 4), and (4, 4):

  F_{2k,k} = (F_8 f_{T2,1})_k,  F_{3k,k} = (F_8 f_{T3,1})_k,  F_{k,2k} = (F_8 f_{T1,2})_k,  k = 1, 2, 3, 5, 6, 7.

Step 4. Six incomplete DFTs over the splitting-signals f_{T4,1}, f_{T5,1}, f_{T6,1}, f_{T7,1}, f_{T1,4}, and f_{T1,6}, to determine the 2-D DFT at frequency-points of the corresponding groups T_{4,1}, T_{5,1}, T_{6,1}, T_{7,1}, T_{1,4}, and T_{1,6}, except the frequency-points with even coordinates:

  F_{4k,k} = (F_8 f_{T4,1})_k,  F_{5k,k} = (F_8 f_{T5,1})_k,  F_{6k,k} = (F_8 f_{T6,1})_k,  F_{7k,k} = (F_8 f_{T7,1})_k,
  F_{k,4k} = (F_8 f_{T1,4})_k,  F_{k,6k} = (F_8 f_{T1,6})_k,  k = 1, 3, 5, 7.

All reiterations of the calculation of the 2-D DFT in the tensor algorithm, the number of which equals 32, are eliminated in the improved algorithm. For large values of r, the improvement of the tensor algorithm of the 2^r × 2^r-point DFT is estimated as 1.5 by the number of multiplications:

  k(2^r) = 3·2^{r−1}·m_{2^r} / [(2^r + 1)·m_{2^r}] ≈ 3/2.

The known Cooley–Tukey algorithm with base (2 × 2) uses 4^{r−1}(3r − 4) + 1 operations of multiplication, which exceeds by up to 1.7 times and more the number of multiplications in the improved tensor algorithm (see Table 19.1). The efficiency of the tensor algorithm in operations of multiplication with respect to the Cooley–Tukey algorithm is also given in the table.
19.5.4 Recursive Tensor Algorithm

We now describe another, more elegant improved tensor algorithm for calculating the 2-D DFT, which we call the recurrent tensor algorithm, since it is a recurrent procedure.
TABLE 19.1 The Number of Operations of Multiplication Required for Calculating the 2^r × 2^r-Point DFT in the Tensor and Improved Tensor Algorithms, as well as by the Cooley–Tukey Algorithm with the Base (2 × 2)

  2^r     T = 3(2^{r−1})m_{2^r}   I = (2^r + 1)m_{2^r}   C = 4^{r−1}(3r − 4) + 1   C/T    C/I
  256           246,528                164,994                   327,681           1.33   1.99
  512         1,181,184                788,994                 1,507,329           1.28   1.91
  1024        5,508,096              3,675,650                 6,815,745           1.24   1.85
  2048       25,171,968             16,789,506                30,408,705           1.21   1.81
  4096      113,258,496             75,524,098               134,217,729           1.19   1.78
  8192      503,341,056            335,601,666               587,202,561           1.17   1.75

  Note: It is assumed that m_{2^r} = 2^{r−1}(r − 3) + 2.
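The columns of Table 19.1 can be recomputed from the stated formulas. A small sketch (function names ours):

```python
# Recompute the columns of Table 19.1 from the formulas in its header,
# with the estimate m_{2^r} = 2^(r-1)*(r-3) + 2.
def m(r):
    return 2 ** (r - 1) * (r - 3) + 2

def row(r):
    T = 3 * 2 ** (r - 1) * m(r)          # tensor algorithm
    I = (2 ** r + 1) * m(r)              # improved tensor algorithm
    C = 4 ** (r - 1) * (3 * r - 4) + 1   # Cooley-Tukey, base 2x2
    return T, I, C

print(row(8))   # (246528, 164994, 327681)
print(row(13))  # (503341056, 335601666, 587202561)
```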
In this algorithm, the 2-D DFT is computed recurrently by means of 2-D DFTs of smaller orders. The calculation of the 2^r × 2^r-point DFT is reduced to 3·2^{r−1} incomplete transformations F_{2^r;2} and one 2^{r−1} × 2^{r−1}-point DFT. Here F_{2^r;2} denotes the incomplete 2^r-point DFT in which all 2^{r−1} components with even numbers are not calculated. The number of operations of multiplication in such a recurrent algorithm equals

  m_{2^r,2^r} = (4^r/6)(3r − 7) + 8/3.    (19.78)

In the tensor algorithm, the redundancy of the calculations occurs at all frequency-points (p1, p2) with even coordinates. The set of these frequency-points can be written as 2X_{2^{r−1},2^{r−1}}. We can define the following partition of the lattice X_{2^r,2^r}:

  X_{2^r,2^r} = [ ∪_{(p,s)∈J} (T_{p,s} \ T_{2p,2s}) ] ∪ 2X_{2^{r−1},2^{r−1}}.

The calculation of the 1-D DFTs of the splitting-signals f_{Tp,s} at only the odd points can be reduced to the incomplete transforms F_{2^r;2} f_{Tp,s}. The calculation of the 2-D 2^r × 2^r-point DFT at all even frequency-points can be reduced to the 2^{r−1} × 2^{r−1}-point DFT. Indeed, the following holds:

  F_{2p1,2p2} = Σ_{n1=0}^{2^r−1} Σ_{n2=0}^{2^r−1} f_{n1,n2} W_{2^r}^{2n1p1+2n2p2}
              = Σ_{n1=0}^{2^{r−1}−1} Σ_{n2=0}^{2^{r−1}−1} g_{n1,n2} W_{2^{r−1}}^{n1p1+n2p2} = (F_{2^{r−1},2^{r−1}} g)_{p1,p2},    (19.79)

where the 2-D sequence g is defined as

  g_{n1,n2} = f_{n1,n2} + f_{n1+2^{r−1},n2} + f_{n1,n2+2^{r−1}} + f_{n1+2^{r−1},n2+2^{r−1}},

for p1, p2 = 0:(2^{r−1} − 1). The number of operations of multiplication required to fulfill the incomplete transform F_{2^r;2} f_T equals

  m_{2^r;2} = m_{2^r} − m_{2^{r−1}} = 2^{r−1} − 2 + m_{2^{r−1}} = 2^{r−2}(r − 2),

if we use the valuation m_{2^r} = 2^{r−1}(r − 3) + 2. The number of multiplications required in the recurrent algorithm of the 2^r × 2^r-point DFT can be estimated as follows:

  m_{2^r,2^r} = m_{2^{r−1},2^{r−1}} + 3·2^{r−1}·m_{2^r;2} = m_{2^{r−1},2^{r−1}} + 3·2^{2r−3}(r − 2) = (4^r/6)(3r − 7) + 8/3,

when we similarly continue the splitting of the 2^{r−1} × 2^{r−1}-point DFT, and then the 2^{r−2} × 2^{r−2}-point DFT, and so on. Table 19.2 shows the number of multiplications in the recurrent algorithm in comparison with the tensor algorithm, for N = 2^r, when r = 8:15.

19.5.4.1 N Is a Power of an Odd Prime
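The recurrence and the closed form of Equation 19.78 agree, which can be checked by unrolling the recursion from the base case r = 2 (the 4 × 4-point DFT, which needs no multiplications). A brief sketch (function names ours):

```python
# Unroll m_{2^r,2^r} = m_{2^(r-1),2^(r-1)} + 3*2^(2r-3)*(r-2) from the
# base case r = 2 and compare with the closed form 4^r*(3r-7)/6 + 8/3.
def closed(r):
    return (4 ** r * (3 * r - 7) + 16) // 6   # integer-exact form

def recurrent(r):
    m = closed(2)  # base case: the 4x4-point DFT, 0 multiplications
    for i in range(3, r + 1):
        m += 3 * 2 ** (2 * i - 3) * (i - 2)
    return m

print(closed(8), recurrent(8))  # 185688 185688
```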
We now consider a splitting of the N × N-point DFT when N = L^r, where L > 1 is an arbitrary odd prime number and r > 1.
TABLE 19.2 The Number of Operations of Multiplication Required for Calculating the 2^r × 2^r-Point DFT by the Tensor Algorithm and the Recurrent Tensor Algorithm

  r     2^r       T = 3(2^{r−1})m_{2^r}   R = (4^r/6)(3r − 7) + 8/3   T/R
  8     256             246,528                   185,688            1.33
  9     512           1,181,184                   873,816            1.35
  10    1,024         5,508,096                 4,019,544            1.37
  11    2,048        25,171,968                18,175,320            1.38
  12    4,096       113,258,496                81,089,880            1.40
  13    8,192       503,341,056               357,913,944            1.41
  14    16,384    2,214,641,664             1,565,873,496            1.41
  15    32,768    9,663,774,720             6,800,364,888            1.42
The irreducible covering σ_J = (T_{p,s}) of the lattice X_{L^r,L^r} consists of L^{r−1}(L + 1) cyclic groups and can be defined by the following set of generators:

  J = J_{L^r,L^r} = ∪_{p1=0}^{L^r−1} (p1, 1) ∪ ∪_{p2=0}^{L^{r−1}−1} (1, Lp2).    (19.80)

Therefore, to calculate the L^r × L^r-point DFT, it is sufficient to fulfill L^{r−1}(L + 1) L^r-point DFTs of the splitting-signals. The number of multiplications is estimated as

  m_{L^r,L^r} = L^{r−1}(L + 1)·m_{L^r}.

The column–row algorithm uses 2L^r L^r-point DFTs and 2L^r·m_{L^r} multiplications. The tensor algorithm thus reduces the number of multiplications by 2L/(L + 1) times. The tensor algorithm can be improved by removing the redundancy of calculations in the intersections of the cyclic groups T_{p,s} of the covering. For instance, in the r = 2 case, the calculation of the L² × L²-point 2-D DFT can be reduced to one L²-point DFT, L incomplete L²-point DFTs (without calculation of the first component, at the zero point), and L² − 1 incomplete L²-point DFTs (without calculation of the components at points that are integer multiples of L). The redundancy of the algorithm is in the calculation of the spectral components at all frequency-points with coordinates that are integer multiples of L. The recurrent tensor algorithm can be constructed similarly to the L = 2 case, when the L^r × L^r-point DFT is reduced to one L^{r−1} × L^{r−1}-point 2-D DFT and L^{r−1}(L + 1) incomplete transforms F_{L^r;L} f_T. The incomplete transforms are not calculated at points that are integer multiples of L. In such a recurrent algorithm, the number of operations of multiplication can be estimated by the following recursive formula:

  m_{L^r,L^r} = m_{L^{r−1},L^{r−1}} + (L + 1)L^{r−1}(m_{L^r} − m_{L^{r−1}}).    (19.81)

The difference in the number of multiplications between the tensor and recurrent tensor algorithms is estimated as

  Δm_{L^r,L^r} = (L + 1)L^{r−1}·m_{L^r} − m_{L^r,L^r} ≈ (L² − 1)L^{r−2}·m_{L^{r−1}}.

For example, for the 25 × 25-point DFT, the recurrent tensor algorithm saves more than 240 multiplications, and 768 multiplications for the 49 × 49-point DFT. We use the known valuations m_5 = 10 and m_7 = 16 for the numbers of multiplications in the five-point and seven-point DFTs, respectively.

19.5.4.2 Case N = L1·L2 (L1 ≠ L2 > 1)

We consider the N = L1·L2 case, where L1 and L2 are arbitrary coprime numbers > 1. The irreducible covering σ_J of the lattice X_{L1L2,L1L2} consists of (L1 + 1)(L2 + 1) cyclic groups T_{p,s}. Such a covering σ_J can be determined by the following set of generators:

  J = ( ∪_{p1=0}^{L1L2−1} (p1, 1) ) ∪ {(1, 0)} ∪ ( ∪_{g.c.d.(p2,L1L2)>1} (1, p2) ) ∪ {(L1, L2)} ∪ {(L2, L1)}.    (19.82)

To calculate the 2-D (L1L2) × (L1L2)-point DFT, it is sufficient to perform (L1 + 1)(L2 + 1) L1L2-point 1-D DFTs of the splitting-signals f_{Tp,s}. The number of multiplications required for this algorithm equals

  m_{L1L2,L1L2} = (L1 + 1)(L2 + 1)·m_{L1L2}.    (19.83)

For instance, the calculation of the 20 × 20-point 2-D DFT is reduced to the calculation of thirty 20-point DFTs, instead of the forty 20-point DFTs in the column–row algorithm.

19.5.4.3 Other Orders N1 × N2

The tensor algorithm, as well as the improved tensor algorithm, can be constructed for other orders N1 × N2 of the 2-D DFT, when N1 ≠ N2 > 1. The tensor algorithm is defined by the irreducible covering σ_J of the lattice X_{N1,N2} by the cyclic groups T_{p,s}. It is not difficult to compose such a covering for each N1 × N2 case under consideration. By analyzing the intersections of groups of the covering, where the calculations of the spectral components are repeated, we can obtain an effective improvement of the tensor algorithm.
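The saving Δm of Section 19.5.4.1 reproduces the quoted 240 and 768 multiplications for the 25 × 25 and 49 × 49 cases. A one-function sketch (the name `saving` is ours), using the cited valuations m_5 = 10 and m_7 = 16:

```python
# Saving of the recurrent over the plain tensor algorithm for N = L^r:
# Delta m ~ (L^2 - 1) * L^(r-2) * m_{L^(r-1)}.
def saving(L, r, m_small):
    return (L * L - 1) * L ** (r - 2) * m_small

print(saving(5, 2, 10))  # 240 multiplications saved for the 25x25 DFT
print(saving(7, 2, 16))  # 768 multiplications saved for the 49x49 DFT
```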
Example 19.9

We consider the 3 × 6-point DFT, F_{3,6}, i.e., the case when N1 = 3 and N2 = 6. Let f = {f_{n1,n2}} be a 3 × 6-point 2-D sequence. The irreducible covering σ_J of the lattice X_{3,6} can be defined by the following set of four generators: J = {(0, 1), (1, 1), (2, 1), (1, 3)}. All cyclic groups of this covering and their intersections are shown below:

  T_{0,1} = {(0,0), (0,1), (0,2), (0,3), (0,4), (0,5)}
  T_{1,1} = {(0,0), (1,1), (2,2), (0,3), (1,4), (2,5)}
  T_{2,1} = {(0,0), (2,1), (1,2), (0,3), (2,4), (1,5)}
  T_{1,3} = {(0,0), (0,3), (1,0), (1,3), (2,0), (2,3)}.

Figure 19.18 illustrates the locations of all frequency-points of these four groups. The groups intersect only at the two frequency-points (0, 0) and (0, 3), i.e., where the coordinates of the points are integer multiples of 3. Therefore, the transformation F_{3,6} can be split into one 6-point DFT and three incomplete 6-point DFTs:

  F_{3,6} → {F_6, F_{6;3}, F_{6;3}, F_{6;3}}.

FIGURE 19.18 Arrangement of frequency-points of the groups T_{p1,p2} covering the lattice X_{3,6}.

These transforms can be performed in the following way:

Step 1: One 6-point DFT over the splitting-signal f_{T0,1}, to determine the 2-D DFT at frequency-points of the group T_{0,1}:

  F_{0,k} = (F_6 f_{T0,1})_k,  k = 0:5.

Step 2: Three incomplete 6-point DFTs over the splitting-signals f_{T1,1}, f_{T2,1}, and f_{T1,3}, to determine the 2-D DFT at all frequency-points of the corresponding groups T_{1,1}, T_{2,1}, and T_{1,3}, except the points (0, 0) and (0, 3):

  F_{k mod 3, k} = (F_{6;3} f_{T1,1})_k,  F_{2k mod 3, k} = (F_{6;3} f_{T2,1})_k,  F_{k mod 3, 3k mod 6} = (F_{6;3} f_{T1,3})_k,

where k = 1, 2, 4, 5. When the input is real, the 6-point DFT uses six operations of multiplication by the factors W6 = exp(−jπ/3) = (1 − j√3)/2, W3 = exp(−j2π/3) = −(1 + j√3)/2, and W3² = exp(−j4π/3) = (−1 + j√3)/2. We here consider the fast paired algorithm of the six-point DFT [16]. Each such multiplication requires one real multiplication by the factor √3 and two shifting operations. Thus six real operations of multiplication are required for the 6-point DFT. The incomplete Fourier transformation F_{6;3} also requires six multiplications. Therefore, the 3 × 6-point DFT uses 6 + 3(6) = 24 real multiplications. For comparison, in the column–row algorithm, three 6-point DFTs with real inputs and six 3-point DFTs with complex inputs are used, and the number of multiplications equals 3(6) + 6(2) = 30.
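For rectangular lattices, the cyclic groups have lcm(N1, N2) elements. A short sketch for Example 19.9 (helper name `T` is ours) confirming the covering of X_{3,6} and the two-point intersection pattern:

```python
from math import lcm
from itertools import combinations

# Example 19.9: four cyclic groups cover the 3x6 lattice and pairwise
# intersect only at (0,0) and (0,3).
N1, N2 = 3, 6

def T(p, s):
    return {((k * p) % N1, (k * s) % N2) for k in range(lcm(N1, N2))}

groups = [T(0, 1), T(1, 1), T(2, 1), T(1, 3)]
covered = set().union(*groups)
inter = set().union(*(a & b for a, b in combinations(groups, 2)))
print(len(covered), sorted(inter))  # 18 [(0, 0), (0, 3)]
```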
Example 19.10

We consider the 6 × 8-point Fourier transformation, F_{6,8}. Let f = {f_{n1,n2}} be a 6 × 8-point 2-D sequence. The irreducible covering σ_J of the lattice X_{6,8} can be defined by the set of generators J = {(1, 1), (2, 1), (1, 2), (1, 4), (1, 0)}. The cyclic groups of this covering and their intersections are shown below:

  T_{1,1} = {(0,0), (1,1), (2,2), (3,3), (4,4), (5,5), (0,6), (1,7),
             (2,0), (3,1), (4,2), (5,3), (0,4), (1,5), (2,6), (3,7),
             (4,0), (5,1), (0,2), (1,3), (2,4), (3,5), (4,6), (5,7)}
  T_{2,1} = {(0,0), (2,1), (4,2), (0,3), (2,4), (4,5), (0,6), (2,7),
             (4,0), (0,1), (2,2), (4,3), (0,4), (2,5), (4,6), (0,7),
             (2,0), (4,1), (0,2), (2,3), (4,4), (0,5), (2,6), (4,7)}
  T_{1,2} = {(0,0), (1,2), (2,4), (3,6), (4,0), (5,2),
             (0,4), (1,6), (2,0), (3,2), (4,4), (5,6)}
  T_{1,4} = {(0,0), (1,4), (2,0), (3,4), (4,0), (5,4)}
  T_{1,0} = {(0,0), (1,0), (2,0), (3,0), (4,0), (5,0)}    (19.84)

The locations of the frequency-points of the five cyclic groups T_{p,s} of the covering are shown in Figure 19.19. The groups intersect only at 12 frequency-points with even coordinates. Therefore, the transformation F_{6,8} can be split into one full 24-point DFT, one incomplete 24-point DFT, one incomplete 12-point DFT, and two incomplete 6-point DFTs. Namely, the following splitting is valid:
FIGURE 19.19 Arrangement of frequency-points of groups Tp,s covering the lattice X6,8.
F 6, 8 {F 24 , F 24;2 , F 12;2 , F 6;2 , F 6;2 }:
considering m12 ¼ 16 for complex data. The incomplete 24-point DFT, F 24;2 , requires 18 real multiplications. Thus, the above described 6 3 8-point 2-D DFT uses 36 þ 18 þ 4 þ 2 þ 2 ¼ 62 real multiplications for real data fn1 , n2 . It should be noted for the comparison, that the column–row algorithm for this transform uses six 8-point DFTs of real inputs and eight 6-point DFTs of complex inputs with total 6(2) þ 8(12) ¼ 108 real multiplications.
All these incomplete DFTs are not calculated for the components with even points, and, therefore, they can be reduced to calculation of the DFTs of twice smaller orders. Thus the 6 3 8-point DFT can be split by the 24-, 12-, 6-point DFTs, and two 3-point DFTs, and the redundancy of calculations in the tensor algorithm will be removed. These transforms can be performed in the following way: Step 1: One 24-point DFT over the splitting-signal fT1, 1 , to determine the 2-D DFT at frequency-points of the group T1,1, Fk mod 6, k mod 8
¼ F 24 fT1, 1 k , k ¼ 0 : 23:
Step 2: One incomplete 24-point DFT over the splitting-signal f_{T_{2,1}}, to determine the 2-D DFT at 12 frequency-points of the group T_{2,1} with odd coordinates,

F_{2k \bmod 6,\, k \bmod 8} = [\mathcal{F}_{24} f_{T_{2,1}}]_k,  k = 1, 3, ..., 21, 23.

Step 3: One incomplete 12-point DFT over the splitting-signal f_{T_{1,2}}, to determine the 2-D DFT at frequency-points of the group T_{1,2} with odd coordinates,

F_{k \bmod 6,\, 2k \bmod 8} = [\mathcal{F}_{12} f_{T_{1,2}}]_k,  k = 1, 3, ..., 9, 11.

Step 4: Two incomplete 6-point DFTs over the splitting-signals f_{T_{1,4}} and f_{T_{1,0}}, to determine respectively the 2-D DFT at frequency-points of the groups T_{1,4} and T_{1,0} with odd coordinates,

F_{k \bmod 6,\, 4k \bmod 8} = [\mathcal{F}_{6;2} f_{T_{1,4}}]_k,  F_{k,0} = [\mathcal{F}_{6;2} f_{T_{1,0}}]_k,  k = 1, 3, 5.

The locations of all 48 frequency-points of the cyclic groups of the covering s_J, at which the calculations of the Fourier transform components F_{p_1,p_2} are performed, are shown in Figure 19.20. To estimate the number of real multiplications used in this algorithm, we use the estimates for the fast 1-D DFT by paired transforms [16]. The numbers of multiplications for the incomplete transforms \mathcal{F}_{6;2} and \mathcal{F}_{12;2} over real data equal 2 and 4, respectively. The 24-point DFT of real data is reduced to the 12-point DFT with 10 additional multiplications by twiddle factors, which results in the total of 16 + 20 = 36 real multiplications.

FIGURE 19.20 Arrangement of the frequency-points of the groups T_{p,s} (T_{1,1}, T_{2,1}, T_{1,2}, T_{1,4}, and T_{1,0}) which divide the lattice X_{6,8}, after removing their intersections.

19.5.5 n-Dimensional DFT

The concepts of the tensor transform and representation can be extended to the n-dimensional DFT, when n > 2. Let f = {f_{n_1,...,n_n}} be an arbitrary n-dimensional sequence. For simplicity of the calculations, we consider the sizes of f to be equal, i.e., n_k = 0:(N − 1), k = 1:n. The n-dimensional DFT of the sequence f at the frequency-point (p_1, ..., p_n) ∈ X_{N,...,N}, accurate to the normalizing factor N^{n/2}, is defined as

F_{p_1,...,p_n} = \sum_{n_1=0}^{N-1} \cdots \sum_{n_n=0}^{N-1} f_{n_1,...,n_n} W^{n_1 p_1 + \cdots + n_n p_n},   (19.85)

where W = W_N = exp(−2πj/N). In the tensor representation, each spectral component F_{p_1,...,p_n} is represented by the corresponding vector of dimension N,

F_{p_1,...,p_n} → (f_{p_1,...,p_n,0}, f_{p_1,...,p_n,1}, ..., f_{p_1,...,p_n,N−1})',   (19.86)

whose superposition with the exponential wave of "the low frequency" ω_0 = 2π/N equals the component,

F_{p_1,...,p_n} = \sum_{t=0}^{N-1} f_{p_1,...,p_n,t} W^t.   (19.87)

To open the complex number F_{p_1,...,p_n} as a vector, we perform a summation of the initial n-dimensional sequence at spatial points of the following N disjoint subsets of the lattice X:

V_{p_1,...,p_n,t} = {(n_1, ..., n_n); \sum_{k=1}^{n} n_k p_k = t \bmod N} ∩ X.   (19.88)
where t = 0:(N − 1). In other words, the components of the vector in Equation 19.86 are calculated by

f_{p_1,...,p_n,t} = \sum_{V_{p_1,...,p_n,t}} f_{n_1,...,n_n},  t = 0:(N − 1).   (19.89)
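The construction in Equations 19.85 through 19.89 can be checked numerically. The following sketch (my own, not from the book; it assumes numpy and takes n = 2 as the simplest instance) builds the sets V_{p_1,p_2,t}, sums the sequence over them, and verifies that the superposition of Equation 19.87 reproduces the directly computed DFT component:

```python
# Check of Equations 19.85-19.89 for n = 2: the component F_{p1,p2} equals the
# weighted sum of the splitting-signal components f_{p1,p2,t} with weights W^t.
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.integers(0, 10, size=(N, N)).astype(float)
W = np.exp(-2j * np.pi / N)                      # W = W_N = exp(-2*pi*j/N)

n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

for (p1, p2) in [(1, 1), (2, 3), (4, 1)]:        # arbitrary frequency-points
    # f_{p1,p2,t}: sums of f over the sets V_{p1,p2,t} of Equation 19.88
    t_map = (n1 * p1 + n2 * p2) % N
    comp = np.array([f[t_map == t].sum() for t in range(N)])
    # Equation 19.85 directly, versus Equation 19.87
    F_direct = (f * W ** (n1 * p1 + n2 * p2)).sum()
    assert np.isclose(comp @ W ** np.arange(N), F_direct)
print("Equation 19.87 verified for the tested frequency-points")
```

The identity holds because W^N = 1, so reducing the exponent n_1 p_1 + n_2 p_2 modulo N does not change the weight.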
The following assertion holds:

F_{kp_1,...,kp_n} = \sum_{t=0}^{N-1} f_{p_1,...,p_n,t} W^{kt},  k = 0:(N − 1).   (19.90)

The frequency-points (kp_1, ..., kp_n) = (kp_1 \bmod N, ..., kp_n \bmod N) compose the cyclic group T_{p_1,...,p_n} = {(kp_1, ..., kp_n); k = 0:(N − 1)}. The 1-D signal

f_{T_{p_1,...,p_n}} = {f_{p_1,...,p_n,t},  t = 0:(N − 1)}   (19.91)

is referred to as the splitting-signal, which carries the spectral information of the n-dimensional sequence f at frequency-points of T_{p_1,...,p_n}. To collect the whole spectral information of the sequence, we need to cover the lattice X by the cyclic groups. Let s = {T_{p_1,...,p_n}} be an irreducible covering of the lattice X. Then, the representation of the n-dimensional sequence as a set of splitting-signals

f_{n_1,...,n_n} → {f_{T_{p_1,...,p_n}};  T_{p_1,...,p_n} ∈ s}

is called the tensor transform of f. Thus, in the tensor representation, the n-dimensional sequence is considered as the set of the splitting-signals. The number of splitting-signals increases with the dimension n, but the lengths of all signals are the same and equal N.

19.6 Discrete Hartley Transforms

In this section, we consider the DHT [24,25], whose kernel is the sum of the cosine and sine waves of the exponential kernel of the Fourier transformation. The Hartley transform of real data is real, and it has many properties similar to those of the Fourier transform. From the standpoint of arithmetical computation, both transforms have almost the same complexity in the multidimensional case, as well as in the 1-D case. Both transformations lead to the same tensor representation of multidimensional signals and images, and the transforms are thus split by the same number of 1-D transforms. We first describe the 2-D case and then the 3-D case, which at the same time will illustrate the tensor transformation for the 3-D Fourier transform.

We denote by H_{N,N} the N × N-point DHT, whose image H_{N,N} f of an N × N sequence f = {f_{n_1,n_2}} is defined as:

H_{p_1,p_2} = (H_{N,N} f)_{p_1,p_2} = \sum_{n_1=0}^{N-1} \sum_{n_2=0}^{N-1} f_{n_1,n_2} \mathrm{Cas}(n_1 p_1 + n_2 p_2),  (p_1, p_2) ∈ X_{N,N},   (19.92)

where the transform kernel is the real periodic function

\mathrm{Cas}(x) = \mathrm{Cas}_N(x) = \mathrm{cas}\frac{2\pi x}{N} = \cos\frac{2\pi x}{N} + \sin\frac{2\pi x}{N}.   (19.93)

The above-defined 2-D DHT is a real-to-real and nonseparable transform. The inversion formula for the DHT coincides with the initial formula (accurate to the factor 1/N²), i.e., H^{-1}_{N,N} = (1/N²) H_{N,N}. The 1-D N-point DHT of a 1-D sequence f = {f_n} is defined by

H_p = (H_N f)_p = \sum_{n=0}^{N-1} f_n \mathrm{Cas}(np),  p = 0:(N − 1).   (19.94)

For a given frequency-point (p_1, p_2) of the lattice X_{N,N}, the following property holds:

H_{kp_1,kp_2} = \sum_{t=0}^{N-1} f_{p_1,p_2,t} \mathrm{Cas}(kt),  k = 0:(N − 1),   (19.95)

where f_{p_1,p_2,t} are components of the splitting-signals, which are defined exactly as for the Fourier transform,

f_{p_1,p_2,t} = \sum_{V_{p_1,p_2,t}} f_{n_1,n_2},  t = 0:(N − 1).   (19.96)
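Equation 19.95 can be verified directly. The sketch below (mine, not the book's; it assumes numpy and picks the generator (4, 1) used later in Figure 19.21) checks that the 2-D DHT values on the cyclic group T_{4,1} coincide with the 1-D DHT of the splitting-signal:

```python
# Check of Equation 19.95: the 2-D DHT on the group T_{p1,p2} equals the
# 1-D DHT of the splitting-signal built from the same sets V_{p1,p2,t}.
import numpy as np

N = 8
rng = np.random.default_rng(1)
f = rng.integers(0, 10, size=(N, N)).astype(float)

def cas(x):
    return np.cos(x) + np.sin(x)      # cas kernel of Equation 19.93 (radians)

n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
p1, p2 = 4, 1                          # generator of the group T_{4,1}
comp = np.array([f[(n1 * p1 + n2 * p2) % N == t].sum() for t in range(N)])

for k in range(N):
    H_direct = (f * cas(2 * np.pi * (n1 * k * p1 + n2 * k * p2) / N)).sum()
    H_signal = (comp * cas(2 * np.pi * k * np.arange(N) / N)).sum()
    assert np.isclose(H_direct, H_signal)
print("Equation 19.95 verified for the group T_{4,1}")
```

Since the cas kernel has the same Diophantine form n_1 p_1 + n_2 p_2 in its argument as the Fourier kernel, the same splitting-signal serves both transforms.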
The uniqueness of the tensor representation for the Hartley and Fourier transforms follows from the identical form of the relation between the spatial points (n_1, n_2) and frequency-points (p_1, p_2) in the kernels of these transforms. This is the Diophantine form L(n_1, n_2; p_1, p_2) = n_1 p_1 + n_2 p_2. Therefore, if s_J is the irreducible covering of the lattice X_{N,N} composed from the cyclic groups T_{p,s}, then the 2-D DHT is split by card(J) 1-D DHTs. The splittings of the 2-D DFT and DHT are similar; both are defined by the same number of 1-D transforms. We can thus apply all the reasoning used for the Fourier transform to the Hartley transform in the 2-D case. The tensor algorithm of calculation of the N × N-point 2-D DHT uses

m_{N,N} = (\mathrm{card}\, J)\, m_N   (19.97)

operations of multiplication, where m_N denotes the number of multiplications for the 1-D N-point DHT. In many cases of N, we have card J < 2N; therefore the tensor algorithm uses fewer multiplications than the column–row method does for the separable 2-D DHT.
Multidimensional Discrete Unitary Transforms
For instance, when N is an odd prime L > 2, the calculation of the L × L-point DHT is reduced to calculations of (L + 1) 1-D L-point DHTs, and it is sufficient to fulfill

m_{L,L} = (L + 1) m_L   (19.98)

operations of multiplication. We can compare this estimate with the known estimate \bar{m}_{L,L} = L² + 2L − 3, which was obtained by Boussakta and Holt by using an index-mapping scheme for calculating the L × L-point DHT [26]. Using the estimate m_L = (L − 1) introduced with the Fermat number transform [27], we gain the following number of multiplications:

Δ(L) = \bar{m}_{L,L} − m_{L,L} = (L² + 2L − 3) − (L + 1)(L − 1) = 2(L − 1).   (19.99)

Example 19.11

Let N = 3 and f be a 2-D sequence {f_{n_1,n_2}; n_1, n_2 = 0, 1, 2}. The tensor algorithm of the 3 × 3-point DHT of f uses 4 real multiplications and 45 real additions if f is real. Indeed, using the covering s = (T_{0,1}, T_{1,1}, T_{2,1}, T_{1,0}), the 3 × 3-point DHT reduces to the calculation of the four splitting-signals f_{T_{0,1}}, f_{T_{1,1}}, f_{T_{2,1}}, and f_{T_{1,0}}, as described in Example 19.6. Then one 3-point DHT of the first splitting-signal is calculated, and three incomplete 3-point DHTs of the other signals.

In matrix form, the tensor algorithm of the 3 × 3-point DHT can be written as follows:

(H_{0,0}, H_{0,1}, H_{0,2} \mid H_{1,1}, H_{2,2} \mid H_{2,1}, H_{1,2} \mid H_{1,0}, H_{2,0})' = \mathrm{diag}(A, B, B, B)\, [X_s] f,

with the full and incomplete 3-point Hartley matrices

A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & C_1 & C_2 \\ 1 & C_2 & C_1 \end{bmatrix},  B = \begin{bmatrix} 1 & C_1 & C_2 \\ 1 & C_2 & C_1 \end{bmatrix},

where the coefficients of the Hartley matrices equal C_1 = Cas_3(1) and C_2 = Cas_3(2). Each incomplete Hartley transform uses one real multiplication and five additions. Indeed, since Cas(1) = cos(2π/3) + sin(2π/3) and Cas(2) = cos(2π/3) − sin(2π/3), we can write the transform as follows:

\begin{bmatrix} 1 & C_1 & C_2 \\ 1 & C_2 & C_1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x + (c_1 + s_1)y + (c_1 − s_1)z \\ x + (c_1 − s_1)y + (c_1 + s_1)z \end{bmatrix} = \begin{bmatrix} x + c_1(y + z) + s_1(y − z) \\ x + c_1(y + z) − s_1(y − z) \end{bmatrix},  c_1 = −\frac{1}{2},  s_1 = \frac{\sqrt{3}}{2}.   (19.101)

The multiplication c_1(y + z) is the only nontrivial multiplication used in this transform. The 3-point DHT requires one more addition for calculating the first component at point 0, as x + (y + z). The direct calculation of the tensor transform [X_s] f requires 24 additions (subtractions). Therefore, to calculate the 3 × 3-point DHT by the tensor algorithm, it is sufficient to use 6 + 3·5 + 24 = 45 real additions. In the column–row algorithm, the calculation of the 3 × 3 separable DHT is fulfilled via six 3-point DHTs. Therefore, respectively 6·1 = 6 and 6·6 = 36 real operations of multiplication and addition are used in such an algorithm, i.e., two multiplications more, but nine additions fewer than in the tensor algorithm.

In the general N × N case, the construction of the irreducible covering

s_J = (T_{p,s})_{p,s∈J}   (19.100)

of the square lattice X_{N,N} can be implemented in the following way [16]. We first define the set B_N = {n ∈ X_N; g.c.d.(n, N) > 1} and the function b(p), which equals the number of elements s ∈ B_N coprime with p and such that ps ≤ N. Then the set of generators can be defined as follows:

J = \bigcup_{p=0}^{N-1} (p, 1) ∪ \bigcup_{s∈B_N} (1, s) ∪ \bigcup_{p,s∈B_N,\ \mathrm{g.c.d.}(p,s)=1,\ ps≤N} (p, s).   (19.102)

The number of generators in this set, or the number of 1-D DHTs required to calculate the 2-D N × N-point DHT, equals

\mathrm{card}\, s_J = 2N − φ(N) + \sum_{p∈B_N} b(p),   (19.103)

where we denote by φ(N) Euler's function, that is, the number of positive integers which are smaller than N and coprime with N. It is easy to verify that φ(N) ≥ Σ{b(p); p ∈ B_N}, so that card s ≤ 2N. Herewith, the equality in this expression takes place when L_1 = 2 and L_2 = 3 (or when L_1 = 3 and L_2 = 2), since card s = 2(3) + 2 + 3 + 1 = 2(6) = 12. In this case, the column–row and tensor algorithms use the same number, twelve, of the 6-point DHTs.

When r ≥ 2 and L is a prime, the calculation of the L^r × L^r-point 2-D DHT is reduced to calculations of (L + 1)L^{r−1} 1-D L^r-point DHTs. The covering of the lattice X_{L^r,L^r} is defined as

s_J = ((T_{p,1})_{p=0:(L^r−1)},\ (T_{1,Ls})_{s=0:(L^{r−1}−1)}).   (19.104)

For example, the covering of the lattice X_{9,9} equals

s_J = ({T_{p,1}; p = 0:8},\ T_{1,0},\ T_{1,3},\ T_{1,6}).   (19.105)

Therefore, the 9 × 9-point 2-D DHT is calculated in the tensor algorithm by 12 1-D 9-point DHTs. In the column–row algorithm, this 2-D transform requires 18 9-point DHTs.
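That the 12 groups of Equation 19.105 really exhaust the lattice can be confirmed with a few lines of code. This is my own check (assuming numpy-free standard Python), not an algorithm from the book:

```python
# Verify that the covering of Equation 19.105 by 12 cyclic groups T_{p,s}
# exhausts all 81 frequency-points of the lattice X_{9,9}.
N = 9
generators = [(p, 1) for p in range(N)] + [(1, 0), (1, 3), (1, 6)]

covered = set()
for (p, s) in generators:
    # cyclic group T_{p,s} = {(k*p mod N, k*s mod N); k = 0:(N-1)}
    covered |= {(k * p % N, k * s % N) for k in range(N)}

assert len(generators) == 12
assert len(covered) == N * N          # every point of X_{9,9} is covered
print("12 groups cover all", len(covered), "points")
```

The same loop, run over the generators of Equation 19.102, can be used to test irreducible coverings for other composite orders N.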
FIGURE 19.21 (a) The image 256 × 256, (b) the image-signal f_{T_{4,1}}, (c) the 1-D DHT of the signal, (d) the 1-D DFT of the signal (in the absolute scale), (e) the 2-D DHT, and (f) the 2-D DFT of the image. (The frequency-points of the group T_{4,1} are marked on the 2-D transforms.)
In the L = 2 case, the calculation of the 2^r × 2^r-point DHT of a 2-D sequence f_{n_1,n_2} is performed by 3·2^{r−1} 2^r-point DHTs of the splitting-signals f_{T_{p,s}} with generators (p, s) ∈ J. This is the main point in the tensor representation of the 2-D DHT, and of the 2-D DFT as well. The image in the tensor representation is considered not in the form of the 2-D square matrix in the spatial domain, but as another figure in the 3-D domain, namely the (2-D frequency)–(1-D time) domain,

f_{n_1,n_2} → {f_{p,s,t}; t = 0:(N − 1), (p, s) ∈ J}.

As an example, Figure 19.21 shows the original image of size 256 × 256 in part (a), along with the image-signal f_{T_{4,1}} of length 256 in part (b), the 256-point DHT of the signal in (c), and the magnitude of the Fourier transform of the signal in (d). For both transforms, this image-signal carries the spectral information of the image at frequency-points of the cyclic group T_{4,1}. The 2-D DHT and DFT of the image are shown in parts (e) and (f), respectively. The locations of all frequency-points (p_1, s_1) of this group in the frequency domain of both transforms are also shown. These points lie along four parallel lines at the angle arctan(4) ≈ 75.9638° to the verticals (which are assigned to the first coordinates p_1 of the frequency-points). The complete set of image-signals that represent a 2-D sequence (or image), as well as its 2-D DHT and DFT, can be shown in 3-D space in different ways. As an example, Figure 19.22 shows a 3-D figure for the set of all 384 image-signals f_{T_{p,s}}, (p, s) ∈ J, of the above-considered 256 × 256 image. Image-signals of length 256 are located along 384 directions in a ring with an inner circle of radius 32.

FIGURE 19.22 The 3-D ring with 384 image-signals f_{T_{p,s}} of the image 256 × 256.
The presentation of the image f in the form of the ring is performed through the transform:

X_s: f → {f_{p,s,t}; (p, s) ∈ J, t = 0:(N − 1)},   (19.106)

where the set J of the generators (p, s) of the sets T ∈ s is defined by Equation 19.102. We call the transformation X_s the tensor, or vector, transformation, because it transfers the image f into the set of card s vectors, or image-signals, f_{T_{p,s}}, (p, s) ∈ J. The number of multiplications used in the 2^r × 2^r-point DHT can be estimated as follows:

m_{2^r,2^r} = 3·2^{r−1} m_{2^r} = 3·4^{r−1}(r − 3) + 3·2^r.   (19.107)

Here we use the well-known fact that, for computing the 2^r-point DHT, it is enough to fulfill m_{2^r} = 2^{r−1}(r − 3) + 2 multiplications [28,29]. Comparing this estimate with the number of multiplications \bar{m}_{2^r,2^r} = 2^{r+1} m_{2^r} = 4^r(r − 3) + 2^{r+2} obtained in the Bracewell algorithm [30], we obtain that \bar{m}_{2^r,2^r}/m_{2^r,2^r} = 4/3. In other words, the number of multiplications reduces 4/3 times. The number of multiplications can be reduced further by removing the redundancy of calculations at the intersections of the cyclic groups, as was done for the 2-D DFT, when we constructed the improved and recurrent tensor algorithms.

19.6.1 3-D DHT Tensor Representation

In this section, we describe the tensor representation for dividing the calculation of the nonseparable 3-D DHT by 1-D DHTs. For simplicity of the calculations, we discuss the case of the transform of order N × N × N, when N = 2^r, r > 1. However, the concept of tensor representation can be applied to the 3-D DHT of an arbitrary order. It will be shown that the number of multiplications required for calculating the 3-D DHT can be reduced to 7[8^{r−1}(r − 3) + 2·4^{r−1}]. This number can be reduced by about 1.6 times when removing the redundancy of the tensor algorithm, which occurs because of the intersections of many cyclic groups covering the 3-D lattice of frequency-points. Such an improvement, or the recurrent tensor algorithm, can be obtained similarly to the algorithms described in Sections 19.5.3 and 19.5.4 for the 2-D DFT.

Let X be the cubic N × N × N lattice

X_{N,N,N} = {(p_1, p_2, p_3); p_1, p_2, p_3 = 0:(N − 1)}.   (19.108)

We denote by H_{N,N,N} the N × N × N-point DHT, whose image H_{N,N,N} f on a 3-D sequence f = {f_{n_1,n_2,n_3}} is defined as follows:

H_{p_1,p_2,p_3} = (H_{N,N,N} f)_{p_1,p_2,p_3} = \sum_{n_1=0}^{N-1} \sum_{n_2=0}^{N-1} \sum_{n_3=0}^{N-1} f_{n_1,n_2,n_3} \mathrm{Cas}(n_1 p_1 + n_2 p_2 + n_3 p_3),   (19.109)

where (p_1, p_2, p_3) ∈ X. The tensor representation of f is defined by the irreducible covering s_J of the lattice X, which is composed of the following cyclic groups in X:

T = T_{p_1,p_2,p_3} = {(kp_1, kp_2, kp_3); k = 0:(N − 1)},  (T_{0,0,0} = {(0, 0, 0)}).   (19.110)

For a given frequency-point (p_1, p_2, p_3) ≠ (0, 0, 0), the collection of subsets {V_{p_1,p_2,p_3,t}; t = 0:(N − 1)} is a partition of X. Therefore, the following property holds for the spectral components of the 3-D DHT:

H_{kp_1,kp_2,kp_3} = \sum_{t=0}^{N-1} f_{p_1,p_2,p_3,t} \mathrm{Cas}(kt),  k = 0:(N − 1).   (19.111)

Thus the splitting-signal

f_{T_{p_1,p_2,p_3}} = (f_{p_1,p_2,p_3,0}, f_{p_1,p_2,p_3,1}, ..., f_{p_1,p_2,p_3,N−1})   (19.112)

carries the spectral information of the 3-D sequence f at frequency-points of T_{p_1,p_2,p_3}. The complete set of splitting-signals f_T, T ∈ s, is the tensor representation of the sequence f with respect to the 3-D DHT. With respect to the 3-D Fourier transform, the sequence f has the same tensor representation, as mentioned in Section 19.5.5. The components of the splitting-signals are calculated by the sums

f_{p_1,p_2,p_3,t} = \sum_{V_{p_1,p_2,p_3,t}} f_{n_1,n_2,n_3},  t = 0:(N − 1),   (19.113)

along the parallel hyperplanes lying in the sets

V_{p_1,p_2,p_3,t} = {(n_1, n_2, n_3); n_1 p_1 + n_2 p_2 + n_3 p_3 = t \bmod N}.   (19.114)

Indeed, each set V_{p_1,p_2,p_3,t}, if it is not empty, is the set of spatial points (n_1, n_2, n_3) along the parallel hyperplanes

xp_1 + yp_2 + zp_3 = t,  xp_1 + yp_2 + zp_3 = t + N,  ...,  xp_1 + yp_2 + zp_3 = t + (p_1 + p_2 + p_3 − 1)N   (19.115)

in the cube [0, N] × [0, N] × [0, N]. The number of 1-D DHTs splitting the 3-D DHT equals the number of generators of the cyclic groups of the irreducible covering s_J of the lattice X. The set of these generators can be constructed for any order of the transform, and we dwell here on the examples when N = 4 and 8.
Example 19.12

We consider the lattice 4 × 4 × 4 and the following 3-D image f = {f_{n_1,n_2,n_3}}, presented separately by four 2-D matrices (rows n_1 = 0:3, columns n_2 = 0:3) in the planes n_3 = 0, 1, 2, and 3:

n_3 = 0:      n_3 = 1:      n_3 = 2:      n_3 = 3:
1 2 3 1       2 2 1 5       3 1 2 1       1 2 2 3
1 0 1 2       4 3 3 1       1 2 1 6       1 2 3 3
2 1 2 1       1 2 4 4       3 2 4 1       1 1 4 4
3 2 1 2       5 1 3 1       4 5 2 4       1 5 5 4

The value f_{0,0,0} = 1 is the top-left entry of the first matrix.

Let the generator be (p_1, p_2, p_3) = (2, 1, 1). All values of the time variable t in the Diophantine form n_1 p_1 + n_2 p_2 + n_3 p_3 = t \bmod N can be written in the form of the following four 4 × 4 matrices, t = (2n_1 + n_2 + n_3) \bmod 4, that compose a 3-D matrix 4 × 4 × 4:

n_3 = 0:      n_3 = 1:      n_3 = 2:      n_3 = 3:
0 1 2 3       1 2 3 0       2 3 0 1       3 0 1 2
2 3 0 1       3 0 1 2       0 1 2 3       1 2 3 0
0 1 2 3       1 2 3 0       2 3 0 1       3 0 1 2
2 3 0 1       3 0 1 2       0 1 2 3       1 2 3 0

Therefore, the image-signal f_{T_{2,1,1}} of f is defined as follows:

f_{2,1,1,0} = (f_{0,0,0} + f_{1,2,0} + f_{2,0,0} + f_{3,2,0}) + (f_{0,3,1} + f_{1,1,1} + f_{2,3,1} + f_{3,1,1}) + (f_{0,2,2} + f_{1,0,2} + f_{2,2,2} + f_{3,0,2}) + (f_{0,1,3} + f_{1,3,3} + f_{2,1,3} + f_{3,3,3}) = 5 + 13 + 11 + 10 = 39,
f_{2,1,1,1} = (f_{0,1,0} + f_{1,3,0} + f_{2,1,0} + f_{3,3,0}) + (f_{0,0,1} + f_{1,2,1} + f_{2,0,1} + f_{3,2,1}) + (f_{0,3,2} + f_{1,1,2} + f_{2,3,2} + f_{3,1,2}) + (f_{0,2,3} + f_{1,0,3} + f_{2,2,3} + f_{3,0,3}) = 7 + 9 + 9 + 8 = 33,
f_{2,1,1,2} = (f_{0,2,0} + f_{1,0,0} + f_{2,2,0} + f_{3,0,0}) + (f_{0,1,1} + f_{1,3,1} + f_{2,1,1} + f_{3,3,1}) + (f_{0,0,2} + f_{1,2,2} + f_{2,0,2} + f_{3,2,2}) + (f_{0,3,3} + f_{1,1,3} + f_{2,3,3} + f_{3,1,3}) = 9 + 6 + 9 + 14 = 38,
f_{2,1,1,3} = (f_{0,3,0} + f_{1,1,0} + f_{2,3,0} + f_{3,1,0}) + (f_{0,2,1} + f_{1,0,1} + f_{2,2,1} + f_{3,0,1}) + (f_{0,1,2} + f_{1,3,2} + f_{2,1,2} + f_{3,3,2}) + (f_{0,0,3} + f_{1,2,3} + f_{2,0,3} + f_{3,2,3}) = 4 + 14 + 13 + 10 = 41.

Thus f_{T_{2,1,1}} = {39, 33, 38, 41}, and the 4-point DHT of this splitting-signal is calculated by

\begin{bmatrix} H_0 \\ H_1 \\ H_2 \\ H_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & −1 & −1 \\ 1 & −1 & 1 & −1 \\ 1 & −1 & −1 & 1 \end{bmatrix} \begin{bmatrix} 39 \\ 33 \\ 38 \\ 41 \end{bmatrix} = \begin{bmatrix} 151 \\ −7 \\ 3 \\ 9 \end{bmatrix}.

This transform coincides with the 3-D DHT of f at the frequency-points (0, 0, 0), (2, 1, 1), (0, 2, 2), and (2, 3, 3) of the group T_{2,1,1}, i.e., H_{0,0,0} = 151, H_{2,1,1} = −7, H_{0,2,2} = 3, and H_{2,3,3} = 9.

We can construct other splitting-signals similarly and fill the 3-D DHT by the 1-D DHTs of these signals. For instance, when the generator is (1, 1, 1), we obtain the splitting-signal f_{T_{1,1,1}} = {45, 30, 46, 30}, and its 4-point DHT equals

\begin{bmatrix} 151 \\ −1 \\ 31 \\ −1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & −1 & −1 \\ 1 & −1 & 1 & −1 \\ 1 & −1 & −1 & 1 \end{bmatrix} \begin{bmatrix} 45 \\ 30 \\ 46 \\ 30 \end{bmatrix}.

This transform coincides with the 3-D DHT at the frequency-points of the group T_{1,1,1}, i.e., H_{1,1,1} = −1, H_{2,2,2} = 31, and H_{3,3,3} = −1. The first value, 151 = H_{0,0,0}, was already calculated in the previous step.

We consider also the generator (1, 2, 0), for which the splitting-signal and its Hartley transform are defined as follows:

f_{T_{1,2,0}} = {31, 39, 38, 43} → H_4 f_{T_{1,2,0}} = {151, −11, −13, −3}.

As a result, we define three more components of the 3-D DHT at the frequency-points of the group T_{1,2,0}, namely H_{1,2,0} = −11, H_{2,0,0} = −13, and H_{3,2,0} = −3.

To calculate all values of the 3-D DHT, we need to cover the 3-D lattice 4 × 4 × 4 by cyclic groups T_{p_1,p_2,p_3}. No more than 28 generators are required for such a covering s_J, and they can be taken from the following set:

J = {(p, 1, z); p, z = 0:3} ∪ {(1, 2s, z); z = 0:3, s = 0, 1} ∪ {(0, 2, 1), (2, 2, 1), (2, 0, 1), (0, 0, 1)}.

Therefore, the 4 × 4 × 4-point DHT can be calculated by 28 four-point DHTs.
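Every number in Example 19.12 can be reproduced programmatically. The sketch below (my reconstruction with numpy, using the image entries as printed in the example) recomputes the three splitting-signals and their 4-point DHTs:

```python
# Reproduce the splitting-signals and DHT values of Example 19.12.
import numpy as np

# f[n1, n2, n3]: the four planes n3 = 0, 1, 2, 3 of the 4 x 4 x 4 image
f = np.zeros((4, 4, 4))
f[:, :, 0] = [[1, 2, 3, 1], [1, 0, 1, 2], [2, 1, 2, 1], [3, 2, 1, 2]]
f[:, :, 1] = [[2, 2, 1, 5], [4, 3, 3, 1], [1, 2, 4, 4], [5, 1, 3, 1]]
f[:, :, 2] = [[3, 1, 2, 1], [1, 2, 1, 6], [3, 2, 4, 1], [4, 5, 2, 4]]
f[:, :, 3] = [[1, 2, 2, 3], [1, 2, 3, 3], [1, 1, 4, 4], [1, 5, 5, 4]]

n1, n2, n3 = np.meshgrid(*[np.arange(4)] * 3, indexing="ij")

def splitting_signal(p1, p2, p3, N=4):
    t_map = (n1 * p1 + n2 * p2 + n3 * p3) % N        # Diophantine form mod N
    return np.array([f[t_map == t].sum() for t in range(N)])

def dht(x):
    N = len(x)
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return (np.cos(2 * np.pi * k * n / N) + np.sin(2 * np.pi * k * n / N)) @ x

assert np.allclose(splitting_signal(2, 1, 1), [39, 33, 38, 41])
assert np.allclose(splitting_signal(1, 1, 1), [45, 30, 46, 30])
assert np.allclose(splitting_signal(1, 2, 0), [31, 39, 38, 43])
assert np.allclose(dht(splitting_signal(2, 1, 1)), [151, -7, 3, 9])
assert np.allclose(dht(splitting_signal(1, 1, 1)), [151, -1, 31, -1])
assert np.allclose(dht(splitting_signal(1, 2, 0)), [151, -11, -13, -3])
print("Example 19.12 splitting-signals and DHT values verified")
```

Looping `splitting_signal` and `dht` over the 28 generators of the set J fills the complete 4 × 4 × 4-point DHT.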
Example 19.13

The number of 8-point DHTs required for calculation of the 8 × 8 × 8-point DHT by using the tensor representation equals the minimum number of cyclic groups T_{p_1,p_2,p_3} covering the 3-D lattice 8 × 8 × 8. The generators (p_1, p_2, p_3) of these groups can be defined from the following set of 112 triplets:

J = {(p, 1, z); p, z = 0:7} ∪ {(1, 2s, z); z = 0:7, s = 0:3} ∪ {(2p, 2, z); z = 1, 3, p = 0:3} ∪ {(2, 4s, z); z = 1, 3, s = 0, 1} ∪ {(0, 4, 1), (4, 4, 1), (4, 0, 1), (0, 0, 1)}.

Thus, from the perspective of the tensor representation, the 8 × 8 × 8-point DHT is calculated by 112 8-point DHTs. It should be noted that the 3-D DHT under consideration is not separable, but it can be expressed through the separable 3-D DFT. Indeed, the kernel of the Hartley transform is the sum of the real and imaginary parts of the exponential kernel of the Fourier transform,

\mathrm{cas}(t) = \cos(t) + \sin(t) = \mathrm{Re}(e^{−jt}) − \mathrm{Im}(e^{−jt}).

Therefore, the following relation holds:

H_{p_1,p_2,p_3} = \mathrm{Re}\, F_{p_1,p_2,p_3} − \mathrm{Im}\, F_{p_1,p_2,p_3}.   (19.116)

The 3-D DFT of order 8 × 8 × 8 has a splitting similar to that of the Hartley transform, and it is calculated by 112 eight-point DFTs. For comparison, we consider the column–row approach for calculating the 3-D DFT, which is based on the expression

F_{p_1,p_2,p_3} = \sum_{n_3=0}^{7} \left( \sum_{n_2=0}^{7} \sum_{n_1=0}^{7} f_{n_1,n_2,n_3} W^{n_1 p_1 + n_2 p_2} \right) W^{n_3 p_3},   (19.117)

where W = exp(−2πj/8). In this approach, eight 8 × 8-point DFTs are calculated first, and then 64 eight-point DFTs along the third dimension n_3. In total, 8(2·8) + 64 = 3·64 = 192 eight-point DFTs are used, or 80 eight-point DFTs more than the tensor transform method uses. When using the row–column approach to the 3-D DFT of order N × N × N, as in the above N = 8 case, all N-point DFTs in the calculations have complex input data, except the first N transforms in each (n_1, n_2)-plane, if the 3-D sequence f is real. However, all 1-D DFTs in the tensor algorithm are performed over the real splitting-signals, when f is real.

In the general case, when N = 2^r, r ≥ 1, we can construct the irreducible covering s_J of the 3-D lattice X_{2^r,2^r,2^r} by using the following set of 7·4^{r−1} generators:

J = \bigcup_{z=0}^{2^r−1} \left( {(1, s, z); s = 0:(2^r − 1)} ∪ {(2p, 1, z); p = 0:(2^{r−1} − 1)} \right) ∪ \bigcup_{k=1}^{r−1} \bigcup_{z=1}^{2^{r−k−1}} \left( {(2^k, 2^k s, 2^k z + 1); s = 0:(2^{r−k} − 1)} ∪ {(2^{k+1} p, 2^k, 2^k z + 1); p = 0:(2^{r−k−1} − 1)} \right) ∪ {(0, 0, 1)}.

Therefore, the 2^r × 2^r × 2^r-point DHT (or DFT) can be split into 7·4^{r−1} 1-D 2^r-point DHTs (or DFTs). To estimate the total number of multiplications required to calculate the 2^r × 2^r × 2^r-point DHT, we use the following estimate for the 2^r-point DHT:

m_{2^r} = 2^{r−1}(r − 3) + 2,  (r ≥ 3).   (19.118)

The 3-D DHT by the tensor transform uses operations of multiplication in the number

m_{2^r,2^r,2^r} = 7·4^{r−1} m_{2^r} = 7·8^{r−1}(r − 3) + 14·4^{r−1}.   (19.119)

For comparison, Table 19.3 shows the number of multiplications required by the column–row approach based on the radix-2 algorithm [31,32], by the radix-2 × 2 × 2 algorithm [33], and by the tensor algorithm, for calculating the 2^r × 2^r × 2^r-point DHT when r = 7:12. In the radix-2 × 2 × 2 algorithm, the 3-D DHT is divided into eight 2^{r−1} × 2^{r−1} × 2^{r−1}-point DHTs, and the process of division is similarly repeated (r − 2) times, until transforms of order 2 × 2 × 2 are reached.
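The relation of Equation 19.116 is easy to confirm numerically. The following sketch (mine, not from the book; it assumes numpy, whose `fftn` uses the same kernel exp(−2πj·/N)) computes a nonseparable 3-D DHT component directly and via the separable 3-D DFT:

```python
# Check of Equation 19.116: H = Re(F) - Im(F), since
# cas(t) = cos(t) + sin(t) = Re(exp(-jt)) - Im(exp(-jt)).
import numpy as np

N = 8
rng = np.random.default_rng(2)
f = rng.standard_normal((N, N, N))

F = np.fft.fftn(f)                      # separable 3-D DFT, kernel exp(-2*pi*j*(...)/N)
H_from_F = F.real - F.imag              # Equation 19.116

# direct (nonseparable) 3-D DHT at a few frequency-points
n = np.arange(N)
n1, n2, n3 = np.meshgrid(n, n, n, indexing="ij")
for (p1, p2, p3) in [(0, 0, 0), (1, 2, 3), (7, 7, 7)]:
    arg = 2 * np.pi * (n1 * p1 + n2 * p2 + n3 * p3) / N
    H_direct = (f * (np.cos(arg) + np.sin(arg))).sum()
    assert np.isclose(H_direct, H_from_F[p1, p2, p3])
print("H = Re(F) - Im(F) verified")
```

This identity is what lets the splitting results for the multidimensional DFT carry over to the DHT unchanged.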
19.6.2 n-Dimensional DHT

The tensor transform of an n-dimensional sequence f defines the splitting-signals in the number determined by the covering s_J of the lattice X. In the 2-D case, the square lattice X_{2^r,2^r} is covered by 3·2^{r−1} cyclic groups T. In the 3-D case, the cubic lattice X_{2^r,2^r,2^r} is covered by 7·4^{r−1} groups T.

TABLE 19.3 The Number of Multiplications per Sample Required to Calculate the 2^r × 2^r × 2^r-Point DHT by the Column–Row Radix-2 Algorithm (C–R), the Radix-2 × 2 × 2 Algorithm, and the Tensor Algorithm (T)

  r    C–R Radix-2    Radix-2 × 2 × 2      T
  7       10.64             6.20         3.53
  8       13.57             7.91         4.39
  9       16.53             9.64         5.26
 10       19.51            11.38         6.13
 11       22.50            13.13         7.00
 12       25.50            14.87         7.88
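The tensor-algorithm column of Table 19.3 follows directly from Equations 19.118 and 19.119. The few lines below (my arithmetic, not code from the book) reproduce it:

```python
# Reproduce the T column of Table 19.3: multiplications per sample of the
# 2^r x 2^r x 2^r cube, from Equations 19.118 and 19.119.
def mults_per_sample(r):
    m_1d = 2 ** (r - 1) * (r - 3) + 2          # 1-D 2^r-point DHT, Eq. 19.118
    total = 7 * 4 ** (r - 1) * m_1d            # tensor algorithm, Eq. 19.119
    return total / 8 ** r                      # divide by the number of samples

for r in range(7, 13):
    print(r, round(mults_per_sample(r), 2))    # agrees with the T column
```

Running the loop gives 3.53, 4.39, 5.26, 6.13, 7.00, and 7.88 multiplications per sample for r = 7 through 12, matching the table.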
In the general n ≥ 2 case, the n-dimensional 2^r × 2^r × ... × 2^r-point DHT is split by 1-D DHTs in the number equal to the cardinality of the covering s_J, which is calculated by

\mathrm{card}\, s_J = (2^n − 1)2^{(r−1)(n−1)}.   (19.120)

The number of multiplications required for computing the transform can thus be estimated by

m_{2^r,2^r,...,2^r} = (2^n − 1)2^{(r−1)(n−1)} \left[ 2^{r−1}(r − 3) + 2 \right].   (19.121)

This number can be reduced by removing the repeated calculation of spectral components. For instance, in the 3-D case, we can remove the repeated calculations for all components at frequency-points 2^k(p_1, p_2, p_3), for k = 1:(r − 1). The number of these reiterations equals

D = (7·4^{r−1})2^r − 8^r = (3/4)8^r,

which is quite a big value. To remove them, we consider the following partition of the set complement of all frequency-points with even coordinates:

s_1 = {T_{p_1,p_2,p_3}\setminus T_{2p_1,2p_2,2p_3};  (p_1, p_2, p_3) ∈ J}.

Each cyclic group T in the covering s is divided into two parts of frequency-points, with all even and with not all even coordinates. Each part consists of 2^{r−1} points. Therefore, we can obtain the recurrent algorithm for computing H_{2^r,2^r,2^r} via H_{2^{r−1},2^{r−1},2^{r−1}} and 7·4^{r−1} incomplete 1-D 2^r-point DHTs, which we denote by H_{N;2} and for which only the components with odd numbers are calculated. Each such incomplete transform can be reduced to a 2^{r−1}-point DHT. Herewith, for computing the 3-D DHT, the number of real multiplications can be calculated by the following recurrent formula:

m'_{2^r,2^r,2^r} = m'_{2^{r−1},2^{r−1},2^{r−1}} + 7·4^{r−1}(m_{2^r} − m_{2^{r−1}}),  (m'_{8,8,8} = 214).   (19.122)

The reduction of multiplications in Equation 19.119 forms 5/8 of all multiplications.

19.7 2-D Shifted DFT

In this section, tensor algorithms for calculating the two-dimensional DCT are described. We analyze the multiplicative complexity of the N × N-point 2-D DCT for the cases of most interest, when N = L^r, where L is a prime number and r ≥ 1, and N = L_1 L_2, where L_1 and L_2 are arbitrary coprime numbers. The tensor algorithm and its modification are described in detail for the 8 × 8 and 15 × 15 cases. We first turn to the tensor representation of the 2-D shifted discrete Fourier transform (SDFT) [34], which is applied to 2-D sequences defined on the 2-D lattice Y_{N,N}, which is the square lattice X_{N,N} shifted in the 2-D plane by the vector (1/2, 1/2). The frequency-points of the transform are considered on the square lattice X. For simplicity of indexing, we denote the points (n_1 + 1/2, n_2 + 1/2) of the lattice Y by (n_1, n_2), as well as the samples f_{n_1+1/2,n_2+1/2} of a sequence defined on Y by f_{n_1,n_2}. The components of the 2-D SDFT, F^s_{N,N}, of a 2-D sequence f = {f_{n_1,n_2}} are defined by

F^s_{p_1,p_2} = (F^s_{N,N} f)_{p_1,p_2} = \sum_{n_1=0}^{N-1} \sum_{n_2=0}^{N-1} f_{n_1,n_2} W^{(n_1+\frac{1}{2})p_1 + (n_2+\frac{1}{2})p_2},  p_1, p_2 = 0:(N − 1),   (19.123)

where W = exp(−j2π/N). The transform is periodic, but its fundamental period is the lattice X_{2N,2N}, not X_{N,N}. The following properties are valid for this transform:

F^s_{N+p_1,p_2} = −F^s_{p_1,p_2},  F^s_{p_1,N+p_2} = −F^s_{p_1,p_2},  F^s_{N+p_1,N+p_2} = F^s_{p_1,p_2},   (19.124)

where p_1, p_2 = 0:(N − 1). Therefore, it is enough to perform the calculations for the transform only at the frequency-points of one-fourth of the lattice X_{2N,2N}, i.e., in X_{N,N}. If f is real, then the following properties of complex conjugacy hold:

F^s_{N−p_1,p_2} = \bar{F}^s_{p_1,N−p_2},  F^s_{N−p_1,N−p_2} = \bar{F}^s_{p_1,p_2},  p_1, p_2 = 1:(N/2 − 1).   (19.125)

It follows directly from the definition that the 2-D SDFT can be expressed through the 2-D DFT as

F^s_{p_1,p_2} = W^{\frac{1}{2}(p_1+p_2)} F_{p_1,p_2},  p_1, p_2 = 0:(N − 1).   (19.126)
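Equation 19.126 can be confirmed in a few lines. This is my own check (assuming numpy; `fft2` uses the same kernel W = exp(−2πj/N), and `W ** 0.5` gives the principal half-power exp(−πj/N)):

```python
# Check of Equation 19.126: the 2-D SDFT equals the 2-D DFT multiplied
# point-wise by the twiddle factors W^{(p1+p2)/2}.
import numpy as np

N = 8
rng = np.random.default_rng(3)
f = rng.standard_normal((N, N))
W = np.exp(-2j * np.pi / N)

p = np.arange(N)
p1, p2 = np.meshgrid(p, p, indexing="ij")
F_shifted_from_F = W ** ((p1 + p2) / 2) * np.fft.fft2(f)

# direct evaluation of the 2-D SDFT of Equation 19.123
n1, n2 = np.meshgrid(p, p, indexing="ij")
F_direct = np.array([[(f * W ** ((n1 + 0.5) * a + (n2 + 0.5) * b)).sum()
                      for b in range(N)] for a in range(N)])
assert np.allclose(F_direct, F_shifted_from_F)
print("Equation 19.126 verified")
```

The factorization works because the half-sample shift contributes the constant exponent (p_1 + p_2)/2, independent of (n_1, n_2).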
Thus, in order to obtain the table of all values of the 2-D SDFT, the table of values of the 2-D DFT is multiplied point-wise by the table of (2N − 1) twiddle coefficients W^{t/2}, t = 0:(2N − 2). For instance, for N = 4, we have

\left[ W^{\frac{1}{2}(p_1+p_2)} \right]_{p_1,p_2=0:3} = \begin{bmatrix} 1 & W^{1/2} & W & W^{3/2} \\ W^{1/2} & W & W^{3/2} & W^2 \\ W & W^{3/2} & W^2 & W^{5/2} \\ W^{3/2} & W^2 & W^{5/2} & W^3 \end{bmatrix},

where W = W_4 = exp(−2πj/4) = −j. The splitting-signals defined for the 2-D DFT can be applied to split the shifted 2-D DFT by the cyclic groups T_{p,s} of the covering s_J of the lattice X_{N,N}. The application can be described by the following three steps:

f_{n_1,n_2} → f_{T_{p,s}} → \left\{ F_k = \left[ \mathcal{F} f_{T_{p,s}} \right]_k \right\} → F^s_{kp,ks} = W^{\frac{1}{2}(p+s)k} F_k,  k = 0:(N − 1),   (19.127)

which are performed for each generator (p, s) of the set J. The subscripts kp and ks are taken modulo 2N. However, we can note that, if kp mod 2N = N + p_0 and p_0 < N, then p_0 = kp mod N and F^s_{kp,ks} = F^s_{N+p_0,ks} = −F^s_{p_0,ks}; and if ks mod 2N = N + s_0 and s_0 < N, then s_0 = ks mod N and F^s_{kp,ks} = F^s_{kp,N+s_0} = −F^s_{kp,s_0}. In addition, F^s_{N+p_0,N+s_0} = F^s_{p_0,s_0}. This is why we can consider the subscripts kp and ks modulo N, and

F^s_{kp \bmod 2N,\, ks \bmod 2N} = ±F^s_{kp \bmod N,\, ks \bmod N},  k = 0:(N − 1).   (19.128)
We can also define the concept of the tensor representation of f with respect to the 2-D SDFT by using a method different from that given in Equation 19.127. In the kernel of this transform, the relation between the spatial points and frequency-points is described by the nonlinear form

L(n_1, n_2; p_1, p_2) = \left(n_1 + \frac{1}{2}\right)p_1 + \left(n_2 + \frac{1}{2}\right)p_2 = (n_1 p_1 + n_2 p_2) + \frac{1}{2}(p_1 + p_2),

and in arithmetic modulo N it takes integer values as well as mixed numbers with the fraction 1/2. Therefore, the complex coefficients of the kernel lie on N or 2N equidistant points of the unit circle. The number of these points depends on the evenness of the coordinates p_1 and p_2. For a given frequency-point (p_1, p_2), we define the following sets of points

\tilde{V}_{p_1,p_2,t} = {(n_1, n_2); L(n_1, n_2; p_1, p_2) = t \bmod N}   (19.129)

and components

\tilde{f}_{p_1,p_2,t} = \sum_{\tilde{V}_{p_1,p_2,t}} f_{n_1,n_2},  t ∈ [0, N − 1],   (19.130)

which allow for writing Equation 19.123 as

F^s_{p_1,p_2} = \sum_{t=1−Δt}^{N−Δt} \tilde{f}_{p_1,p_2,t} W^t.   (19.131)

The number t in Equations 19.129 through 19.131 runs through the interval [0, N − 1] with the step Δt = 1 or 1/2, depending on the evenness of the coordinates of the frequency-point (p_1, p_2). For instance, if both coordinates are simultaneously even or odd, then Δt = 1 and the variable t takes only integer values in the interval [0, N − 1]. If the evennesses of the coordinates of the frequency-point (p_1, p_2) differ, then all numbers t in the above formulas have the fraction Δt = 1/2. We write the fact of equal evenness as e(p_1, p_2) = 0, and e(p_1, p_2) = 1 if the evennesses of the coordinates differ. The following general formula is valid:

F^s_{kp_1,kp_2} = \sum_{t=1−Δt}^{N−Δt} \tilde{f}_{p_1,p_2,t} W^{kt},  k = 0:(N − 1).   (19.132)

This property is used in the tensor algorithm of the 2-D SDFT. Let s_J be an irreducible covering of the lattice X_{N,N}, which is composed of the cyclic groups T_{p_1,p_2}. The spectral information of f at frequency-points of a group T_{p_1,p_2} ∈ s_J is determined by the following splitting-signal:

\tilde{f}_{T_{p_1,p_2}} = \left( \tilde{f}_{p_1,p_2,1−Δt}, \tilde{f}_{p_1,p_2,2−Δt}, ..., \tilde{f}_{p_1,p_2,N−Δt} \right).   (19.133)

Indeed, depending on the evenness of p_1 and p_2, the expression in Equation 19.132 can be rewritten as follows:

F^s_{kp_1,kp_2} = \sum_{t=0}^{N-1} \tilde{f}_{p_1,p_2,t} W^{kt},  if e(p_1, p_2) = 0,   (19.134)

F^s_{kp_1,kp_2} = W^{\frac{k}{2}} \sum_{t=0}^{N-1} \tilde{f}_{p_1,p_2,t+\frac{1}{2}} W^{kt},  if e(p_1, p_2) = 1,   (19.135)

where k = 0:(N − 1). The 2-D SDFT of f at the group T_{p_1,p_2} coincides with the 1-D DFT of the splitting-signal if p_1 and p_2 have the same evenness; otherwise, it coincides with the modified 1-D DFT. This property can be written shortly as

F^s_{N,N}[f]\big|_{T_{p_1,p_2}} = \begin{cases} \mathcal{F}_N \left[ \tilde{f}_{T_{p_1,p_2}} \right], & e(p_1, p_2) = 0; \\ (e_N \circ \mathcal{F}_N) \left[ \tilde{f}_{T_{p_1,p_2}} \right], & e(p_1, p_2) = 1; \end{cases}   (19.136)

where e_N is the N-point diagonal transform with the matrix

[e_N] = \mathrm{diag}\left( 1, W^{\frac{1}{2}}, W, W^{\frac{3}{2}}, W^2, ..., W^{\frac{N}{2}−1}, W^{\frac{N−1}{2}} \right).   (19.137)

Thus, we split the 2-D SDFT, and this splitting set, which we denote by R(F^s_{N,N}; s_J), consists of the N-point transforms \mathcal{F}_N and e_N \circ \mathcal{F}_N. We recall for comparison that the splitting of the 2-D DFT by the covering s_J consists only of the transforms \mathcal{F}_N. The representation of the 2-D sequence f as the set of splitting-signals {\tilde{f}_T; T ∈ s_J} is called the tensor representation of f with respect to the 2-D SDFT, and the transformation f → {\tilde{f}_T; T ∈ s_J} is the tensor transformation.
Example 19.14

We consider the N = 8 case. The irreducible covering s_J of the lattice X_{8,8} consists of 12 cyclic groups whose generators (p, s) can be taken from the set J = {(0, 1), (1, 1), (2, 1), ..., (7, 1), (1, 0), (1, 2), (1, 4), (1, 6)}. Four generators, namely (1, 1), (3, 1), (5, 1), and (7, 1), have coordinates of the same evenness, and the remaining eight generators have coordinates of different evenness. Therefore, the 8 × 8-point SDFT is split into four 8-point DFTs \mathcal{F}_8 and eight compositions e_8 \circ \mathcal{F}_8 of the 8-point Fourier transform with the scalar transform e_8, i.e.,

R(F^s_{8,8}; s_J) = \left\{ \mathcal{F}_8, \mathcal{F}_8, \mathcal{F}_8, \mathcal{F}_8, \underbrace{e_8 \circ \mathcal{F}_8, ..., e_8 \circ \mathcal{F}_8}_{8\ \mathrm{times}} \right\}.
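For a generator of equal evenness the claim of Equation 19.134 is easy to verify without any sign bookkeeping. The sketch below (my construction, assuming numpy; it picks the generator (1, 1), for which kp = ks = k never exceeds N) checks that the SDFT on T_{1,1} is the plain 8-point DFT of the splitting-signal built from the sets of Equations 19.129 and 19.130:

```python
# For (p, s) = (1, 1): L = n1 + n2 + 1, e(1, 1) = 0, so the SDFT values on the
# group T_{1,1} equal the 1-D DFT of the splitting-signal (Equation 19.134).
import numpy as np

N = 8
rng = np.random.default_rng(4)
f = rng.standard_normal((N, N))
W = np.exp(-2j * np.pi / N)
n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# splitting-signal of Equations 19.129 and 19.130 for (p1, p2) = (1, 1)
comp = np.array([f[(n1 + n2 + 1) % N == t].sum() for t in range(N)])

for k in range(N):
    F_s = (f * W ** ((n1 + 0.5) * k + (n2 + 0.5) * k)).sum()   # SDFT at (k, k)
    assert np.isclose(F_s, (comp * W ** (k * np.arange(N))).sum())
print("SDFT on T_{1,1} equals the DFT of the splitting-signal")
```

For a generator of mixed evenness, such as (2, 1), the same check needs the extra diagonal factor e_8 of Equation 19.136.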
19-34
Transforms and Applications Handbook
It is important to note that the tensor representation of the shifted 2-D DFT can be derived directly from the tensor representation of the 2-D DFT. Indeed, the following relation is valid:

\tilde V_{p,s,t} = V_{p,s,(t - t_0) mod N},    t_0 = (p + s)/2.    (19.138)

If e(p, s) = 0, the subscript t - t_0 takes only integer values, and therefore \tilde f_{p,s,t} = f_{p,s,(t-t_0) mod N}. If e(p, s) = 1, then a similar cyclic shift can be performed over the splitting-signal f_{T_{p,s}} to obtain the signal \tilde f_{T_{p,s}}, namely, the cyclic shift by t_0 = \lfloor (p+s)/2 \rfloor elements to the right. We denote here by \lfloor x \rfloor the floor function, i.e., the greatest integer \le x. For instance, in the considered 8 × 8 case, the cyclic shifts for the generators (1,1), (2,1), and (3,1) are performed by t_0 = 1, 1, and 2, respectively. Therefore, the following relations hold between the splitting-signals of these generators:

f_{T_{1,1}}        = (f_{1,1,0}, f_{1,1,1}, f_{1,1,2}, f_{1,1,3}, f_{1,1,4}, f_{1,1,5}, f_{1,1,6}, f_{1,1,7})
\tilde f_{T_{1,1}} = (f_{1,1,7}, f_{1,1,0}, f_{1,1,1}, f_{1,1,2}, f_{1,1,3}, f_{1,1,4}, f_{1,1,5}, f_{1,1,6})
f_{T_{2,1}}        = (f_{2,1,0}, f_{2,1,1}, f_{2,1,2}, f_{2,1,3}, f_{2,1,4}, f_{2,1,5}, f_{2,1,6}, f_{2,1,7})
\tilde f_{T_{2,1}} = (f_{2,1,7}, f_{2,1,0}, f_{2,1,1}, f_{2,1,2}, f_{2,1,3}, f_{2,1,4}, f_{2,1,5}, f_{2,1,6})
f_{T_{3,1}}        = (f_{3,1,0}, f_{3,1,1}, f_{3,1,2}, f_{3,1,3}, f_{3,1,4}, f_{3,1,5}, f_{3,1,6}, f_{3,1,7})
\tilde f_{T_{3,1}} = (f_{3,1,6}, f_{3,1,7}, f_{3,1,0}, f_{3,1,1}, f_{3,1,2}, f_{3,1,3}, f_{3,1,4}, f_{3,1,5})    (19.139)

Taking this property into consideration, we can write the expressions in Equations 19.134 and 19.135, respectively, as follows:

F^s_{kp_1,kp_2} = \sum_{t=0}^{N-1} f_{p_1,p_2,(t-(p_1+p_2)/2) mod N}\, W^{kt},    (19.140)

F^s_{kp_1,kp_2} = W^{k/2} \sum_{t=0}^{N-1} f_{p_1,p_2,(t-\lfloor(p_1+p_2)/2\rfloor) mod N}\, W^{kt},    k = 0:(N-1).    (19.141)

19.7.1 2^r × 2^r-Point SDFT

In the general 2^r × 2^r case, when r > 2, the covering \sigma_J of the lattice X_{2^r,2^r} consists of 3·2^{r-1} cyclic groups; 2^{r-1} generators of these groups have coordinates of the same evenness, and 2^r generators have coordinates of different evenness. The splitting of the 2^r × 2^r-point SDFT by the 1-D transforms can be written as

R[F^s_{2^r,2^r}; \sigma_J] = { F_{2^r}, \dots, F_{2^r} (2^{r-1} times), e_{2^r} \circ F_{2^r}, \dots, e_{2^r} \circ F_{2^r} (2^r times) }.

We now estimate the number of real multiplications used in the 8 × 8-point SDFT. When f is real, all splitting-signals are real, too. For each 8-point DFT, (F_0, F_1, \dots, F_7), in Equation 19.140, it is enough to calculate only the first five components. For instance, F_5 and F_3 are complex conjugates, i.e., F_5 = conj(F_3), and W_8^{5/2} = W_{16}^5 = conj(W_{16}^3) = conj(W_8^{3/2}); therefore, W_8^{5/2} F_5 = conj(W_8^{3/2} F_3). Two other equalities are W_8^{6/2} F_6 = conj(W_8^{2/2} F_2) and W_8^{7/2} F_7 = conj(W_8^{1/2} F_1). The twiddle coefficients W_8^{k/2} = W_{16}^k, when k = 0:4, equal 1, 0.9239 - 0.3827j, 0.7071(1 - j), 0.3827 - 0.9239j, and -j, respectively. Multiplication of a complex number by 0.7071(1 - j) requires two real multiplications, and we consider that four real multiplications are used for each multiplication by W^{1/2} and W^{3/2}. Denoting by m^s_{8,8} and m_{8,8}, respectively, the numbers of multiplications required to calculate the 8 × 8-point SDFT and DFT by the corresponding splitting-signals \tilde f_T and f_T, T \in \sigma_J, we obtain the following relation between these estimates:

m^s_{8,8} = 4m_8 + 8[m_8 + (4 + 4 + 2)] = 12m_8 + 80 = m_{8,8} + 80 = 12·2 + 80 = 104,

since the 8-point DFT of a real input requires two multiplications and m_{8,8} = 12m_8 = 24. Therefore, the number of multiplications, m^s_{2^r,2^r}, required to calculate the 2^r × 2^r-point SDFT can be estimated by

m^s_{2^r,2^r} = 2^{r-1} m_{2^r} + 2^r [m_{2^r} + (2^{r-1} - 2)m_1 + 2] = m_{2^r,2^r} + 2^r [(2^{r-1} - 2)m_1 + 2],    (19.142)

where we denote by m_{2^r,2^r} the number of multiplications used for calculating the 2^r × 2^r-point DFT, and m_1 is the number of real multiplications used for performing one complex multiplication, which is considered equal to 4. The number of operations of complex multiplication is calculated as

m^s_{2^r,2^r} = m_{2^r,2^r} + 2^r(2^r - 2) = 4^r (r/2 - 1/6) - 2^{r+1} + 8/3.    (19.143)
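As a quick arithmetic cross-check (not from the handbook; the helper names below are ours), the counts of Equations 19.142 and 19.143 can be evaluated against the 8 × 8 numbers above, taking the tensor 2-D DFT complex-multiplication count implied by m_{8,8} = 24:

```python
def m_sdft_real(r, m1d):
    # Eq. 19.142 with m_1 = 4: 2^{r-1} plain and 2^r modified 1-D DFTs;
    # each modified transform needs (2^{r-1} - 2) general twiddles at 4 real
    # multiplications plus one multiplication by 0.7071(1 - j) at 2
    return (2**(r - 1) + 2**r) * m1d + 2**r * ((2**(r - 1) - 2) * 4 + 2)

assert m_sdft_real(3, 2) == 104          # the 8 x 8 case: 12*m_8 + 80 with m_8 = 2

def m_sdft_cplx(r):
    # Eq. 19.143: number of complex multiplications
    return 4**r * (r / 2 - 1 / 6) - 2**(r + 1) + 8 / 3

for r in range(3, 10):
    m_dft2d = 4**r * (r / 2 - 7 / 6) + 8 / 3   # tensor 2-D DFT count (gives 24 for r = 3)
    assert abs(m_sdft_cplx(r) - (m_dft2d + 2**r * (2**r - 2))) < 1e-6
```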
19.7.2 L^r × L^r-Point SDFT

We now consider the case when N = L^r, (r \ge 1), and L is an odd prime number. In the N = L case, the irreducible covering of X_{L,L} consists of (L + 1) cyclic groups and can be taken as \sigma_J = (T_{0,1}, T_{1,1}, \dots, T_{L-1,1}, T_{1,0}). The number of generators of the groups of \sigma that have the same evenness equals (L - 1)/2, and the remaining (L + 3)/2 generators have a different evenness. Therefore, the splitting of the L × L-point SDFT consists of (L - 1)/2 L-point DFTs and (L + 3)/2 modified DFTs, which are the compositions of the L-point DFT with the scalar transform e_L, i.e.,

R[F^s_{L,L}; \sigma_J] = { F_L, \dots, F_L ((L-1)/2 times), e_L \circ F_L, \dots, e_L \circ F_L ((L+3)/2 times) }.

In the general case r > 1, the irreducible covering of X_{L^r,L^r} can be taken as

\sigma_J = ( (T_{p,1})_{p=0:(L^r-1)},\ (T_{1,Ls})_{s=0:(L^{r-1}-1)} ).    (19.145)

(L^r + L^{r-1})/2 - 1 generators of the groups of this covering have the same evenness, and (L^r + L^{r-1})/2 + 1 generators have a different evenness. Therefore, the splitting of the shifted 2-D DFT consists of (L^{r-1} + L^r)/2 - 1 L^r-point DFTs and (L^{r-1} + L^r)/2 + 1 modified L^r-point DFTs,

R[F^s_{L^r,L^r}; \sigma_J] = { F_{L^r}, \dots, F_{L^r} (L^{r-1}(L+1)/2 - 1 times), e_{L^r} \circ F_{L^r}, \dots, e_{L^r} \circ F_{L^r} (L^{r-1}(L+1)/2 + 1 times) }.

For the number of multiplications required to calculate the L^r × L^r-point SDFT, we have the following expression:

m^s_{L^r,L^r} = [ (L^r + L^{r-1})/2 - 1 ] m_{L^r} + [ (L^r + L^{r-1})/2 + 1 ] (m_{L^r} + L^r - 1)
             = L^{r-1}(L + 1) m_{L^r} + [ L^{r-1}(L + 1)/2 + 1 ] (L^r - 1).

For example, the splitting set for the 9 × 9-point SDFT consists of five 9-point DFTs and seven modified 9-point DFTs, F_9 \circ? that is, R[F^s_{9,9}; \sigma_J] = { F_9, \dots, F_9 (5 times), e_9 \circ F_9, \dots, e_9 \circ F_9 (7 times) }.

In the case when N = L_1 L_2 and L_1, L_2 > 1, the covering \sigma_J for the lattice X_{L_1L_2,L_1L_2} has a similar form. For example, the splitting set for the 15 × 15-point SDFT equals

R[F^s_{15,15}; \sigma] = { F_{15}, \dots, F_{15} (12 times), e_{15} \circ F_{15}, \dots, e_{15} \circ F_{15} (12 times) }.    (19.146)

The number of multiplications required to compute the 15 × 15-point SDFT can be calculated in the form m^s_{15,15} = 12m_{15} + 12(m_{15} + 14) = 24m_{15} + 168.

19.8 2-D DCT

In this section, we consider the tensor representation of the 2-D DCT, by using the tensor algorithm of calculation of the 2-D SDFT. Let N be an arbitrary even number, and let f be a real even sequence f_{n_1,n_2} of size N × N, which is determined by the following relation:

f_{n_1,n_2} = f_{N-1-n_1, N-1-n_2},    f_{N-1-n_1, n_2} = 0,    f_{n_1, N-1-n_2} = 0,    n_1, n_2 = 0:(N/2 - 1).    (19.147)

It is not difficult to see that the shifted Fourier transform F^s_{N,N} over the sequence f is real and can be expressed in the following form:

F^s_{p_1,p_2} = 2 \sum_{n_1=0}^{N/2-1} \sum_{n_2=0}^{N/2-1} f_{n_1,n_2} \cos[ (2\pi/N) ( (n_1 + 1/2)p_1 + (n_2 + 1/2)p_2 ) ],    p_1, p_2 = 0:(N - 1).    (19.148)

The following relations hold between components of this transform:

F^s_{N-p_1, N-p_2} = F^s_{p_1,p_2},    F^s_{0, N-p_2} = -F^s_{0,p_2},    F^s_{N-p_1, 0} = -F^s_{p_1,0},    p_1, p_2 = 1:(N/2 - 1),    (19.149)

and F^s_{N/2,0} = F^s_{0,N/2} = 0. Therefore, the calculation in Equation 19.148 can be performed only for half of the frequency-points (p_1, p_2) of the lattice X_{N,N}. The factor of 2 can also be omitted from the definition.
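The reality of the transform and the symmetries of Equation 19.149 can be verified numerically. The following sketch (ours, not the handbook's) builds the even extension of Equation 19.147 from a random quadrant and checks every stated relation:

```python
import numpy as np

N = 8
rng = np.random.default_rng(3)
q = rng.integers(0, 9, (N//2, N//2)).astype(float)

# even extension of Eq. 19.147: quadrant, its 180-degree rotation, zeros elsewhere
f = np.zeros((N, N))
f[:N//2, :N//2] = q
f[N//2:, N//2:] = q[::-1, ::-1]

n = np.arange(N)
E = np.exp(-2j*np.pi*np.outer(n, n + 0.5)/N)
F = E @ f @ E.T                                 # shifted 2-D DFT

assert np.allclose(F.imag, 0)                   # the transform is real (Eq. 19.148)
F = F.real
for p1 in range(1, N//2):
    for p2 in range(1, N//2):
        assert np.isclose(F[N-p1, N-p2], F[p1, p2])
for p in range(1, N//2):
    assert np.isclose(F[0, N-p], -F[0, p])      # symmetries of Eq. 19.149
    assert np.isclose(F[N-p, 0], -F[p, 0])
assert abs(F[N//2, 0]) < 1e-9 and abs(F[0, N//2]) < 1e-9
```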
Example 19.15

Let N = 8, and let f be the following sequence f_{n_1,n_2} of size 4 × 4, which is extended to 8 × 8 as follows:

                    | 1 2 1 2 |      | 1 2 1 2 0 0 0 0 |
f = {f_{n_1,n_2}} = | 3 1 2 3 |  ->  | 3 1 2 3 0 0 0 0 |
                    | 2 3 1 4 |      | 2 3 1 4 0 0 0 0 |
                    | 5 3 2 2 |      | 5 3 2 2 0 0 0 0 |
                                     | 0 0 0 0 2 2 3 5 |
                                     | 0 0 0 0 4 1 3 2 |
                                     | 0 0 0 0 3 2 1 3 |
                                     | 0 0 0 0 2 1 2 1 |    (19.150)

The sequence f is even. Below is the 8 × 8-point SDFT of this sequence, which is written in the form of the table:

  74        2.2961    9.8995   -5.5433    0        5.5433   -9.8995   -2.2961
 -11.8519 -35.3137   -2.9302  -20.3848    5.5433 -20.1421    1.2137  -22.7279
  -1.4142  11.8519    2.0000    4.4609   -1.4142    2.7444    8.0000    0.3170
  -2.7444 -16.3848   -8.1564  -12.6863   -2.2961    2.7279   -3.3785   -8.1421
   0        4.9092    4.2426    6.6257   18.0000    6.6257    4.2426    4.9092
   2.7444  -8.1421   -3.3785    2.7279   -2.2961  -12.6863   -8.1564  -16.3848
   1.4142   0.3170    8.0000    2.7444   -1.4142    4.4609    2.0000   11.8519
  11.8519 -22.7279    1.2137  -20.1421    5.5433  -20.3848   -2.9302  -35.3137

The table is divided into nine parts, in correspondence with the properties of Equation 19.149. Let us consider the values of the 8 × 8-point SDFT in the first subtable (4 × 4),

  74        2.2961    9.8995   -5.5433
 -11.8519 -35.3137   -2.9302  -20.3848
  -1.4142  11.8519    2.0000    4.4609
  -2.7444 -16.3848   -8.1564  -12.6863    (19.151)

which we call the 4 × 4-point DCT of f. These values are defined by the cosine kernel function, and all values of the transform which lie outside this subtable are defined by transforms whose kernel is the sine function. Indeed, the following holds:

F_{N/2+p_1, p_2} = -2 \sum_{n_1=0}^{N/2-1} \sum_{n_2=0}^{N/2-1} (-1)^{n_1} f_{n_1,n_2} \sin[ (2\pi/N) ( (n_1 + 1/2)p_1 + (n_2 + 1/2)p_2 ) ],

and

F_{p_1, N/2+p_2} = -2 \sum_{n_1=0}^{N/2-1} \sum_{n_2=0}^{N/2-1} (-1)^{n_2} f_{n_1,n_2} \sin[ (2\pi/N) ( (n_1 + 1/2)p_1 + (n_2 + 1/2)p_2 ) ],    (19.152)

where p_1, p_2 = 0:(N - 1).

In the general case, the 2-D SDFT is reduced to the transform with the cosine kernel function, which is why we call this transform the 2-D DCT. The N/2 × N/2-point DCT of a sequence f = {f_{n_1,n_2}; n_1, n_2 = 0:(N/2 - 1)} is defined by

C_{p_1,p_2} = C_{N/2,N/2}[f]_{p_1,p_2} = \sum_{n_1=0}^{N/2-1} \sum_{n_2=0}^{N/2-1} f_{n_1,n_2} \cos[ (\pi/(N/2)) ( (n_1 + 1/2)p_1 + (n_2 + 1/2)p_2 ) ],    p_1, p_2 = 0:(N/2 - 1).

The 2-D DCT is nonseparable and periodic with the period (N, N), not (N/2, N/2). Since this transform is a special case of the shifted 2-D DFT, the tensor representation of the shifted 2-D DFT can be used for splitting the 2-D DCT by a minimum number of 1-D transforms. Indeed, it directly follows from definitions (19.130) and (19.147) that the relation

\tilde f_{p_1,p_2,t} = \tilde f_{p_1,p_2,N-t}    (19.153)

is valid when t = 1:(N/2 - 1). If e(p_1, p_2) = 0, then the splitting-signal corresponding to the generator (p_1, p_2) has the following form:

\tilde f_{T_{p_1,p_2}} = { \tilde f_{p_1,p_2,0}, \tilde f_{p_1,p_2,1}, \tilde f_{p_1,p_2,2}, \dots, \tilde f_{p_1,p_2,N/2}, \dots, \tilde f_{p_1,p_2,2}, \tilde f_{p_1,p_2,1} },    (19.154)

i.e., the signal is even. Therefore, the following calculations can be performed for the transform in Equation 19.134:

\sum_{t=0}^{N-1} \tilde f_{p_1,p_2,t} W^{kt} = \tilde f_{p_1,p_2,0} + \sum_{t=1}^{N/2-1} \tilde f_{p_1,p_2,t} [ W^{kt} + W^{k(N-t)} ] + \tilde f_{p_1,p_2,N/2} W^{kN/2}
  = \tilde f_{p_1,p_2,0} + 2 \sum_{t=1}^{N/2-1} \tilde f_{p_1,p_2,t} \cos( (2\pi/N) kt ) + (-1)^k \tilde f_{p_1,p_2,N/2}.

For simplicity of calculations, we redenote \tilde f_{p_1,p_2,t} = \tilde f_{p_1,p_2,t}/2 when t = 0 and N/2. Thus, we can write the following:

C_{kp_1,kp_2} = \sum_{t=0}^{N/2-1} \tilde f_{p_1,p_2,t} \cos( (2\pi/N) kt ) + (-1)^k \tilde f_{p_1,p_2,N/2},    if e(p_1,p_2) = 0.    (19.155)

The sum in this equation represents the N/2-point DCT of type I, which we denote by K_{N/2}. The N/2 × N/2-point DCT at frequency-points of the group T_{p_1,p_2}, with a generator having coordinates of the same evenness, is thus defined by the N/2-point DCT of type I plus/minus the value of \tilde f_{p_1,p_2,N/2}. The equation can also be written as
C_{kp_1,kp_2} = \sum_{t=0}^{N/2} \tilde f_{p_1,p_2,t} \cos( (2\pi/N) kt ),    if e(p_1,p_2) = 0.    (19.156)

We now consider the N/2 × N/2-point DCT at frequency-points of the group T_{p_1,p_2} when the coordinates of its generator have different evenness, e(p_1, p_2) = 1. In this case, the splitting-signal corresponding to the generator (p_1, p_2) has the following form:

\tilde f_{T_{p_1,p_2}} = { \tilde f_{p_1,p_2,1/2}, \tilde f_{p_1,p_2,3/2}, \dots, \tilde f_{p_1,p_2,N/2-1/2}, \tilde f_{p_1,p_2,N/2+1/2}, \dots, \tilde f_{p_1,p_2,3/2}, \tilde f_{p_1,p_2,1/2} }.

The modified Fourier transform in Equation 19.135 can thus be written as follows:

F^s_{kp_1,kp_2} = \sum_{t=0}^{N-1} \tilde f_{p_1,p_2,t+1/2} W^{k(t+1/2)}
  = \sum_{t=0}^{N/2-1} \tilde f_{p_1,p_2,t+1/2} [ W^{k(t+1/2)} + W^{k(N-t-1/2)} ]
  = 2 \sum_{t=0}^{N/2-1} \tilde f_{p_1,p_2,t+1/2} \cos[ (2\pi/N) k (t + 1/2) ].

Then, the 2-D DCT at frequency-points of this group can be written as

C_{kp_1,kp_2} = \sum_{t=0}^{N/2-1} \tilde f_{p_1,p_2,t+1/2} \cos[ (2\pi/N) k (t + 1/2) ].    (19.157)

Note that the sets \tilde V_{p_1,p_2,t} and the components \tilde f_{p_1,p_2,t} of the splitting-signals, which are defined respectively in Equations 19.129 and 19.130, have been considered in the lattice X_{N,N}, whereas the 2-D DCT is considered for (n_1, n_2) \in X_{N/2,N/2}. Denoting by V_{p_1,p_2,t} the set intersection \tilde V_{p_1,p_2,t} \cap X_{N/2,N/2} and defining the components

f_{p_1,p_2,t} = \sum { f_{n_1,n_2};\ (n_1, n_2) \in V_{p_1,p_2,t} },

we obtain the following (with \Delta = 0 or 1/2, according to the evenness of the generator):

\tilde f_{p_1,p_2,t-\Delta} = f_{p_1,p_2,t-\Delta} + f_{p_1,p_2,N-t+\Delta},    t = 1, 2, \dots, N/2,
\tilde f_{p_1,p_2,0} = 2f_{p_1,p_2,0},    \tilde f_{p_1,p_2,N/2} = 2f_{p_1,p_2,N/2}.

It follows from Equations 19.155 and 19.157 that, for a given group T_{p_1,p_2} \in \sigma, the corresponding splitting-signal can be written as

\tilde f_{T_{p_1,p_2}} = { f_{p_1,p_2,0},\ f_{p_1,p_2,1} + f_{p_1,p_2,N-1},\ \dots,\ f_{p_1,p_2,N/2-1} + f_{p_1,p_2,N/2+1},\ f_{p_1,p_2,N/2} },

if p_1 and p_2 are the subscripts of the same evenness, or

\tilde f_{T_{p_1,p_2}} = { f_{p_1,p_2,1/2} + f_{p_1,p_2,N-1/2},\ f_{p_1,p_2,3/2} + f_{p_1,p_2,N-3/2},\ \dots,\ f_{p_1,p_2,N/2-1/2} + f_{p_1,p_2,N/2+1/2} },

if p_1 and p_2 are the subscripts of different evenness.
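Both reductions can be checked directly on 1-D signals with the stated symmetries; the sketch below (an illustration of Equations 19.155 and 19.157, not the handbook's code) verifies the type-I sum for an even signal and the type-II sum for a half-sample-shifted even signal:

```python
import numpy as np

N = 8
rng = np.random.default_rng(5)

# e = 0 case: an even signal g (g[N-t] = g[t]) reduces Eq. 19.134 to a type-I sum
half = rng.standard_normal(N//2 + 1)            # values at t = 0 .. N/2
g = np.concatenate([half, half[-2:0:-1]])
h = half.copy(); h[0] /= 2; h[-1] /= 2          # end samples halved, as in Eq. 19.155
t = np.arange(N//2 + 1)
for k in range(N):
    assert np.isclose(np.fft.fft(g)[k], 2*np.sum(h*np.cos(2*np.pi*k*t/N)))

# e = 1 case: samples at half-integer positions with the symmetry of the odd case
q = rng.standard_normal(N//2)                   # values at t + 1/2, t = 0 .. N/2 - 1
gq = np.concatenate([q, q[::-1]])
th = np.arange(N) + 0.5
for k in range(N):
    lhs = np.sum(gq*np.exp(-2j*np.pi*k*th/N))   # W^{k/2} sum of Eq. 19.135
    rhs = 2*np.sum(q*np.cos(2*np.pi*k*(np.arange(N//2) + 0.5)/N))
    assert np.isclose(lhs, rhs)
```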
The sum in Equation 19.157 represents the N/2-point DCT of type II, which we denote by C_{N/2}. Thus, for the generator (p_1, p_2) with coordinates of the same evenness, the N-point DFT is reduced to the N/2-point DCT of type I. For the generator with coordinates of different evenness, the N-point modified transform e_N \circ F_N is reduced to the N/2-point DCT of type II. We note that if g_n is a sequence of length N, then the following properties hold for these cosine transforms:

K_{N/2}[g_n]_{N-k} = K_{N/2}[g_n]_k    and    C_{N/2}[g_n]_{N-k} = -C_{N/2}[g_n]_k,

when k = 1:(N/2 - 1). Therefore, we can consider kp_n = kp_n mod (N/2), n = 1, 2, in both Equations 19.155 and 19.157, wherein k = 0:(N/2 - 1). As a result, the following property of the 2-D DCT is valid:

C_{N/2,N/2}[f]|_{T_{p_1,p_2}} = { K_{N/2}[\tilde f_{T_{p_1,p_2}}] + P \tilde f_{p_1,p_2,N/2},    e(p_1,p_2) = 0;
                                 C_{N/2}[\tilde f_{T_{p_1,p_2}}],                                e(p_1,p_2) = 1,    (19.158)

where P = (1, -1, 1, -1, \dots, 1, -1)'.

Example 19.16

When N = 8, the 4 × 4-point 2-D cosine transform of a sequence f_{n_1,n_2} is calculated by

C_{p_1,p_2} = \sum_{n_1=0}^{3} \sum_{n_2=0}^{3} f_{n_1,n_2} \cos[ (\pi/4) ( (n_1 + 1/2)p_1 + (n_2 + 1/2)p_2 ) ],    p_1, p_2 = 0:3.    (19.159)

We consider the 2-D sequence f defined in Equation 19.150 and two generators, (p_1, p_2) = (1, 1) and (p_1, p_2) = (1, 2), with the equal and different evenness, respectively. For the first generator, the values of the form L(n_1, n_2, 1, 1) = (n_1 + n_2 + 1) mod 8 can be written as the following table:

|t = n_1 + n_2 + 1|_{n_1,n_2=0:7} =
| 1 2 3 4 5 6 7 0 |
| 2 3 4 5 6 7 0 1 |
| 3 4 5 6 7 0 1 2 |
| 4 5 6 7 0 1 2 3 |
| 5 6 7 0 1 2 3 4 |
| 6 7 0 1 2 3 4 5 |
| 7 0 1 2 3 4 5 6 |
| 0 1 2 3 4 5 6 7 |

The corresponding splitting-signal of the tensor representation f \to {\tilde f_T; T \in \sigma} is calculated as follows:

\tilde f_{T_{1,1}} = { \tilde f_{1,1,0}, \tilde f_{1,1,1}, \tilde f_{1,1,2}, \tilde f_{1,1,3}, \tilde f_{1,1,4} } = {0, 3, 11, 11, 12},
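This splitting-signal can be reproduced in a few lines. The sketch below is ours, and it assumes the 4 × 4 sequence of Equation 19.150 as read off the example's arithmetic (f_{0,0} = 1, f_{3,3} = 2, and so on):

```python
import numpy as np

f = np.array([[1, 2, 1, 2],
              [3, 1, 2, 3],
              [2, 3, 1, 4],
              [5, 3, 2, 2]], dtype=float)      # the 4x4 sequence of Eq. 19.150

n1, n2 = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
t = (n1 + n2 + 1) % 8                          # values L(n1, n2, 1, 1) on the quadrant
s = np.array([f[t == k].sum() for k in range(8)])

# fold by the symmetry of Eq. 19.153: components t and N - t are summed
fT = np.array([s[0], s[1] + s[7], s[2] + s[6], s[3] + s[5], s[4]])
assert fT.tolist() == [0, 3, 11, 11, 12]
```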
where

\tilde f_{1,1,0} = f_{1,1,0} = 0,
\tilde f_{1,1,1} = f_{1,1,1} + f_{1,1,7} = f_{0,0} + f_{3,3} = 1 + 2 = 3,
\tilde f_{1,1,2} = f_{1,1,2} + f_{1,1,6} = (f_{0,1} + f_{1,0}) + (f_{2,3} + f_{3,2}) = (2 + 3) + (4 + 2) = 11,
\tilde f_{1,1,3} = f_{1,1,3} + f_{1,1,5} = (f_{0,2} + f_{1,1} + f_{2,0}) + (f_{1,3} + f_{2,2} + f_{3,1}) = (1 + 1 + 2) + (3 + 1 + 3) = 11,
\tilde f_{1,1,4} = f_{1,1,4} = f_{0,3} + f_{1,2} + f_{2,1} + f_{3,0} = 2 + 2 + 3 + 5 = 12.

In the 2^r × 2^r case, r > 2, the covering consists of 3·2^{r-1} cyclic groups T_{p,s}. Among all 3·2^{r-1} generators (p, s) of these groups, 2^{r-1} generators have coordinates of the same evenness, and the remaining 2^r generators have coordinates of different evenness. The splitting of the 2^r × 2^r-point DCT thus consists of 2^{r-1} 2^r-point cosine transforms K_{2^r} and 2^r cosine transforms C_{2^r}, that is,

R(C_{2^r,2^r}; \sigma_J) = { K_{2^r}, \dots, K_{2^r} (2^{r-1} times), C_{2^r}, \dots, C_{2^r} (2^r times) }.

The number of multiplications required to calculate the 2-D DCT in this splitting equals

mc_{2^r,2^r} = 2^{r-1} m(K_{2^r}) + 2^r m(C_{2^r}).    (19.162)

For m(C_{2^r}), we consider the estimate

m(C_{2^r}) = 2^{r+1} - r - 2,    (m(C_4) = 3).    (19.163)

For m(K_{2^r}), we consider the following estimate obtained by the 1-D paired transforms [34]:

m(K_{2^r}) = 2^{r+1} - (r - 2)(r + 5)/2 - 8,    (m(K_4) = 4).    (19.164)

Substituting these estimates into Equation 19.162, we obtain the following multiplicative complexity of the 2-D DCT in the tensor representation:

mc_{2^r,2^r} = 3·4^r - 2^{r-2} (r^2 + 7r + 14).    (19.165)

For N = 8, 16, and 32, we obtain respectively mc_{8,8} = 104, mc_{16,16} = 536, and mc_{32,32} = 2480.
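The three estimates 19.162 through 19.165 are mutually consistent, which is easy to confirm by direct arithmetic (the function names below are ours):

```python
def mK(r):
    # Eq. 19.164 (r > 2); m(K_4) = 4 is a special case
    return 2**(r + 1) - (r - 2)*(r + 5)//2 - 8

def mC(r):
    # Eq. 19.163 (r > 2); m(C_4) = 3 is a special case
    return 2**(r + 1) - r - 2

def mc_tensor(r):
    # Eq. 19.165: multiplicative complexity of the 2^r x 2^r DCT, tensor algorithm
    return 3*4**r - 2**(r - 2)*(r*r + 7*r + 14)

for r in (3, 4, 5):
    assert 2**(r - 1)*mK(r) + 2**r*mC(r) == mc_tensor(r)   # Eq. 19.162
assert [mc_tensor(r) for r in (3, 4, 5)] == [104, 536, 2480]
```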
19.8.1 Modified Algorithms of the 2-D DCT

The tensor representation of the 2-D sequence with respect to the 2-D DCT is associated with the covering \sigma_J of the lattice X_{N,N}, which is not a partition. The deficiency of the tensor algorithm, which splits the 2-D DCT into 1-D cosine transforms, is its reiteration of the spectral-component computations at the frequency-points (p_1, p_2) which lie in the intersections of the groups of the covering. For N = 2^r, the number of these frequency-points equals 4^r/2, which composes half of the total number of frequency-points. We can improve the tensor algorithm for the 2-D DCT in order to avoid the reiteration of the computations. Such an improvement can be obtained similarly to the way described above in the improved tensor algorithms for the 2-D DFT and 2-D DHT. For that, we consider a new covering \sigma^1_J of the lattice, which contains 3·2^{r-2} groups T_{1,p_2} and T_{2p_1,1}, p_2 = 0:(2^{r-1} - 1), p_1 = 0:(2^{r-2} - 1), at which we can calculate the complete 2^r-point DCTs. The rest of the covering consists of 3·2^{r-2} subgroups of T_{1,p_2+2^{r-1}} and T_{2p_1+2^{r-1},1}, where p_2 = 0:(2^{r-1} - 1), p_1 = 0:(2^{r-2} - 1), at which each second element is omitted. The calculation of the 2-D DCT at frequency-points of these subgroups is reduced to incomplete 1-D DCTs (or 2^{r-1}-point DCTs), when only the components with odd numbers are calculated. We denote these incomplete DCTs of types I and II by K_{2^r;2} and C_{2^r;2}, respectively. By using this covering \sigma^1_J, we obtain the following splitting of the 2-D DCT by 3·2^{r-2} complete and 3·2^{r-2} incomplete DCTs:

R(C_{2^r,2^r}; \sigma^1) = { K_{2^r}, \dots, K_{2^r} (2^{r-2} times), K_{2^r;2}, \dots, K_{2^r;2} (2^{r-2} times), C_{2^r}, \dots, C_{2^r} (2^{r-1} times), C_{2^r;2}, \dots, C_{2^r;2} (2^{r-1} times) }.    (19.166)
The cyclic groups of the covering \sigma_J intersect at the origin, (0, 0); in addition, the following intersections take place:

T_{1,p_2} \cap T_{1,p_2+2^{r-k}} = T_{2^k(1,p_2)},    T_{p_1,1} \cap T_{p_1+2^{r-k},1} = T_{2^k(p_1,1)},    k = 1:(r - 1),    (19.167)

for an arbitrary (p_1, p_2) \in X_{2^r,2^r}. We denote by T^k the set-complements of these groups,

T^k_{1,p_2} = T_{1,p_2} \ T_{2^k(1,p_2)},    T^k_{p_1,1} = T_{2^k(p_1,1)}? T_{p_1,1} \ T_{2^k(p_1,1)},    (19.168)

which contain (2^r - 2^{r-k}) frequency-points each. The splitting described in Equation 19.166 corresponds to the k = 1 case, when the following covering of the lattice X_{2^r,2^r} is composed:

\sigma^1_J = ( (T_{1,p_2}, T^1_{1,p_2+2^{r-1}})_{p_2=0:(2^{r-1}-1)},\ (T_{2p_1,1}, T^1_{2p_1+2^{r-1},1})_{p_1=0:(2^{r-2}-1)} ).

This covering is smaller than the initial covering \sigma, in the sense that each set of \sigma^1_J is included in a group of \sigma_J.

We can count the number of multiplications saved at this stage of calculations. Denoting by m(K_{2^r;2}) and m(C_{2^r;2}), respectively, the numbers of multiplications required to compute the incomplete cosine transforms K_{2^r;2} and C_{2^r;2}, we can estimate the total number of multiplications used in the splitting of Equation 19.166 as

mc_{2^r,2^r} = 2^{r-2} [ m(K_{2^r}) + m(K_{2^r;2}) ] + 2^{r-1} [ m(C_{2^r}) + m(C_{2^r;2}) ].    (19.169)

We consider that m(K_{2^r;2}) = m(K_{2^r}). For the incomplete cosine transforms C_{2^r;2}, we can use the recursive algorithm [35,36], which splits the transform C_{2^r} into two transforms C_{2^{r-1}} by dividing the frequencies k = 0:(N - 1) into the two sets of even and odd frequencies. Then, we can write that m(C_{2^r;2}) = m(C_{2^{r-1}}) + 2^{r-1}. Therefore, the estimate in Equation 19.169 can be calculated by

mc_{2^r,2^r}|_{step 1} = 2^{r-1} m(K_{2^r}) + 2^{r-1} [ m(C_{2^r}) + m(C_{2^{r-1}}) + 2^{r-1} ].    (19.170)

In particular, for the 8 × 8-point DCT we obtain mc_{8,8}|_{step 1} = 4·4 + 4(11 + 3 + 4) = 88. In the general r > 3 case, by substituting the considered estimates for m(K_{2^r}) and m(C_{2^r}) into Equation 19.170, we obtain

mc_{2^r,2^r}|_{step 1} = 11·4^{r-1} - (r^2 + 7r + 12) 2^{r-2}.    (19.171)
This number can be reduced if we consider other intersections Tk, k ¼ 2 : (r 1), of the groups of the covering, and, to show that, we describe the 8 3 8 case in detail.
Example 19.17

For the N = 8 case, we consider all groups of the irreducible covering \sigma_J of the lattice X_{8,8}:

T_{1,0} = {(0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0)},
T_{1,1} = {(0,0), (1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7)},
T_{1,2} = {(0,0), (1,2), (2,4), (3,6), (4,0), (5,2), (6,4), (7,6)},
T_{1,3} = {(0,0), (1,3), (2,6), (3,1), (4,4), (5,7), (6,2), (7,5)},
T_{1,4} = {(0,0), (1,4), (2,0), (3,4), (4,0), (5,4), (6,0), (7,4)},
T_{1,5} = {(0,0), (1,5), (2,2), (3,7), (4,4), (5,1), (6,6), (7,3)},
T_{1,6} = {(0,0), (1,6), (2,4), (3,2), (4,0), (5,6), (6,4), (7,2)},
T_{1,7} = {(0,0), (1,7), (2,6), (3,5), (4,4), (5,3), (6,2), (7,1)},
T_{0,1} = {(0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), (0,7)},
T_{2,1} = {(0,0), (2,1), (4,2), (6,3), (0,4), (2,5), (4,6), (6,7)},
T_{4,1} = {(0,0), (4,1), (0,2), (4,3), (0,4), (4,5), (0,6), (4,7)},
T_{6,1} = {(0,0), (6,1), (4,2), (2,3), (0,4), (6,5), (4,6), (2,7)}.

In accordance with this covering \sigma_J, the splitting of the 2-D DCT is

R(C_{8,8}; \sigma_J) = { C_8, K_8, C_8, K_8, C_8, K_8, C_8, K_8, C_8, C_8, C_8, C_8 }.
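The group listing and the intersection property of Equation 19.167 can be generated and checked mechanically; the short sketch below (ours, not the handbook's) builds all 12 cyclic groups for N = 8 and verifies that they cover the whole lattice:

```python
N = 8
gens = [(1, s) for s in range(N)] + [(0, 1), (2, 1), (4, 1), (6, 1)]
groups = {g: {((k*g[0]) % N, (k*g[1]) % N) for k in range(N)} for g in gens}

cover = set().union(*groups.values())
assert len(gens) == 12 and len(cover) == N*N    # 12 cyclic groups cover all 64 points

# an instance of Eq. 19.167 with k = 1: T_{1,1} and T_{1,5} intersect in T_{2,2}
T22 = {((2*k) % N, (2*k) % N) for k in range(N)}
assert groups[(1, 1)] & groups[(1, 5)] == T22
```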
Let f = {f_{n_1,n_2}; n_1, n_2 = 0:7} be a 2-D sequence, and let {\tilde f_T; T \in \sigma} be the \sigma-representation, or tensor representation, of the sequence f by the 8 × 8-point DCT. Then, to calculate the 8 × 8-point DCT, it is sufficient to fulfill:

Step 1: Four 8-point DCTs of type II, C_8, over the sequences \tilde f_{T_{1,0}}, \tilde f_{T_{1,2}}, \tilde f_{T_{0,1}}, \tilde f_{T_{2,1}}, as well as two 8-point DCTs of type I, K_8, over the sequences \tilde f_{T_{1,1}} and \tilde f_{T_{1,3}}, to determine the 2-D DCT at frequency-points of the corresponding six groups T_{1,0}, T_{1,2}, T_{0,1}, T_{2,1}, T_{1,1}, and T_{1,3}.

Step 2: Four incomplete 8-point DCTs of type II, C_{8;2}, over the sequences \tilde f_{T_{1,4}}, \tilde f_{T_{1,6}}, \tilde f_{T_{4,1}}, \tilde f_{T_{6,1}}, as well as two incomplete 8-point DCTs of type I, K_{8;2}, over the sequences \tilde f_{T_{1,5}} and \tilde f_{T_{1,7}}, to determine the 2-D DCT at frequency-points of the subgroups T^1_{1,4}, T^1_{1,6}, T^1_{4,1}, T^1_{6,1}, T^1_{1,5}, and T^1_{1,7}.

On Step 2, we save 4 multiplications every time we use the incomplete cosine transform C_{8;2} instead of C_8, thereby avoiding repeated calculations at the frequency-points with even coordinates. Therefore, we save 16 multiplications on this step of the algorithm, which results in the total number of used multiplications mc_{8,8}|_{step 1} = 104 - 16 = 88.

We can see that this process of improvement can be continued. Indeed, to avoid the repeated calculations in the groups T_{1,2}, T_{1,3}, and T_{2,1} at frequency-points with both coordinates multiples of 4, we can perform the incomplete cosine transforms C_{8;4} and K_{8;4}, without outputs at the points 0 and 4. The use of the two transforms C_{8;4} saves, in addition, two multiplications by \sqrt{2}/2, and the total number of multiplications reduces to mc_{8,8}|_{step 2} = 88 - 2 = 86. At last, we have only the intersections of the group T_{1,0} with T_{1,1} and T_{0,1} at (0, 0). Although these reiterations of calculations do not reduce the multiplicative complexity of the algorithm, we can perform the two incomplete cosine transforms K_{8;0} and C_{8;0}, avoiding the repeated calculations at the origin. As a result, we obtain a partition \sigma' of the lattice X_{8,8}. The sets of this partition and the 1-D DCTs used to calculate the 2-D DCT at frequency-points of these sets are given below in the form of the table:

T_{1,0}    {(0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0)}    C_8
T^3_{1,1}  {(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7)}           K_{8;0}
T^2_{1,2}  {(1,2), (2,4), (3,6), (5,2), (6,4), (7,6)}                  C_{8;4}
T^2_{1,3}  {(1,3), (2,6), (3,1), (5,7), (6,2), (7,5)}                  K_{8;4}
T^1_{1,4}  {(1,4), (3,4), (5,4), (7,4)}                                C_{8;2}
T^1_{1,5}  {(1,5), (3,7), (5,1), (7,3)}                                K_{8;2}
T^1_{1,6}  {(1,6), (3,2), (5,6), (7,2)}                                C_{8;2}
T^1_{1,7}  {(1,7), (3,5), (5,3), (7,1)}                                K_{8;2}
T^3_{0,1}  {(0,1), (0,2), (0,3), (0,4), (0,5), (0,6), (0,7)}           C_{8;0}
T^2_{2,1}  {(2,1), (4,2), (6,3), (2,5), (4,6), (6,7)}                  C_{8;4}
T^1_{4,1}  {(4,1), (4,3), (4,5), (4,7)}                                C_{8;2}
T^1_{6,1}  {(6,1), (2,3), (6,5), (2,7)}                                C_{8;2}

Thus, all reiterations of computation of the 2-D DCT in the tensor algorithm have been eliminated by using the incomplete cosine transforms of types I and II.

In the general N \ge 8 case, we can use a similar procedure in order to improve the tensor algorithm. So, for the next, k = 2, step, we can eliminate the intersections in the other remaining groups T of the covering \sigma^1, using the equalities (19.167). Then, we obtain the following covering:

\sigma^2 = ( (T_{1,p_2}, T^2_{1,p_2+2^{r-2}})_{p_2=0:(2^{r-2}-1)},\ (T^1_{1,p_2+2^{r-1}})_{p_2=0:(2^{r-1}-1)},\ (T_{2p_1,1}, T^2_{2p_1+2^{r-2},1})_{p_1=0:(2^{r-3}-1)},\ (T^1_{2p_1+2^{r-1},1})_{p_1=0:(2^{r-2}-1)} ),

wherein the sets T^2 contain 3·2^{r-2} frequency-points each. Continuing such an improvement, we can estimate the number, \Delta m'_{2^r}, of multiplications saved by this improvement as

\Delta m'_{2^r} \approx (1/3)(4^r - 4) - 2^r + 8,

where we consider that m(C_{2^k;2}) = m(C_{2^k}) - m(C_{2^{k-1}}) - 2^{k-1}, for k = 3:r, and m(C_{2^r;2^{r-1}}) = 2. Therefore, at the last step of the improvement, the total number of multiplications is reduced to

mc_{2^r,2^r}|_{step(r-1)} = mc_{2^r,2^r} - \Delta m'_{2^r} = (2/3)4^{r+1} - 2^{r-2} (r^2 + 7r + 10) - 20/3.    (19.172)

So, using the tensor representation of the shifted Fourier transform, we obtain three estimates for the multiplicative complexity of the 2-D DCT, which are given respectively by formulae (19.161), (19.165), and (19.172). For some integers r, the values of these estimates, together with the known estimates obtained by the polynomial and recursive algorithms, are given in Table 19.4. We should note that a few additional multiplications can be saved on the steps k = 2, 3, \dots, (r - 1) in the improved tensor algorithm if we succeed in calculating the incomplete 1-D DCT of type I, K_{2^r;2^k}, by a method similar to the pruning algorithm [39,41].

Example 19.18

We here describe the improved tensor algorithm of the 15 × 15-point DCT. The splitting of the 15 × 15-point DFT is performed by twelve 15-point DFTs and twelve 15-point modified DFTs, as is shown in (19.146). Therefore, the 15 × 15-point DFT can be split by twelve 15-point DCTs of
TABLE 19.4 The Number of Multiplications Required to Calculate the 2^r × 2^r-Point DCT by the SDFT, Tensor, Improved Tensor, Polynomial, and Recursive Algorithms

2^r × 2^r      mc_SDFT     mc_tensor   mc_step1    mc_step(r-1)   mc_polyn    mc_rec
8 × 8              110         104          88            86            96        112
16 × 16            582         536         476           460           512        640
32 × 32          2,870       2,480       2,236         2,164         2,560      3,328
64 × 64         13,590      10,816       9,820         9,508        12,288         --
128 × 128       62,678      45,568      41,532        40,228        57,344         --
256 × 256      283,734     188,032     171,772       166,436       262,144         --
512 × 512    1,266,518     766,208     700,924       679,332     1,179,648         --
1024 × 1024  5,591,382   3,098,624   2,836,988     2,750,116     5,242,880         --
type I, and twelve 15-point DCTs of type II. The number of multiplications required to calculate this 2-D DCT can be estimated as

mc_{15,15} = 12 [ m(K_{15}) + m(C_{15}) ].

Taking the known estimate m(C_{15}) = 18 [38] and considering that m(K_{15}) < m(C_{15}), we obtain mc_{15,15} < 24 m(C_{15}) = 432. However, this estimate can be essentially improved if we use the modified tensor algorithm. Indeed, by means of calculations similar to the ones given in Example 19.17, we can obtain the following splitting R(C_{15,15}; \sigma') of the 15 × 15-point DCT:

{ C_{15}, K_{15;0}, C_{15;0}, C_{15;0}, K_{15;5}, C_{15;5}, K_{15;3,5}, \dots, K_{15;3,5} (10 times), C_{15;3,5}, \dots, C_{15;3,5} (8 times) },    (19.173)

where K_{15;3,5} and C_{15;3,5} denote, respectively, the incomplete 15-point DCTs of types I and II wherein the outputs with numbers multiple to 3 and 5 are not calculated; outputs are calculated only for the eight points 1, 2, 4, 7, 8, 11, 13, and 14. In the incomplete cosine transforms K_{15;5} and C_{15;5}, the outputs with numbers 0, 5, and 10 are not calculated. It is not difficult to see that the 15-point DCT can be split into three 5-point DCTs. For that, we consider the sets T'_{p_0} = {(p_0 + 6k) mod 15; k = 0:4}, where p_0 is a number of the integer interval [0, 14]. Then, the three subsets T'_0, T'_5, and T'_{10} form a partition \omega' of the set X_{15} = {0, 1, \dots, 14}: T'_0 = {0, 6, 12, 3, 9}, T'_5 = {5, 11, 2, 8, 14}, T'_{10} = {10, 1, 7, 13, 4}. This partition corresponds to the well-known Chinese remainder theorem mapping for N = 15. The restriction of the 15-point DCT on each subset T' \in \omega' is a 5-point DCT. For instance, for p \in T'_0, we have the following:

C_p = C_{(0;k)} = \sum_{n=0}^{14} f_n \cos[ (\pi/15) n(0 + 6k) ] = \sum_{n=0}^{14} f_n \cos[ (2\pi/5) nk ] = \sum_{n=0}^{4} (f_n + f_{n+5} + f_{n+10}) \cos[ (\pi/5) n(2k) ],    k = 0:4.

To calculate the incomplete 15-point DCT of type II, C_{15;3,5}, it is sufficient to calculate two incomplete 5-point DCTs, C_{5;0}, which requires 12 multiplications. Therefore, using the splitting in Equation 19.173, we can estimate the multiplicative complexity of the 15 × 15-point DCT as

mc_{15,15} \le m(C_{15}) + 3m(C_{15;0}) + 2m(C_{15;5}) + 18 m(C_{15;3,5}) = 6·18 + 18·12 = 324.

We note, for comparison, that the same estimate has been obtained in [38].
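The CRT-style partition of X_15 and the reduction of the restricted 15-point cosine sum to a 5-point sum can be checked numerically; the sketch below is ours and assumes the type-I kernel cos(\pi n p / 15) used in the displayed computation:

```python
import numpy as np

# the partition of X_15 = {0, ..., 14} into T'_0, T'_5, T'_10
T = {p0: [(p0 + 6*k) % 15 for k in range(5)] for p0 in (0, 5, 10)}
assert T[0] == [0, 6, 12, 3, 9]
assert T[5] == [5, 11, 2, 8, 14] and T[10] == [10, 1, 7, 13, 4]
assert sorted(T[0] + T[5] + T[10]) == list(range(15))

# restriction to T'_0 equals a 5-point cosine sum of the folded signal
f = (np.arange(15.0)**2) % 7                 # arbitrary test signal
n = np.arange(15)
g = f[:5] + f[5:10] + f[10:]                 # f_n + f_{n+5} + f_{n+10}
for k in range(5):
    c15 = np.sum(f*np.cos(np.pi*n*(6*k)/15))
    c5 = np.sum(g*np.cos(2*np.pi*np.arange(5)*k/5))
    assert np.isclose(c15, c5)
```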
19.9 3-D Paired Representation

In this section, we introduce a more advanced form of representation of multidimensional signals and images than the tensor representation. Such a form is associated with special partitions of the spatial lattice in the frequency domain that reveal the multidimensional DTs, such as the Fourier, Hadamard, and cosine transforms. We dwell in detail on the 2-D case, since the three- and more-dimensional cases are described similarly. In the tensor form and in the new form of representation proposed here, multidimensional signals and images are represented by sets of 1-D splitting-signals, and the dimensions of the signals change only the cardinality of such sets. The tensor representation of multidimensional signals is associated with an irreducible covering \sigma_J that is not a partition of the lattice. This is why there is a redundancy in the calculations: the same spectral information is contained in different groups of frequency-points, and the splitting-signals together contain more points than the volume of the represented multidimensional signal. As an illustration, Figure 19.23 shows the 256 × 256 image in part (a), along with the image of 384 1-D splitting-signals in (b), which lie on the rows of this image. These 1-D signals are also called image-signals; they describe uniquely the original image, and at the same time they split the mathematical structure of the 2-D DFT (shown in (c)) into a set of separate 1-D transforms (shown all together as an image in (d)). The 2-D
FIGURE 19.23 Conventional and tensor representations of the image and its spectrum: (a) tree image 256 × 256, (b) the image of all 384 splitting-signals, (c) the 256 × 256-point DFT of the image, and (d) the image of the 256-point DFTs of the splitting-signals. (All DFTs are shifted to the centers.)
DFT is thus considered as a unique set of separate 1-D DFTs. The image in (b) is the tensor representation of the image, or the tensor transform of the image,

\chi = \chi_\sigma : f \to { f_{T_{p,s}};\ T_{p,s} \in \sigma_J }.    (19.174)

One can note that a few splitting-signals are well expressed, which can also be seen from the energy plot of all splitting-signals given in Figure 19.24. For this image, the high energy is concentrated in the splitting-signals of numbers 1, 129, 257, 172, 52, 258, and 2, which correspond to the generators (p, s) = (0,1), (128,1), (1,0), (171,1), (51,1), (1,2), and (1,1), respectively. The processing of only these signals may lead to good results, for instance, in image enhancement [17,18]. One such example has been shown in Figure 19.5, where the image is enhanced by the one splitting-signal of the generator (p, s) = (7, 1). However, all together the splitting-signals contain 384 × 256 points, which is much greater than the original size 256 × 256.
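The signal numbering quoted above is consistent with enumerating the 3N/2 generators as (p, 1) for p = 0:(N-1) followed by (1, 2s) for s = 0:(N/2-1); this ordering is our inference from the quoted numbers, not stated explicitly in the text:

```python
N = 256
gens = [(p, 1) for p in range(N)] + [(1, 2*s) for s in range(N // 2)]
assert len(gens) == 3*N//2 == 384            # the 384 splitting-signals

# 1-based signal numbers quoted in the text for the strongest signals
number = {g: i + 1 for i, g in enumerate(gens)}
for g, k in [((0, 1), 1), ((128, 1), 129), ((1, 0), 257), ((171, 1), 172),
             ((51, 1), 52), ((1, 2), 258), ((1, 1), 2)]:
    assert number[g] == k

# redundancy: the splitting-signals hold more samples than the image
assert 384*N > N*N
```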
19-43
Multidimensional Discrete Unitary Transforms
FIGURE 19.24 Energy of all 384 splitting-signals of the tree image.
In order to remove such a redundancy, we consider the concept of the paired representation of multidimensional signals and images. Unlike the tensor representation, the paired representation allows for distributing the spectral information of multidimensional signals and images over disjoint sets of frequency-points. This property makes the paired representation the most attractive in multidimensional signal and image processing, as well as in the construction of effective algorithms for calculating multidimensional transforms. As examples, we first consider the paired representation of images with respect to the discrete unitary Fourier, Hadamard, Hartley, and cosine transforms on the rectangular lattices. We will also describe the paired representation of images defined on the hexagonal lattices.
19.9.1 2D-to-3D Paired Transform

A complete system of functions can be derived from the 2-D DFT as a system that splits the transform [12,21]. In other words, there exists a system of functions that reveals completely the mathematical structure of the 2-D DFT when considering it as a minimal composition of short 1-D DFTs. Such a system is 3-D, i.e., the system of functions is numbered by three parameters, namely, two parameters for the spatial frequencies and one parameter for the time. The change in time determines a series of functions, and the total number of triples numbering the system of functions, since the system is complete, equals the size of the 2-D DFT, let us say N × N. The complete systems of such functions, which are called the paired functions, exist also in the one- and multidimensional cases.

Let N be not a prime number. We consider an irreducible covering σ_J of the lattice X_{N,N}, and for each generator (p, s) ∈ J, we determine the characteristic functions of the sets V_{p,s,t} = {(n1, n2); n1 p + n2 s = t mod N} as follows:

    χ_{p,s,t}(n1, n2) = { 1, (n1, n2) ∈ V_{p,s,t},
                          0, otherwise,           t = 0 : (N − 1).    (19.175)

These functions describe the tensor transformation of a 2-D sequence or image f with respect to the Fourier transformation. Indeed, the components f_{p,s,t} of the splitting-signal f_{T_{p,s}} are defined as

    f_{p,s,t} = χ_{p,s,t} ∘ f = Σ_{n1=0}^{N−1} Σ_{n2=0}^{N−1} χ_{p,s,t}(n1, n2) f_{n1,n2},    t = 0 : (N − 1).    (19.176)

Therefore, the tensor transformation χ_s is described by the following system of binary functions:

    χ_s = {χ_{p,s,t}; T_{p,s} ∈ σ_J, t = 0 : (N − 1)}.    (19.177)

The tensor transformation is not orthogonal, but, by means of this transformation, one can synthesize unitary transformations.

Definition 19.2: Let L be a nontrivial divisor of the number N and let W_L = exp(−2πj/L). For a given frequency-point (p, s) ∈ X_{N,N} and integer t ∈ {0, 1, 2, ..., N/L − 1}, the function

    χ′_{p,s,t}(n1, n2) = χ′_{p,s,t;L}(n1, n2) = Σ_{k=0}^{L−1} W_L^k χ_{p,s,t+kN/L}(n1, n2)    (19.178)

is called an L-paired function. The operation of the paired functions over a 2-D sequence f of size N × N determines the components, which can be calculated from the components of the tensor representation by

    f′_{p,s,t} = χ′_{p,s,t} ∘ f = Σ_{k=0}^{L−1} f_{p,s,t+kN/L} W_L^k,    t = 0 : (N/L − 1).    (19.179)

The N/L-point signal

    f_{T′_{p,s}} = (f′_{p,s,0}, f′_{p,s,1}, ..., f′_{p,s,N/L−1})    (19.180)
determines the 2-D DFT of f at frequency-points of the following subset of the cyclic group T_{p,s}:

    T′ = T′_{p,s;L} = {((kL + 1)p, (kL + 1)s); k = 0 : (N/L − 1)}.    (19.181)

Indeed, the following formula holds:

    F_{(kL+1)p,(kL+1)s} = Σ_{t=0}^{N/L−1} f′_{p,s,t} W^t W_{N/L}^{kt},    (19.182)

where k = 0 : (N/L − 1). Thus, the N/L-point DFT of the modified signal

    g_{T′_{p,s}} = f_{T′_{p,s}} W̄ = (f′_{p,s,0}, f′_{p,s,1} W, ..., f′_{p,s,N/L−1} W^{N/L−1})    (19.183)

defines the 2-D DFT at frequency-points of the subset T′_{p,s;L}. Here W̄ denotes the diagonal matrix with coefficients 1, W, W², ..., W^{N/L−1} on the diagonal. The signal f_{T′_{p,s}} is called the paired splitting-signal, and the signal g_{T′_{p,s}} is the modified paired splitting-signal.

It is interesting to note that the subsets T′_{p,s} compose a unique partition σ′ = (T′_{p,s}) of the lattice X_{N,N}. Indeed, each subset T′_{p,s} itself represents an orbit of the point (p, s) with respect to the movement group G = {(kL + 1) mod N; k = 0 : (N/L − 1)}. All orbits are equal or disjoint between themselves, and together they compose the whole lattice. In the general case, such a partition is defined as

    σ′ = σ′_{J′} = ( ∪_{L∈D} ∪_{(p,s)∈J′} T′_{p,s;L} ),    (19.184)

for a certain subset J′ of generators (p, s) and a set D of divisors of the number N. In many cases of N, the set D consists of only one divisor.

19.9.2 N Is a Power of Two

In this section, we consider the case of most interest, when N is a power of two. We describe the paired representation of 2-D signals or images f of size N × N, as well as the paired transform of f, on the example when N = 8. The general case, when N = 2^r, r > 1, is also considered.

Example 19.19

Let L = 2 and let X be the lattice X_{8,8}. To compose a partition of the lattice, we first note that each subset T′_{p,s} is the orbit of the point (p, s) with respect to the movement group G = {1, 3, 5, 7}. For instance, the subset

    T′_{1,1} = T′_{1,1;2} = {(1, 1), (3, 3), (5, 5), (7, 7)}

is the orbit of the point (1, 1). The point (1, 1) "is moving" starting (at time t = 0) from itself and "returning" back in four units of time (at time t = 4) as (1, 1) → (3, 3) → (5, 5) → (7, 7) → (1, 1). In the orbit T′_{3,1}, the point "is moving" as follows: (3, 1) → (1, 3) → (7, 5) → (5, 7) → (3, 1).

The set of all orbits dividing the lattice X_{8,8} is unique and can be described as follows:

    T′_{0,1}, T′_{1,1}, T′_{2,1}, T′_{3,1}, T′_{4,1}, T′_{5,1}, T′_{6,1}, T′_{7,1}, T′_{1,0}, T′_{1,2}, T′_{1,4}, T′_{1,6}    (cardinality 4)
    T′_{0,2}, T′_{2,2}, T′_{4,2}, T′_{6,2}, T′_{2,0}, T′_{2,4}                                                          (cardinality 2)
    T′_{0,4}, T′_{4,4}, T′_{4,0}                                                                                        (cardinality 1)
    T′_{0,0}                                                                                                            (cardinality 1)
    (19.185)

Twenty-two orbits divide the lattice, and the orbits shown on the same row have equal cardinalities, which are given in the right column of this table. For instance, the point (0, 1) runs around its orbit in time t = 4, and the orbit of the point (0, 2) is twice shorter, (0, 2) → (0, 6) → (0, 2). The generators of these orbits are shown in Figure 19.25, part (a), by the filled circles. The locations of the frequency-points of the orbits T′_{p,s} for the generators (p, s) = (1, 1), (3, 1), and (6, 1) are shown in parts (b) through (d), respectively. Each of the twenty-two splitting-signals defined for the generators of the subsets, or orbits T′, of the partition σ′ determines the values of the 2-D DFT at frequency-points of its subset. For instance, the signal f_{T′_{1,1}} carries the spectral information of a 2-D signal f at the frequency-points (1, 1), (3, 3), (5, 5), and (7, 7). According to the table of (19.185), the 2-D 8 × 8-point DFT is split into twelve 4-point DFTs, six 2-point DFTs, and four 1-point DFTs (which are identity transformations).
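The orbit bookkeeping of this example can be enumerated directly; the short sketch below (our own helper code, not from the text) reproduces the counts in the table of (19.185).

```python
# Orbits of the group G = {1, 3, 5, 7} acting on the lattice 8 x 8:
# T'_{p,s} = {(u*p mod 8, u*s mod 8); u odd}.
N = 8
seen, orbits = set(), []
for p in range(N):
    for s in range(N):
        if (p, s) in seen:
            continue
        orbit = {((u * p) % N, (u * s) % N) for u in range(1, 2 * N, 2)}
        orbits.append(orbit)
        seen |= orbit

sizes = [len(o) for o in orbits]
assert len(orbits) == 22                    # 22 orbits partition the lattice
assert sum(sizes) == N * N                  # they cover all 64 points
assert sizes.count(4) == 12 and sizes.count(2) == 6 and sizes.count(1) == 4
```

The twelve size-4, six size-2, and four size-1 orbits correspond exactly to the twelve 4-point, six 2-point, and four 1-point DFTs of the splitting.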
FIGURE 19.25 (a) Twenty-two generators on the lattice 8 × 8, and the orbits of the points (b) (1, 1), (c) (3, 1), and (d) (6, 1). (The generators and elements of the orbits are shown by the filled circles.)
This table also shows the way of composing the complete set of basis paired functions. Indeed, it follows from the definition (19.178) that the 2-paired, or simply paired, functions are calculated by

    χ′_{p,s,t}(n1, n2) = χ_{p,s,t}(n1, n2) − χ_{p,s,t+4}(n1, n2)
                       = {  1, if n1 p + n2 s = t mod 8,
                           −1, if n1 p + n2 s = t + 4 mod 8,
                            0, otherwise,                        (19.186)

where t = 0 : 3. For instance, the values of the function χ′_{3,1,2}(n1, n2) can be written in the form of the following mask (rows n1 = 0 : 7, columns n2 = 0 : 7):

    χ′_{3,1,2} = [  0   0   1   0   0   0  −1   0
                    0   0   0  −1   0   0   0   1
                   −1   0   0   0   1   0   0   0
                    0   1   0   0   0  −1   0   0
                    0   0  −1   0   0   0   1   0
                    0   0   0   1   0   0   0  −1
                    1   0   0   0  −1   0   0   0
                    0  −1   0   0   0   1   0   0 ].

The 64 triplets (p, s, t), or numbers of the complete system of paired functions χ′_{p,s,t}, are composed from the generators (p, s) and the time variable t, which runs through the numbers 0, 1, 2, 3 for the orbits in the first row of the table of (19.185). For the short orbits in the second row, t takes only the values 0 and 2. Indeed, both coordinates of the points on these orbits are even; therefore the conditions (n1 p + n2 s = t mod 8) and (n1 p + n2 s = t + 4 mod 8) do not hold for odd t = 1 and 3. In other words, for these triples (p, s, t) we have χ′_{p,s,t} ≡ 0, and we do not consider such functions. There is no movement (or t = 0) in the single-point orbits for (p, s) = (0, 4), (4, 4), (4, 0), and (0, 0). The set of 64 triplets for the complete system of paired functions is given in the following table:

    (p, s)                                                                             t
    (0,1), (1,1), (2,1), (3,1), (4,1), (5,1), (6,1), (7,1), (1,0), (1,2), (1,4), (1,6)  0, 1, 2, 3
    (0,2), (2,2), (4,2), (6,2), (2,0), (2,4)                                            0, 2
    (0,4), (4,4), (4,0)                                                                 0
    (0,0)                                                                               0
    (19.187)

Figure 19.26 shows all 64 basis paired functions χ′_{p,s,t}, which are placed in the order given in the above table, from the top left to the bottom right. For instance, the first
FIGURE 19.26 The complete system of 2-D basis functions of the 8 × 8-paired transform. (Values of +1 are shown by the filled circles, and −1 by the open circles.)
four functions have the numbers (0, 1, t), where t = 0 : 3. The filled circle is used for the value +1 and the open circle for −1. It is not difficult to see that all these functions are orthogonal. The complete set of paired functions defines the paired representation of the 2-D sequence or image f,

    f → {f_{T′_{p,s}}; T′_{p,s} ∈ σ′}.    (19.188)

The components of the paired splitting-signals f_{T′_{p,s}} are calculated from the components of the splitting-signals f_{T_{p,s}} of the tensor representation of f by

    f′_{p,s,t} = f_{p,s,t} − f_{p,s,t+4},    t ∈ {0, 1, 2, 3}.    (19.189)
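The orthogonality and completeness of the 64 functions can be checked numerically; the sketch below (our code) builds the functions from the triplet table of (19.187).

```python
import numpy as np

N = 8
gens4 = [(0,1),(1,1),(2,1),(3,1),(4,1),(5,1),(6,1),(7,1),(1,0),(1,2),(1,4),(1,6)]
gens2 = [(0,2),(2,2),(4,2),(6,2),(2,0),(2,4)]
gens1 = [(0,4),(4,4),(4,0),(0,0)]
triplets = ([(p, s, t) for p, s in gens4 for t in range(4)]
            + [(p, s, t) for p, s in gens2 for t in (0, 2)]
            + [(p, s, 0) for p, s in gens1])

n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
def chi(p, s, t):
    """Paired function: +1 on the line t, -1 on the line t+4, 0 elsewhere."""
    line = (n1 * p + n2 * s) % N
    return (line == t).astype(float) - (line == (t + 4) % N).astype(float)

M = np.array([chi(p, s, t).ravel() for p, s, t in triplets])   # 64 x 64
assert len(triplets) == 64
G = M @ M.T                                   # Gram matrix of the system
assert np.allclose(G, np.diag(np.diag(G)))    # pairwise orthogonal functions
assert np.linalg.matrix_rank(M) == 64         # a complete (invertible) system
```

Because the matrix M is invertible, the paired transform can be inverted exactly, which is what makes the representation free of redundancy.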
The transformation

    χ′: f → {f′_{p,s,t}; T′_{p,s} ∈ σ′, t = 0 : (card T′_{p,s} − 1)}    (19.190)

is called the paired transformation. Thus the paired transform is the representation of the image in the form of 1-D splitting-signals; the components of all splitting-signals together compose the paired transform.

Let (p, s) = (3, 1) and let f be the following image of size 8 × 8:

    f = [ 1 2 1 3 1 2 1 3
          2 0 1 2 2 4 2 1
          1 3 2 2 1 1 1 2
          4 1 0 1 3 1 3 1
          2 4 1 2 1 3 2 2
          2 4 1 2 1 2 2 1
          2 4 1 2 1 5 2 1
          2 4 1 2 1 1 3 2 ].

We will calculate the 2-D DFT of f at frequency-points of the orbit of the point (3, 1). For that, we first write all values of t in the equation 3n1 + n2 = t mod 8 in the form of the following table:

    ||t|| = [ 0 1 2 3 4 5 6 7
              3 4 5 6 7 0 1 2
              6 7 0 1 2 3 4 5
              1 2 3 4 5 6 7 0
              4 5 6 7 0 1 2 3
              7 0 1 2 3 4 5 6
              2 3 4 5 6 7 0 1
              5 6 7 0 1 2 3 4 ],    t = (3n1 + n2) mod 8.

According to this table, the components of the splitting-signal f_{T′_{3,1}} are defined as follows:

    f′_{3,1,0} = (f_{0,0} + f_{1,5} + f_{2,2} + f_{3,7} + f_{4,4} + f_{5,1} + f_{6,6} + f_{7,3})
               − (f_{0,4} + f_{1,1} + f_{2,6} + f_{3,3} + f_{4,0} + f_{5,5} + f_{6,2} + f_{7,7})
               = (1 + 4 + 2 + 1 + 1 + 4 + 2 + 2) − (1 + 0 + 1 + 1 + 2 + 2 + 1 + 2) = 7,
    f′_{3,1,1} = (f_{0,1} + f_{1,6} + f_{2,3} + f_{3,0} + f_{4,5} + f_{5,2} + f_{6,7} + f_{7,4})
               − (f_{0,5} + f_{1,2} + f_{2,7} + f_{3,4} + f_{4,1} + f_{5,6} + f_{6,3} + f_{7,0})
               = (2 + 2 + 2 + 4 + 3 + 1 + 1 + 1) − (2 + 1 + 2 + 3 + 4 + 2 + 2 + 2) = −2,
    f′_{3,1,2} = (f_{0,2} + f_{1,7} + f_{2,4} + f_{3,1} + f_{4,6} + f_{5,3} + f_{6,0} + f_{7,5})
               − (f_{0,6} + f_{1,3} + f_{2,0} + f_{3,5} + f_{4,2} + f_{5,7} + f_{6,4} + f_{7,1})
               = (1 + 1 + 1 + 1 + 2 + 2 + 2 + 1) − (1 + 2 + 1 + 1 + 1 + 1 + 1 + 4) = −1,
    f′_{3,1,3} = (f_{0,3} + f_{1,0} + f_{2,5} + f_{3,2} + f_{4,7} + f_{5,4} + f_{6,1} + f_{7,6})
               − (f_{0,7} + f_{1,4} + f_{2,1} + f_{3,6} + f_{4,3} + f_{5,0} + f_{6,5} + f_{7,2})
               = (3 + 2 + 1 + 0 + 2 + 1 + 4 + 3) − (3 + 2 + 3 + 3 + 2 + 2 + 5 + 1) = −5.

This splitting-signal is modified as f_{T′_{3,1}} W̄, where

    W̄ = diag{1, e^{−jπ/4}, e^{−j2π/4}, e^{−j3π/4}} = diag{1, a(1 − j), −j, −a(1 + j)}

and a = √2/2 = 0.7071. The four-point DFT of the modified splitting-signal

    g_{T′_{3,1}} = f_{T′_{3,1}} W̄ = {7, −1.4142 + j1.4142, j, 3.5355 + j3.5355}

is calculated as follows:

    [ 1   1   1   1 ] [ 7                 ]   [ 9.1213 + j5.9497 ]
    [ 1  −j  −1   j ] [ −1.4142 + j1.4142 ] = [ 4.8787 + j3.9497 ]
    [ 1  −1   1  −1 ] [ j                 ]   [ 4.8787 − j3.9497 ]
    [ 1   j  −1  −j ] [ 3.5355 + j3.5355  ]   [ 9.1213 − j5.9497 ].

The obtained 4-point DFT coincides with the 8 × 8-point DFT of the image f at the four frequency-points of the orbit T′_{3,1}, i.e.,

    [F_{3,1}, F_{1,3}, F_{7,5}, F_{5,7}]ᵀ = [9.1213 + j5.9497, 4.8787 + j3.9497, 4.8787 − j3.9497, 9.1213 − j5.9497]ᵀ.

We can also see from this example that the pairs of complex conjugate components F_{3,1} and F_{5,7}, and F_{1,3} and F_{7,5}, are obtained from the same splitting-signal. In general, if (p1, p2) lies on the orbit T′_{p,s}, then the point (N − p1, N − p2) also lies on this orbit.

In the case N = 2^r, when r > 1, the partition σ′ = (T′) of the lattice X_{2^r,2^r} is composed similarly to the case N = 8. For integers n = 0 : (r − 1), we define the sets of generators

    J_{2^n,2^n} = {(1, 0), (1, 1), ..., (1, 2^n − 1), (0, 1), (2, 1), (4, 1), ..., (2^n − 2, 1)}.
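The worked example above can be replayed in a few lines of NumPy (a sketch; the array names are ours):

```python
import numpy as np

# The 8x8 image f from the example (rows n1, columns n2)
f = np.array([[1,2,1,3,1,2,1,3],
              [2,0,1,2,2,4,2,1],
              [1,3,2,2,1,1,1,2],
              [4,1,0,1,3,1,3,1],
              [2,4,1,2,1,3,2,2],
              [2,4,1,2,1,2,2,1],
              [2,4,1,2,1,5,2,1],
              [2,4,1,2,1,1,3,2]], dtype=float)

N, p, s = 8, 3, 1
n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
t_map = (p * n1 + s * n2) % N
fps = np.array([f[t_map == t].sum() for t in range(N)])   # tensor components
fpd = fps[:4] - fps[4:]                                   # paired components (19.189)
assert np.allclose(fpd, [7, -2, -1, -5])

W = np.exp(-2j * np.pi / N)
g = fpd * W ** np.arange(4)            # modified splitting-signal (19.183)
G4 = np.fft.fft(g)                     # its 4-point DFT
F = np.fft.fft2(f)
orbit = [((2*k + 1) * p % N, (2*k + 1) * s % N) for k in range(4)]
assert orbit == [(3, 1), (1, 3), (7, 5), (5, 7)]
assert np.allclose(G4, [F[a, b] for a, b in orbit])
assert np.allclose(G4[0], 9.1213 + 5.9497j, atol=1e-3)
```

The four values of `G4` match the four 2-D DFT samples on the orbit, reproducing the numbers obtained by hand above.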
The totality of the 3·2^r − 2 subsets T′_{2^n p, 2^n s},

    σ′ = ( ∪_{n=0}^{r−1} ∪_{(p,s) ∈ J_{2^{r−n},2^{r−n}}} T′_{2^n p, 2^n s} ) ∪ {(0, 0)},    (19.191)

is the partition of the lattice X. The complete set of paired functions χ′ = {χ′_{p,s,t}; (p, s, t) ∈ U} can thus be defined by the following set of triplets:

    U = ∪_{n=0}^{r−1} {2^n (p, s, t); (p, s) ∈ J_{2^{r−n},2^{r−n}}, t = 0 : (2^{r−n−1} − 1)} ∪ {(0, 0, 0)}.    (19.192)

The paired transform χ′: f → {f′_{p,s,t}; (p, s, t) ∈ U} does not require multiplications, but only operations of addition and subtraction. For L = 2, the equation in (19.182) takes the following form:

    F_{(2k+1)p,(2k+1)s} = Σ_{t=0}^{2^{r−1}−1} f′_{p,s,t} W^t W_{2^{r−1}}^{kt},    k = 0 : (2^{r−1} − 1),    (19.193)

where the components

    f′_{p,s,t} = f_{p,s,t} − f_{p,s,t+2^{r−1}},    t = 0 : (2^{r−1} − 1).

It directly follows from (19.193) and the given set of triplets U that the 2^r × 2^r-point DFT is split by the paired transform into a set of 3·2^r − 2 short DFTs, namely 3·2^{r−1} 2^{r−1}-point DFTs, 3·2^{r−2} 2^{r−2}-point DFTs, ..., six 2-point DFTs, and four 1-point identity transforms, i.e.,

    R(F_{2^r,2^r}; σ′) = {3·2^{r−1} F_{2^{r−1}}, 3·2^{r−2} F_{2^{r−2}}, ..., 6 F_2, 4 F_1}    (r > 2).    (19.198)
When estimating this number, we use the fact that the 2^n-point fast DFT requires m_{2^n} = 2^{n−1}(n − 3) + 2 multiplications, when n ≥ 3 [11,22]. As an illustration of the concept of the paired representation, Figure 19.27 shows the tree image of size 256 × 256 in part (a), along with the splitting-signal f_{T′_{3,1}} of length 128 together with the real part of the modified signal in (b), the 128-point DFT of the modified signal in (c), and, in (d), the samples of the 2-D DFT of the image at frequency-points of the subset T′_{3,1} on the lattice X_{256,256}, at which the 2-D DFT can be filled in by the calculated 1-D DFT of the modified signal.
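The triplet bookkeeping of (19.192) can also be checked with a short sketch (the helper names `J` and `U` are ours):

```python
def J(m):
    """Generator set J_{m,m} = {(1, s); s = 0..m-1} U {(2p, 1); 2p = 0, 2, ..., m-2}."""
    return [(1, s) for s in range(m)] + [(2 * p, 1) for p in range(m // 2)]

def U(r):
    """Set of triplets (19.192) numbering the 2^r x 2^r paired functions."""
    tri = {(0, 0, 0)}
    for n in range(r):
        m = 2 ** (r - n)
        for (p, s) in J(m):
            for t in range(m // 2):
                tri.add((2**n * p, 2**n * s, 2**n * t))
    return tri

# The system is complete: exactly 4^r triplets, e.g., 64 triplets for r = 3.
assert len(U(2)) == 16
assert len(U(3)) == 64
assert len(U(4)) == 256
```

The count 4^r = N² confirms that the paired representation carries exactly as many values as the image itself, with no redundancy.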
FIGURE 19.27 (a) Tree image 256 × 256, (b) the paired splitting-signal f_{T′_{3,1}} and the real part of the modified signal, (c) the 1-D DFT of the modified signal, and (d) the arrangement of the values of the 1-D DFT at frequency-points of the subset T′_{3,1}. (The 1-D DFT is shown in the absolute scale.)
The paired representation of 2-D signals or images in the form of separate 1-D splitting-signals of different lengths is the transformation of the 2-D data into 3-D space, which is a space of 2-D frequency with 1-D time:

    χ′: f_{n1,n2} → {f_{T′_{p,s}}} = {f′_{p,s,t}}.    (19.199)

The paired transform is not separable, but the paired splitting-signals carry the spectral information of f_{n1,n2} at disjoint subsets of frequency-points, and the processing of 2-D signals and images thus can be reduced to processing their 1-D splitting-signals. To illustrate such splitting-signals for images of large sizes, Figure 19.28 shows the tree image f_{n1,n2} of size 256 × 256 in part (a), along with the totality of all 766 splitting-signals of the image in (b). The first 384 splitting-signals, of length 128 each, are shown in the form of the "image" 384 × 128. The next 192 splitting-signals, of length 64 each, are shown in the form of the image 192 × 64; the splitting-signals of length 32 are shown in the form of the image 96 × 32; and so on. The whole picture represents the paired transform of the tree image. The set of the 1-D DFTs of all splitting-signals is shown in the form of a similar figure in (c). These 1-D DFTs represent the splitting of the 2-D DFT, by frequency-points which are distributed over disjoint subsets, or orbits, in the lattice 256 × 256. The number of elements in each of the images (b) and (c) equals 256². As compared with the tensor representation of the tree image and the splitting of the 2-D DFT of the image, which are shown in Figure 19.23, the paired representation provides the representation of the image and the splitting of the 2-D DFT without redundancy.

FIGURE 19.28 (a) Tree image ({f_{n,m}}), (b) the splitting-signals of lengths 128, 64, 32, 16, 8, 4, 2, 1 ({f′_{p,s,t}}), and (c) the 1-D DFTs of the modified splitting-signals ({F_{(2k+1)p,(2k+1)s}}). (The transforms are shown in the absolute scale and shifted to the center.)

19.9.3 N Is a Power of Odd Prime

For the L^r × L^r case, when L > 2 is a prime number, the complete set of 2-D paired functions

    χ′ = {χ′_{p,s,t}; (p, s, t) ∈ U}

is defined by the following set of triplets:

    U = ∪_{n=0}^{r−1} ∪_{k=1}^{L−1} U_{k;n} ∪ {(0, 0, 0)},    (19.200)

where the disjoint sets U_{k;n} are defined as

    U_{k;n} = {L^n (k, kp2, t), L^n (kLp1, k, t); p2 = 0 : (L^{r−n} − 1), p1, t = 0 : (L^{r−n−1} − 1)}.    (19.201)

The number of all triplets (p, s, t) in Equation 19.200 is equal to L^{2r}. The set U is not unique; one can replace, for instance, each triplet (p, s, t) in U by (s, p, t), but that will change only the numbering of the paired basis functions, and the sign for some of them. Therefore, the new complete system of paired functions will be similar to χ′. The set J′ of generators (p, s) in the triplets (p, s, t) ∈ U is taken from the partition of the lattice X_{L^r,L^r} by the subsets T′_{p,s}, which are considered to be

    σ_{J′} = ( ∪_{k=1:(L−1)} ∪_{n=0:(r−1)} ∪_{(p1,p2) ∈ kL^n J_{L^{r−n},L^{r−n}}} T′_{p1,p2;L} , (0, 0) ).    (19.202)
In other words, this set equals

    J′ = J′_{L^r,L^r} = ∪_{n=0}^{r−1} ∪_{k=1}^{L−1} kL^n J_{L^{r−n},L^{r−n}}    (19.203)

and has the cardinality (L + 1)(L^r − 1) + 1. In the paired representation, the set of 1-D DFTs which split the 2-D DFT of order L^r × L^r therefore consists of (L + 1)(L^r − 1) + 1 transforms, (L² − 1)L^{r−1} of which are L^{r−1}-point DFTs, (L² − 1)L^{r−2} are L^{r−2}-point DFTs, ..., and (L² − 1)L are L-point DFTs. The number of operations of multiplication which are used for calculating the L^r × L^r-point DFT by such a splitting is estimated as

    m′_{L^r,L^r} ≤ (L² − 1) Σ_{n=1}^{r−1} L^{r−n} m_{L^{r−n}} + (L − 1)[L^{2r} − (L + 1)L^r + 2].    (19.204)

The number in the square brackets is referred to as the number of multiplications by all twiddle factors that are used for the calculation of the modified splitting-signals of Equation 19.183. It is supposed that each component f′_{p,s,t} of the paired transform is calculated directly by the formula (19.179), with (L − 1) operations of complex multiplication by twiddle factors.
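Before turning to the 9 × 9 example, the L = 3 machinery can be verified numerically; the sketch below (our code, not the authors') checks that the 3-point DFT of a modified 3-paired splitting-signal reproduces the 2-D DFT on the orbit of a generator.

```python
import numpy as np

N, L = 9, 3
rng = np.random.default_rng(7)
f = rng.integers(0, 5, (N, N)).astype(float)
F = np.fft.fft2(f)
W9 = np.exp(-2j * np.pi / 9)
W3 = np.exp(-2j * np.pi / 3)

n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
p, s = 2, 1
t_map = (p * n1 + s * n2) % N
fps = np.array([f[t_map == t].sum() for t in range(N)])     # tensor components
# 3-paired components, per (19.179) with L = 3:
fpd = np.array([fps[t] + fps[t+3] * W3 + fps[t+6] * W3**2 for t in range(3)])
G3 = np.fft.fft(fpd * W9 ** np.arange(3))   # 3-point DFT of the modified signal
for k in range(3):
    assert np.allclose(G3[k], F[(3*k + 1) * p % N, (3*k + 1) * s % N])
```

Each of the (L − 1) = 2 twiddle multiplications per component appears explicitly in the line that forms `fpd`, matching the operation count assumed in (19.204).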
Example 19.22 (9 × 9-Paired Transformation)

Let N = 9, and let f be a sequence of size 9 × 9. The square lattice X_{9,9} can be divided by the following totality of subsets:

    σ′ = ( T′_{1,0}, T′_{1,1}, T′_{1,2}, T′_{1,3}, T′_{1,4}, T′_{1,5}, T′_{1,6}, T′_{1,7}, T′_{1,8}, T′_{0,1}, T′_{3,1}, T′_{6,1},
           T′_{2,0}, T′_{2,2}, T′_{2,4}, T′_{2,6}, T′_{2,8}, T′_{2,1}, T′_{2,3}, T′_{2,5}, T′_{2,7}, T′_{0,2}, T′_{3,2}, T′_{6,2},
           T′_{3,0}, T′_{3,3}, T′_{3,6}, T′_{0,3}, T′_{6,0}, T′_{6,6}, T′_{6,3}, T′_{0,6}, T′_{0,0} ).

The first 24 subsets T′_{p,s} of this partition (which lie in the first two rows) consist of three elements each, and the remaining nine subsets are 1-point each. Using the tensor representation of f with respect to the 2-D DFT, it is not difficult to calculate all 33 splitting-signals of the 3-paired representation of f:

    f_{n,m} → { f_{T′_{1,0}}, f_{T′_{1,1}}, ..., f_{T′_{1,8}}, f_{T′_{0,1}}, f_{T′_{3,1}}, f_{T′_{6,1}},
                f_{T′_{2,0}}, f_{T′_{2,2}}, f_{T′_{2,4}}, f_{T′_{2,6}}, f_{T′_{2,8}}, f_{T′_{2,1}}, f_{T′_{2,3}}, f_{T′_{2,5}}, f_{T′_{2,7}}, f_{T′_{0,2}}, f_{T′_{3,2}}, f_{T′_{6,2}},
                f_{T′_{3,0}}, f_{T′_{3,3}}, f_{T′_{3,6}}, f_{T′_{0,3}}, f_{T′_{6,0}}, f_{T′_{6,6}}, f_{T′_{6,3}}, f_{T′_{0,6}}, f_{T′_{0,0}} }.

The first 24 splitting-signals of the 3-paired representation of f are 3-point signals each, and they are modified as

    f_{T′_{p,s}} → g_{T′_{p,s}} = {f′_{p1,p2,0}, f′_{p1,p2,1} W, f′_{p1,p2,2} W²},

where W = exp(−2πj/9). The components of these signals are calculated by

    f′_{p1,p2,0} = f_{p1,p2,0} + f_{p1,p2,3} W_3 + f_{p1,p2,6} W_3²,
    f′_{p1,p2,1} = f_{p1,p2,1} + f_{p1,p2,4} W_3 + f_{p1,p2,7} W_3²,
    f′_{p1,p2,2} = f_{p1,p2,2} + f_{p1,p2,5} W_3 + f_{p1,p2,8} W_3²,

where W_3 = exp(−2πj/3) = −0.5 − j0.8660 and W_3² = W̄_3. The 2-D DFT of f at frequency-points of the subsets T′_{p,s} can be calculated by the 3-point DFTs of the modified splitting-signals:

    F_{(3k+1)p mod 9, (3k+1)s mod 9} = Σ_{t=0}^{2} f′_{p,s,t} W_9^t W_3^{kt},    k = 0, 1, 2.

The other splitting-signals, with generators (p, s) both of whose coordinates are multiples of 3, themselves represent 1-point sequences, f_{T′_{p,s}} = f′_{p,s,0}. We can also write f_{T′_{p,s}} = {f′_{p,s,0}, 0, 0}; for example, f_{T′_{3,0}} = {f′_{3,0,0}, 0, 0}, since f′_{3,0,1} = f′_{3,0,2} = 0. Therefore, the 2-D DFT of f at these generators equals these 1-point signals, i.e., F_{p,s} = f′_{p,s,0}. As a result, the 3-paired representation leads to the following splitting of the 9 × 9 DFT:
    R(F_{9,9}; σ′) = {24 F_3, 9 F_1}.

Good results in image processing can be achieved by working with the direction images defined by the splitting-signals whose energy is greater than a given threshold E0. All together, the images of these splitting-signals, or their direction images, compose the image shown in Figure 19.31a, and the rest of the direction images compose the image in (b). The sum of these two images equals the original tree image. The image in (a) provides no details but a very smooth and "hot" picture of the image; conversely, the image in (b) provides the details of the tree image but lacks brightness.

19.9.4.1 Series Images

From each image, specific periodic structures can be extracted, which all together compose the image. To illustrate this property, we call the sum of direction images corresponding to the subset of generators 2^k J_{2^{r−k},2^{r−k}},

    S^{(k)}_{n,m} = Σ_{(p,s) ∈ 2^k J_{2^{r−k},2^{r−k}}} d_{n,m;p,s},    S^{(r)}_{n,m} = d_{n,m;0,0} = (1/N²) F_{0,0},
the kth series image, where k ∈ {0, 1, ..., r}. Figure 19.32 shows the first five series images for the tree image in parts (a) through (e). One can see that each series image, starting from the second one, has a periodic structure with a resolution which increases exponentially with the number of the series. We call the number 2^k the resolution of the kth series image. This is an interesting fact: each resolution corresponds to a periodic structure of one part of the image. The first series image is the component of the image with the lowest resolution, and the (r − 1)th series image is the component with the highest resolution. The constant image S^{(r)} has zero resolution. The sum of the series images equals the original image; the image shown in (f) is the sum of only the first five series images. The remaining three resolutions add more details to the image. The periodic structure of the series images takes place for other images as well. As an example, Figure 19.33 shows the first series image of the girl image in part (a), along with the next six series images in (b) through (g), and the sum of these series images in (h). Series images have different ranges of intensities, which decrease when the resolution increases. For instance, the first four series images have values that vary in the ranges 255, 101, 45, and 15, respectively. Therefore, for better illustration, all series
FIGURE 19.31 (a) The sum of 50 direction images defined by splitting-signals of high energy, and (b) the sum of the remaining direction images.
FIGURE 19.32 (a)-(e) The first five series images of the tree image, and (f) the sum of these series images.
FIGURE 19.33 (a)–(g) The first seven series images of the girl image, and (h) the sum of these images.
images in this figure, as well as in Figure 19.32, have been scaled by using the MATLAB® command imagesc.

19.9.4.2 Resolution Map

It is important to mention that the first series image is also composed of periodic structures N/2 × N/2. In this image, as well as in the remaining series images, we can separate subsets of
direction images in the following way. The set of generators J_{2^r,2^r} is divided into three parts as

    J^{(1)}_{2^r,2^r} = {(1, 2s); s = 0 : (N/2 − 1)},
    J^{(2)}_{2^r,2^r} = {(2p, 1); p = 0 : (N/2 − 1)},
    J^{(3)}_{2^r,2^r} = {(1, 2s + 1); s = 0 : (N/2 − 1)}.
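A quick check that these three subsets indeed partition the full set of generators (for N = 256, reproducing the 384 splitting-signals used earlier; our sketch):

```python
N = 256
J1 = {(1, 2 * s) for s in range(N // 2)}
J2 = {(2 * p, 1) for p in range(N // 2)}
J3 = {(1, 2 * s + 1) for s in range(N // 2)}
J = {(1, s) for s in range(N)} | {(2 * p, 1) for p in range(N // 2)}
assert J1 | J2 | J3 == J                                    # the parts cover J_{N,N}
assert not (J1 & J2) and not (J1 & J3) and not (J2 & J3)    # and are disjoint
assert len(J) == 3 * N // 2                                 # 384 generators for N = 256
```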
In the first two sets, the coordinates of the generators are interchanged, i.e., these two sets are symmetric to each other. The directions of the direction images that correspond to the first set of generators are positive, and negative for the second set. The directions defined by the third set of generators are unique. We denote the division of the first series image S^{(0)} by these subsets as

    P^{(0)}_{n,m} = Σ_{(p,s) ∈ J^{(1)}_{2^r,2^r}} d_{n,m;p,s},    N^{(0)}_{n,m} = Σ_{(p,s) ∈ J^{(2)}_{2^r,2^r}} d_{n,m;p,s},    U^{(0)}_{n,m} = Σ_{(p,s) ∈ J^{(3)}_{2^r,2^r}} d_{n,m;p,s},

so that S^{(0)} = P^{(0)} + N^{(0)} + U^{(0)}. Figure 19.34 shows the image P^{(0)} for the girl image in part (a), along with the images N^{(0)} and U^{(0)} in (b) and (c), respectively. In these images, one can notice different parts of the girl image with their negative versions periodically shifted by 128 along the vertical, horizontal, and diagonal directions. Each image is divided into four parts N/2 × N/2 with similar structures, which can be used for composing the entire series image S^{(0)}. Indeed, it follows directly from the definition of the paired functions that the following equations are valid:

    f′_{1,2s,(n+N/2)+2ms mod N} = −f′_{1,2s,n+2ms mod N},
    f′_{1,2s,n+2(m+N/2)s mod N} = f′_{1,2s,n+2ms mod N},
    f′_{1,2s,(n+N/2)+2(m+N/2)s mod N} = −f′_{1,2s,n+2ms mod N},

and

    f′_{1,2s+1,(n+N/2)+m(2s+1) mod N} = −f′_{1,2s+1,n+m(2s+1) mod N},
    f′_{1,2s+1,n+(m+N/2)(2s+1) mod N} = −f′_{1,2s+1,n+m(2s+1) mod N},
    f′_{1,2s+1,(n+N/2)+(m+N/2)(2s+1) mod N} = f′_{1,2s+1,n+m(2s+1) mod N},

for s = 0 : (N/2 − 1), and

    f′_{2p,1,2(n+N/2)p+m mod N} = f′_{2p,1,2np+m mod N},
    f′_{2p,1,2np+(m+N/2) mod N} = −f′_{2p,1,2np+m mod N},
    f′_{2p,1,2(n+N/2)p+(m+N/2) mod N} = −f′_{2p,1,2np+m mod N},

for p = 0 : (N/2 − 1). Therefore, the series image components P^{(0)}, N^{(0)}, and U^{(0)} can be defined from their first quarters, which we denote by P1, N1, and U1, respectively, as follows:

    P^{(0)} = [ P1   P1 ; −P1  −P1 ],    N^{(0)} = [ N1  −N1 ; N1  −N1 ],    U^{(0)} = [ U1  −U1 ; −U1  U1 ].

Figure 19.35 shows the decomposition of the next series image S^{(1)} for the girl image. For this series image, as well as for the remaining series images S^{(k)}, k = 2 : (r − 1), similar decompositions take place. Each of these images can be defined by the three quarters P_{k+1}, N_{k+1}, and U_{k+1} of their periods N/2^{k+1} × N/2^{k+1}, in the way similar to the first series image. As a result, the following resolution map (RM) is associated with the image f:

    RM[f] = [ P1  U1 ; N1  [ P2  U2 ; N2  [ P3  U3 ; N3  ... ]]].    (19.210)
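The quarter packing can be sketched as follows; the sign layouts are the ones we read off from the relations above, so treat them as an assumption of this illustration rather than the authors' code:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
P1 = rng.standard_normal((N // 2, N // 2))
N1 = rng.standard_normal((N // 2, N // 2))
U1 = rng.standard_normal((N // 2, N // 2))

# Assumed sign layouts of the three components (vertical, horizontal,
# and diagonal periodicity, respectively):
P0 = np.block([[P1,  P1], [-P1, -P1]])
N0 = np.block([[N1, -N1], [N1, -N1]])
U0 = np.block([[U1, -U1], [-U1,  U1]])
S0 = P0 + N0 + U0

# Every quarter of S0 is recoverable from the stored quarters (P1, N1, U1):
assert np.allclose(S0[:N//2, :N//2], P1 + N1 + U1)
assert np.allclose(S0[:N//2, N//2:], P1 - N1 - U1)
assert np.allclose(S0[N//2:, N//2:], -P1 - N1 + U1)
```

This is the sense in which the resolution map stores only one quarter per component and yet loses no information.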
This resolution map has the same size as the image and contains all periodic parts of the series images, i.e., all periods by means of which the original image can be reconstructed. Each periodic part is extracted from the direction image, whose directions are given by subsets of generators of J′_{N,N}. In other words, the RM represents the image packed by its periodic structures, which correspond to a specific set of projections. The resolution map can be used to change the resolution of the entire image, by processing the direction images for desired directions. Good results of image processing, including enhancement, can be achieved when working with one or a few high-energy splitting-signals, as well as with the sets of splitting-signals which are combined by series and correspond to different resolutions written in the image RM. Figure 19.36 shows all 766 generators (p, s) ∈ J′_{256,256} in part (a), where the 12 generators for the 6th series image and the six generators for the 7th series image are
FIGURE 19.34 Three components of the first series images of the girl image.
(c)
FIGURE 19.35 Three components of the 2nd series image of the girl image.
FIGURE 19.36 (a) The 766 generators of the set J′_{256,256}, and the tree image with amplified (b) 7th series image, and (c) 6th and 7th series images.
marked by "." and "+," respectively. The girl image with the series image of number 7 amplified by the factor of 2 is shown in (b), and the image with the series images of numbers 6 and 7 amplified by the factors of 1.2 and 1.5, respectively, in (c). These two images are enhanced by the resolutions 64, and 64 with 32, respectively. Thus, the paired form of image representation leads to the splitting of the 2-D DFT of the image by the set of 1-D DFTs of the splitting-signals, which define the direction images as components of the image. This representation allows for extracting the periodic structures of the image, which are defined by the direction images united in special groups of directions, referred to as series images. The image is packed and described by its resolution map, which can be used for image enhancement and compression. Each periodic structure in the resolution map can also be represented by its own resolution map. In such a recursive way, the resolution map can be crushed into small pieces, from which the whole image can be reconstructed.
19.10 2-D DFT on the Hexagonal Lattice

In this section, we generalize the concepts of the tensor and paired representations with respect to the 2-D DFTs whose fundamental periods are hexagonal lattices. Hexagonal lattices are important for many problems in image processing [42,43]. Sampling 2-D isotropic functions on hexagonal lattices is significantly more efficient than sampling on rectangular lattices [44]. It was also shown that the vision system relates best to the regular hexagonal tessellation [45,46], which has a lower number of neighbors than the rectangular lattice.

Let us first consider the problem of splitting the 2-D DFT into a set of short 1-D transforms, when the samples of both the 2-D sequence f and the transform F are arranged on similar hexagonal lattices:

    F_{p1+[p2],p2} = Σ_{n1=0}^{N−1} Σ_{n2=0}^{N−1} f_{n1+[n2],n2} W^{(n1+[n2])(p1+[p2]) + n2 p2},    (19.211)

where W = exp(−2πj/N) and

    [n2] = (1 − (−1)^{n2})/4,    [p2] = (1 − (−1)^{p2})/4,

for all n2, p2 = 0 : (N − 1). Since [n] = 0 when n is even, and [n] = 0.5 when n is odd, each second row of knots of the hexagonal lattice is shifted by 0.5 with respect to the even rows. As an example, Figure 19.37 shows the rectangular lattice 8 × 8 in part (a), along with the hexagonal lattice in (b). It is clear that the traditional row-column method cannot be applied directly for fast computing of the values F_{p1+[p2],p2}. The complexity of the 2-D discrete hexagonal Fourier transform (DHFT) is due to the fact that the kernel of the transform has a
FIGURE 19.37 (a) Rectangular lattice and (b) hexagonal lattice.
more complex form than in the rectangular case and, in addition, is not separable. For any integers p1 and p2, the following relations hold between the spectral components of the transform:

F_{(p1+2N)+[p2+N],p2+N} = F_{p1+[p2],p2},  F_{(p1+N)+[p2+N],p2+N} ≠ F_{p1+[p2],p2}.  (19.212)
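These periodicity relations are easy to confirm numerically. The following sketch is our own illustration, not part of the original text (the helper names are ours); it evaluates the sum (19.211) directly for a random 4 × 4 sequence:

```python
import numpy as np

def hex_dft(f, p1, p2):
    """Direct evaluation of the N x N-point hexagonal transform (19.211).

    [m] = 0 for even m and 0.5 for odd m, so every odd row is shifted by 0.5.
    """
    N = f.shape[0]
    W = np.exp(-2j * np.pi / N)
    h = lambda m: 0.5 if m % 2 else 0.0
    return sum(f[n1, n2] * W ** ((n1 + h(n2)) * (p1 + h(p2)) + n2 * p2)
               for n1 in range(N) for n2 in range(N))

rng = np.random.default_rng(1)
N = 4
f = rng.standard_normal((N, N))
# Period (2N, N): shifting (p1, p2) by (2N, N) leaves the spectrum unchanged,
# while the shift (N, N) generally does not (the relations (19.212)).
same = abs(hex_dft(f, 1 + 2 * N, 1 + N) - hex_dft(f, 1, 1))
diff = abs(hex_dft(f, 1 + N, 1 + N) - hex_dft(f, 1, 1))
```

The shift by 2N in the first index contributes W to an integer multiple of N in the exponent, which equals 1, while the shift by N contributes W^{N/2} = −1 on the odd rows only; this is exactly why the fundamental period is (2N, N).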
The period of the transform is (2N, N), not (N, N). In other words, the hexagonal lattice X_{N,N} is not the fundamental period of this transform. Therefore, we consider the 2-D hexagonal lattice of size 2N × N,

X_{2N,N} = {(p1 + [p2], p2); p1 = 0:(2N − 1), p2 = 0:(N − 1)},  (19.213)

and the 2-D sequence f = {f_{n1+[n2],n2}} as a 2N × N-point sequence defined (or extended) on this lattice. Hereinafter, we define the 2-D DFT, F_{2N,N}, on the lattice X_{2N,N} by

F_{p1+[p2],p2} = Σ_{n1=0}^{2N−1} Σ_{n2=0}^{N−1} f_{n1+[n2],n2} W^{(n1+[n2])(p1+[p2]) + n2·p2},  (19.214)

(p1 + [p2], p2) ∈ X_{2N,N}, and we call it the 2-D DHFT.

We now describe a covering of the lattice X_{2N,N} that allows for splitting the structure of the 2-D DHFT. For that, we define the following subsets (or cyclic groups) of frequency-points on the lattice:

T_{p1+[p2],p2} = {(k(p1 + [p2]), kp2); k = 0:(4N − 1)},  (19.215)

where (k(p1 + [p2]), kp2) = (k(p1 + [p2]) mod 2N, kp2 mod N). The number of distinct points of such a set equals the smallest integer k > 0 for which k(p1 + [p2]) mod 2N = 0 and kp2 mod N = 0.

Example 19.23

Let X be the hexagonal lattice X_{8,4}. We consider two subsets T_{p1+[p2],p2}, for (p1, p2) = (1, 1) and (1, 2):

T_{1+[1],1} = T_{1.5,1} = {(0,0), (1.5,1), (3,2), (4.5,3), (6,0), (7.5,1), (1,2), (2.5,3), (4,0), (5.5,1), (7,2), (0.5,3), (2,0), (3.5,1), (5,2), (6.5,3)},
T_{1+[2],2} = T_{1,2} = {(0,0), (1,2), (2,0), (3,2), (4,0), (5,2), (6,0), (7,2)}.

Since 1 + [1] = 1.5 and 1 + [2] = 1, the first subset (k = 0:15) consists of 16 frequency-points and the second subset of 8 frequency-points. Note that subsets constructed in this way may intersect; here, T_{1,2} consists exactly of the points of T_{1.5,1} with even k.
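The cyclic subsets (19.215) can be generated directly; the short sketch below is an illustration of ours (the helper name is hypothetical) and reproduces the two subsets of Example 19.23:

```python
def subset_T(p1, p2, N):
    """Cyclic subset (19.215) on X_{2N,N}; a frequency-point is stored as the
    pair (p1 + [p2], p2) with the half-sample shift already included."""
    h = 0.5 if p2 % 2 else 0.0
    return {((k * (p1 + h)) % (2 * N), (k * p2) % N) for k in range(4 * N)}

N = 4
T15 = subset_T(1, 1, N)   # generator (1 + [1], 1) = (1.5, 1), 16 points
T12 = subset_T(1, 2, N)   # generator (1 + [2], 2) = (1, 2),   8 points
```

The quarter- and half-integer coordinates are exact binary floats, so set membership tests behave exactly; T12 turns out to be the even-k subgroup of T15.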
As in the case of the 2-D DFT on the rectangular lattice, for a given (p1 + [p2], p2) ∈ X_{2N,N}, we unite all spatial points (n1 + [n2], n2) of the lattice for which the form

L(n1 + [n2], n2; p1 + [p2], p2) = (n1 + [n2])(p1 + [p2]) + n2·p2

takes the same values t, where t varies from 0 through N − D with a step D that depends on the values of p1 and p2. In other words, we define the following disjoint subsets of points (n1 + [n2], n2):

V_{p1+[p2],p2,t} = {(n1 + [n2], n2); L(n1 + [n2], n2; p1 + [p2], p2) = t mod N}.  (19.216)

The sums of the elements of the image f on these subsets of points are denoted by

f_{p1+[p2],p2,t} = Σ_{V_{p1+[p2],p2,t}} f_{n1+[n2],n2},  t = 0, D, 2D, ..., N − D.  (19.217)
19-57
Multidimensional Discrete Unitary Transforms
We now compose the following 1-D signal, which we call the splitting-signal of the image:

f_T = {f_{p1+[p2],p2,0}, f_{p1+[p2],p2,D}, f_{p1+[p2],p2,2D}, ..., f_{p1+[p2],p2,N−D}}.

The step D is calculated as follows: D = 1/4, if p2 is odd; D = 1/2, if p1 is odd and p2 is even; and D = 2^{n−1}, for an integer n ≥ 1 determined by p1 and p2, if p1 and p2 are both even. Accordingly,

f_T = {f_{p1+1/2,p2,0}, f_{p1+1/2,p2,1/4}, f_{p1+1/2,p2,1/2}, ..., f_{p1+1/2,p2,N−1/4}}, if p2 is odd;
f_T = {f_{p1,p2,0}, f_{p1,p2,1/2}, f_{p1,p2,1}, ..., f_{p1,p2,N−1/2}}, if p1 is odd, p2 is even;
f_T = {f_{p1,p2,0}, f_{p1,p2,2^{n−1}}, f_{p1,p2,2^n}, ..., f_{p1,p2,N−2^{n−1}}}, if p1 and p2 are even.  (19.218)
The following statement can be derived from the above definitions. Let (p1 + [p2], p2) be a frequency-point of X_{2N,N}; then the 2-D DHFT at this frequency-point, as well as at the other frequency-points of the subset T = T_{p1+[p2],p2}, can be calculated by the 1-D Fourier transform as

F_{k(p1+[p2]),kp2} = Σ_{t=0,D}^{N−D} f_{p1+[p2],p2,t} W^{kt},  k = 0:(card(T) − 1).  (19.219)

The index t varies from 0 through N − D with the step D. The cardinality of the set T equals N/D. Therefore, the sum in this equation is referred to as the N/D-point DFT. The 2-D DHFT on the frequency-points of the subset T_{p1+[p2],p2} thus represents itself as one of the following 1-D DFTs:

F_{2N,N} | T_{p1+[p2],p2} = F_{4N}, if p2 is odd; F_{2N}, if p1 is odd, p2 is even; F_{N/2^{n−1}}, if p1 and p2 are even.  (19.220)

According to Equation 19.219, the calculation of the 2-D DHFT is reduced to the composition of an irreducible covering σ of the hexagonal lattice X_{2N,N}, which is composed of the subsets (19.215), i.e.,

σ = σ_J = ∪_{(p1+[p2],p2)∈J} T_{p1+[p2],p2},  (19.221)

for a certain set of generators J ⊂ X_{2N,N}. The 1-D DFTs of the splitting-signals f_T are calculated for all subsets T = T_{p1+[p2],p2} ∈ σ. These 1-D DFTs fill the 2-D DHFT completely.

As an example, Figure 19.38 illustrates the tree image 512 × 256 written on the hexagonal lattice X_{512,256}, along with the splitting-signal f_{T_{1+[1],1}} of length 1024 in part (b), the 1024-point DFT of this signal in part (c), and 1024 samples of the 1-D DFT at the frequency-points of the subset T_{1+[1],1}, at which the 2-D DHFT of the image coincides with the 1-D DFT, in part (d). In this example, N = 256, p1 = 1, p2 = 1, [p2] = 0.5, and card T_{1+[1],1} = 4 · 256 = 1024. For this modeled image, all samples of the hexagonal lattice 512 × 256 have been placed on the rectangular lattice 1024 × 256 in the way shown in Figure 19.38a and b. Then the image has been extended to the other points of the rectangular lattice by calculating the means of the image at the nearest samples of the hexagonal lattice, as shown in Figure 19.39. The interpolation of the samples has been performed by the cross 3 × 3, so that, in Figure 19.39c, a = (1 + 6 + 6 + 5)/4 = 4.5, b = (1 + 6 + 3 + 5)/4 = 3.75, and c = (2 + 3 + 3 + 4)/4 = 3.

Example 19.24

In the N = 4 case, the irreducible covering σ of the hexagonal lattice X_{8,4} can be defined as follows:

σ_{8,4} = (T_{0+[1],1}, T_{1+[1],1}, T_{1+[0],0}, T_{0+[2],2}, T_{2+[2],2}, T_{4+[2],2}),

where the subsets T equal

T_{0+[1],1} = T_{0.5,1} = {(0,0), (0.5,1), (1,2), (1.5,3), (2,0), (2.5,1), (3,2), (3.5,3), (4,0), (4.5,1), (5,2), (5.5,3), (6,0), (6.5,1), (7,2), (7.5,3)}
T_{1+[1],1} = T_{1.5,1} = {(0,0), (1.5,1), (3,2), (4.5,3), (6,0), (7.5,1), (1,2), (2.5,3), (4,0), (5.5,1), (7,2), (0.5,3), (2,0), (3.5,1), (5,2), (6.5,3)}
T_{1+[0],0} = T_{1,0} = {(0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0)}
T_{0+[2],2} = T_{0,2} = {(0,0), (0,2)}
T_{2+[2],2} = T_{2,2} = {(0,0), (2,2), (4,0), (6,2)}
T_{4+[2],2} = T_{4,2} = {(0,0), (4,2)}.

The 8 × 4-point DHFT at the frequency-points of the subsets T_{0+[1],1}, T_{1+[1],1}, and T_{1+[0],0} is determined by the 16-, 16-, and 8-point DFTs, respectively. To calculate the DHFT at the frequency-points of all subsets of the covering, it is enough to calculate the following:

1. One 16-point DFT of the splitting-signal f_{T_{0.5,1}}, to define the components of the DHFT at all 16 frequency-points of the set T_{0.5,1}.
2. One incomplete 16-point DFT of the splitting-signal f_{T_{1.5,1}}, when only the spectral components at the odd points are calculated. This transform can be reduced to the 8-point DFT (which requires six additional multiplications by the twiddle factors). The 2-D DHFT will be defined by the incomplete DFT at the frequency-points (1.5,1), (4.5,3), (7.5,1), (2.5,3), (5.5,1), (0.5,3), (3.5,1), and (6.5,3).
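The splitting property (19.219) can also be verified numerically. The sketch below is our illustration (all function names are ours): it computes the DHFT by the direct double sum, forms the splitting-signal of the generator (1 + [1], 1) by summing the image over the level sets of the form L, and compares the 1-D DFT of that signal with the 2-D DHFT along the cyclic subset:

```python
import numpy as np
from collections import defaultdict

H = lambda m: 0.5 if m % 2 else 0.0           # the shift [m]

def dhft(f):
    """Direct 2N x N-point DHFT (19.214); f has shape (2N, N)."""
    N = f.shape[1]
    W = np.exp(-2j * np.pi / N)
    F = np.zeros((2 * N, N), dtype=complex)
    for p1 in range(2 * N):
        for p2 in range(N):
            F[p1, p2] = sum(
                f[n1, n2] * W ** ((n1 + H(n2)) * (p1 + H(p2)) + n2 * p2)
                for n1 in range(2 * N) for n2 in range(N))
    return F

def splitting_signal(f, p1, p2):
    """Sums of f over the level sets of the form L, per (19.216)-(19.217)."""
    N = f.shape[1]
    sig = defaultdict(float)
    for n1 in range(2 * N):
        for n2 in range(N):
            t = ((n1 + H(n2)) * (p1 + H(p2)) + n2 * p2) % N
            sig[t] += f[n1, n2]
    return sig

rng = np.random.default_rng(0)
N = 4
f = rng.standard_normal((2 * N, N))
F = dhft(f)
sig = splitting_signal(f, 1, 1)               # generator (1.5, 1): D = 1/4
W = np.exp(-2j * np.pi / N)
err = 0.0
for k in range(4 * N):                        # card(T) = N / D = 4N points
    p2k = k % N
    p1k = int((k * 1.5) % (2 * N) - H(p2k))   # point (k*1.5 mod 2N, k mod N)
    rhs = sum(v * W ** (k * t) for t, v in sig.items())   # 1-D DFT (19.219)
    err = max(err, abs(F[p1k, p2k] - rhs))
```

All the indices t here are exact quarter-integers, so the dictionary of level-set sums has exactly 4N entries, i.e., the splitting-signal is a 4N-point signal, as stated for the case of odd p2.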
FIGURE 19.38 (a) Tree image 512 × 256 with the hexagonal lattice X_{512,256} written on the rectangular lattice 1024 × 256. (b) Splitting-signal f_{T_{1+[1],1}} of length 1024. (c) Absolute value of the 1024-point DFT of the splitting-signal. (d) Samples of the subset T_{1+[1],1} of X_{512,256}, at which the 2-D DHFT of the image coincides with the 1024-point DFT.
FIGURE 19.39 Diagram for transferring (a) the tree image 256 × 256 to (b) the hexagonal lattice 512 × 256, and, then, extending (c) the image to the rectangular lattice 1024 × 256.
3. One incomplete 8-point DFT of the splitting-signal f_{T_{1,0}}, which is calculated at the odd frequencies 1, 3, 5, and 7. This transform can be reduced to the 4-point DFT, and for that, two additional multiplications are required. The 2-D DHFT will be defined by the incomplete DFT at the frequency-points (1,0), (3,0), (5,0), and (7,0).
4. Two trivial incomplete 2-point DFTs of the splitting-signals f_{T_{0,2}} and f_{T_{4,2}}, to calculate the DHFT at the frequency-points (0,2) and (4,2), respectively.
5. One incomplete 4-point DFT of the splitting-signal f_{T_{2,2}}, to calculate the DHFT at the frequency-points (2,2) and (6,2).
The calculation of the DHFT at the frequency-points (0, 2), (2, 2), (4, 2), and (6, 2) can also be performed in a different way, if we note the following. In the case N ≥ 4, we can write that

F_{2p1+[2p2],2p2} = Σ_{n1=0}^{2N−1} Σ_{n2=0}^{N−1} f_{n1+[n2],n2} W^{(n1+[n2])(2p1+[2p2]) + 2n2·p2}
= Σ_{n1=0}^{N−1} Σ_{n2=0}^{N/2−1} g_{n1+[n2],n2} W_{N/2}^{(n1+[n2])p1 + n2·p2} = G_{p1,p2},  (19.222)
where p1 = 0:(N − 1) and p2 = 1:(N/2 − 1). The sequence g is defined on the hexagonal lattice X_{N,N/2} as follows:

g_{n1+[n2],n2} = f_{n1+[n2],n2} + f_{n1+N+[n2],n2} + f_{n1+[n2+N/2],n2+N/2} + f_{n1+N+[n2+N/2],n2+N/2}
= f_{n1+[n2],n2} + f_{n1+N+[n2],n2} + f_{n1+[n2],n2+N/2} + f_{n1+N+[n2],n2+N/2},

for the numbers n1 = 0:(N − 1) and n2 = 0:(N/2 − 1) (the second equality holds since N/2 is even and, therefore, [n2 + N/2] = [n2]; the shifts by N in n1 and by N/2 in n2 leave the even-frequency kernel unchanged). Therefore, for p1 = 0:(N − 1) and p2 = 1:(N/2 − 1), the set of values F_{2p1,2p2} represents an incomplete N × N/2-point DHFT whose spectral components at the points (p1, 0), p1 = 0:(N − 1), are not calculated. This transform is defined at the frequency-points (p1, p2) that lie on the rectangular lattice X_{N,N/2}. We denote this transform by F^o_{N,N/2} and the number of operations of multiplication required to calculate it by m^o_{N,N/2}.
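The reduction (19.222) can be checked numerically. Since the printed formula for g is garbled in this edition, the sketch below (ours) assumes the folding shifts N in n1 and N/2 in n2, which are exactly the shifts that leave the even-frequency kernel W^{2x} unchanged:

```python
import numpy as np

H = lambda m: 0.5 if m % 2 else 0.0           # the shift [m]

def dhft_point(f, p1, p2):
    """One sample of the 2N x N-point DHFT (19.214); f has shape (2N, N)."""
    N = f.shape[1]
    W = np.exp(-2j * np.pi / N)
    return sum(f[n1, n2] * W ** ((n1 + H(n2)) * (p1 + H(p2)) + n2 * p2)
               for n1 in range(2 * N) for n2 in range(N))

rng = np.random.default_rng(2)
N = 4
f = rng.standard_normal((2 * N, N))
# Fold f to the sequence g on the N x N/2 lattice (assumed shifts N and N/2).
g = f[:N, :N // 2] + f[N:, :N // 2] + f[:N, N // 2:] + f[N:, N // 2:]
W2 = np.exp(-2j * np.pi / (N // 2))           # W_{N/2} = W^2
err = max(abs(sum(g[n1, n2] * W2 ** ((n1 + H(n2)) * p1 + n2 * p2)
                  for n1 in range(N) for n2 in range(N // 2))
              - dhft_point(f, 2 * p1, 2 * p2))
          for p1 in range(N) for p2 in range(1, N // 2))
```

With these shifts, G over the small lattice matches the DHFT at every even frequency-point (2p1, 2p2) with p2 ≠ 0, which is the incomplete transform F^o_{N,N/2}.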
In the N = 4 case, we obtain the splitting {F_16, F^o_8, F^o_4, F^o_{4,2}} of the 8 × 4-point DHFT. The number of required multiplications can be estimated as m_{8,4} = m_16 + (m_8 + 6) + (m_4 + 2) + m^o_{4,2} = 10 + (2 + 6) + (0 + 2) + 0 = 20.

In the general N > 4 case, we consider the following covering of the hexagonal lattice X_{2N,N}:

σ_{2N,N} = ( (T_{p1+[1],1})_{p1=0:(N/2−1)}, (T_{1+[4p2],4p2})_{p2=0:(N/4−1)}, 2X^o_{N,N/2} ),  (19.223)

where 2X^o_{N,N/2} is the subset of X_{2N,N} that contains all frequency-points with even coordinates, except the first column,

2X^o_{N,N/2} = {(2p1, 2p2); p1 = 0:(N − 1), p2 = 1:(N/2 − 1)}.

The 2N × N-point DHFT can therefore be reduced to N/2 4N-point DFTs, N/4 2N-point DFTs, and one incomplete N × N/2-point DHFT. The repeated calculations of the 2-D DHFT at the intersections of the subsets T ∈ σ_{2N,N} can be removed similarly to the N = 4 case considered above. The number of multiplications required to calculate the 2N × N-point DHFT is estimated as m_{2N,N} = (N/2)m_{4N} + (N/4)m_{2N} + m^o_{N,N/2}. Considering the estimate m^o_{N,N/2} = m_{N,N/2} − m_N for N = 2^r ≥ 8, we obtain the following recursive formula:

m_{2N,N} = m_{N,N/2} + (N/2)m_{4N} + (N/4)m_{2N} − m_N = (N²/4)(5r − 6) + (N/2)(r − 6) + 2.  (19.224)

19.10.1 Paired Representation of the DHFT

In this section, we describe a partition of the hexagonal lattice X_{2N,N} which leads to the splitting of the 2-D DHFT by the 1-D DFTs on disjoint subsets of frequency-points. Such a partition also allows for representing the 2-D image defined on the hexagonal lattice by a set of splitting-signals, which are defined similarly to the paired splitting-signals for the 2-D DFT.

Let (p1 + [p2], p2) be an arbitrary point of the lattice. Then, for integers m = 0:(N/(2D) − 1), the following relation is valid:

F_{(2m+1)(p1+[p2]),(2m+1)p2} = Σ_{t=0,D}^{N/2−D} f'_{p1+[p2],p2,t} W^t W_{N/2}^{mt},  (19.225)

where

f'_{p1+[p2],p2,t} = f_{p1+[p2],p2,t} − f_{p1+[p2],p2,t+N/2},  (19.226)

and the numbers t run from 0 to N/2 − D with the step D (which we write as t = 0:D:(N/2 − D)). To construct the paired representation of the hexagonal image f with respect to the DHFT, we define the following subsets of the sets (19.215):

T'_{p1+[p2],p2} = {((2m + 1)(p1 + [p2]), (2m + 1)p2); m = 0:(N/(2D) − 1)}.  (19.227)

Example 19.25

We consider the hexagonal lattice X_{8,4}. For the points (p1, p2) = (1, 1) and (1, 2), we have the following:

T'_{1+[1],1} = T'_{1.5,1} = {(1.5,1), (4.5,3), (7.5,1), (2.5,3), (5.5,1), (0.5,3), (3.5,1), (6.5,3)}
T'_{1+[2],2} = T'_{1,2} = {(1,2), (3,2), (5,2), (7,2)}.

The subsets T'_{p1+[p2],p2} with different generators are disjoint or coincide and, therefore, we can construct a unique partition of the hexagonal lattice X_{2N,N},

σ' = σ'_{J'} = ∪_{(p1+[p2],p2)∈J'} T'_{p1+[p2],p2},  (19.228)

for a certain set J' of generators. Depending on the evenness of the generators, the 2-D DHFT at the frequency-points of the subsets T'_{p1+[p2],p2} represents itself as one of the following 1-D DFTs:

F_{2N,N} | T'_{p1+[p2],p2} = F_{2N}, if p2 is odd; F_N, if p1 is odd, p2 is even; F_{N/2^n}, if p1 and p2 are even.  (19.229)
The corresponding modified element of the σ'-representation, or the paired representation of f with respect to the 2-D DHFT, has the following form:

g_{T'} = {f'_{p1+1/2,p2,t} W^t; t = 0:1/4:(N/2 − 1/4)}, if p2 is odd;
g_{T'} = {f'_{p1,p2,t} W^t; t = 0:1/2:(N/2 − 1/2)}, if p1 is odd, p2 is even;
g_{T'} = {f'_{p1,p2,t} W^t; t = 0:2^{n−1}:(N/2 − 2^{n−1})}, if p1 and p2 are even.  (19.230)

For instance, when p2 is odd, we have the following 2N-point DFT:

F_{(2m+1)(p1+1/2),(2m+1)p2} = Σ_{t=0}^{2N−1} f'_{p1+1/2,p2,t/4} W^{t/4} W_{2N}^{mt}, m = 0:(2N − 1).

The partition σ' of the hexagonal lattice can be constructed directly from the irreducible covering σ of the lattice. Indeed, each set T of the covering σ can be divided into disjoint or coincident subsets T'. To illustrate this property, we consider two examples.

Example 19.26

Let N = 4 and let σ be the covering of the hexagonal lattice X_{8,4} which has been composed in Example 19.24. The generators of the sets T of this covering are taken from the set

J = J_{8,4} = {(0+[1],1), (1+[1],1), (1+[0],0), (0+[2],2), (2+[2],2), (4+[2],2)}.

We first consider the set T_{0+[1],1}, which can be divided as follows:

T'_{0+[1],1} = {(0.5,1), (1.5,3), (2.5,1), (3.5,3), (4.5,1), (5.5,3), (6.5,1), (7.5,3)}
T'_{1+[2],2} = {(1,2), (3,2), (5,2), (7,2)}
T'_{2+[0],0} = {(2,0), (6,0)}
T'_{4+[0],0} = {(4,0)}
T'_{0+[0],0} = {(0,0)}.

The next set T_{1+[1],1} of the covering can be divided as

T'_{1+[1],1} = {(1.5,1), (4.5,3), (7.5,1), (2.5,3), (5.5,1), (0.5,3), (3.5,1), (6.5,3)}

together with the same subsets T'_{1+[2],2}, T'_{2+[0],0}, T'_{4+[0],0}, and T'_{0+[0],0}, and the set T_{1+[0],0} as

T'_{1+[0],0} = {(1,0), (3,0), (5,0), (7,0)}

together with T'_{2+[0],0}, T'_{4+[0],0}, and T'_{0+[0],0}. One can see that all subsets T' of the decompositions of these three sets T are disjoint or coincident. The set T_{0+[1],1}, together with the two subsets T'_{1+[1],1} and T'_{1+[0],0}, covers 28 frequency-points of the lattice. The rest are the frequency-points (2,2), (6,2), (0,2), and (4,2). Therefore, we can consider the following partition of the lattice X_{8,4}:

T'_{0+[1],1} = {(0.5,1), (1.5,3), (2.5,1), (3.5,3), (4.5,1), (5.5,3), (6.5,1), (7.5,3)}
T'_{1+[2],2} = {(1,2), (3,2), (5,2), (7,2)}
T'_{2+[0],0} = {(2,0), (6,0)}
T'_{4+[0],0} = {(4,0)}
T'_{0+[0],0} = {(0,0)}
T'_{1+[1],1} = {(1.5,1), (4.5,3), (7.5,1), (2.5,3), (5.5,1), (0.5,3), (3.5,1), (6.5,3)}
T'_{1+[0],0} = {(1,0), (3,0), (5,0), (7,0)}
T'_{2+[2],2} = {(2,2), (6,2)}
T'_{0+[2],2} = {(0,2)}
T'_{4+[2],2} = {(4,2)}.

To calculate the 8 × 4-point DHFT, it is required to calculate two 8-point DFTs, two 4-point DFTs, two 2-point DFTs, and four 1-point DFTs. Therefore, the following splitting of the 8 × 4-point DHFT holds:

R(F_{8,4}; σ') = {F_8, F_8, F_4, F_4, F_2, F_2, 1, 1, 1, 1}.

Since the paired transform is fulfilled without multiplications, the multiplicative complexity of the 8 × 4-point DHFT is estimated as

m'_{8,4} = 2(m_8 + 8 − 2) + 2(m_4 + 4 − 2) = 20.

It should be noted, for comparison with the rectangular case, that in the tensor representation the 8 × 4-point DFT is split by the covering σ_{8,4} = ({T_{1,p2}; p2 = 0:3}, {T_{2p1,1}; p1 = 0:3}) as

R(F_{8,4}; σ_{8,4}) = {F_8, F_8, F_8, F_8, F_4, F_4, F_4, F_4},

which requires 4(m_8 + m_4) = 4m_8 = 8 operations of multiplication.
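That the ten subsets T' listed above, given by their generators (p1, p2), indeed partition the 32 points of X_{8,4} can be confirmed with a few lines (our illustration; the helper name is ours):

```python
def subset_Tp(p1, p2, N=4):
    """Paired subset T' (19.227): odd multiples of the generator on X_{2N,N}."""
    h = 0.5 if p2 % 2 else 0.0
    return {(((2 * m + 1) * (p1 + h)) % (2 * N), ((2 * m + 1) * p2) % N)
            for m in range(4 * N)}

gens = [(0, 1), (1, 2), (2, 0), (4, 0), (0, 0),
        (1, 1), (1, 0), (2, 2), (0, 2), (4, 2)]
orbits = [subset_Tp(p1, p2) for p1, p2 in gens]
```

The orbit sizes come out as 8, 4, 2, 1, 1, 8, 4, 2, 1, 1, matching the splitting {F_8, F_8, F_4, F_4, F_2, F_2, 1, 1, 1, 1}.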
Example 19.27

We now construct a partition σ' of the hexagonal lattice X_{16,8}. For that, we first consider the following covering of the lattice:

σ_{16,8} = ( (T_{p1+[1],1})_{p1=0:3}, T_{1+[0],0}, T_{1+[4],4}, T_{2+[2],2}, T_{6+[2],2}, (T_{4p1+[2],2})_{p1=0:3} ).  (19.231)

The first four sets contain 32 elements each, the next two subsets contain 16 elements each, T_{2+[2],2} and T_{6+[2],2} contain 8 elements each, and the last four subsets T_{4p1+[2],2} contain 4 elements each. The 16 × 8-point DHFT is thus split as

R(F_{16,8}; σ) = {F_32, F_32, F_32, F_32, F_16, F_16, F_8, F_8, F_4, F_4, F_4, F_4}.

To remove the redundancy of calculations that is due to the intersections between the sets T of the covering σ_{16,8}, we consider the following decompositions of the sets:

T_{0+[1],1} = T'_{0.5,1} + T'_{1,2} + T'_{2,4} + T'_{4,0} + T'_{8,0} + T'_{0,0}
T_{1+[1],1} = T'_{1.5,1} + T'_{3,2} + T'_{2,4} + T'_{4,0} + T'_{8,0} + T'_{0,0}
T_{2+[1],1} = T'_{2.5,1} + T'_{1,2} + T'_{2,4} + T'_{4,0} + T'_{8,0} + T'_{0,0}
T_{3+[1],1} = T'_{3.5,1} + T'_{3,2} + T'_{2,4} + T'_{4,0} + T'_{8,0} + T'_{0,0}
T_{1+[0],0} = T'_{1,0} + T'_{2,0} + T'_{4,0} + T'_{8,0} + T'_{0,0}.  (19.232)

By removing the repeated subsets in these and the remaining decompositions, we obtain the following partition of the hexagonal lattice X_{16,8}:

σ'_{16,8} = ( T'_{0.5,1}, T'_{1,2}, T'_{2,4}, T'_{1.5,1}, T'_{3,2}, T'_{2.5,1}, T'_{3.5,1}, T'_{1,0}, T'_{2,0}, T'_{4,0}, T'_{8,0}, T'_{0,0}, T'_{1,4}, T'_{2,2}, T'_{4,4}, T'_{6,2}, T'_{0,2}, T'_{0,4}, T'_{4,2}, T'_{8,4}, T'_{8,2}, T'_{12,2} ).

The cardinalities of these subsets equal 16, 8, 4, 16, 8, 16, 16, 8, 4, 2, 1, 1, 8, 4, 2, 4, 2, 1, 2, 1, 2, 2, respectively. The splitting of the 16 × 8-point DHFT by the partition σ'_{16,8} contains the following 1-D DFTs:

R(F_{16,8}; σ') = {F_16, F_16, F_16, F_16, F_8, F_8, F_8, F_8, F_4, F_4, F_4, F_4, F_2, F_2, F_2, F_2, F_2, F_2, 1, 1, 1, 1}.

In general, the splitting of the 2N × N-point DHFT by the partition σ'_{2N,N} consists of the transforms F_{2N}, ..., F_{2N} (N/2 times), F_N, ..., F_N (N/2 times), ..., F_4, ..., F_4 (N/2 times), F_2, ..., F_2 (3N/4 times), and 1, ..., 1 (N/2 times).  (19.233)

The number of operations of multiplication required to calculate the 2N × N-point DHFT is estimated as follows:

m'_{2N,N} = (N/2) Σ_{n=2}^{r+1} (m_{2^n} + 2^n − 2) = (N/2) Σ_{n=2}^{r+1} (2^{n−1}(n − 3) + 2 + 2^n − 2) = N²(r − 1) + N.  (19.234)
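The closed form in (19.234) can be checked against the sum, using the standard FFT multiplication count m_{2^n} = 2^{n−1}(n − 3) + 2 (a quick check of ours):

```python
def m_fft(n):
    """Multiplications in a 2^n-point FFT: m_4 = 0, m_8 = 2, m_16 = 10, ..."""
    return 2 ** (n - 1) * (n - 3) + 2

def m_dhft(r):
    """The sum in (19.234) for the 2N x N-point DHFT, N = 2^r."""
    N = 2 ** r
    return N // 2 * sum(m_fft(n) + 2 ** n - 2 for n in range(2, r + 2))
```

For r = 2 this gives 20 multiplications, in agreement with Example 19.26, and in general it collapses to N²(r − 1) + N.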
The number of multiplications required to calculate the 2N × N-point DHFT at the knots of the first N × N part of the hexagonal lattice 2N × N is estimated as half of m'_{2N,N}, i.e., about N²/2·(r − 1) + N/2. Indeed, each subset T' ∈ σ' (except the one-point ones, such as T'_{N,0} and T'_{0,0}) has an equal number of points in both parts of the lattice. For comparison, we consider the estimate m_{N1} = N1(8 log_7 N1 − 1) obtained in Ref. [47] for computing the 2-D DHFT by the radix-7 decimation-in-space algorithm developed for the N1 = 7^l case. Taking the number of hexagonal pixels equal to N² = 8^l, we can see that the proposed paired method uses m_{8^l}|sample = 0.75l − 0.5 operations of multiplication per sample. In the part of the lattice consisting of 7^l pixels that lie on the hierarchical structure of the hexagonal aggregates, the radix-7 decimation-in-space algorithm uses m'_{7^l}|sample = 8l − 1 such operations per sample, i.e., at least 10 times more operations than the proposed algorithm does.
19.10.1.1 2-D DHFT on Other Lattices

The concept of the 2-D DHFT can also be defined on hexagonal lattices constructed in ways different from the lattice considered in Equation 19.213. One can say that the hexagonal lattice X_{2N,N} is constructed by the broken lines

l_{n1} = {n1 + [n2] = n1 + (1 − (−1)^{n2})/4; n2 = 0:(N − 1)}, n1 = 0:(2N − 1).

The 2-D DHFT can be defined on another hexagonal lattice, X_{3N,N}, by [42]

F_{p1+[p2],p2} = Σ_{n1=0}^{3N−1} Σ_{n2=0}^{N−1} f_{n1+[n2],n2} W^{(n1+[n2])(p1+[p2]) + n2·p2} = Σ_{n1=0}^{3N−1} Σ_{n2=0}^{N−1} f_{n1+[n2],n2} W_{3N}^{(2n1−n2)p1 + 3n2·p2}.  (19.235)

The period of this transformation equals (3N, N). The first coordinates of the spatial points (n1 + [n2], n2) and frequency-points (p1 + [p2], p2) on the hexagonal lattice X_{3N,N} are calculated by

n1 + [n2] = n1 − (n1 + n2)/3, p1 + [p2] = p1 − (p1 + p2)/3.

In this case, we consider conditionally that [n2] = −(n1 + n2)/3 and [p2] = −(p1 + p2)/3. The hexagonal lattice X_{3N,N} is constructed by the straight parallel lines

m_{n1} = {n1 − (n1 + n2)/3; n2 = 0:(N − 1)}, n1 = 0:(3N − 1).

The points of this lattice are arranged horizontally 3/2 times more compactly than in the first lattice (see Figure 19.40). We obtain the 3N × N-point DHFT with the fundamental period being the hexagonal lattice X_{3N,N} of size 3N × N. The given notation of this 2-D DHFT is identical to the notation considered in Equation 19.214. The method of splitting the 3N × N-point DHFT into a set of 1-D DFTs is similar to the 2N × N-point DHFT case.

19.11 Paired Transform–Based Algorithms

In this section, we briefly describe the paired transform–based algorithms for the calculation of the 2-D Hartley, cosine, and Hadamard transforms of order N × N, when N = 2^r, r > 1. The unitary paired transform, as a core for each of these transforms, is derived from the tensor transform in such a way that all splitting-signals of the image are transformed into a set of short signals which carry the spectral information of the image at disjoint subsets of frequency-points. The redundancy of the tensor transform is thus removed completely.

19.11.1 Calculation of the 2-D DHT

The paired representation of an image f_{n1,n2} with respect to the Fourier transform can be used for splitting the 2-D DHT, H_{N,N}, defined as

H_{p1,p2} = Σ_{n1=0}^{N−1} Σ_{n2=0}^{N−1} f_{n1,n2} Cas_N(n1·p1 + n2·p2), p1, p2 = 0:(N − 1),  (19.236)

where Cas_N(x) = cas(2πx/N) = cos(2πx/N) + sin(2πx/N). Both the 2-D DHT and DFT result in the same tensor representation of the image, {f_{n1,n2}} → {f_{p1,p2,t}}, as well as the same paired representation,

X'_{N,N}: {f_{n1,n2}} → {f'_{p1,p2,t} = f_{p1,p2,t} − f_{p1,p2,t+N/2}}.  (19.237)

The set of triplets (p1, p2, t) is considered to be the set U defined in Equation 19.192. In other words, (p1, p2) = 2^n(p, s), where (p, s) ∈ J_{2^{r−n},2^{r−n}}, and t = 2^n·t1, t1 = 0:(2^{r−n−1} − 1). In terms of the paired representation, the 2-D DHT is considered as a transform composed of 1-D Hartley transforms of orders 2^{r−n−1}, n = 0:(r − 1). Indeed, the following formula

H_{(2m+1)p1,(2m+1)p2} = Σ_{t=0}^{2^{r−n−1}−1} f'_{p1,p2,t} Cas_{2^{r−n−1}}((m + 1/2)t)  (19.238)

holds for integers m = 0:(2^{r−n−1} − 1). The N × N-point DHT is thus determined at the frequency-points of each orbit T'_{p1,p2} by the odd-frequency 2^{r−n−1}-point DHT, which we denote by H_{2^{r−n−1}}|of and define over a sequence fn by

(H_{2^{r−n−1}}|of f)_m = Σ_{n=0}^{2^{r−n−1}−1} fn Cas_{2^{r−n−1}}((m + 1/2)n),  m = 0:(2^{r−n−1} − 1).  (19.239)
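The splitting formula (19.238) can be verified numerically for the gcd-odd generators. The sketch below is our code (helper names are ours); it compares the direct 2-D Hartley transform along the orbit of the generator (p, s) = (1, 3) with the odd-frequency transform of the paired splitting-signal:

```python
import numpy as np

cas = lambda x: np.cos(x) + np.sin(x)

def dht2(f):
    """Direct N x N-point 2-D Hartley transform (19.236)."""
    N = f.shape[0]
    n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    return np.array([[np.sum(f * cas(2 * np.pi * (n1 * p1 + n2 * p2) / N))
                      for p2 in range(N)] for p1 in range(N)])

def paired_signal(f, p, s):
    """f'_{p,s,t} = f_{p,s,t} - f_{p,s,t+N/2} (19.237), gcd(p, s) odd case."""
    N = f.shape[0]
    ft = np.zeros(N)
    for n1 in range(N):
        for n2 in range(N):
            ft[(n1 * p + n2 * s) % N] += f[n1, n2]
    return ft[:N // 2] - ft[N // 2:]

rng = np.random.default_rng(3)
N = 8
f = rng.standard_normal((N, N))
Ht = dht2(f)
fp = paired_signal(f, 1, 3)
# Compare H at the odd multiples of (1, 3) with the odd-frequency DHT (19.238).
err = max(abs(Ht[(2 * m + 1) % N, (3 * (2 * m + 1)) % N]
              - sum(fp[t] * cas(2 * np.pi * (m + 0.5) * t / (N // 2))
                    for t in range(N // 2)))
          for m in range(N // 2))
```

The identity rests on cas(x + π) = −cas(x): folding the tensor signal with a sign change turns the full N-point sum into an N/2-point odd-frequency Hartley transform.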
FIGURE 19.40 (a) The hexagonal lattice defined by the straight lines m_{n1} (the line m_6 is also shown), and (b) the hexagonal lattice defined by the broken lines l_{n1} (the line l_3 is also shown).
To estimate the number m_{2^n}|of of multiplications required for computing H_{2^n}|of, we can use the known recursive estimate m_{2^n}|of = n·2^{n−2} + m_{2^{n−1}}|of, for n ≥ 2 [48], which gives m_{2^n}|of = (n − 1)2^{n−1}. The sets J_{2^{r−n},2^{r−n}}, n = 0:(r − 1), consist of 3·2^{r−n−1} generators, respectively. Therefore, by means of the paired transform X'_{2^r,2^r}, the 2^r × 2^r-point DHT is split into 3(2^r − 2) short odd-frequency DHTs and four 1-point transforms, namely, 2^{r−1}-point DHTs in number 3·2^{r−1}, 2^{r−2}-point DHTs in number 3·2^{r−2}, ..., and six 2-point and four 1-point DHTs. This splitting is similar to the splitting of the 2-D DFT, and for calculating the 2^r × 2^r-point DHT, it is enough to fulfill

m'_{2^r,2^r} = 2·4^{r−1}(r − 7/3) + 8/3, (r ≥ 2),  (19.240)

operations of multiplication.
Example 19.28

In the N = 8 case, the lattice X_{8,8} is divided into 22 orbits T' ∈ σ'. Twelve of these orbits have 4 points each, six orbits have 2 points each, and the other four orbits are 1-point. Therefore, by using the paired transformation, we obtain the following splitting of the 8 × 8-point DHT by the odd-frequency DHTs:

R(H_{8,8}; σ') = {H_4|of, ..., H_4|of (12 times), H_2|of, ..., H_2|of (6 times), 1, 1, 1, 1}.

The odd-frequency 4-point DHT has the matrix

H_4|of = [ 1  1  1  1
           1.4142  0  −1.4142  0
           1  −1  1  −1
           0  1.4142  0  −1.4142 ]

and requires only two multiplications. No multiplications are required for the odd-frequency two-point DHT: (f0, f1) → (f0 + f1, f0 − f1). Therefore, the paired transform–based 8 × 8-point DHT uses m'_{8,8} = 12·m_4|of = 24 operations of multiplication.

19.11.2 2-D Discrete Cosine Transform

We now consider the paired transform–based algorithm for the calculation of the 2-D DCT. We dwell briefly on the main properties of the algorithm, which is a modification of the tensor algorithm described in detail in Section 19.8. Let C_{N,N} be the 2-D discrete nonseparable N × N-point DCT calculated at the frequency-points (p1, p2) of the lattice X_{N,N} by

C_{p1,p2} = Σ_{n1=0}^{N−1} Σ_{n2=0}^{N−1} f_{n1,n2} Cos(n1·p1 + n2·p2 + (p1 + p2)/2), p1, p2 = 0:(N − 1),  (19.241)
where Cos(x) = cos(πx/N). The paired representation of the image f by the cosine transform is described as a complete set of paired splitting-signals, which are composed from the splitting-signals in the tensor representation by subtracting components of the second half of each signal from those of the first half. In other words, the paired splitting-signals corresponding to the generators (p1, p2) are defined as follows:

1. If e(p1, p2) = 0, then

f̃_{T'} = {f̃'_{p1,p2,0}, f̃'_{p1,p2,1}, f̃'_{p1,p2,2}, ..., f̃'_{p1,p2,N/2−1}},  (19.242)

where

f̃'_{p1,p2,t} = f̃_{p1,p2,t} − f̃_{p1,p2,N−t}, t = 0:(N/2 − 1).  (19.243)

2. If e(p1, p2) = 1, then

f̃_{T'} = {f̃'_{p1,p2,1/2}, f̃'_{p1,p2,3/2}, ..., f̃'_{p1,p2,N/2−1/2}},  (19.244)

where

f̃'_{p1,p2,t+1/2} = f̃_{p1,p2,t+1/2} − f̃_{p1,p2,N−t−1/2}, t = 0:(N/2 − 1).  (19.245)

Therefore, the splitting of the 2-D DCT is described by the following pair of equations:

C_{(2m+1)p1,(2m+1)p2} = Σ_{t=0}^{N/2−1} f̃'_{p1,p2,t} Cos_{N/2}((m + 1/2)t), if e(p1, p2) = 0;
C_{(2m+1)p1,(2m+1)p2} = Σ_{t=0}^{N/2−1} f̃'_{p1,p2,t+1/2} Cos_{N/2}((m + 1/2)(t + 1/2)), if e(p1, p2) = 1;

m = 0:(N/2 − 1).  (19.246)

Example 19.29

In the N = 8 case, the 8 × 8-point DCT is split by the paired transform, in correspondence with the same partition σ' of the lattice X_{8,8} as for the Hartley transform in Example 19.28, into twelve 4-point, six 2-point, and four 1-point transforms.
19.11.3 2-D Discrete Hadamard Transform

The basis functions of the DHdT take only the value 1 or −1 at each point. The DHdT has found useful applications in signal and image processing (image coding, enhancement, pattern recognition, and filtering). The two-dimensional DHdT of a 2-D sequence, or image, f = {f_{n1,n2}} of size N × N, where N = 2^r, r ≥ 1, is defined by

A_{p1,p2} = (A_{N,N} f)_{p1,p2} = Σ_{n2=0}^{N−1} Σ_{n1=0}^{N−1} f_{n1,n2} a(p1; n1) a(p2; n2),  (19.249)

where p1, p2 = 0:(N − 1). The transform is separable; it can be calculated by the row–column method by using the 1-D DHdT,

A_p = (A_N fn)_p = Σ_{n=0}^{N−1} fn a(p; n), p = 0:(N − 1),

where a(p; n) is the kernel of the transform, which is defined in Equation 19.4. The matrix [A_N] of the 1-D DHdT consists only of the elements ±1 and can be constructed recursively:

[A_N] = [ A_{N/2}  A_{N/2}
          A_{N/2} −A_{N/2} ],  (A_1 = 1).

We here focus on the construction of the fast algorithm for calculating the 2-D DHdT, which is based on the concept of the paired transforms described above for the 2-D DFT. The paired transform reveals both the 2-D DHdT and the DFT, which means that the same splitting-signals

f_{T'_{p,s}} = {f'_{p,s,0}, f'_{p,s,1}, ..., f'_{p,s,N/2−1}}, (p, s) ∈ J_{N,N},  (19.250)

can be used for calculating both transforms. Namely, the following property is valid for the 2-D DHdT. Let us consider the given
formula (19.193) of Section 19.9 for the calculation of the 2-D DFT through the 1-D DFTs of the paired splitting-signals,

F_{(2m+1)p,(2m+1)s} = Σ_{t=0}^{N/2−1} f'_{p,s,t} W^t W_{N/2}^{mt}, m = 0:(N/2 − 1),

for the case when g.c.d.(p, s) = 1. The N/2-point DFT of the modified splitting-signal defines the 2-D DFT at the frequency-points of the subset T'_{p,s}. If we now omit all the twiddle factors W^t (i.e., the splitting-signals are not modified) and consider, on the right side of this formula, the N/2-point DHdT instead of the N/2-point DFT, we obtain the 2-D DHdT at the frequency-points of T'_{p,s}. In other words, the following is valid:

A_{(2m+1)p,(2m+1)s} = Σ_{t=0}^{N/2−1} f'_{p,s,t} a(m; t), m = 0:(N/2 − 1).  (19.251)
A similar result can easily be derived for the case when g.c.d.(p, s) = 2^k, where k = 1:(r − 1), when the N/2-point DHdT on the right side of Equation 19.251 is reduced to the N/2^{k+1}-point DHdT. The illustration of this property can easily be seen in the 1-D case.

19.11.3.1 1-D DFT and DHdT

The 1-D N-point discrete paired transform (DPT) X'_N is defined by the following complete system of paired functions [12]:

X'_{2^k,2^k t}(n) = M(cos(2π(n − t)/2^{r−k})), k = 0:(r − 1), t = 0:(2^{r−k−1} − 1),
X'_{0,0}(n) = 1, n = 0:(N − 1),  (19.252)

where M is the real function that differs from zero only on the bounds of the interval [−1, 1] and takes the values M(1) = 1, M(−1) = −1. The double numbering of the paired functions refers to the frequency (p = 2^k) and time (t). The paired transform is a transform of the discrete-time signal fn to the set of frequency-time signals,

fn → { {f'_{2^k,0}, f'_{2^k,2^k}, f'_{2^k,2^k·2}, ..., f'_{2^k,N/2−2^k}}; k = 0:(r − 1) } ∪ {f'_{0,0}},

which splits the 2^r-point DFT by the 2^{r−k−1}-point DFTs, k = 0:(r − 1). The components of these splitting-signals are calculated by

f'_{2^k,2^k t} = (X'_{2^k,2^k t}, fn) = Σ_{n=0}^{2^r−1} X'_{2^k,2^k t}(n) fn, t = 0:(2^{r−k−1} − 1).

Example 19.30

The matrix of the 8-point DPT is defined as follows:

X'_8 = [ 1  0  0  0 −1  0  0  0
         0  1  0  0  0 −1  0  0
         0  0  1  0  0  0 −1  0
         0  0  0  1  0  0  0 −1
         1  0 −1  0  1  0 −1  0
         0  1  0 −1  0  1  0 −1
         1 −1  1 −1  1 −1  1 −1
         1  1  1  1  1  1  1  1 ].

The rows are the functions X'_{1,0}, X'_{1,1}, X'_{1,2}, X'_{1,3}, X'_{2,0}, X'_{2,2}, X'_{4,0}, and X'_{0,0}. The first four basis paired functions correspond to the frequency p = 1, the next two functions correspond to the frequency p = 2, and the last two functions correspond to the frequencies p = 4 and 0, respectively. The process of composition of these functions from the corresponding cosine waves defined on the interval [0, 7] is illustrated in Figure 19.41. Let fn be the signal {1, 2, 2, 4, 5, 3, 1, 3}. The paired transform of this signal results in four splitting-signals as follows:

fn → { {−4, −1, 1, 1}, {3, −2}, {−3}, {21} }.

The splitting of the 2^r-point DFT into (r + 1) short DFTs is described by

F_{(2m+1)2^k} = Σ_{t=0}^{2^{r−k−1}−1} f'_{p,t} W_{2^{r−k}}^t W_{2^{r−k−1}}^{mt}, m = 0:(2^{r−k−1} − 1).  (19.253)

The set of 2^r frequency-points X_{2^r} = {0, 1, 2, ..., 2^r − 1} is divided into (r + 1) subsets, or orbits,

T'_p = {(2m + 1)p mod 2^r; m = 0:(2^{r−1}/p − 1)},

where p = 2^k, k = 0:(r − 1), and T'_0 = {0}. These subsets compose a partition σ' of X_{2^r}, and the 2^r-point DFT is split as follows:

R(F_{2^r}; σ') = {F_{2^{r−1}}, F_{2^{r−2}}, F_{2^{r−3}}, ..., F_2, 1, 1}.  (19.254)

Using the similar splitting for each short transform F_{2^{r−k−1}} of this splitting, we obtain the full decomposition of the 2^r-point DFT by the paired transforms. Similar results are valid for the 1-D DHdT (up to a permutation),

A_{(2m+1)2^k} = Σ_{t=0}^{2^{r−k−1}−1} f'_{p,t} a(m; t), m = 0:(2^{r−k−1} − 1),  (19.255)

and the splitting of the 2^r-point DHdT equals

R(A_{2^r}; σ') = {A_{2^{r−1}}, A_{2^{r−2}}, A_{2^{r−3}}, ..., A_2, 1, 1}.  (19.256)
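The paired transform is computed with additions only, by repeatedly taking differences and sums of the two halves of the signal. A minimal sketch (ours) reproduces the four splitting-signals of Example 19.30:

```python
import numpy as np

def paired_transform(f):
    """1-D paired transform: splits f (length 2^r) into r + 1 short signals."""
    out, cur = [], np.asarray(f, dtype=float)
    while len(cur) > 1:
        half = len(cur) // 2
        out.append(cur[:half] - cur[half:])   # components f'_{2^k, 2^k t}
        cur = cur[:half] + cur[half:]         # fold and repeat on the sums
    out.append(cur)                           # the total sum, f'_{0,0}
    return out

sig = paired_transform([1, 2, 2, 4, 5, 3, 1, 3])
# splitting-signals: {-4, -1, 1, 1}, {3, -2}, {-3}, {21}
```

Each splitting-signal, after multiplication by the twiddle factors W_{2^{r−k}}^t, yields the DFT components F_{(2m+1)2^k} through a short DFT, per Equation 19.253.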
FIGURE 19.41 (a) Cosine waves and (b) discrete paired functions of the 8-point DPT.
Example 19.31

Let N = 8 and let fn be the signal of Example 19.30. According to Equation 19.253, the calculation of the 8-point DFT of fn can be written in the matrix form as

\[
\begin{bmatrix} F_7\\ F_3\\ F_5\\ F_1\\ F_6\\ F_2\\ F_4\\ F_0 \end{bmatrix}
=
\begin{bmatrix}
[\bar F_4] & & & \\
 & [F_2] & & \\
 & & 1 & \\
 & & & 1
\end{bmatrix}
\operatorname{diag}\{1,\, W,\, -j,\, W^3,\, 1,\, -j,\, 1,\, 1\}\; X_8'
\begin{bmatrix} 1\\2\\2\\4\\5\\3\\1\\3 \end{bmatrix}
=
\begin{bmatrix}
-5.4142 + j\\ -2.5858 + j\\ -2.5858 - j\\ -5.4142 - j\\ 3 - 2j\\ 3 + 2j\\ -3\\ 21
\end{bmatrix},
\]

where \([\bar F_4]\) denotes the 4-point DFT matrix with its rows ordered to yield the components F7, F3, F5, F1, and the twiddle factors W = exp(-2πj/8) = 0.7071(1 - j) and W³ = -0.7071(1 + j).
The construction of the matrix [A8] of the 8-point DHdT is illustrated in Table 19.5. The calculation of the Hadamard transform of the signal fn is performed as

\[
\begin{bmatrix}
A_7\\ A_{3\to 6}\\ A_5\\ A_{1\to 4}\\ A_{6\to 3}\\ A_2\\ A_{4\to 1}\\ A_0
\end{bmatrix}
=
\begin{bmatrix}
\bigl([A_2]\oplus 1\oplus 1\bigr)X_4' & & & \\
 & [A_2] & & \\
 & & 1 & \\
 & & & 1
\end{bmatrix}
X_8'
\begin{bmatrix} 1\\2\\2\\4\\5\\3\\1\\3 \end{bmatrix}
=
\begin{bmatrix} -3\\ -7\\ -3\\ -3\\ 5\\ 1\\ -3\\ 21 \end{bmatrix},
\]

where the permutation of the components Ap is performed as p = (p0, p1, p2) → (p2, p1, p0), and

\[
[A_2] = [F_2] = \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix}.
\]
Multidimensional Discrete Unitary Transforms

TABLE 19.5  Construction of the Hadamard Matrix 8 × 8

n, p   (n0 n1 n2)   (p0 p1 p2)   Row of [A8] = (-1)^(n0 p0 + n1 p1 + n2 p2), columns n = 0, 1, ..., 7
7      111          111          1, -1, -1,  1, -1,  1,  1, -1
6      011          011          1,  1, -1, -1, -1, -1,  1,  1
5      101          101          1, -1,  1, -1, -1,  1, -1,  1
4      001          001          1,  1,  1,  1, -1, -1, -1, -1
3      110          110          1, -1, -1,  1,  1, -1, -1,  1
2      010          010          1,  1, -1, -1,  1,  1, -1, -1
1      100          100          1, -1,  1, -1,  1, -1,  1, -1
0      000          000          1,  1,  1,  1,  1,  1,  1,  1

(Here n = n0 + 2n1 + 4n2, with the same binary labeling for p.)
Thus, the following decomposition is valid:

\[
[A_8] =
\begin{bmatrix}
[\bar A_4] & & & \\
 & [A_2] & & \\
 & & 1 & \\
 & & & 1
\end{bmatrix}
X_8'
=
\begin{bmatrix}
\begin{bmatrix}
[A_2] & & \\
 & 1 & \\
 & & 1
\end{bmatrix} X_4' & & & \\
 & [A_2] & & \\
 & & 1 & \\
 & & & 1
\end{bmatrix}
X_8'.
\]
In the general N = 2^r case, where r ≥ 2, the following matrix decompositions hold for the 1-D DHdT and DFT:

\[
[A_{2^r}] = \left[\left(\bigoplus_{k=0}^{r-1} [A_{2^{r-k-1}}]\right) \oplus 1\right] X_{2^r}',
\qquad
[F_{2^r}] = \left[\left(\bigoplus_{k=0}^{r-1} [F_{2^{r-k-1}}]\right) \oplus 1\right] D_{2^r}\, X_{2^r}',
\tag{19.257}
\]

where \(\oplus\) denotes the operation of the Kronecker (direct) sum of matrices, and the diagonal matrix

\[
D_{2^r} = \operatorname{diag}\{1, W, W^2, W^3, \ldots, W^{2^{r-1}-1},\;
1, W^2, W^4, \ldots, W^{2^{r-1}-2},\;
1, W^4, W^8, \ldots, W^{2^{r-1}-4},\; 1, \ldots, 1, 1\}.
\]
Thus, for the calculation of the 2^r-point DHdT, we can use the paired algorithm of the DFT from which all diagonal matrices with twiddle factors are removed, or considered to be identity matrices. The paired transform splits the mathematical structures of both transforms in the same way. In the 2-D case, the paired algorithm of the 2-D DFT can also be used for the calculation of the 2-D DHdT, by removing all twiddle factors, or considering them equal to 1.
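This "DFT algorithm minus twiddle factors" view can be sketched directly. The recursive function below is an illustrative sketch (not the authors' code); its output is in bit-reversed frequency order, which corresponds to the permutation p = (p0, p1, p2) → (p2, p1, p0) used in Example 19.31.

```python
def dhdt_via_paired(f):
    """2^r-point discrete Hadamard transform via the paired splitting,
    i.e., the paired DFT algorithm with every twiddle factor replaced by 1.
    The Hadamard component A_p appears at the bit-reversed index rev(p)."""
    n = len(f)
    A = [0] * n
    g = list(f)
    p = 1
    while len(g) > 1:
        half = len(g) // 2
        d = [g[t] - g[t + half] for t in range(half)]   # splitting-signal
        a = dhdt_via_paired(d)                          # short DHdT, no twiddles
        for m in range(half):
            A[((2 * m + 1) * p) % n] = a[m]
        g = [g[t] + g[t + half] for t in range(half)]
        p *= 2
    A[0] = g[0]
    return A

f = [1, 2, 2, 4, 5, 3, 1, 3]
print(dhdt_via_paired(f))
# [21, -3, 1, -7, -3, -3, 5, -3]
```

With the signal of Example 19.30, index 0 holds A0 = 21 and, for example, index 1 holds A4 = -3, matching the bit-reversal permutation of the example.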
19.12 Conclusion

The representation of multidimensional signals and the splitting of their unitary transforms by the tensor and paired transforms allow effective methods to be developed for calculating the multidimensional transforms through one-dimensional splitting-signals. The splitting-signals can be processed separately and in parallel; they carry the spectral information of the signals and define the images of the multidimensional signals along different directions in the spatial domain. The paired transformations are unitary and transfer n-dimensional signals from the spatial domain to an (n + 1)-dimensional space, which is the n-dimensional space of frequency-points together with the 1-D time interval. The basis functions of the paired transforms are defined by linear integrals (sums) along specific parallel directions in the spatial domain. We can also call the paired transforms directional unitary transforms, which are derived from the kernels of the multidimensional transforms, such as the Fourier and cosine transforms. Therefore, the paired transforms can be used not only for effective calculation of the multidimensional transforms, but also in such practical applications as image enhancement or computed tomography, where the 2-D or 3-D image is reconstructed from parallel projections directly from the paired transforms. The images can be defined on multidimensional rectangular lattices, and other types of lattices, such as hexagonal lattices, were described for the 2-D Fourier transforms. In this chapter, we have focused on the discrete Fourier, Hartley, Hadamard, and cosine transforms, but other transforms can also be revealed by 1-D splitting-signals. One important orthogonal transformation, the Haar transformation, has not been considered in this chapter. However, an attentive reader will have noticed that the Haar transformation in the 1-D case can easily be derived from the paired transformation of order N = 2^r, r > 1; only a few permutations of columns and rows of the matrix of the paired transformation are required. In other words, the Haar transformation is a paired-like transformation that is present in the mathematical structure of the DFT [16]. In the two- and multidimensional cases, the paired transformations are not separable and exist not only for orders that are powers of two, but for many other orders as well.
References

1. N. Ahmed and K.R. Rao, Orthogonal Transforms for Digital Image Processing, Springer-Verlag, New York, 1975.
2. L.A. Zalmanzon, Fourier, Walsh, and Haar Transforms and Their Application in Control, Communication and Other Fields, Nauka, Moscow, 1989.
3. A.K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, 1989.
4. O. Ersoy, Fourier-Related Transforms, Fast Algorithms and Applications, Prentice Hall, Englewood Cliffs, NJ, 1997.
5. O.K. Ersoy and C.H. Chen, Transform-coding of images with reduced complexity, CVGIP, 42(1), 19–31, Apr. 1988.
6. K.R. Rao and P. Yip, Discrete Cosine Transform—Algorithms, Advantages, Applications, Academic Press, London, U.K., 1990.
7. W.B. Pennebaker and J.L. Mitchell, JPEG Still Image Compression Standard, Van Nostrand Reinhold, New York, 1993.
8. G. Mandyam, N. Ahmed, and N. Magotra, Lossless image compression using the discrete cosine transform, JVCIR, 8(1), 21–26, Mar. 1997.
9. M.S. Moellenhoff and M.W. Maier, DCT transform coding of stereo images for multimedia applications, IndEle, 45(1), 38–43, Feb. 1998.
10. J.W. Cooley and J.W. Tukey, An algorithm for the machine calculation of complex Fourier series, Math. Comput., 19(90), 297–301, 1965.
11. A.M. Grigoryan and S.S. Agaian, Split manageable efficient algorithm for Fourier and Hadamard transforms, IEEE Trans. Signal Process., 48(1), 172–183, Jan. 2000.
12. A.M. Grigoryan, 2-D and 1-D multipaired transforms: Frequency-time type wavelets, IEEE Trans. Signal Process., 49(2), 344–353, Feb. 2001.
13. H.J. Nussbaumer and P. Quandalle, Fast computation of discrete Fourier transforms using polynomial transforms, IEEE Trans. Acoust. Speech Signal Process., 27, 169–181, 1979.
14. H.J. Nussbaumer, Fast Fourier Transform and Convolution Algorithms, 2nd edn., Springer-Verlag, Berlin, Heidelberg, Germany, 1982.
15. A.M. Grigoryan, New algorithms for calculating the discrete Fourier transforms, Journal Vichislitelnoi Matematiki i Matematicheskoi Fiziki, Academy Science USSR, Moscow, 26(9), 1407–1412, 1986.
16. A.M. Grigoryan and S.S. Agaian, Multidimensional Discrete Unitary Transforms: Representation, Partitioning and Algorithms, Marcel Dekker, New York, 2003.
17. A.M. Grigoryan and S.S. Agaian, Transform-based image enhancement algorithms with performance measure, Adv. Imaging Electron Phys., Academic Press, 130, 165–242, May 2004.
18. F.T. Arslan and A.M. Grigoryan, Fast splitting alpha-rooting method of image enhancement: Tensor representation, IEEE Trans. Image Process., 15(11), 3375–3384, Nov. 2006.
19. A.M. Grigoryan, Two-dimensional Fourier transform algorithm, Izvestiya VUZ SSSR, Radioelectronica, USSR, Kiev, 27(10), 52–57, 1984.
20. A.M. Grigoryan, An optimal algorithm for computing the two-dimensional discrete Fourier transform, Izvestiya VUZ SSSR, Radioelectronica, USSR, Kiev, 29(12), 20–25, 1986.
21. A.M. Grigoryan and M.M. Grigoryan, Tensor representation of the two-dimensional discrete Fourier transform and new orthogonal functions, Avtometria, AS USSR Siberian Section, Novosibirsk, 1, 21–27, 1986.
22. A.M. Grigoryan, Algorithm for computing the discrete Fourier transform with arbitrary orders, Journal Vichislitelnoi Matematiki i Matematicheskoi Fiziki, AS USSR, 30(10), 1576–1581, 1991.
23. A.M. Grigoryan, Vectorial algorithms for computing the two-dimensional discrete Hartley transformation, Doklady Nationalnoy Akademii Nauk Armenii, AS RA, 1, 6–9, 1995.
24. R.N. Bracewell, The fast Hartley transform, Proc. IEEE, 72(8), 1010–1018, 1984.
25. R.N. Bracewell, The Hartley Transform, Oxford University Press, New York, 1986.
26. S. Boussakta and A.G.J. Holt, Fast multidimensional discrete Hartley transform using Fermat number transform, IEE Proc. G, 135(6), 253–257, 1988.
27. S. Boussakta and A.G.J. Holt, Calculation of the discrete Hartley transform via the Fermat number transform using a VLSI chip, IEE Proc. G, 135(3), 101–103, 1988.
28. D. Yang, New fast algorithm to compute two-dimensional discrete Hartley transform, Electron. Lett., 25(25), 1705–1706, 1989.
29. A.M. Grigoryan, A novel algorithm for computing the 1-D discrete Hartley transform, IEEE Signal Process. Lett., 11(2), 156–159, Feb. 2004.
30. R.N. Bracewell, O. Buneman, H. Hao, and J. Villasenor, Fast two-dimensional Hartley transform, Proc. IEEE, 74, 1282–1283, 1986.
31. P. Duhamel and M. Vetterli, Improved Fourier and Hartley transform algorithms: Application to cyclic convolution of real data, IEEE Trans. Acoust. Speech Signal Process., 35(5), 818–824, June 1987.
32. H.V. Sorensen, D.L. Jones, C.S. Burrus, and M.T. Heideman, On computing the discrete Hartley transform, IEEE Trans. Acoust. Speech Signal Process., 33(5), 1231–1238, Oct. 1985.
33. S. Boussakta, O.H. Alshibami, and M.Y. Aziz, Radix-2×2×2 algorithm for the 3-D discrete Hartley transform, IEEE Trans. Signal Process., 49(12), 3145–3156, Dec. 2001.
34. A.M. Grigoryan and S.S. Agaian, Shifted Fourier transform based tensor algorithms for 2-D DCT, IEEE Trans. Signal Process., 49(9), 2113–2126, Sep. 2001.
35. Z. Cvetkovic and M.V. Popovic, New fast recursive algorithms for the computation of the discrete cosine and sine transforms, ICASSP '91, Toronto, Ontario, Canada, p. 2201, 1991.
36. Z. Zhijin and Q. Huisheng, Recursive algorithms for discrete cosine transform, ICASSP '96, Atlanta, GA, pp. 115–118, 1996.
37. P. Duhamel and C. Guillemot, Polynomial transform computation of the two-dimensional DCT, in Proc. ICASSP '90, Albuquerque, NM, pp. 1515–1518, 1990.
38. Z. Wang, G. Jullien, and W. Miller, One- and two-dimensional algorithms for length 15 and 30 discrete cosine transform, IEEE Trans. Circ. Syst. II Analog Dig. Signal Process., 43(2), 149–153, Feb. 1996.
39. H.V. Sorensen and C.S. Burrus, Efficient computation of the DFT with only a subset of input or output points, IEEE Trans. Signal Process., 41(3), 1184–1200, Mar. 1993.
40. M.T. Heideman, Multiplicative Complexity, Convolution, and the DFT, Springer-Verlag, New York, 1988.
41. Z. Wang, Pruning the fast discrete cosine transform, IEEE Trans. Commun., 39(5), 640–643, May 1991.
42. D.E. Dudgeon and R.M. Mersereau, Multidimensional Digital Signal Processing, Prentice Hall, Englewood Cliffs, NJ, 1984.
43. R.C. Staunton, Hexagonal sampling in image processing, Adv. Imaging Electron Phys., 107, 231–307, 1999.
44. D.P. Petersen and D. Middleton, Sampling and reconstruction of wave-number-limited functions in N-dimensional Euclidean spaces, Inform. Control, 5, 279–323, 1962.
45. D.H. Hubel, Eye, Brain, and Vision, Scientific American Books, New York, 1988.
46. A.B. Watson and A.J. Ahumada Jr., A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex, IEEE Trans. Biomed. Eng., 36(1), 97–106, Jan. 1989.
47. J.L. Zapata and G.X. Ritter, Fast Fourier transform for hexagonal aggregates, J. Math. Imaging Vis., 12(3), 183–197, Jun. 2000.
48. G. Bi and Y. Chen, Fast generalized DFT and DHT algorithms, Signal Process., 65, 383–390, 1998.
20
Empirical Mode Decomposition and the Hilbert–Huang Transform

Albert Ayenu-Prah, University of Delaware
Nii Attoh-Okine, University of Delaware
Norden E. Huang, National Central University

20.1 Introduction .......................................................................... 20-1
20.2 Empirical Mode Decomposition and the Hilbert–Huang Transform ............... 20-1
     Drawbacks · Hilbert Transform and Hilbert–Huang Transform · Recent Developments · End Effects
20.3 Bidimensional Empirical Mode Decomposition ..................................... 20-6
     Extrema Detection and Scattered Data Interpolation · Boundary Effects
20.4 Attempted Improvements on EMD .................................................. 20-8
20.5 HHT for Global Health Monitoring of Civil Infrastructure ...................... 20-8
20.6 Applications and Potential Application of BEMD ................................ 20-9
20.7 Recommendations ................................................................. 20-9
References ........................................................................... 20-10
20.1 Introduction

This chapter discusses the empirical mode decomposition (EMD) and the bidimensional empirical mode decomposition (BEMD), as well as the Hilbert–Huang transform (HHT) method. The HHT combines the EMD and the Hilbert spectral analysis; the Hilbert spectral analysis involves the Hilbert transform of the basis functions generated by the EMD. The HHT has been developed to handle nonstationary data, a property of almost all physical processes that are sampled for analysis. Traditionally, Fourier-based approaches have been the main analysis procedures for such physical processes, but stationarity must be assumed for Fourier-based methods. The HHT, by the nature of the method, presents a relative advantage over the Fourier analysis methods because it does not implicitly assume stationarity. The EMD has been extended to handle 2-D data, such as images, using the BEMD, which follows a similar procedure to the 1-D version.
20.2 Empirical Mode Decomposition and the Hilbert–Huang Transform

Huang et al. (1998) introduced the HHT as a signal-processing tool that adaptively decomposes nonstationary signals into basis functions called intrinsic mode functions (IMFs). The Hilbert transform of each IMF is well behaved, and the instantaneous frequency and instantaneous amplitude can be determined from the analytic signal that is formed from the IMF and its Hilbert transform. The instantaneous frequency and instantaneous amplitude may be used to plot an energy–frequency–time spectrum of the original signal.
The HHT consists of two parts: the EMD and the Hilbert spectral analysis (HSA). The EMD generally separates nonstationary data into locally non-overlapping time-scale components. The signal decomposition process breaks the signal down into a set of complete and almost orthogonal components, which are the IMFs. An IMF is a function that satisfies the following two conditions:

• The number of extrema and the number of zero-crossings must either be equal or differ by at most one over the whole data set;
• The mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero at every point.
To begin the EMD, a function or signal is decomposed as follows. Identify all the local extrema; then connect all the local maxima by a cubic spline as the upper envelope. Repeat the procedure for the local minima to produce the lower envelope. The upper and lower envelopes should include all the data. If the mean of the upper and lower envelopes is designated as m1 and the difference between the data and m1 is the first component h1, then

\[
x(t) - m_1 = h_1. \tag{20.1}
\]

The mean m1 is given by

\[
m_1 = \frac{L + U}{2}, \tag{20.2}
\]

where U is the upper envelope defined by the local maxima and L is the lower envelope defined by the local minima.
Technically, h1 is supposed to be an IMF, except that some error might be introduced by the spline curve-fitting process—in many cases there are overshoots and undershoots after the first round of processing; therefore, the sifting process has to be repeated many times. The sifting process serves two purposes (Huang et al., 1998): it eliminates riding waves (smaller waves that seem to "ride" bigger waves), and it makes the signal or profile more symmetric about the local zero-mean line. In the second round of sifting, h1 is treated as the data or the first component. Then a new mean is computed as before. If the new mean is m11, then

\[
h_1 - m_{11} = h_{11}. \tag{20.3}
\]

After repeating the sifting process up to k times, h1k becomes an IMF; that is,

\[
h_{1(k-1)} - m_{1k} = h_{1k}. \tag{20.4}
\]

Let h1k = c1, the first IMF from the data. c1 should contain the finest scale, or the shortest-period component, of the data. The process that generates one IMF may be considered the inner loop. Now c1 is separated from the original data:

\[
x(t) - c_1 = r_1, \tag{20.5}
\]

where r1 is the residue; it contains information on longer-period components and is now treated as the new data and subjected to the same sifting process. (This is the beginning of the outer loop, which goes on to the next inner loop for the next IMF.) The procedure is repeated for all subsequent rj's, resulting in

\[
r_1 - c_2 = r_2;\ \ldots;\ r_{n-1} - c_n = r_n, \tag{20.6}
\]

where c2 to cn are the subsequent IMFs of the data. The inner and outer loops of the EMD can be pictured as in Figure 20.1.

FIGURE 20.1 Pictorial depiction of the EMD process. [Flowchart: original signal → construct upper and lower envelopes and find their mean → subtract the mean from the signal → check the inner-loop residue for IMF qualification; if not an IMF, treat the inner-loop residue as the signal (inner loop); if an IMF, store it, subtract it from the signal, and treat the outer-loop residue as the signal (outer loop).]

There are stopping criteria for the sifting process, since allowing sifting to go beyond a certain point may smooth out important signal variations and features that arise from the natural dynamics of the system: the IMF components need to retain enough physical sense of both amplitude and frequency modulations. This can be achieved by limiting the value of the sum of the difference (SD) computed from two consecutive sifting results:

\[
\mathrm{SD} = \frac{\sum_{t=0}^{T} \lvert h_{k-1}(t) - h_k(t) \rvert^2}{\sum_{t=0}^{T} h_{k-1}^2(t)}. \tag{20.7}
\]

A value of SD between 0.2 and 0.3 is usually preferable, based on experimental analyses performed by Huang et al. (1998). To check that the number of zero-crossings is equal to, or differs by at most one from, the number of extrema, an alternate stopping criterion is proposed by Huang et al. (2003). Sifting is
stopped when the number of zero-crossings is equal to, or differs by at most one from, the number of extrema for S successive sifting steps; the optimum value for S was found to be between 4 and 8. The optimum value for S came about while determining a confidence limit for the EMD. Traditionally, Fourier spectral analysis has invoked the ergodic theory in computing the confidence limit, treating the temporal average as the ensemble average. The data span is cut into a certain number of sections, and the Fourier spectra are found for each section; the confidence limit is then the statistical spread of the different spectra. However, for nonstationary processes, the ergodic assumption would not make much sense. Therefore, Huang et al. (2003) decomposed a data set with EMD using different stopping criteria—different S-numbers, varying from 1 to 20. Since different stopping criteria can produce different numbers of IMFs, the intermittency criterion was invoked to force the same number of IMFs for each S-number used. Intermittency, which is an attribute of turbulent dynamical systems, is defined as sudden erratic changes in wave heights. It is not uncommon for data from natural physical systems to show intermittency. According to Huang et al. (2003), intermittency can introduce mode mixing, that is, different time or spatial scales mixed in one IMF. This has the effect of producing additional, albeit spurious, variations in the IMFs and, therefore, in the values of instantaneous frequency. To deal with intermittency, a number, n1, is selected, which corresponds to the number of data points within a certain chosen data limit; only waves shorter than this limit are to be included in an IMF.
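The counting rule that the S-number criterion tracks can be sketched as follows (an illustrative helper, not code from the chapter); sifting would stop once |extrema − zero-crossings| ≤ 1 has held for S successive sifts.

```python
import numpy as np

def extrema_and_zero_crossings(h):
    """Count the local extrema and the zero-crossings of a sampled signal."""
    i = np.arange(1, len(h) - 1)
    n_max = np.sum((h[i] > h[i - 1]) & (h[i] > h[i + 1]))
    n_min = np.sum((h[i] < h[i - 1]) & (h[i] < h[i + 1]))
    n_zero = np.sum(np.sign(h[:-1]) * np.sign(h[1:]) < 0)
    return int(n_max + n_min), int(n_zero)

t = np.linspace(0.0, 1.0, 400)
h = np.sin(2 * np.pi * 10 * t)
n_ext, n_zc = extrema_and_zero_crossings(h)
assert abs(n_ext - n_zc) <= 1   # the IMF-like condition the S-number tracks
```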
Therefore, the upper and lower envelopes and their mean would be available to extract IMFs only when the distance between extrema is less than n1. The intermittency test can be difficult to invoke, for it is not trivial to choose n1; the test should be done only when serious mode mixing is detected after the data have first been processed by EMD. The intermittency test can then be applied, and waves of periods longer than a preset length scale can be ignored for successive IMFs; this has the effect of including waves of similar length in a single IMF. After getting IMFs using different S-numbers, the means of specific IMFs are determined in order to get a range of standard deviations that will define the confidence limit. The whole EMD process is stopped by either of the following predetermined criteria:

• when the residue rn is a function having only one extremum, or
• when the residue rn becomes a monotonic function from which no IMF can be extracted.
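Putting Equations 20.1 through 20.7 and the stopping rules together, the sifting machinery can be sketched as a toy implementation. Function names, tolerances, and the scipy-based spline envelopes are assumptions, and production EMD codes also treat end effects (Section 20.2.4); this sketch only illustrates the inner/outer-loop structure.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting step: h = x - m, with m the mean of the cubic-spline
    envelopes through the local maxima and minima (Eqs. 20.1 and 20.2)."""
    i = np.arange(1, len(x) - 1)
    maxima = i[(x[i] > x[i - 1]) & (x[i] > x[i + 1])]
    minima = i[(x[i] < x[i - 1]) & (x[i] < x[i + 1])]
    if len(maxima) < 4 or len(minima) < 4:
        return None                                  # too few extrema to fit envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)     # upper envelope U
    lower = CubicSpline(t[minima], x[minima])(t)     # lower envelope L
    return x - (upper + lower) / 2.0                 # subtract m = (L + U) / 2

def emd(x, t, sd_tol=0.3, max_sift=50, max_imfs=8):
    """Toy EMD: the inner loop sifts until the SD criterion (Eq. 20.7) is met;
    the outer loop peels IMFs off the residue (Eqs. 20.5 and 20.6)."""
    imfs, r = [], np.asarray(x, dtype=float)
    while len(imfs) < max_imfs:
        h = sift_once(r, t)
        if h is None:                                # residue too smooth: stop
            break
        for _ in range(max_sift):
            h_new = sift_once(h, t)
            if h_new is None:
                break
            sd = np.sum((h - h_new) ** 2) / np.sum(h ** 2)
            h = h_new
            if sd < sd_tol:                          # SD between 0.2 and 0.3 is typical
                break
        imfs.append(h)
        r = r - h
    return imfs, r

t = np.linspace(0.0, 1.0, 512)
x = np.sin(2 * np.pi * 24 * t) + np.sin(2 * np.pi * 3 * t)
imfs, r = emd(x, t)
assert len(imfs) >= 1
assert np.allclose(np.sum(imfs, axis=0) + r, x)      # completeness, Eq. 20.8
```

The final assertion checks the completeness property: the IMFs plus the residue reconstruct the signal exactly, by construction of the outer loop.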
Summing Equations 20.5 and 20.6 yields the following equation:

\[
x(t) = \sum_{j=1}^{n} c_j + r_n, \tag{20.8}
\]

which indicates completeness, in that the sum of the IMFs and the residue recovers the original signal. Here cj is the jth IMF, n is the number of sifted IMFs, and rn can be interpreted as the general trend of the signal. A measure of orthogonality may be determined from two consecutive IMFs, Cf and Cg, as follows (Huang et al., 1998):

\[
IO_{fg} = \sum_{t} \frac{C_f C_g}{C_f^2 + C_g^2}, \tag{20.9}
\]
where \(IO_{fg}\) is the index of orthogonality between \(C_f\) and \(C_g\), which must be as close to zero as possible. Figure 20.2a through d depicts a pictorial flow of the sifting process.

In the next step, the Hilbert transform is applied to each of the IMFs in order to compute instantaneous frequencies and instantaneous amplitudes, so that the Hilbert amplitude spectra may be plotted. The Hilbert transform of a real-valued function \(x(t)\), which belongs to \(L^p\), is given by

\[
H(x(t)) = y(t) = \frac{1}{\pi}\, P \int_{-\infty}^{\infty} \frac{x(\tau)}{t - \tau}\, d\tau, \tag{20.10}
\]

where P is the Cauchy principal value. The function \(x(t)\) and its Hilbert transform \(y(t)\) form an analytic signal \(z(t)\), given by

\[
z(t) = x(t) + iy(t) = a(t)e^{i\theta(t)}, \tag{20.11}
\]

where \(a(t)\) and \(\theta(t)\) represent the instantaneous amplitude and instantaneous phase, respectively. Now,

\[
a(t) = \sqrt{x^2 + y^2}, \tag{20.12}
\]

\[
\theta(t) = \tan^{-1}\frac{y}{x}. \tag{20.13}
\]

By definition, the instantaneous frequency is given as

\[
\omega(t) = \frac{d\theta(t)}{dt}. \tag{20.14}
\]

From Equations 20.11 and 20.14, after Hilbert transformation, each IMF can be represented by

\[
c_j = \mathrm{Re}\!\left[a_j(t)\, e^{\,i\int \omega_j(t)\,dt}\right], \tag{20.15}
\]

and, therefore, the original data \(x(t)\) can be recovered as

\[
x(t) = \mathrm{Re} \sum_{j=1}^{n} a_j(t)\, e^{\,i\int \omega_j(t)\,dt}. \tag{20.16}
\]

Equation 20.15 gives both the amplitude and frequency of each component as a function of time, and Equation 20.16 gives a frequency–time distribution of the amplitude, which is called the Hilbert spectrum, \(H(\omega, t)\). The corresponding Fourier representation would be

\[
x(t) = \mathrm{Re} \sum_{j=1}^{\infty} a_j e^{i\omega_j t}, \tag{20.17}
\]

with both \(a_j\) and \(\omega_j\) constants. The residual trend is not included in Equation 20.16 since, according to Huang et al. (1998), its energy could be overpowering; \(r_n\) should be included only if its inclusion can be well justified. Knowing the instantaneous frequencies and amplitudes of the IMFs, an energy–time–frequency spectrum, called the Hilbert spectrum, may be plotted for the signal \(x(t)\) in terms of the IMFs. Following from Equation 20.16, which presents the Hilbert spectrum, a marginal spectrum can be defined as

\[
h(\omega) = \int_0^T H(\omega, t)\, dt, \tag{20.18}
\]

where \(H(\omega, t)\) is used to represent the Hilbert spectrum. The marginal spectrum as given by Equation 20.18 gives an indication of the total energy contribution of each frequency value \(\omega\) over the data span; it is similar to the Fourier spectrum. The preceding discussion of the HHT shows that no a priori basis sets are defined for the procedure, and the problem of the Heisenberg uncertainty principle is not encountered. Table 20.1 compares the Fourier transform, the wavelet transform, and the HHT.
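Equations 20.10 through 20.14 are directly computable. The sketch below uses scipy.signal.hilbert, which returns the analytic signal z = x + iH[x]; the sampling rate and tone frequency are chosen (as an assumption for the demonstration) so that the tone is exactly periodic on the window, making the FFT-based Hilbert transform essentially free of edge effects.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 1000.0, 50.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * f0 * t)

z = hilbert(x)                                  # z(t) = x(t) + i y(t), Eq. 20.11
a = np.abs(z)                                   # instantaneous amplitude a(t), Eq. 20.12
theta = np.unwrap(np.angle(z))                  # instantaneous phase theta(t), Eq. 20.13
omega_hz = np.gradient(theta, t) / (2 * np.pi)  # instantaneous frequency, Eq. 20.14

assert np.allclose(a, 1.0, atol=1e-6)
assert np.allclose(omega_hz, f0, atol=1e-3)
```

For a mono-component tone, both estimates are flat at the true values; for multicomponent data the same recipe is only meaningful after EMD has produced well-behaved IMFs.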
FIGURE 20.2 (a) Original signal. (b) Upper and lower envelopes. (c) Mean envelope (dashed line). (d) IMF produced by subtracting mean envelope from original signal.

TABLE 20.1  Comparison between Fourier, Wavelet, and HHT

                      Fourier                            Wavelet                              HHT
Basis                 A priori                           A priori                             Adaptive
Frequency             Convolution: global, uncertainty   Convolution: regional, uncertainty   Differentiation: local, certainty
Presentation          Energy–frequency                   Energy–time–frequency                Energy–time–frequency
Nonlinear             Not easily defined                 Not easily defined                   Not easily defined
Nonstationary         No                                 Yes                                  Yes
Feature extraction    No                                 Yes                                  Yes
Theoretical base      Theory complete                    Theory complete                      Empirical

Source: Modified from Huang, N.E., The Hilbert–Huang Transform in Engineering, Huang, N.E. and Attoh-Okine, N.O. (Eds.), CRC Press, Boca Raton, FL, 1, 2005.
20.2.1 Drawbacks

The HHT procedure is empirical, and the most computationally intensive step is the EMD operation, which does not involve convolution and other time-consuming operations (this makes the HHT well suited to signals of large size). However, there are some drawbacks to the application of the HHT. First, the EMD may generate undesired low- or high-amplitude IMFs at the low-frequency region and bring up some undesired frequency components. Second, the first IMF may cover a wide frequency range at the high-frequency region and therefore cannot satisfy the monocomponent definition well—it has been observed to contain most of the noise in the original signal, though. Third, the EMD operation often cannot separate some low-energy components from the analyzed signal; those components may therefore not appear in the frequency–time plane.

20.2.2 Hilbert Transform and Hilbert–Huang Transform

The major difference between the conventional Hilbert transform and the HHT is the definition of instantaneous frequency. The instantaneous frequency has more physical meaning through its definition within the IMF component; meanwhile, the classical Hilbert transform of the original data might possess unrealistic features (Huang et al., 1998). This implies that the IMF represents a generalized Fourier expansion basis. It has variable amplitude and instantaneous frequency, which enable the expansion to accommodate nonstationary data. Furthermore, since the instantaneous frequency is a derivative, it is very local and can therefore describe intra-wave variations within the signal. Physically, the definition of instantaneous frequency has a true meaning only for "monocomponent" signals, which have one frequency, or at most a narrow range of frequencies, varying as a function of time (narrow band). Since most data do not show these necessary characteristics, the Hilbert transform sometimes makes little physical sense in practical applications. Explaining this sense of physical meaning, Huang (2005) directly Hilbert transformed the length-of-day (LOD) data shown in Figure 20.3 and plotted the analytic function in the complex phase plane. Instead of simple circles, the data showed haphazardly intertwined curves that looped around, showing no apparent order, as depicted in Figure 20.4. Additionally, plotting the phase function and the instantaneous frequency did not yield any meaningful result, with the phase function showing random but finite jumps (Figure 20.5) and the instantaneous frequency plot showing equally likely positive and negative frequencies (Figure 20.6). After performing EMD on the LOD data, the annual cycle is extracted and plotted; it shows apparent order, with near-circles in the polar representation (also shown in Figure 20.4). This illustrates why some preprocessing of the data is needed before the Hilbert transform is performed; this preprocessing step is the EMD, which decomposes the signal into IMFs that have better-behaved Hilbert transforms.

FIGURE 20.3 LOD data. [Deviation from 24 h (ms) versus time (years, 1960–2005).]

FIGURE 20.4 Analytic function in the complex phase plane after Hilbert-transforming the LOD data; the annual-cycle IMF obtained after EMD is also shown. [Real part versus imaginary part (Hilbert transform), for the LOD data and the annual IMF.]

FIGURE 20.5 Phase function of the analytic function from the LOD data. [Phase angle (rad) versus time (years, 1960–2005).]
FIGURE 20.6 Instantaneous function from derivative of phase function without EMD.
20.2.3 Recent Developments

A number of recent developments have emerged regarding the HHT (Huang, 2005). A normalized HHT was developed because of two theorems concerning the Hilbert transform: the product theorem for Hilbert transforms (Bedrosian, 1963) and the quadrature approximation to the Hilbert transform of modulated signals (Nuttall, 1966). According to the Bedrosian theorem, the Hilbert transform of the product of two signals, f(x) and g(x), can be written as

\[
H[f(x)g(x)] = f(x)\,H[g(x)] \tag{20.19}
\]

only if the Fourier spectra of f(x) and g(x) are non-overlapping (disjoint) in frequency space and g(x) has a higher frequency content than f(x), or both f(x) and g(x) are analytic. It would not be entirely possible to define the phase function as in Equation 20.13 and represent an IMF as in Equation 20.15 unless the following equation holds true:

\[
H[a(t)\cos\theta(t)] = a(t)\,H[\cos\theta(t)]. \tag{20.20}
\]

Equation 20.20 implies that a(t) has to have a very low frequency content compared to cos θ(t). Therefore, a way to satisfy this condition would be to normalize the function with respect to a(t), so that the amplitude is always unity in the normalized function. Applying this normalization in the EMD, Huang (2005) proposed to find all the maxima of each IMF, connect the maxima by cubic spline to form an upper envelope, E(t), and then divide the IMF by E(t). In this way, the IMF is normalized with respect to amplitude. The second condition to satisfy is given by the Nuttall theorem, which gives a measure of the discrepancy between the Hilbert transform and the quadrature of a carrier wave with amplitude or phase modulation or both. Let the signal be x(t) with a Hilbert transform, xH(t); and let the quadrature of x(t) be
xq(t). Nuttall (1966) presents the discrepancy in terms of the difference in energy of xH(t) and the energy of xq(t); if the difference in energy is E, then the ratio of E to the energy of the signal gives a relative measure of error in approximating xH(t) by xq(t). This error measure is going to be constant over the whole data range, and according to Huang (2005), a constant error bound is not going to reveal the location of the error on the time axis of a nonstationary signal. Therefore, Huang (2005) proposed a variable error bound using the normalized IMF and also a new method to compute the instantaneous frequency through direct quadrature (Huang et al., in press). The squared amplitudes of the normalized IMF would equal one if the Hilbert transform were equal to the quadrature, and, therefore, the difference between the squared amplitude and unity should be zero; otherwise, the Hilbert transform cannot be exactly the quadrature. Therefore, the error is the difference between the squared normalized amplitude and unity, and is a function of time. According to Huang (2005), detailed comparisons gave satisfactory results. An error index calculated gave values that were 10% or less over the data span. Though the intermittency test could alleviate the mode mixing to a certain degree, it is no longer totally adaptive. A better method is to use the ensemble empirical mode decomposition (EEMD) (Wu and Huang, 2009), in which noise is introduced to help scale separation and achieve a truly dyadic filter effect.
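The amplitude normalization described above can be sketched as follows. This is an illustrative helper with assumed names: following the spirit of Huang (2005), it fits the spline envelope through the maxima of the absolute value of the IMF and divides, iterating a few times toward unit amplitude.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def normalize_imf(c, t, n_iter=3):
    """Divide the IMF by the cubic-spline envelope through the maxima of its
    absolute value, iterating until the amplitude is close to unity."""
    s = np.asarray(c, dtype=float)
    for _ in range(n_iter):
        a = np.abs(s)
        i = np.arange(1, len(a) - 1)
        peaks = i[(a[i] > a[i - 1]) & (a[i] > a[i + 1])]
        if len(peaks) < 4:
            break
        s = s / CubicSpline(t[peaks], a[peaks])(t)   # divide by the envelope E(t)
    return s

t = np.linspace(0.0, 1.0, 1000)
c = (1 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 40 * t)  # AM carrier
u = normalize_imf(c, t)
peak = np.max(np.abs(u[100:-100]))
assert 0.9 < peak < 1.2    # interior samples now have near-unit amplitude
```

The normalized signal carries the frequency modulation alone, which is what Equation 20.20 requires before the Hilbert transform (or a direct quadrature) is applied.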
20.2.4 End Effects

The spline fitting for upper and lower envelopes can create problems at the ends of the data, where large swings are prone to occur. These large swings can propagate into the data series and corrupt the whole signal, leading to an ineffective EMD. In Huang et al. (1998), end effects are treated by adding characteristic waves at both ends of the signal that have the capacity to contain the wide swings that come from cubic spline fitting. Datig and Schlurmann (2004) implement a signal extension procedure that adds new maxima and minima to the front and rear of the signal, the new extrema being derived from the original time span of the signal; thus no information is canceled out, and the original data series remains unaffected. Rilling et al. (2003) mirrored the extrema close to the edges in order to contain wide swings. Chen et al. (2007) used axis-symmetric signal extension to handle end effects, while Cheng et al. (2007) used support vector regression machines to process end effects. So far, the best approach is the one used by Wu and Huang (2009), where a linear extension based on the two neighboring extrema was used in conjunction with the end-point values.
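A minimal sketch of edge treatment by mirroring extrema across the signal boundaries, in the spirit of Rilling et al. (2003); the function name and the choice of reflecting two extrema per side are illustrative assumptions:

```python
import numpy as np

def mirror_extrema(t_ext, v_ext, t0, t1):
    """Reflect the extrema nearest each edge across the boundaries t0 and t1,
    so the spline envelope is supported beyond the data span."""
    # reflect the first two extrema across the left boundary t0
    left_t = 2 * t0 - t_ext[:2][::-1]
    left_v = v_ext[:2][::-1]
    # reflect the last two extrema across the right boundary t1
    right_t = 2 * t1 - t_ext[-2:][::-1]
    right_v = v_ext[-2:][::-1]
    return (np.concatenate([left_t, t_ext, right_t]),
            np.concatenate([left_v, v_ext, right_v]))

t_max = np.array([0.1, 0.35, 0.62, 0.9])
v_max = np.array([1.0, 0.8, 1.1, 0.9])
tt, vv = mirror_extrema(t_max, v_max, 0.0, 1.0)
```

The envelope spline fitted through `tt, vv` now has knots outside [0, 1], so it is not forced to extrapolate at the signal ends; only the [0, 1] portion of the envelope is retained.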
Empirical Mode Decomposition and the Hilbert–Huang Transform

20.3 Bidimensional Empirical Mode Decomposition

The potential of the 1-D EMD generated research interest in 2-D applications for image processing. Existing traditional methods are still Fourier based, and processing is global rather than local, so that essential information may be lost in the image during processing. To avoid loss of information, a 2-D version of the EMD has recently been developed. Algorithms have been developed in the literature to do two-dimensional sifting for BEMD (Damerval et al., 2005; Linderhed, 2004; Nunes et al., 2003a,b), and they generally follow that for the one-dimensional case, only modified to handle two-dimensional signals. Linderhed (2002) first introduced EMD in two dimensions, which is now popularly called the bidimensional empirical mode decomposition (BEMD); BEMD was used for image compression, using only the extrema of the IMFs in the coding scheme. Nunes et al. (2003a,b) developed a BEMD method for texture image analysis; BEMD was used for texture feature extraction and image filtering. The sifting process used is as follows:

1. Identify the extrema of the image, I, by morphological reconstruction based on geodesic operators.
2. Generate the 2-D envelopes by connecting the extrema points with a radial basis function (RBF).
3. Determine the local mean, m_i, by averaging the two envelopes.
4. Compute I − m_i = h_i.
5. Repeat the process.
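The steps above can be sketched for a single sifting iteration. Using a 3×3 maximum/minimum filter for extrema detection and SciPy's thin-plate RBF interpolator for the envelopes is an assumption standing in for the morphological reconstruction and RBF machinery of Nunes et al.:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import RBFInterpolator

def sift_once(I):
    """One BEMD sifting step: h = I - mean(upper envelope, lower envelope)."""
    ys, xs = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    grid = np.column_stack([ys.ravel(), xs.ravel()])

    def envelope(mask):
        # thin-plate RBF surface through the flagged extrema, sampled on the grid
        pts = np.column_stack(np.nonzero(mask))
        return RBFInterpolator(pts, I[mask],
                               kernel='thin_plate_spline')(grid).reshape(I.shape)

    maxima = I == maximum_filter(I, size=3)   # neighboring-window extrema detection
    minima = I == minimum_filter(I, size=3)
    mean = 0.5 * (envelope(maxima) + envelope(minima))
    return I - mean

# synthetic image: slow plane (residue-like trend) plus an oscillatory component
ys, xs = np.mgrid[0:32, 0:32]
trend = 0.05 * (xs + ys)
I = trend + np.sin(xs * 1.0) * np.sin(ys * 1.0)
h = sift_once(I)
```

After one step, `I - h` (the mean of the envelopes) tracks the slow trend, while `h` carries the oscillation, which is exactly the separation the sifting loop then refines.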
For the envelope construction, the authors used RBF of the form
s(x) = p_m(x) + Σ_{i=1}^{N} λ_i Φ(‖x − x_i‖),    (20.21)
where
p_m is a low-degree polynomial of degree m in d variables
‖·‖ denotes the Euclidean norm
λ_i are the RBF coefficients
Φ is a real-valued function
x_i are the RBF centers

The stopping criterion used is similar to that by Huang et al. (1998), using the standard deviation as discussed in Section 20.2 above. Linderhed (2005) also developed a sifting process for 2-D time series. Although the stopping criterion for IMF extraction is relaxed, the stopping criterion for the whole EMD process is similar to that of Huang et al. (1998). The IMF stopping criterion is based on the condition that the extrema envelope is close enough to zero; therefore, there is no need to check for symmetry. The algorithm is similar to that of Nunes et al. (2003a,b). However, the author performs extrema detection by comparing a candidate data point with its eight nearest-connected neighbors, and suggests thin-plate splines, which are RBFs, for envelope construction. Two general types of BEMD are presented in the literature. One uses tensor products to generate upper and lower envelopes for rows and columns of an image (Liu and Peng, 2005), and the other (the more popular) uses a neighboring-window technique for extrema detection and two-dimensional envelopes for interpolation, for instance, using RBFs or Delaunay triangulation.
The former method is faster but does not take into account the geometry of the image. In the literature, the preferred method seems to be the latter, which uses two-dimensional lower and upper envelopes to interpolate. Just as in the 1-D EMD, intermittency can also pose problems in BEMD in terms of mode mixing. Nunes et al. (2005) introduced a modified algorithm for BEMD that included a treatment of intermittency. Similar to the 1-D, it also uses a period length criterion whereby a predetermined period length is set so that any period length above the predetermined length is ignored; subsequently waves of similar period lengths are included in corresponding IMFs.
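Equation 20.21 can be implemented directly by solving the interpolation conditions s(x_i) = f_i together with the usual polynomial side conditions; the thin-plate choice Φ(r) = r² log r and the degree-1 polynomial are illustrative assumptions:

```python
import numpy as np

def thin_plate(r):
    """Phi(r) = r^2 log r, with Phi(0) = 0."""
    with np.errstate(divide='ignore', invalid='ignore'):
        v = r**2 * np.log(r)
    return np.where(r > 0, v, 0.0)

def rbf_fit(centers, values):
    """Solve for the lambda_i and a degree-1 polynomial p(x) in
    s(x) = p(x) + sum_i lambda_i Phi(||x - x_i||), with the side
    conditions P^T lambda = 0 appended to make the system square."""
    n = len(centers)
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = thin_plate(r)
    P = np.column_stack([np.ones(n), centers])            # basis [1, x, y]
    M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(M, np.concatenate([values, np.zeros(3)]))
    lam, poly = coef[:n], coef[n:]

    def s(x):
        rr = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
        return thin_plate(rr) @ lam + np.column_stack([np.ones(len(x)), x]) @ poly
    return s

centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
vals = np.array([0.0, 1.0, 1.0, 2.0, 1.5])
s = rbf_fit(centers, vals)
```

The polynomial term p_m gives the interpolant linear precision: data sampled from a plane such as f(x, y) = x + y is reproduced exactly, with all λ_i vanishing.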
20.3.1 Extrema Detection and Scattered Data Interpolation

Detection of extrema has been achieved with methods including morphological reconstruction based on geodesic operators (Nunes et al., 2003a,b) and neighboring windows (Damerval et al., 2005; Linderhed, 2005). An important step in BEMD after extrema detection is constructing the upper and lower envelopes when sifting for IMFs; envelope construction is done with scattered data interpolation. Scattered data interpolation (SDI) is the construction of a function that interpolates data values known at only some specific, scattered points; it is a single-valued function. In general, for n-dimensional space, a function is sought that maps R^n into R; that is,

f : R^n → R.    (20.22)
Therefore, for BEMD, the SDI function maps R² into R. SDI functions are effective interpolants because of their meshless capability: while other methods may need regular meshes to work, SDI interpolants work on irregularly spaced data. There are two general approaches to SDI, global and local. A global approach considers all other points for each interpolated point, while a local approach considers only points within a certain radius of support of each interpolated point. There are generally five SDI methods, classified under the global or local approaches as shown in Table 20.2. The groupings in Table 20.2 are not rigid; some global methods may be made local by slight modifications, and CSRBFs are local variations of RBFs. Global approaches have higher computational costs for very large numbers of data points. Morse et al. (2001) identified some drawbacks when working with thin-plate RBFs, which are global methods.
TABLE 20.2  Scattered Data Interpolation Methods

Global                                    Local
RBFs                                      Compactly supported RBFs (CSRBFs)
Inverse-distance weighted methods         Triangulation-based methods
  (Shepard's methods)                     Natural neighbor methods
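Of the methods in Table 20.2, the inverse-distance weighted (Shepard) approach is the simplest to sketch; this global form weights every data point by a negative power of its distance to the query point (the function name and the power-2 default are assumptions for illustration):

```python
import numpy as np

def shepard(points, values, queries, power=2.0):
    """Global inverse-distance weighted interpolation (Shepard's method)."""
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    out = np.empty(len(queries))
    for i, row in enumerate(d):
        hit = row < 1e-12
        if hit.any():                       # query coincides with a data point
            out[i] = values[hit.argmax()]
        else:
            w = 1.0 / row**power            # every point contributes: a global method
            out[i] = w @ values / w.sum()
    return out

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0])
res = shepard(pts, vals, np.array([[0.0, 0.0], [0.5, 0.5]]))
```

Because the weights are positive and normalized, Shepard interpolants never overshoot the data range, at the cost of flat spots around the data points; a CSRBF or triangulation-based method trades this robustness for smoothness.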
Effective BEMD depends on proper scattered data interpolation of the extrema. The interpolant must have continuous second derivatives everywhere to ensure smoothness. Although overshoots should be avoided, a few may sometimes persist, generating spurious extrema that can additionally exaggerate or shift existing extrema. However, their effect is indirect, since the mean of the envelopes, rather than the envelopes themselves, is used in the sifting process; the essential modes are still well recovered. To overcome the problem of overshooting, a new method has recently been proposed by Xu et al. (2006): the authors use finite element basis functions to construct the local mean surface of the data instead of constructing it from the upper and lower envelopes. Damerval et al. (2005) used Delaunay triangulation on the extrema and then performed piecewise cubic interpolation on the triangles to build extrema envelopes. Linderhed (2005) investigated the issue of proper spline interpolation by using thin-plate smoothing splines and triangle-based cubic interpolation to generate upper and lower envelopes; for sparse data, thin-plate splines were smoother than triangle-based cubic interpolation. Results of comprehensive analyses performed by Bhuiyan et al. (2009) were not conclusive regarding the superiority of one SDI method over another when various methods were used in BEMD analyses of texture and real images; the appropriate SDI method depends on the objective of the BEMD analysis.
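The Delaunay-based piecewise-cubic envelopes of Damerval et al. can be sketched with SciPy's Clough-Tocher interpolator, which triangulates the scattered extrema and fits C¹ cubics on the triangles; treating it as a stand-in for the authors' exact scheme is an assumption:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

def envelope_ct(extrema_xy, extrema_val, shape):
    """Piecewise-cubic envelope on a Delaunay triangulation of the extrema
    (in the spirit of Damerval et al., 2005)."""
    interp = CloughTocher2DInterpolator(extrema_xy, extrema_val)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return interp(np.column_stack([ys.ravel(), xs.ravel()])).reshape(shape)

# corner extrema plus one interior maximum of an 8x8 image
xy = np.array([[0, 0], [0, 7], [7, 0], [7, 7], [3, 4]], dtype=float)
v = np.array([1.0, 1.0, 1.0, 1.0, 2.0])
env = envelope_ct(xy, v, (8, 8))
```

The interpolant is exact at the extrema and C¹ across triangle edges; note that it is undefined outside the convex hull of the extrema, which is one reason boundary extension (Section 20.3.2) matters.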
20.3.2 Boundary Effects

Similar to end effects in 1-D EMD, boundary effect issues are important in BEMD. In constructing extrema envelopes, extra care is needed around image boundaries in order to prevent wide swings and other artifacts from corrupting the decomposition process. Basically, extension of the boundary has been the main objective of various authors in treating boundary effects in BEMD. Liu and Peng (2005) used texture synthesis to process image boundaries: a modified form of nonparametric-sampling-based texture synthesis is used to extend the image before BEMD is performed, and finally the corresponding parts of the 2-D IMFs are extracted from the extended versions. Linderhed (2004) added extra points at the borders to the set of extrema points; the extra points are placed at the corners of the image, with some equally spaced along the borders.
20.4 Attempted Improvements on EMD

Constructing extrema envelopes may usually result in overshoots and undershoots due to the nature of the cubic spline fitting process. In addition, adverse end effects may propagate inward into the signal during sifting, corrupting the whole resulting signal. In order to do a mathematical study of the EMD, Chen et al. (2006) introduced an innovative way of finding the mean of 1-D signals using B-splines, thereby avoiding the use of upper and lower envelopes and their attendant problems. Building on the idea by Chen et al. (2006), Xu et al. (2006) developed a new method of finding the mean of a 2-D signal by using finite
elements for 2-D EMD without constructing extrema envelopes. Frei and Osorio (2006) also introduced a new method of nonlinear and nonstationary time series analysis that finds the mean of the signal without extrema envelopes and generates basis functions similar to the IMF, but without iteration.
20.5 HHT for Global Health Monitoring of Civil Infrastructure

HHT is being used in global health monitoring techniques whereby signals from infrastructure are analyzed to determine the presence of damage. Traditionally, Fourier analysis has been the signal analysis method of choice; however, Fourier methods cannot resolve signals locally, forcing the analysis to depend on averaging to determine a single parameter. Attempts at achieving temporal localization compromise frequency resolution, and vice versa. HHT is being used in determining mode shapes present in a complicated signal produced by a structure through sensors, and in feature identification. Signals produced by civil infrastructure systems are mostly from nonlinear systems and are nonstationary. In the literature, HHT has been used in various infrastructure damage detection procedures. Xu and Chen (2004) use empirical mode decomposition (EMD) for damage detection. The study is motivated by the fact that current vibration-based structural damage detection methods are global rather than local, so that the exact time instants of sudden damage events cannot be accurately known. These methods assume that measured modal parameters, or properties derived from modal parameters, are indicative of the physical characteristics and behavior of the structure, and, therefore, that changes in the modal parameters or their properties indicate changes in the physical behavior of the structure; the behavior of the structure is assumed to be linear. Experimental investigations are carried out on a three-story shear building model to identify structural damage caused by changes in structural stiffness using EMD. A change in structural stiffness is induced by two pretensioned springs connecting the first floor to fixed steel frames away from the building; during vibration of the building the springs are released to simulate a sudden change in structural stiffness.
This arrangement is based on the assumption that most damage occurs at the lower floors of a building under seismic excitation. The vibrations are measured with accelerometers installed on each floor in the building. Signals from the accelerometers are analyzed with EMD to detect exact damage instants; using intermittency check, the exact instant of damage appears as a spike in the first IMF of the accelerometers. Power spectral density analysis could not reveal the exact instant of damage since the Fourier transform is a global method. The spatial distribution of accelerometers is important in detecting structural damage; since the signal response of sudden damage is of high frequency and high decay rate, positioning accelerometers close to damage location enhances effective damage detection. However, it is not trivial to place accelerometers such that they will be close enough to detect locally occurring damage events.
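Once a first IMF is in hand, locating a sudden-damage spike of the kind described above can be sketched with a simple robust threshold; the synthetic signal, the threshold factor k, and the spike location are illustrative assumptions, not the Xu-Chen experiment:

```python
import numpy as np

def damage_instant(imf1, k=8.0):
    """Return the first sample where the IMF spikes far beyond its typical scale."""
    mad = np.median(np.abs(imf1 - np.median(imf1)))
    thresh = k * 1.4826 * mad            # 1.4826 * MAD: robust sigma estimate
    idx = np.where(np.abs(imf1) > thresh)[0]
    return idx[0] if idx.size else None

t = np.linspace(0, 10, 2000)
imf1 = 0.05 * np.sin(2 * np.pi * 12 * t)   # ordinary high-frequency content
imf1[700] += 1.0                            # sudden stiffness change near t = 3.5 s
```

A robust (median-based) scale estimate is used so that the spike itself does not inflate the threshold, which a plain standard deviation would.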
Chen et al. (2007) used HHT in a vibration-based damage detection technique to detect damage in composite wingbox structures. A damage feature index vector, extracted based on HHT, is used for damage detection. Damage is simulated in a structural dynamic model developed with vibration analysis and finite element methods. The signal obtained from the vibration analysis is decomposed into IMFs; a comparison of instantaneous frequencies of undamaged and damaged wingbox signals reveals obvious changes. A nondimensional index matrix is composed from normalized instantaneous frequency values of undamaged and damaged wingbox IMFs; elements of the matrix are functions of the ratio of normalized instantaneous frequencies from undamaged and damaged IMFs. Traditionally, variations of structural vibration natural frequencies and mode shapes are used as parameters for damage detection. However, these parameters are unable to detect small damage events in the wingbox structures. In addition, the time-domain response signals of the damaged and undamaged states do not show any noticeable differences. Nor does a comparison of the IMFs of damaged and undamaged states show any observable differences; therefore, IMFs are unable to detect damage directly. However, Hilbert transforming the IMFs and comparing undamaged and damaged states reveals noticeable differences in instantaneous frequencies. Variations in instantaneous frequencies between undamaged and damaged states, used in a damage feature index matrix, can help in an online structural damage detection scheme. Yang et al. (2004) detect damage time instants and locations by EMD and HHT using a model of the ASCE four-story benchmark building. Using EMD, an appropriate intermittency frequency is set such that sharp spikes reveal damage time instants and locations in the first IMF, and represent discontinuities in structural stiffness.
The intermittency frequency is lower than the frequency of the discontinuity but higher than the highest frequency in the acceleration measurement of the structure. The effectiveness of damage detection is dependent on factors such as signal-to-noise ratio and damage severity. A second method based on EMD, the random decrement technique (RDT), and the Hilbert transform is used to detect damage time instants, natural frequencies, and damping ratios. Zhang et al. (2003) use HHT to analyze dynamic and earthquake motion recordings. HHT is found to be suitable for analyzing nonstationary dynamic and earthquake motion recordings; it is also found to be better than the Fourier transform in this regard. However, although IMFs may contain some important inherent signal information, the physical meaning of IMFs is not clear.
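The instantaneous-frequency comparison underlying such damage indices can be sketched with the analytic signal; the 50 Hz/47 Hz pair and the scalar index below are synthetic illustrations of the idea, not the wingbox feature index of Chen et al.:

```python
import numpy as np
from scipy.signal import hilbert

def inst_freq(x, fs):
    """Instantaneous frequency (Hz) from the analytic-signal phase."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
undamaged = np.sin(2 * np.pi * 50 * t)
damaged = np.sin(2 * np.pi * 47 * t)       # stiffness loss lowers the frequency
f_u = inst_freq(undamaged, fs)[100:-100].mean()   # interior mean, away from end effects
f_d = inst_freq(damaged, fs)[100:-100].mean()
index = abs(f_u - f_d) / f_u               # a simple nondimensional damage indicator
```

Here the damage shows up as a ~6% relative frequency shift even though the raw time-domain traces look alike, which is the observation the feature index matrix exploits.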
20.6 Applications and Potential Application of BEMD

BEMD has been used for texture analysis (Nunes et al., 2005) and image compression (Linderhed, 2004). Recently, Sinclair and Pegram (2005) have used it for rainfall analysis and nowcasting. Computer scientists, mathematicians, and electrical engineers have usually used the method for image analyses.
The potential application for BEMD is in presmoothing of images before feature detection techniques are applied; this can pave the way for a hybrid method of edge detection that involves the BEMD and an edge detector that does not have a presmoothing step. Images usually tend to be noisy and so filtering out noise is essential to make the image ready for further analysis. The first few IMFs from BEMD usually contain most of the noise in the original image; therefore, removing them and reconstructing the image with the remaining IMFs tends to denoise the image. The number of IMFs needed to be removed depends on the level of noise in the image; very noisy images require more high frequency IMFs removed than do less noisy images. This method has been applied in a study of pavement cracks using two popular edge detection techniques: the Sobel and the Canny edge detectors (Ayenu-Prah and Attoh-Okine, 2008). The Canny edge detector has a prefiltering step in which images are denoised with a Gaussian filter before edge detection is accomplished. This detection method is computationally more expensive due to the convolution processes required in Gaussian smoothing. The Sobel edge detection method has no prefiltering step; however, it is more susceptible to noise. Therefore, the BEMD is used to first filter the images before the Sobel method is applied. An advantage BEMD has over Gaussian filtering is that it does not involve any convolution process, and it is a local method of denoising. This method has been applied to asphalt concrete and Portland cement concrete (PCC) pavement images to detect cracks. The asphalt images tend to be noisier than the PCC images, which is not unusual since PCC pavements tend to be generally smoother than asphalt surfaces. Therefore, for PCC images the first three IMFs are removed while only the first IMF is removed for asphalt images; this decision is arrived at after several trials. 
Results from the combination method of BEMD/Sobel are compared with those from the Canny edge detector; the BEMD/Sobel method tends to be a very effective technique, easily comparable to the Canny method.
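The IMF-removal denoising followed by Sobel detection can be sketched as below; the one-IMF "decomposition" is a synthetic stand-in for an actual BEMD of a pavement image:

```python
import numpy as np
from scipy import ndimage

def denoise_by_imf_removal(imfs, residue, n_remove=1):
    """Reconstruct the image from all but the first n_remove (noisiest) IMFs."""
    return sum(imfs[n_remove:], np.zeros_like(residue)) + residue

def sobel_magnitude(img):
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                          # a vertical crack-like edge
noise = 0.2 * rng.standard_normal(clean.shape)
# stand-in decomposition: IMF1 = noise, residue = clean image
denoised = denoise_by_imf_removal([noise], clean, n_remove=1)
edges = sobel_magnitude(denoised)
```

No convolution with a Gaussian kernel is needed before the Sobel step, which is the computational advantage claimed for the BEMD/Sobel combination over Canny's Gaussian prefiltering.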
20.7 Recommendations

Fourier analysis has been around for almost two centuries and has subsequently become very dominant in signal processing; it has also been used to model many scientific phenomena such as heat transfer and wave propagation. For HHT to be considered a sound alternative, stakeholders have to know and appreciate the efficacy of the HHT in treating nonstationary signals. It is not the intention that HHT can or will replace the Fourier transform; however, it is being presented as a powerful signal processing alternative that is relatively more efficient in dealing with nonstationary signals. Technology is getting to the point where accuracy is becoming ever more paramount as scientists and engineers continually operate in the nano realm; it is, therefore, reasonable to expect the possibility of the insufficiency of a stationary assumption at some point in the near future. HHT holds promise for the future of signal processing when stationary assumptions may likely not suffice. However, the staying power of Fourier analysis has been guaranteed by a sound mathematical
foundation, while HHT essentially remains an empirical method without the requisite mathematical support. For HHT to capture wider stakeholder support, mathematicians would be needed to develop a mathematical basis for the procedure without necessarily invoking assumptions that may compromise the a posteriori appeal of the method. Of concern also is the treatment of the end effects in 1-D EMD and the boundary effects in the BEMD; errors at the ends and boundaries during interpolation tend to propagate inward, corrupting the whole signal in the process. In addition, extrema points may shift or may be exaggerated, or both; however, this phenomenon ultimately tends to average out by the end of the decomposition, since essentially the procedure works with the mean of the envelopes rather than with the envelopes themselves. In any case, despite positive and encouraging but varying efforts at mitigating end and boundary effects, such as adding characteristic waves at the ends of signals and boundary extension in images, a move toward a universal method of processing the ends and boundaries during interpolation should be most sought after. While the original inventors of the HHT and a number of subsequent authors have endorsed the cubic spline for envelope construction, there remain questions about the most preferable scattered data interpolation method for BEMD envelope construction. Although the authors of this article have participated in a comprehensive study to determine a suitable method of scattered data interpolation, the results have not been sufficiently conclusive for endorsing one particular method over a number of candidates; at best, it is observed that RBF methods generally seem to work best. A persisting challenge has been the physical meaning of the IMFs before Hilbert-transforming them. Using structural models, researchers have attempted to analyze with EMD data acquired from the models after excitation.
Modal parameters for the structure are known before the excitation force is applied and, therefore, are compared with modal parameters after excitation in order to try and ultimately get at the physical meaning of the IMFs. While the effort is highly commendable and in the right direction, usually the modal parameters for existing real-life structures are not known before, say, an earthquake strikes. Therefore, assuming the structure is instrumented to record data during vibration, analyzing the data with EMD would require interpreting each IMF without the benefit of knowledge of any before-excitation modal parameters; the task becomes more challenging in that regard. The ability to pick out relevant IMFs from any particular decomposition is a good first step toward interpreting them; however, more research needs to be performed in order to establish physical meaning for the IMFs.
References

Ayenu-Prah, A. and Attoh-Okine, N. Evaluating pavement cracks with bidimensional empirical mode decomposition. EURASIP Journal on Advances in Signal Processing, Article ID 861701, 7 pp., 2008.
Bedrosian, E. A product theorem for Hilbert transforms. Proceedings of the IEEE, 51(5), 868–869, 1963.
Bhuiyan, S., Attoh-Okine, N. O., Barner, K. E., and Ayenu-Prah, A. Bidimensional empirical mode decomposition using various interpolation techniques. Advances in Adaptive Data Analysis, 1(2), 309–338, 2009.
Chen, Q., Huang, N., Riemenschneider, S., and Xu, Y. A B-spline approach for empirical mode decompositions. Advances in Computational Mathematics, 24, 171–195, 2006.
Chen, H. G., Yan, Y. J., and Jiang, J. S. Vibration-based damage detection in composite wingbox structures by HHT. Mechanical Systems and Signal Processing, 21, 307–321, 2007.
Damerval, C., Meignen, S., and Perrier, V. A fast algorithm for bidimensional EMD. IEEE Signal Processing Letters, 12(10), 701–704, October 2005.
Datig, M. and Schlurmann, T. Performance and limitations of the Hilbert-Huang transformation (HHT) with an application to irregular water waves. Ocean Engineering, 31, 1783–1834, 2004.
Frei, M. G. and Osorio, I. Intrinsic time-scale decomposition: Time-frequency-energy analysis and real-time filtering of non-stationary signals. Proceedings of the Royal Society A, FirstCite Early Online Publishing, London, U.K., 2006.
Huang, N. E. Introduction to Hilbert-Huang transform and some recent developments. The Hilbert-Huang Transform in Engineering, Eds. Norden E. Huang and Nii O. Attoh-Okine, CRC Press, Boca Raton, FL, pp. 1–23, 2005.
Huang, N. E., Shen, Z., Long, S. R., Wu, M. C., Shih, H. H., Zheng, Q., Yen, N.-C., Tung, C. C., and Liu, H. H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society London A, 454, 903–995, 1998.
Huang, N. E., Wu, M. C., Long, S. R., Shen, S. S., Qu, W., Gloersen, P., and Fan, K. L. A confidence limit for the empirical mode decomposition and Hilbert spectral analysis. Proceedings of the Royal Society London A, 459, 2317–2345, 2003.
Huang, N. E., Wu, Z., Long, S.
R., Arnold, K. C., Chen, X., and Blank, K. On instantaneous frequency. Advances in Adaptive Data Analysis, 2, in press.
Linderhed, A. 2-D empirical mode decomposition—In the spirit of image compression. Proceedings SPIE, Wavelet and Independent Component Analysis Applications IX, Orlando, FL, Vol. 4738, 2002.
Linderhed, A. Image compression based on empirical mode decomposition. Proceedings of the International Conference on Image and Graphics, Hong Kong, China, pp. 430–433, 2004.
Linderhed, A. Variable sampling of the empirical mode decomposition of two-dimensional signals. International Journal of Wavelets, Multiresolution and Information Processing, 3(3), 435–452, 2005.
Liu, Z. and Peng, S. Boundary processing of bidimensional EMD using texture synthesis. IEEE Signal Processing Letters, 12(1), 33–36, 2005.
Morse, B. S., Yoo, T. S., Rheingans, P., Chen, D. T., and Subramanian, K. R. Interpolating implicit surfaces from scattered surface data using compactly supported radial basis functions. SMI 2001, International Conference on Shape Modelling and Applications, Genova, Italy, May 2001.
Nunes, J. C., Guyot, S., and Delechelle, E. Texture analysis based on local analysis of the bidimensional empirical mode decomposition. Machine Vision and Applications, 16, 177–188, 2005.
Nuttall, A. H. On the quadrature approximation to the Hilbert transform of modulated signals. Proceedings of the IEEE, 54(10), 1458–1459, 1966.
Rilling, G., Flandrin, P., and Goncalves, P. On empirical mode decomposition and its algorithms. Proceedings IEEE EURASIP Workshop on Nonlinear Signal Processing, Grado, Italy, 2003.
Sinclair, S. and Pegram, G. G. S. Empirical mode decomposition in 2-D space and time: A tool for space-time rainfall
analysis and nowcasting. Hydrology and Earth System Sciences Discussions, 2, 289–318, 2005.
Wu, Z. and Huang, N. E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Advances in Adaptive Data Analysis, 1, 1–41, 2009.
Xu, Y., Liu, B., Liu, J., and Riemenschneider, S. Two-dimensional empirical mode decomposition by finite elements. Proceedings of the Royal Society A, 462, 3081–3096, 2006.
Xu, Y. L. and Chen, J. Structural damage detection using empirical mode decomposition: Experimental investigation. Journal of Engineering Mechanics, 130(11), 1279–1288, 2004.
Yang, J. N., Lei, Y., Lin, S., and Huang, N. Journal of Engineering Mechanics, 130(1), 85–95, 2004.
Zhang, R. R., Ma, S., Safak, E., and Hartzell, S. Journal of Engineering Mechanics, 129(8), 861–875, 2003.
Appendix A: Functions of a Complex Variable*

Alexander D. Poularikas
University of Alabama in Huntsville

A.1  Basic Concepts .......................................................... A-1
       Integration · Derivative of an Analytic Function W(z) · Taylor's Theorem · Laurent's Theorem
A.2  Sequences and Series .................................................... A-8
       Comparison Test · Limit Comparison Test · D'Alembert's Test · Root Test · Uniform Convergence (Weierstrass M-Test) · Analyticity of a Sequence of Functions
A.3  Power Series ............................................................ A-9
A.4  Analytic Continuation ................................................... A-11
A.5  Singularities of Complex Functions ...................................... A-11
A.6  Theory of Residues ...................................................... A-14
A.7  Aids to Integration ..................................................... A-16
       Transformation of Contour
A.8  The Bromwich Contour .................................................... A-19
       Finite Number of Poles · Branch Points and Branch Cuts
A.9  Evaluation of Definite Integrals ........................................ A-25
       Evaluation of the Integrals of Certain Periodic Functions (0 to 2π) · Evaluation of Integrals with Limits −∞ and +∞ · Certain Infinite Integrals Involving Sines and Cosines · Miscellaneous Definite Integrals
A.10 Principal Value of an Integral .......................................... A-31
A.11 Integral of the Logarithmic Derivative .................................. A-32
A.1 Basic Concepts

A complex variable z defined by

z = x + jy    (A.1)

assumes certain values over a region R_z of the complex plane. If a complex quantity W(z) is so connected with z that each z in R_z corresponds with one value of W(z) in R_w, then we say that W(z) is a single-valued function of z,

W(z) = u(x, y) + jv(x, y),    (A.2)

which has a domain R_z and a range R_w (see Figure A.1). The function W(z) can be single valued or multiple valued. Examples of single-valued functions include

W = a_0 + a_1 z + a_2 z^2 + ··· + a_n z^n,  n an integer
W = e^z

Examples of multiple-valued functions are

W = z^n,  n not an integer
W = log z
W = sin^{−1} z

Definition A.1  A function W(z) is continuous at a point z = λ of R_z if, for each number ε > 0, however small, there exists another number δ > 0 such that whenever |z − λ| < δ then

|W(z) − W(λ)| < ε.    (A.3)

The geometric representation of this equation is shown in Figure A.1.
Definition A.2  A function W(z) is analytic at a point z = λ if, for each number ε > 0, however small, there exists another number δ > 0 such that whenever 0 < |z − λ| < δ then

|[W(z) − W(λ)]/(z − λ) − dW(λ)/dz| < ε;

that is, the derivative of W(z) exists at the point.

A.2 Sequences and Series

A sequence {z_n} of complex numbers converges to a limit L if, for each ε > 0, there exists a number N such that |z_n − L| < ε for all n > N. That is, a convergent sequence is one whose terms approach arbitrarily close to the limit L as n increases. If the sequence does not converge, it is said to diverge.

THEOREM A.3

In order for a sequence {z_n} of complex numbers to be convergent, it is necessary and sufficient that for all δ > 0 there exists a number N(δ) such that for all n > N and all p = 1, 2, 3, . . . the inequality |z_{n+p} − z_n| < δ is fulfilled.

The sum of an infinite sequence of complex numbers z_0, z_1, . . . is given by

S = z_0 + z_1 + z_2 + ··· = Σ_{n=0}^{∞} z_n    (A.28)

Consider the partial sum sequence of n terms, which is designated S_n. The infinite series converges to the sum S if the partial sum sequence S_n converges to S. That is, the series converges if, for

S_n = Σ_{k=0}^{n} z_k,  lim_{n→∞} S_n = S    (A.29)

When the partial sum S_n diverges, the series is said to diverge.

A.2.3 D'Alembert's Test

Consider the ratio |z_{n+1}/z_n|. If this ratio converges to a limit l as n approaches infinity, then for 0 ≤ l < 1 series (Equation A.28) converges absolutely, for l > 1 series (Equation A.28) diverges, and for l = 1 an additional test is required.

A.2.4 Root Test

Consider the sequence

r_n = (|z_n|)^{1/n}

If this sequence converges to l as n approaches infinity, then the series (Equation A.28) converges absolutely if l < 1 and diverges if l > 1.
A.2.5 Uniform Convergence (Weierstrass M-Test)

If |u_n(z)| ≤ M_n, where M_n is independent of z in a region U and Σ_{n=1}^{∞} M_n converges, then Σ_{n=1}^{∞} u_n(z) is uniformly convergent in U.
Example A.6

Show that Σ_{n=1}^{∞} 1/(n² + z²) is uniformly convergent in the region 1 < |z| < 2.

SOLUTION

|n² + z²| ≥ |n²| − |z²| ≥ n² − 4 ≥ (1/2)n² for n > 2 (the convergence is not affected by dropping the first two terms of the series). Therefore, 1/|n² + z²| ≤ 2/n², and the series Σ_{n=3}^{∞} 2/n² converges. The M-test with M_n = 2/n² implies that the series converges uniformly.
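The bound used in Example A.6 can be spot-checked numerically over random points in the annulus 1 < |z| < 2:

```python
import numpy as np

# Spot-check |n^2 + z^2| >= n^2/2 for n > 2 and 1 < |z| < 2 (Example A.6)
rng = np.random.default_rng(1)
r = rng.uniform(1.0, 2.0, 200)               # moduli strictly inside (1, 2)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
z = r * np.exp(1j * theta)

n = np.arange(3, 50)[:, None]                # broadcast n against the samples of z
ratio = np.abs(n**2 + z[None, :]**2) / (n**2 / 2.0)
worst = ratio.min()                          # >= 1 whenever the bound holds
```

Since |n² + z²| ≥ n² − |z|² > n² − 4 ≥ n²/2 for every n ≥ 3, the worst-case ratio never drops below 1, matching the algebra in the solution.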
A.2.6 Analyticity of a Sequence of Functions

If the functions of the sequence {f_k(z)} are analytic in a region U and the sum

F(z) = Σ_{k=1}^∞ f_k(z)

is uniformly convergent, then F(z) is analytic in U.

Proof  Because the series is uniformly convergent, for any ε we can find N such that |F(z) − S_n(z)| < ε for all n > N, where S_n = partial sum = Σ_{k=1}^n f_k(z). Because the convergence is uniform and the f_k(z) are continuous, F(z) is continuous. Integrating counterclockwise around a closed contour C within the region U, we obtain

|∮_C F(z) dz − Σ_{k=1}^n ∮_C f_k(z) dz| < ∮_C ε |dz| = ε ℓ(C)

where ℓ(C) is the length of the contour. Since ε → 0 as n → ∞, it follows that ∮_C F(z) dz = Σ_{k=1}^∞ ∮_C f_k(z) dz = 0, since the f_k(z) are analytic. Hence, F(z) is also analytic.

A.3 Power Series

A series of the form

W(z) = a0 + a1(z − z0) + a2(z − z0)² + ⋯ = Σ_{n=0}^∞ a_n(z − z0)^n    (A.30)

where the coefficients a_n are given by

a_n = (1/n!) [d^n W(z)/dz^n]_{z=z0}    (A.31)

is a Taylor series that is expanded about the point z = z0, where z0 is a complex constant. That is, the Taylor series expands an analytic function as an infinite sum of component functions. More precisely, the Taylor series expands a function W(z), which is analytic in the neighborhood of the point z = z0, into an infinite series whose coefficients are the successive derivatives of the function at the given point. However, we know that the definition of a derivative of any order does not require more than the knowledge of the function in an arbitrarily small neighborhood of the point z = z0. This means, therefore, that the Taylor series indicates that the shape of the function at a finite distance from z0 is determined by the behavior of the function in the infinitesimal vicinity of z = z0. Thus, Taylor's series implies that any analytic function has a very strongly interconnected structure, and that by studying the function in a small vicinity of the point z = z0, we can precisely predict what happens at the point z = z0 + Δz0, which is a finite distance from the point of study. If z0 = 0, the expansion is said to be about the origin and is called a Maclaurin series.

A power series of negative powers of (z − z0),

Σ_{n=1}^∞ a_{−n}(z − z0)^{−n}    (A.32)

is called a negative power series. We first focus attention on the positive power series (Equation A.30). Clearly, this series converges to a0 when z = z0. To ascertain whether it converges for other values of z, we use the following theorem.

THEOREM A.4

A positive power series converges absolutely in a circle of radius R+ centered at z0, where |z − z0| < R+; it diverges outside of this circle, where |z − z0| > R+. The value of R+ may be zero, a positive number, or infinity. If R+ = ∞, the series converges everywhere, and if it is equal to zero the series converges only at z = z0. The radius R+ is found from the relation

R+ = lim_{n→∞} |a_n/a_{n+1}|   if the limit exists    (A.33)

or by

R+ = lim_{n→∞} 1/(|a_n|)^{1/n}   if the limit exists    (A.34)

Proof

For a fixed value z, apply the ratio test, where z_n = a_n(z − z0)^n. That is,

|z_{n+1}/z_n| = |a_{n+1}(z − z0)^{n+1}|/|a_n(z − z0)^n| = |a_{n+1}/a_n| |z − z0|

For the power series to converge, the ratio test requires that

lim_{n→∞} |a_{n+1}/a_n| |z − z0| < 1

or

|z − z0| < lim_{n→∞} |a_n/a_{n+1}| = R+

That is, the power series converges absolutely for all z that satisfy this inequality. It diverges for all z for which |z − z0| > R+. The value of R+ specified by Equation A.33 is reproduced by applying the root test, which yields Equation A.34.
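Equation A.33 is easy to probe numerically; a sketch (not from the handbook) for two coefficient sequences, a_n = 1/n! (radius R+ infinite) and a_n = 2ⁿ (radius R+ = 1/2):

```python
import math

def radius_ratio(a, n):
    """Approximate R+ = lim |a_n / a_{n+1}| (Equation A.33) at index n."""
    return abs(a(n) / a(n + 1))

# a_n = 1/n!  (the series for e**z): the ratio grows without bound.
fact = lambda n: 1.0 / math.factorial(n)
# a_n = 2**n  (the series for 1/(1 - 2z)): R+ = 1/2.
pow2 = lambda n: 2.0 ** n

r_exp = radius_ratio(fact, 50)   # = 51, still growing toward infinity
r_geo = radius_ratio(pow2, 50)   # = 0.5
print(r_exp, r_geo)
```

Watching r_exp grow with n is the numerical signature of an infinite radius of convergence, consistent with Example A.8 below.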
Example A.7

Determine the region of convergence for the power series

W(z) = 1/(1 + z) = 1 − z + z² − z³ + ⋯

SOLUTION

We have a_n = (−1)^n, from which

R+ = lim_{n→∞} |(−1)^n/(−1)^{n+1}| = 1

The series converges for all z for which |z| < 1. Hence, this expansion converges for any value of z within a circle of unit radius about the origin. Note that there will be at least one singular point of W(z) on the circle of convergence. In the present case, the point z = −1 is a singular point.

If a function has a singularity at z = z0, it cannot be expanded in a Taylor series about this point. However, if one deletes the neighborhood of z0, it can be expressed in the form of a Laurent series. The Laurent series is written

W(z) = ⋯ + a_{−2}(z − z0)^{−2} + a_{−1}(z − z0)^{−1} + a0 + a1(z − z0) + a2(z − z0)² + ⋯ = Σ_{n=−∞}^∞ a_n(z − z0)^n    (A.37)

If a circle is drawn about the point z0 such that the nearest singularity of W(z) lies on the circle, then Equation A.37 defines an analytic function everywhere within this circle except at its center. The portion Σ_{n=0}^∞ a_n(z − z0)^n is regular at z = z0. The portion Σ_{n=−∞}^{−1} a_n(z − z0)^n is not regular and is called the principal part of W(z) at z = z0. The region of convergence for the positive-series part of the Laurent series is of the form

|z − z0| < R+    (A.38)

while that for the principal part is given by

|z − z0| > R−    (A.39)

The evaluation of R+ and R− proceeds according to the methods already discussed. Hence, the region of convergence of the Laurent series is given by those points common to Equations A.38 and A.39, or

R− < |z − z0| < R+    (A.40)

If R− > R+, the series converges nowhere. The annular region of convergence for a typical Laurent series is shown in Figure A.8.

Example A.8

Determine the region of convergence for the power series

W(z) = e^z = 1 + z + z²/2! + z³/3! + ⋯ = Σ_{n=0}^∞ z^n/n!

SOLUTION

We have a_n = 1/n!, from which

R+ = lim_{n→∞} (n + 1)!/n! = lim_{n→∞} (n + 1) = ∞

The circle of convergence is specified by R+ = ∞; hence, W(z) = e^z converges for all finite values of z.

THEOREM A.5

A negative power series (Equation A.32) converges absolutely outside a circle of radius R− centered at z0, where |z − z0| > R−; it diverges inside this circle, where |z − z0| < R−. The radius of convergence is determined from

R− = lim_{n→∞} |a_{−(n+1)}/a_{−n}|   if the limit exists    (A.35)

or by

R− = lim_{n→∞} (|a_{−n}|)^{1/n}   if the limit exists    (A.36)

Proof  The proof of this theorem parallels that of Theorem A.4.

FIGURE A.8 Annular region of convergence of a typical Laurent series: the series diverges for |z − z0| < R−, converges in the annulus R− < |z − z0| < R+, and diverges for |z − z0| > R+.
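A numerical sketch (not from the handbook) of the two radii, using the coefficients of Example A.9 below (a_n = 3^{−n} for n ≥ 0 and a_n = 2^{−n} for n < 0), for which the annulus of convergence is 2 < |z| < 3:

```python
def a(n):
    """Laurent coefficients of Example A.9: 3**(-n) for n >= 0, 2**(-n) for n < 0."""
    return 3.0 ** (-n) if n >= 0 else 2.0 ** (-n)

# R+ from Equation A.33 applied to the positive-power part:
R_plus = abs(a(50) / a(51))      # -> 3.0
# R- from Equation A.35 applied to the principal part:
R_minus = abs(a(-51) / a(-50))   # |a_{-(n+1)} / a_{-n}| -> 2.0
print(R_plus, R_minus)           # annulus of convergence: 2 < |z| < 3
```

Since R− = 2 < R+ = 3, the annulus is nonempty and the Laurent series defines an analytic function there.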
Example A.9

Consider the Laurent series W(z) = Σ_n a_n z^n, where

a_n = 3^{−n}   for n = 0, 1, 2, . . .
a_n = 2^{−n}   for n = −1, −2, . . .

Determine the region of convergence.

SOLUTION

By Equations A.33 and A.38 we have R+ = 3. By Equations A.35 and A.39 we have R− = 2. Hence, the series converges for all z for which 2 < |z| < 3.

No convenient expression exists for obtaining the coefficients of the Laurent series in general. However, because there is only one Laurent expansion for a given function, the resulting series, however derived, is the appropriate one. For example,

e^{1/z} = 1 + 1/z + 1/(2!z²) + 1/(3!z³) + ⋯    (A.41)

is obtained by replacing z by 1/z in the Maclaurin expansion of exp(z). Note that in this case the coefficients of all positive powers of z in the Laurent expansion are zero. As a second illustration, consider the function W(z) = (cos z)/z. This is found by dividing the Maclaurin series for cos z by z, with the result

(cos z)/z = (1/z)(1 − z²/2! + z⁴/4! − ⋯) = 1/z − z/2! + z³/4! − ⋯    (A.42)

In this case, the Laurent expansion includes only one term, 1/z, in descending powers of z, but an infinite number of terms in ascending powers of z. That is, a_{−1} = 1 and a_{−n} = 0 if n ≠ 1.

A.4 Analytic Continuation

The Taylor theorem shows that if a function f(z) is given by a power series in z, it can also be represented as a power series in z − z0, since f(z) = f[(z − z0) + z0], where z0 is any point within the original circle of convergence, and this series will converge within any circle about z0 that does not pass beyond the original circle of convergence. Actually, it may converge within a circle that does pass beyond the original circle of convergence. Consider, for example, the function

f(z) = 1 + z + z² + ⋯ = 1/(1 − z)   for |z| < 1

Choose z0 = j/2, and write z′ = z − j/2. The Taylor expansion of f(z) = 1/(1 − z) in powers of z′ is

1/(1 − z) = 1/[(1 − j/2) − z′] = Σ_{n=0}^∞ z′^n/(1 − j/2)^{n+1}

This series must converge and be equal to the original function if |z′| < 1/2, because j is the point of the circle |z| = 1 nearest to j/2, a requirement of Taylor's theorem. Actually, this series converges if |z′| < |1 − j/2| = (1/2)√5. Suppose that the considered series represented no previously known function. In this case, the new Taylor series would define values of an analytic function over a range of z where no function was defined by the original series. We could then extend the range of definition further by taking a new Taylor series about a point in the new region. This process is called analytic continuation. In practice, when continuation is required, the direct use of the Taylor series is laborious and is seldom used. Of more convenience is the following theorem.

THEOREM A.6

If two functions f1(z) and f2(z) are analytic in a region D and equal in a region D′ within D, they are equal everywhere in D.

A.5 Singularities of Complex Functions

A singularity has already been defined as a point at which a function ceases to be analytic. Thus, a discontinuous function has a singularity at the point of discontinuity, and multivalued functions have a singularity at a branch point. There are two important classes of singularities that a continuous, single-valued function may possess.

Definition

A function has an essential singularity at z = z0 if its Laurent expansion about the point z0 contains an infinite number of terms in inverse powers of (z − z0).

Definition

A function has a nonessential singularity or pole of order m if its Laurent expansion can be expressed in the form

W(z) = Σ_{n=−m}^∞ a_n(z − z0)^n    (A.43)

Note that the summation extends from −m to infinity and not from minus infinity to infinity; that is, the highest inverse power of (z − z0) is m. An alternative definition that is equivalent to this but somewhat simpler to apply is the following: if lim_{z→z0} [(z − z0)^m W(z)] = c, a nonzero constant (here m is a positive number), then W(z) is said to possess a pole of order m at z0. The following examples illustrate these definitions:
1. exp(1/z) (see Equation A.41) has an essential singularity at the origin.
2. (cos z)/z (see Equation A.42) has a pole of order 1 at the origin.
3. Consider the function

W(z) = e^z/[(z − 4)²(z² + 1)]

Note that functions of this general type occur frequently in the Laplace inversion integral. Because e^z is regular at all finite points of the z-plane, the singularities of W(z) must occur at the points for which the denominator vanishes; that is, for (z − 4)²(z² + 1) = 0, or

z = 4, +j, −j

By the second definition above, it is easily shown that W(z) has a second-order pole at z = 4, and first-order poles at the two points +j and −j. That is,

lim_{z→4} (z − 4)² e^z/[(z − 4)²(z² + 1)] = e⁴/17 ≠ 0

lim_{z→j} (z − j) e^z/[(z − 4)²(z² + 1)] = e^j/[(j − 4)² 2j] ≠ 0

4. An example of a function with an infinite number of singularities occurs in heat flow, wave motion, and similar problems. The function involved is W(z) = 1/sinh az. The singularities of this function occur when sinh az = 0, or az = ±jsπ, where s = 0, 1, 2, . . . . That each of these is a first-order pole follows from

lim_{z→j(sπ/a)} (z − jsπ/a)/sinh az = 0/0

This can be evaluated in the usual manner by differentiating numerator and denominator (L'Hôpital's rule) to find

lim_{z→j(sπ/a)} 1/(a cosh az) = 1/(a cosh jsπ) = 1/(a cos sπ) ≠ 0

Definition (Poles)  If lim_{z→z0} (z − z0)^n W(z) = constant ≠ 0, where n is positive, then the point z = z0 is called a pole of order n. If n = 1, z0 is called a simple pole.

Example A.10

It is interesting to study the variation of f(z) close to a pole. For example, the function W(z) = 1/z = (1/r)e^{−jθ} has a simple pole at zero. For any specific angle θ1 the modulus |W(z)| increases to infinity as r → 0, and this is true for all angles from 0 to 2π.

Definition (Removable Singularities)  The point z = z0 is a removable singularity of W(z) if lim_{z→z0} W(z) exists.

Definition (Isolated Singularities)  The point z = z0 is an isolated singularity of W(z) if we can always find δ such that the circle |z − z0| = δ does not contain another singularity. If no such δ exists, the point z0 is known as a nonisolated singularity.

Definition (Branch Points)  Multiple-valued functions contain singular points known as branch points.

Example A.11

Investigate the function W(z) = z^{1/2}.

SOLUTION

W = √z = r^{1/2}[cos(θ/2) + j sin(θ/2)] (see Figure A.9), where z = x + jy, r = √(x² + y²), and θ = tan^{−1}(y/x). If we increase θ by 2π, we obtain W = √z = r^{1/2}[cos(θ/2 + π) + j sin(θ/2 + π)] = −r^{1/2}[cos(θ/2) + j sin(θ/2)], which is evident from Figure A.9b. This implies that W(z) has two values: one is obtained for 0 ≤ θ < 2π and the other for 2π ≤ θ < 4π. This indicates that W(z) is not analytic on the positive real axis when the angle ranges over 0 ≤ θ ≤ 2π. If we create a barrier (or cut) along 0x (see Figure A.9c), then θ cannot take the values 0, 2nπ, n = 1, 2, . . . . Then for the angle 0 < θ < 2π, W is single-valued and continuous and, therefore, analytic. This angular range is known as the principal branch of the function. The origin 0 is called the branch point. To make W = √z unique on each branch, the barrier must start from the branch point. The angular position of the barrier is arbitrary.

Example A.12

Investigate the phase change in relation to branch points.

SOLUTION

If the contour is that shown in Figure A.10a for the function W(z) = z^{1/2}, then as z varies on the contour from A to B it
FIGURE A.9 The mapping W(z) = z^{1/2}: (a) the z plane, (b) the W plane, (c) the barrier (branch cut) along 0x.
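The pole and branch-point behavior discussed above can be spot-checked numerically. A sketch (not from the handbook; small offsets stand in for limits) for the second-order pole at z = 4 of W(z) = e^z/[(z − 4)²(z² + 1)] and for the two branches of z^{1/2}:

```python
import cmath
import math

def W(z):
    """W(z) = e**z / ((z - 4)**2 * (z**2 + 1)), example 3 above."""
    return cmath.exp(z) / ((z - 4) ** 2 * (z * z + 1))

# lim (z - 4)**2 * W(z) as z -> 4 should be e**4 / 17, a nonzero constant,
# confirming a pole of order 2 at z = 4.
h = 1e-6
val = (h ** 2) * W(4 + h)
print(val)  # close to e**4 / 17

# Branch behavior of z**(1/2): reaching the same point with angle theta
# and with theta + 2*pi gives square roots of opposite sign.
r, theta = 2.0, 0.3
w1 = cmath.rect(math.sqrt(r), theta / 2)
w2 = cmath.rect(math.sqrt(r), (theta + 2 * math.pi) / 2)
print(w1, w2)  # w2 = -w1
```

The sign flip in w2 is exactly the two-valuedness that the branch cut of Example A.11 removes.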
log_e(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ⋯   (−1 < x ≤ 1)

log_e[(n + 1)/(n − 1)] = 2[1/n + 1/(3n³) + 1/(5n⁵) + ⋯]

log_e(a + x) = log_e a + 2[x/(2a + x) + (1/3)(x/(2a + x))³ + (1/5)(x/(2a + x))⁵ + ⋯]   (a > 0, −a < x < +∞)

log_e[(1 + x)/(1 − x)] = 2[x + x³/3 + x⁵/5 + ⋯ + x^{2n−1}/(2n − 1) + ⋯]   (−1 < x < 1)

log_e x = log_e a + (x − a)/a − (x − a)²/(2a²) + (x − a)³/(3a³) − ⋯

f(x) = f(0) + f′(0)x + f″(0)x²/2! + ⋯ + f^{(n−1)}(0)x^{n−1}/(n − 1)! + R_n   (Maclaurin form with remainder R_n)

sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯   (all real values of x)

cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯   (all real values of x)
B-3
Appendix B: Series and Summations
tan x = x + x³/3 + 2x⁵/15 + 17x⁷/315 + 62x⁹/2835 + ⋯ + 2^{2n}(2^{2n} − 1)B_n x^{2n−1}/(2n)! + ⋯   (x² < π²/4; B_n the Bernoulli numbers)

tan^{−1} x = π/2 − 1/x + 1/(3x³) − 1/(5x⁵) + 1/(7x⁷) − ⋯   (x ≥ 1)

arg coth x = 1/x + 1/(3x³) + 1/(5x⁵) + 1/(7x⁷) + ⋯ + 1/[(2n + 1)x^{2n+1}] + ⋯   (x² > 1)

Arithmetic Progression of the first order (first differences constant), to n terms:

a + (a + d) + (a + 2d) + (a + 3d) + ⋯ + [a + (n − 1)d] = na + n(n − 1)d/2 = (n/2)(1st term + nth term)

Geometric Progression, to n terms:

a + ar + ar² + ar³ + ⋯ + ar^{n−1} = a(1 − r^n)/(1 − r) = a(r^n − 1)/(r − 1)

If r² < 1, the limit of the sum of an infinite number of terms is a/(1 − r).

The reciprocals of the terms of a series in arithmetic progression of the first order are in Harmonic Progression. Thus,

1/a, 1/(a + d), 1/(a + 2d), . . . , 1/[a + (n − 1)d]

are in Harmonic Progression.

The Arithmetic Mean of n quantities is (a1 + a2 + a3 + ⋯ + an)/n.
The Geometric Mean of n quantities is (a1 a2 a3 ⋯ an)^{1/n}.
Let the Harmonic Mean of n quantities be H. Then 1/H = (1/n)(1/a1 + 1/a2 + 1/a3 + ⋯ + 1/an).
The arithmetic mean of a number of positive quantities is ≥ their geometric mean, which in turn is ≥ their harmonic mean.

1 + 2 + 3 + ⋯ + n = Σ_{k=1}^n k = n(n + 1)/2

1² + 2² + 3² + ⋯ + n² = Σ_{k=1}^n k² = n(n + 1)(2n + 1)/6 = n(2n² + 3n + 1)/6

1³ + 2³ + 3³ + ⋯ + n³ = Σ_{k=1}^n k³ = n²(n + 1)²/4 = n²(n² + 2n + 1)/4

1 + 3 + 5 + 7 + 9 + ⋯ + (2n − 1) = Σ_{k=0}^{n−1} (2k + 1) = n²

1 + 8 + 16 + 24 + 32 + ⋯ + 8(n − 1) = (2n − 1)²

1 + 3x + 5x² + 7x³ + ⋯ = (1 + x)/(1 − x)²   (|x| < 1)

1 + 2²x + 3²x² + 4²x³ + ⋯ = (1 + x)/(1 − x)³   (|x| < 1)

1 + 3²x + 5²x² + 7²x³ + ⋯ = (1 + 6x + x²)/(1 − x)³   (|x| < 1)

1 + ax + (a + b)x² + (a + 2b)x³ + ⋯ = 1 + [ax + (b − a)x²]/(1 − x)²   (|x| < 1)

Σ_{k=0}^n k a^k = a[1 − (n + 1)a^n + n a^{n+1}]/(1 − a)²

Σ_{k=0}^n k² a^k = [a(1 + a) − (n + 1)² a^{n+1} + (2n² + 2n − 1) a^{n+2} − n² a^{n+3}]/(1 − a)³

Σ_{k=0}^∞ k a^k = a/(1 − a)²   (|a| < 1)

Σ_{k=0}^∞ k² a^k = (a² + a)/(1 − a)³   (|a| < 1)
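The finite-sum formulas above are easy to verify by direct summation; a brief numerical sketch (not from the handbook):

```python
def sum_k_ak(a, n):
    """Closed form for sum_{k=0}^{n} k * a**k."""
    return a * (1 - (n + 1) * a ** n + n * a ** (n + 1)) / (1 - a) ** 2

a, n = 0.7, 25
direct = sum(k * a ** k for k in range(n + 1))
closed = sum_k_ak(a, n)
print(direct, closed)  # the two values agree

# Power sums and the odd-number sum:
m = 100
assert sum(range(1, m + 1)) == m * (m + 1) // 2
assert sum(k * k for k in range(1, m + 1)) == m * (m + 1) * (2 * m + 1) // 6
assert sum(k ** 3 for k in range(1, m + 1)) == (m * (m + 1) // 2) ** 2
assert sum(2 * k + 1 for k in range(m)) == m * m  # 1 + 3 + ... + (2m - 1)
```

The integer identities hold exactly; the geometric-weighted sum agrees to floating-point precision.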
Appendix C: Definite Integrals
∫₀^∞ x^{n−1} e^{−x} dx = ∫₀¹ [log(1/x)]^{n−1} dx = (1/n) Π_{m=1}^∞ [(1 + 1/m)^n/(1 + n/m)] = Γ(n)   (Gamma function; n ≠ 0, −1, −2, −3, . . .)

Γ(n) is finite if n > 0;   Γ(n + 1) = nΓ(n)

Γ(n) = (n − 1)!   if n is an integer > 0

Γ(1/2) = 2 ∫₀^∞ e^{−t²} dt = √π = 1.7724538509 . . .

Γ(n + 1/2) = [1 · 3 · 5 · 7 ⋯ (2n − 1)/2^n] √π   (n = 1, 2, 3, . . .)

∫₀^∞ t^n p^{−t} dt = n!/(log p)^{n+1}   (n = 0, 1, 2, 3, . . . and p > 1)

∫₀^∞ t^{n−1} e^{−(a+1)t} dt = Γ(n)/(a + 1)^n   (n > 0, a > −1)

∫₀¹ x^{m−1}(1 − x)^{n−1} dx = B(m, n)   (Beta function)

B(m, n) = B(n, m) = Γ(m)Γ(n)/Γ(m + n),   where m and n are any positive real numbers

∫ₐᵇ (x − a)^m (b − x)^n dx = (b − a)^{m+n+1} Γ(m + 1)Γ(n + 1)/Γ(m + n + 2)   (m > −1, n > −1, b > a)

∫₁^∞ dx/x^m = 1/(m − 1)   (m > 1)

∫₀^∞ x^{m−1} dx/(1 + x)^{m+n} = Γ(m)Γ(n)/Γ(m + n)   (m > 0, n > 0)

∫₀^∞ x^{m−1} dx/(1 + x^n) = π/[n sin(mπ/n)]   (0 < m < n)

∫₀^∞ x^{p−1} dx/(1 + x) = π/sin pπ   (0 < p < 1)

∫₀¹ (x^{p−1} − x^{−p}) dx/(1 − x) = π cot pπ   (0 < p < 1)

∫₀¹ x^{p−1}(1 − x)^{−p} dx = B(p, 1 − p) = Γ(p)Γ(1 − p) = π/sin pπ   (0 < p < 1)

∫₀^∞ dx/[(1 + x)x^p] = π csc pπ   (0 < p < 1)

∫₀^∞ x^a dx/(m + x^b)^c = [m^{(a+1)/b − c}/b] Γ((a + 1)/b) Γ(c − (a + 1)/b)/Γ(c)   (a > −1, b > 0, m > 0, c > (a + 1)/b)

∫₀^∞ dx/[(1 + x)√x] = π

∫₀^∞ a dx/(a² + x²) = π/2 if a > 0;   0 if a = 0;   −π/2 if a < 0

∫₀¹ x^m [log(1/x)]^n dx = Γ(n + 1)/(m + 1)^{n+1}   (m + 1 > 0, n + 1 > 0)

∫₀^a x^m (a² − x²)^{n/2} dx = (1/2) a^{m+n+1} B((m + 1)/2, (n + 2)/2) = a^{m+n+1} Γ((m + 1)/2) Γ((n + 2)/2)/[2Γ((m + n + 3)/2)]
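Several of the Gamma and Beta identities above can be checked with Python's standard library (a sketch, not from the handbook; `math.gamma` is the stdlib Gamma function):

```python
import math

def beta(m, n):
    """B(m, n) = Gamma(m) * Gamma(n) / Gamma(m + n)."""
    return math.gamma(m) * math.gamma(n) / math.gamma(m + n)

# Gamma(n) = (n - 1)! for integer n > 0, and Gamma(1/2) = sqrt(pi):
assert math.gamma(6) == math.factorial(5)
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

# Reflection formula Gamma(p) * Gamma(1 - p) = pi / sin(p*pi), 0 < p < 1:
p = 0.3
assert abs(math.gamma(p) * math.gamma(1 - p)
           - math.pi / math.sin(p * math.pi)) < 1e-9

# B(m, n) against a crude midpoint evaluation of its defining integral:
m, n, N = 2.5, 3.5, 200000
approx = sum(((k + 0.5) / N) ** (m - 1) * (1 - (k + 0.5) / N) ** (n - 1)
             for k in range(N)) / N
print(approx, beta(m, n))
```

The midpoint sum converges slowly near the endpoints, so the agreement is only to several digits; the closed forms are exact.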
∫₀¹ dx/√(1 − x^n) = (√π/n) Γ(1/n)/Γ(1/n + 1/2)   (n > 0)

∫₀¹ x^m dx/√(1 − x^n) = (√π/n) Γ((m + 1)/n)/Γ((m + 1)/n + 1/2)   (m + 1 > 0, n > 0)

∫₀¹ x^m (1 − x²)^p dx = Γ(p + 1) Γ((m + 1)/2)/[2Γ(p + (m + 3)/2)]   (p + 1 > 0, m + 1 > 0)

∫₀¹ x^m (1 − x^n)^p dx = Γ(p + 1) Γ((m + 1)/n)/[n Γ(p + 1 + (m + 1)/n)]   (p + 1 > 0, m + 1 > 0, n > 0)

∫₀¹ x^m dx/√(1 − x²) = [2 · 4 · 6 ⋯ (m − 1)]/[3 · 5 · 7 ⋯ m]   (m an odd integer > 1)
 = [1 · 3 · 5 ⋯ (m − 1)]/[2 · 4 · 6 ⋯ m] · π/2   (m an even, positive integer)
 = (√π/2) Γ((m + 1)/2)/Γ(m/2 + 1)   (m any value > −1)

∫₀^{π/2} sin^n x dx = ∫₀^{π/2} cos^n x dx
 = [1 · 3 · 5 · 7 ⋯ (n − 1)]/[2 · 4 · 6 · 8 ⋯ n] · π/2   (n an even integer, n ≠ 0)
 = [2 · 4 · 6 · 8 ⋯ (n − 1)]/[1 · 3 · 5 · 7 ⋯ n]   (n an odd integer, n ≠ 1)
 = (√π/2) Γ((n + 1)/2)/Γ(n/2 + 1)   (n > −1)

∫₀^∞ (sin mx)/x dx = π/2 if m > 0;   0 if m = 0;   −π/2 if m < 0

∫₀^∞ (cos x)/x dx = ∞

∫₀^∞ (tan x)/x dx = π/2

∫₀^π sin ax sin bx dx = ∫₀^π cos ax cos bx dx = 0   (a ≠ b; a, b integers)

∫₀^π sin ax cos ax dx = ∫₀^{π/a} sin ax cos ax dx = 0

∫₀^π sin ax cos bx dx = 2a/(a² − b²) if a − b is odd, or zero if a − b is even

∫₀^∞ (sin x cos mx)/x dx = 0 if m < −1 or m > 1;   π/4 if m = ±1;   π/2 if m² < 1

∫₀^∞ (sin ax sin bx)/x² dx = πa/2   (a ≤ b)

∫₀^π sin² mx dx = ∫₀^π cos² mx dx = π/2

∫₀^∞ dx/(a² + x²)^n = [1 · 3 · 5 ⋯ (2n − 3)]/[2 · 4 · 6 ⋯ (2n − 2)] · π/(2a^{2n−1})   (a > 0; n = 2, 3, . . .)

∫₀^∞ dx/[(a² + x²)(b² + x²)] = π/[2ab(a + b)]   (a, b > 0)

∫₀^∞ x^{p−1} dx/(a + x) = π a^{p−1}/sin pπ   (0 < p < 1)

∫₀^∞ dx/(1 + x^p) = (π/p)/sin(π/p)   (p > 1)

∫₀^∞ x^p dx/(1 + ax)² = pπ/[a^{p+1} sin pπ]   (−1 < p < 1, p ≠ 0, a > 0)

∫₀^∞ x^p dx/(1 + x²) = π/[2 cos(pπ/2)]   (p² < 1)

∫₀^∞ x^{p−1} dx/(1 + x^q) = π/[q sin(pπ/q)]   (0 < p < q)

∫₀^∞ x^{m−1} dx/(a + bx)^{m+n} = Γ(m)Γ(n)/[a^n b^m Γ(m + n)]   (a, b, m, n > 0)
∫₀^∞ sin² x/x² dx = π/2

∫₀^∞ cos mx dx/(1 + x²) = (π/2) e^{−|m|}

∫₀^∞ cos(x²) dx = ∫₀^∞ sin(x²) dx = (1/2)√(π/2)   (Fresnel's integrals)

∫₀^∞ sin x dx/√x = ∫₀^∞ cos x dx/√x = √(π/2)

∫₀^π dx/(a + b cos x) = π/√(a² − b²)   (a > b ≥ 0)

∫₀^{π/2} dx/(1 + a cos x) = cos^{−1} a/√(1 − a²)   (a² < 1)

∫₀^{2π} dx/(1 + a cos x) = 2π/√(1 − a²)   (a² < 1)

∫₀^∞ [tan^{−1}(ax) − tan^{−1}(bx)] dx/x = (π/2) log(a/b)   (a, b > 0)

∫₀^∞ (cos ax − cos bx) dx/x = log(b/a)

∫₀^{π/2} dx/(a² sin² x + b² cos² x) = π/(2ab)   (ab > 0)

∫₀^{π/2} dx/(a² sin² x + b² cos² x)² = π(a² + b²)/(4a³b³)   (a, b > 0)

∫₀^{π/2} sin² x dx/(a² sin² x + b² cos² x) = π/[2a(a + b)]   (ab > 0)

∫₀^{π/2} cos² x dx/(a² sin² x + b² cos² x) = π/[2b(a + b)]   (ab > 0)

∫₀^{π/2} dx/(a² + b² ctn² x) = π/[2a(a + b)]   (a, b > 0)

∫₀^{π/2} dx/(b² + a² tan² x) = π/[2b(a + b)]   (a, b > 0)

∫₀^{π/2} sin^{n−1} x cos^{m−1} x dx = (1/2) B(n/2, m/2)   (m and n positive integers)

∫₀^{π/2} sin^{2n+1} u du = [2 · 4 · 6 ⋯ (2n)]/[1 · 3 · 5 ⋯ (2n + 1)]   (n = 1, 2, 3, . . .)

∫₀^{π/2} sin^{2n} u du = [1 · 3 · 5 ⋯ (2n − 1)]/[2 · 4 ⋯ (2n)] · π/2   (n = 1, 2, 3, . . .)

∫₀^{π/2} √(cos u) du = (2π)^{3/2}/[Γ(1/4)]²

∫₀^∞ x^h dx/(1 + x²) = π/[2 cos(hπ/2)]   (0 < h < 1)

The triple integral

I = ∫∫∫_R x^{h−1} y^{m−1} z^{n−1} dx dy dz = ∫₀^a x^{h−1} dx ∫₀^{b[1−(x/a)^p]^{1/q}} y^{m−1} dy ∫₀^{c[1−(x/a)^p−(y/b)^q]^{1/k}} z^{n−1} dz,

where R denotes the region of space bounded by the coordinate planes and that portion of the surface (x/a)^p + (y/b)^q + (z/c)^k = 1 which lies in the first octant, and where h, m, n, p, q, k, a, b, c denote positive real numbers, is given by

I = [a^h b^m c^n/(pqk)] Γ(h/p) Γ(m/q) Γ(n/k)/Γ(h/p + m/q + n/k + 1)

The area enclosed by the curve defined through the equation x^{b/c} + y^{b/c} = a^{b/c}, where a > 0, c is a positive odd integer, and b is a positive even integer, is given by

Area = 2ca² [Γ(c/b)]²/[b Γ(2c/b)]
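As a numerical sketch (not from the handbook), the entry ∫₀^{2π} dx/(1 + a cos x) = 2π/√(1 − a²) can be checked with a plain Riemann sum, which for a smooth periodic integrand over a full period converges extremely fast:

```python
import math

def periodic_integral(f, lo, hi, n=100000):
    """Left Riemann sum; spectrally accurate for smooth periodic f over a full period."""
    h = (hi - lo) / n
    return h * sum(f(lo + k * h) for k in range(n))

a = 0.5
numeric = periodic_integral(lambda x: 1.0 / (1.0 + a * math.cos(x)),
                            0.0, 2.0 * math.pi)
exact = 2.0 * math.pi / math.sqrt(1.0 - a * a)
print(numeric, exact)  # the two values agree
```

The same sum also reproduces the companion entries with sin²x and cos²x denominators; only the integrand lambda changes.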
∫₀^{π/2} sin² x dx/(a² sin² x + b² cos² x)² = π/(4a³b)   (a, b > 0)

∫₀^{π/2} cos² x dx/(a² sin² x + b² cos² x)² = π/(4ab³)   (a, b > 0)

∫₀^∞ sin(a²x²) dx = ∫₀^∞ cos(a²x²) dx = (1/2a)√(π/2)   (a > 0)

∫₀^∞ sin(πx²/2) dx = ∫₀^∞ cos(πx²/2) dx = 1/2   (Fresnel's integrals)

∫₀^∞ sin(x^p) dx = Γ(1 + 1/p) sin(π/2p)   (p > 1)

∫₀^∞ cos(x^p) dx = Γ(1 + 1/p) cos(π/2p)   (p > 1)

∫₀^∞ sin(a²x²) cos mx dx = (√π/2a) sin(π/4 − m²/4a²)   (a > 0)

∫₀^∞ cos(a²x²) cos mx dx = (√π/2a) cos(π/4 − m²/4a²)   (a > 0)

∫₀^∞ sin^{2p} mx dx/x² = [1 · 3 · 5 ⋯ (2p − 3)]/[2 · 4 · 6 ⋯ (2p − 2)] · |m|π/2   (p = 2, 3, 4, . . .)

∫₀^∞ sin³ mx dx/x³ = 3m²π/8

∫₀^∞ (sin mx cos nx)/x dx = π/2 (m > n > 0);   π/4 (m = n > 0);   0 (n > m > 0)

∫₀^∞ (sin mx sin nx)/x dx = (1/2) log|(m + n)/(m − n)|   (m ≠ n)

∫₀^∞ (sin mx sin nx)/x² dx = πm/2 (n ≥ m > 0);   πn/2 (m ≥ n > 0)

∫₀^∞ (cos mx cos nx)/x dx = ∞

∫₀^∞ sin² ax sin mx dx/x² = [(m + 2a)/4] log|m + 2a| + [(m − 2a)/4] log|m − 2a| − (m/2) log m   (a, m > 0)

∫₀^∞ sin² ax sin mx dx/x = π/4 (m > 2a > 0);   π/8 (m = 2a > 0);   0 (2a > m > 0)

∫₀^∞ cos mx dx/(a² + x²) = (π/2a) e^{−ma}   (a > 0; m ≥ 0)

∫₀^∞ sin² mx dx/(a² + x²) = (π/4a)(1 − e^{−2ma})   (a > 0; m ≥ 0)

∫₀^∞ cos² mx dx/(a² + x²) = (π/4a)(1 + e^{−2ma})   (a > 0; m ≥ 0)

∫₀^∞ x sin mx dx/(a² + x²) = (π/2) e^{−ma}   (a ≥ 0; m > 0)

∫₀^∞ sin mx dx/[x(a² + x²)] = (π/2a²)(1 − e^{−ma})   (a > 0; m ≥ 0)

∫₀^∞ (sin mx sin nx) dx/(a² + x²) = (π/2a) e^{−ma} sinh na (a > 0; m ≥ n ≥ 0);   (π/2a) e^{−na} sinh ma (a > 0; n ≥ m ≥ 0)

∫₀^∞ (cos mx cos nx) dx/(a² + x²) = (π/2a) e^{−ma} cosh na (a > 0; m ≥ n ≥ 0);   (π/2a) e^{−na} cosh ma (a > 0; n ≥ m ≥ 0)

∫₀^∞ (x sin mx cos nx) dx/(a² + x²) = (π/2) e^{−ma} cosh na (a > 0; m > n > 0);   −(π/2) e^{−na} sinh ma (a > 0; n > m > 0)

∫₀^∞ cos mx dx/(a² + x²)² = (π/4a³)(1 + ma) e^{−ma}   (a, m > 0)
∫₀^∞ x sin mx dx/(a² + x²)² = (πm/4a) e^{−ma}   (a, m > 0)

∫₀^∞ x² cos mx dx/(a² + x²)² = (π/4a)(1 − ma) e^{−ma}   (a, m > 0)

∫₀^∞ (1 − cos mx) dx/x² = π|m|/2

∫₀^∞ sin mx dx/x^p = π m^{p−1}/[2 Γ(p) sin(pπ/2)]   (0 < p < 2; m > 0)

∫₀^∞ cos mx dx/√x = √(π/(2m))   (m > 0)

∫₀^∞ sin mx dx/(x√x) = √(2πm)   (m > 0)

∫₀^∞ sin² ax cos mx dx/x² = π(2a − m)/4 (2a ≥ m ≥ 0);   0 (m ≥ 2a)

∫₀^∞ sin² ax sin mx dx/x³ = πam/2 − πm²/8   (2a ≥ m > 0)

∫₀^∞ e^{−ax} dx = 1/a   (a > 0)

∫₀^∞ (e^{−ax} − e^{−bx}) dx/x = log(b/a)   (a, b > 0)

∫₀^∞ x^n e^{−ax} dx = Γ(n + 1)/a^{n+1} = n!/a^{n+1}   (n > −1, a > 0; the last form for n a nonnegative integer)

∫₀^∞ e^{−a²x²} dx = (1/2a) Γ(1/2) = √π/(2a)   (a > 0)

∫₀^∞ x e^{−x²} dx = 1/2

∫₀^∞ x² e^{−x²} dx = √π/4

∫₀^∞ x^{2n} e^{−ax²} dx = [1 · 3 · 5 ⋯ (2n − 1)]/(2^{n+1} a^n) · √(π/a)   (a > 0; n = 1, 2, 3, . . .)

∫₀¹ x^m e^{−ax} dx = (m!/a^{m+1}) [1 − e^{−a} Σ_{r=0}^m a^r/r!]   (m a nonnegative integer, a ≠ 0)

∫₀^∞ e^{−nx} √x dx = (1/2n)√(π/n)   (n > 0)

∫₀^∞ e^{−nx} dx/√x = √(π/n)   (n > 0)

∫₀^∞ e^{−ax} cos mx dx = a/(a² + m²)   (a > 0)

∫₀^∞ e^{−ax} sin mx dx = m/(a² + m²)   (a > 0)

∫₀^∞ x e^{−ax} sin bx dx = 2ab/(a² + b²)²   (a > 0)

∫₀^∞ x e^{−ax} cos bx dx = (a² − b²)/(a² + b²)²   (a > 0)

∫₀^∞ x^n e^{−ax} sin bx dx = i n![(a − ib)^{n+1} − (a + ib)^{n+1}]/[2(a² + b²)^{n+1}]   (i² = −1; n a positive integer, a > 0)

∫₀^∞ x^n e^{−ax} cos bx dx = n![(a − ib)^{n+1} + (a + ib)^{n+1}]/[2(a² + b²)^{n+1}]   (i² = −1; n a positive integer, a > 0)

∫₀^∞ e^{−ax} (sin x) dx/x = cot^{−1} a   (a > 0)

∫₀^∞ e^{−a²x²} cos bx dx = (√π/2a) e^{−b²/4a²}   (ab ≠ 0)
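A sketch (not from the handbook) checking ∫₀^∞ e^{−ax} cos mx dx = a/(a² + m²) by truncating the integral at a point where the integrand is negligible and applying composite Simpson's rule:

```python
import math

def simpson(f, lo, hi, n=20000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

a, m = 1.5, 2.0
# Truncate at x = 40: exp(-1.5 * 40) is far below machine precision.
numeric = simpson(lambda x: math.exp(-a * x) * math.cos(m * x), 0.0, 40.0)
exact = a / (a * a + m * m)  # = 0.24
print(numeric, exact)
```

Swapping cos for sin in the lambda and m/(a² + m²) for the exact value checks the companion entry the same way.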
∫₀^∞ e^{−t cos φ} t^{b−1} sin(t sin φ) dt = Γ(b) sin(bφ)   (b > 0, −π/2 < φ < π/2)

∫₀^∞ e^{−t cos φ} t^{b−1} cos(t sin φ) dt = Γ(b) cos(bφ)   (b > 0, −π/2 < φ < π/2)

∫₀^∞ (e^{−ax^c} − e^{−bx^c}) dx/x = (1/c) log(b/a)   (a, b, c > 0)

∫₀^∞ exp(−a²x² − b²/x²) dx = (√π/2a) e^{−2ab}   (a, b > 0)

∫₀^∞ x dx/(e^{ax} − 1) = π²/6a²   (a > 0)

∫₀^∞ dx/(e^{ax} + 1) = (log 2)/a   (a > 0)

∫₀^∞ x dx/(e^{ax} + 1) = π²/12a²   (a > 0)

∫₀^∞ (e^{−ax} − e^{−bx}) cos mx dx/x = (1/2) log[(b² + m²)/(a² + m²)]   (a, b > 0)

∫₀^∞ e^{−ax} cos² mx dx = (a² + 2m²)/[a(a² + 4m²)]   (a > 0)

∫₀^∞ e^{−ax} sin² mx dx = 2m²/[a(a² + 4m²)]   (a > 0)

∫₀^∞ e^{−ax} sin² mx dx/x = (1/4) log(1 + 4m²/a²)   (a > 0)

∫₀^∞ e^{−ax} sin² mx dx/x² = m tan^{−1}(2m/a) − (a/4) log(1 + 4m²/a²)   (a > 0)

∫₀^∞ e^{−ax} sin mx sin nx dx = 2amn/{[a² + (m − n)²][a² + (m + n)²]}   (a > 0)

∫₀^∞ e^{−ax} sin mx cos nx dx = m(a² + m² − n²)/{[a² + (m − n)²][a² + (m + n)²]}   (a > 0)

∫₀^∞ e^{−ax} cos mx cos nx dx = a(a² + m² + n²)/{[a² + (m − n)²][a² + (m + n)²]}   (a > 0)

∫₀^∞ e^{−ax} sin mx dx/x = tan^{−1}(m/a)   (a > 0)

∫₀^∞ e^{−ax}(1 − cos mx) dx/x = (1/2) log(1 + m²/a²)   (a > 0)

∫₀^∞ e^{−ax}(cos mx − cos nx) dx/x = (1/2) log[(a² + n²)/(a² + m²)]   (a > 0)

∫₀^∞ e^{−ax} sin mx sin nx dx/x = (1/4) log{[a² + (m + n)²]/[a² + (m − n)²]}   (a > 0)

∫₀^∞ x e^{−a²x²} sin mx dx = (√π m/4a³) e^{−m²/(4a²)}   (a > 0)

∫₀^∞ e^{−a²x²} sin mx dx/x = (π/2) erf(m/2a)   (a > 0)

∫₀^∞ e^{−ax} cos mx dx/√x = √(π/2) · {a + (a² + m²)^{1/2}}^{1/2}/(a² + m²)^{1/2}   (a > 0)

∫₀^∞ e^{−ax} sin √(mx) dx = [√(πm)/(2a√a)] e^{−m/(4a)}   (a > 0; m > 0)
∫₀^∞ e^{−ax} cos √(mx) dx/√x = √(π/a) e^{−m/(4a)}   (a, m > 0)

∫₀^∞ e^{−ax} sin(px + q) dx = (a sin q + p cos q)/(a² + p²)   (a > 0)

∫₀^∞ e^{−ax} cos(px + q) dx = (a cos q − p sin q)/(a² + p²)   (a > 0)

∫₀^∞ t^{b−1} cos t dt = Γ(b) cos(bπ/2)   (0 < b < 1)

∫₀^∞ t^{b−1} sin t dt = Γ(b) sin(bπ/2)   (0 < b < 1)

∫₀¹ [log(1/x)]^{1/2} dx = √π/2

∫₀¹ [log(1/x)]^{−1/2} dx = √π

∫₀¹ [log(1/x)]^n dx = n!

∫₀¹ x log(1 − x) dx = −3/4

∫₀¹ x log(1 + x) dx = 1/4

∫₀¹ log x dx/(1 + x) = −π²/12

∫₀¹ log x dx/(1 − x) = −π²/6

∫₀¹ log x dx/(1 − x²) = −π²/8

∫₀¹ log[(1 + x)/(1 − x)] dx/x = π²/4

∫₀¹ log x dx/√(1 − x²) = −(π/2) log 2

∫₀¹ x^m [log(1/x)]^n dx = Γ(n + 1)/(m + 1)^{n+1}   (m + 1 > 0, n + 1 > 0)

∫₀¹ (x^p − x^q) dx/log x = log[(p + 1)/(q + 1)]   (p + 1 > 0, q + 1 > 0)

∫₀¹ (log x)^n dx = (−1)^n n!

∫₀^∞ log[(e^x + 1)/(e^x − 1)] dx = π²/4

∫₀^{π/2} log sin x dx = ∫₀^{π/2} log cos x dx = −(π/2) log 2

∫₀^{π/2} log sec x dx = ∫₀^{π/2} log csc x dx = (π/2) log 2

∫₀^{π/2} log tan x dx = 0

∫₀^π x log sin x dx = −(π²/2) log 2

∫₀^{π/2} sin x log sin x dx = log 2 − 1

∫₀^π log(a ± b cos x) dx = π log[(a + √(a² − b²))/2]   (a ≥ |b|)

∫₀^∞ dx/cosh ax = π/2a

∫₀^∞ x dx/sinh ax = π²/4a²

∫₀^∞ e^{−ax} cosh bx dx = a/(a² − b²)   (0 ≤ |b| < a)
∫₀^∞ e^{−ax} sinh bx dx = b/(a² − b²)   (0 ≤ |b| < a)

∫₁^∞ (e^{−xu}/u) du = −γ − log x + x − x²/(2 · 2!) + x³/(3 · 3!) − x⁴/(4 · 4!) + ⋯   (x > 0)

where γ = lim_{z→∞} [1 + 1/2 + 1/3 + ⋯ + 1/z − log z] = 0.5772157 . . .

∫₀^{π/2} √(1 − k² sin² x) dx = (π/2)[1 − (1/2)² k² − ((1 · 3)/(2 · 4))² k⁴/3 − ((1 · 3 · 5)/(2 · 4 · 6))² k⁶/5 − ⋯]   (k² < 1)

∫₀^{π/2} dx/√(1 − k² sin² x) = (π/2)[1 + (1/2)² k² + ((1 · 3)/(2 · 4))² k⁴ + ((1 · 3 · 5)/(2 · 4 · 6))² k⁶ + ⋯]   (k² < 1)

∫₀^∞ e^{−x} log x dx = −γ = −0.5772157 . . .

∫₀^∞ [1/(1 + x) − e^{−x}] dx/x = γ = 0.5772157 . . .   (Euler's constant)

∫₀^∞ [1/(1 − e^{−x}) − 1/x] e^{−x} dx = γ = 0.5772157 . . .
Appendix D: Matrices and Determinants D.1 D.2 D.3 D.4 D.5 D.6 D.7 D.8 D.9 D.10 D.11
General Definitions ..................... D-1  Addition, Subtraction, and Multiplication ..................... D-2  Recognition Rules and Special Forms ..................... D-2  Determinants ..................... D-3  Singularity and Rank ..................... D-5  Inversion ..................... D-5  Traces ..................... D-7  Characteristic Roots and Vectors ..................... D-8  Conditional Inverses ..................... D-9  Matrix Differentiation ..................... D-11  Statistical Matrix Forms ..................... D-13
D.1 General Definitions
1.1 A matrix is an array of numbers consisting of m rows and n columns. It is usually denoted by a boldface capital letter, e.g., A, X, M.
1.2 The (i, j) element of a matrix is the element occurring in row i and column j. It is usually denoted by a lowercase letter with subscripts, e.g., a_ij, s_ij, m_ij. Exceptions to this convention will be stated where required.
1.3 A matrix is called rectangular if m (number of rows) ≠ n (number of columns).
1.4 A matrix is called square if m = n.
1.5a In the transpose of a matrix A, denoted by A′, the element in the jth row and ith column of A is equal to the element in the ith row and jth column of A′. Formally, (A′)_ij = (A)_ji, where the symbol (A′)_ij denotes the (i, j)th element of A′.
1.5b The Hermitian conjugate of a matrix A, denoted by A^H or A†, is obtained by transposing A and replacing each element by its complex conjugate. Hence, if a_kl = u_kl + iv_kl, then (A^H)_kl = u_kl − iv_kl, where typical elements have been denoted by (k, l) to avoid confusion with i = √−1.
1.6a A square matrix is called symmetric if A = A′.
1.6b A square matrix is called Hermitian if A = A^H.
1.7 A matrix with m rows and one column is called a column vector and is usually denoted by boldface, lowercase letters, e.g., b, a.
1.8 A matrix with one row and n columns is called a row vector and is usually denoted by a primed, boldface, lowercase letter, e.g., a′, c′, m′.
1.9 A matrix with one row and one column is called a scalar and is usually denoted by a lowercase letter, occasionally italicized.
1.10 The diagonal extending from upper left (NW) to lower right (SE) is called the principal diagonal of a square matrix.
1.11a A matrix with all elements above the principal diagonal equal to zero is called a lower triangular matrix.

Example

T = | t11  0    0   |
    | t21  t22  0   |
    | t31  t32  t33 |   is lower triangular.

1.11b The transpose of a lower triangular matrix is called an upper triangular matrix.
1.12 A square matrix with all off-diagonal elements equal to zero is called a diagonal matrix, denoted by the letter D with a subscript indicating the typical element in the principal diagonal.
Example

    Da = | a1   0    0  |
         | 0    a2   0  |   is diagonal.
         | 0    0    a3 |

D.2 Addition, Subtraction, and Multiplication

2.1 Two matrices A and B can be added (subtracted) if the number of rows (columns) in A equals the number of rows (columns) in B.

    A ± B = C

implies

    aij ± bij = cij,   i = 1, 2, . . . m;  j = 1, 2, . . . n.

2.2 Multiplication of a matrix or vector by a scalar implies multiplication of each element by the scalar. If B = gA, then bij = g aij for all elements.
2.3a Two matrices A and B can be multiplied if the number of columns in A equals the number of rows in B.
2.3b Let A be of order (m × n) (have m rows and n columns) and B of order (n × p). Then the product C = AB is a matrix of order (m × p) with elements

    cij = Σ(k=1..n) aik bkj.

2.3c In general, matrix multiplication is not commutative: AB ≠ BA.
2.3d Matrix multiplication is associative: A(BC) = (AB)C.
2.3e The distributive law for multiplication and addition holds as in the case of scalars:

    (A + B)C = AC + BC
    C(A + B) = CA + CB.

2.4 In some applications, the term-by-term product of two matrices A and B of identical order is defined as C = A*B, where cij = aij bij.
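Rules 2.3b, 2.3c, and 2.4 can be sketched numerically. The following NumPy snippet is illustrative only (the matrix entries are placeholders, not from the handbook):

```python
import numpy as np

A = np.array([[2, 3, -1],
              [1, 0,  4]])      # order (2 x 3)
B = np.array([[-4, 1],
              [ 2, 0],
              [ 9, 5]])         # order (3 x 2)

C = A @ B                       # rule 2.3b: C = AB is of order (2 x 2)
# c11 is the scalar product of row 1 of A and column 1 of B:
assert C[0, 0] == 2 * (-4) + 3 * 2 + (-1) * 9

# rule 2.3c: AB != BA in general (here B @ A even has a different order)
assert (B @ A).shape != C.shape

# rule 2.4: term-by-term product of two matrices of identical order
D = np.array([[1, 2], [3, 4]])
E = np.array([[5, 6], [7, 8]])
assert (D * E == np.array([[5, 12], [21, 32]])).all()
```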
This states that cij is the scalar product of the ith row vector of A and the jth column vector of B; e.g., if the second row of A is [2  3  −1] and the third column of B is [−4  2  9]′, then

    c23 = [2  3  −1] | −4 |
                     |  2 |  = 2·(−4) + 3·2 + (−1)·9 = −11.
                     |  9 |

2.5 (ABC)′ = C′B′A′.
2.6 (ABC)H = CHBHAH.
2.7 If both A and B are symmetric, then (AB)′ = BA. Note that the product of two symmetric matrices is generally not symmetric.

D.3 Recognition Rules and Special Forms

3.1 A column (row) vector with all elements equal to zero is called a null vector and is usually denoted by the symbol 0.
3.2 A null matrix has all elements equal to zero.
3.3a A diagonal matrix with all elements in the principal diagonal equal to one is called the identity matrix I.
3.3b gI, i.e., a diagonal matrix with all diagonal elements equal to a constant g, is called a scalar matrix.
3.4 A matrix that has only one element equal to one and all others equal to zero is called an elementary matrix (EL)ij.

Example

    (EL)23 = | 0  0  0  0  0 |
             | 0  0  1  0  0 |
             | 0  0  0  0  0 |
             | 0  0  0  0  0 |
             | 0  0  0  0  0 |

The order of the matrix is usually implicit.
3.5a The symbol j is reserved for a column vector with all elements equal to 1.
3.5b The symbol j′ is reserved for a row vector with all elements equal to 1.
3.6 An expression ending with a column vector is a column vector.

Example ABx = y. (It is assumed that rule 2.3a is satisfied, else matrix multiplication would not be defined.)

3.7 An expression beginning with a row vector is a row vector.

Example y′(A + BC) = d′.

3.8 An expression beginning with a row vector and ending with a column vector is a scalar.

Example a′Bc = g.

3.9a If Q is a square matrix, the scalar x′Qx is called a quadratic form. If Q is nonsymmetric, one can always find a symmetric matrix Q* such that

    x′Qx = x′Q*x,   where (Q*)ij = (1/2)(qij + qji).

3.9b If Q is a square matrix, the scalar xHQx is called a Hermitian form.
3.10 A scalar x′Qy is called a bilinear form.
3.11 The scalar x′x = Σ xi², i.e., the sum of squares of all elements of x.
3.12 The scalar x′y = Σ xi yi, i.e., the sum of products of the elements in x by those in y. (x and y have the same number of elements.)
3.13 The scalar x′Dw x = Σ wi xi² is called a weighted sum of squares.
3.14 The scalar x′Dw y = Σ wi xi yi is called a weighted sum of products.
3.15a The vector Aj is a column vector whose elements are the row sums of A.
3.15b The vector j′A is a row vector whose elements are the column sums of A.
3.15c The scalar j′Aj is the sum of all elements in A. Schematically:

    |  A    Aj   |
    | j′A   j′Aj |

3.16a If B = Dw A, then bij = wi aij.
3.16b If B = A Dw, then bij = aij wj.
3.17 Interchanging summation and matrix notation: If ABCD = E, then

    eij = Σk Σl Σm aik bkl clm dmj.

The second subscript of an element must coincide with the first subscript of the next one. Reordering and transposing may be required.

Example If

    eij = Σk Σl Σm akl bki cjm dml = Σk Σl Σm bki akl dml cjm,

then E = B′AD′C′.

3.18a A′A is a symmetric matrix whose (i, j) element is the scalar product of the ith column vector and the jth column vector of A.
3.18b AA′ is the symmetric matrix whose (i, j) element is the scalar product of the ith row vector and the jth row vector of A.
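Rules 3.15a through 3.15c and 3.18 can be exercised numerically. A NumPy sketch (the matrix below is illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])

j2 = np.ones(2)   # j sized to the number of columns of A
j3 = np.ones(3)   # j sized to the number of rows of A

assert np.allclose(A @ j2, [3, 7, 11])    # rule 3.15a: row sums
assert np.allclose(j3 @ A, [9, 12])       # rule 3.15b: column sums
assert j3 @ A @ j2 == 21                  # rule 3.15c: sum of all elements

# rules 3.18a/3.18b: A'A and AA' are symmetric
assert np.allclose(A.T @ A, (A.T @ A).T)
assert np.allclose(A @ A.T, (A @ A.T).T)
```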
D.4 Determinants

4.1a A determinant |A| or det(A) is a scalar function of a square matrix, defined in such a way that |A| |B| = |AB| and

    | a11  a12 |
    | a21  a22 | = a11 a22 − a12 a21.

4.1b |A| = |A′|.
4.2

    | a11  a12  a13 |
    | a21  a22  a23 | = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
    | a31  a32  a33 |   − a13 a22 a31 − a11 a23 a32 − a12 a21 a33.

4.3

    | a11  a12  ···  a1n |
    | a21  a22  ···  a2n |
    | ···                | = Σ (−1)^d a1i1 a2i2 ··· anin
    | an1  an2  ···  ann |

where the sum is over all permutations i1 ≠ i2 ≠ ··· ≠ in and d denotes the number of exchanges necessary to bring the sequence (i1, i2, . . . in) back into the natural order (1, 2, . . . n).
4.4 If two rows (columns) in a matrix are exchanged, the determinant will change its sign.
4.5 A determinant does not change its value if a linear combination of other rows (columns) is added to any given row (column).

Example

    | a11  a12  a13  a14 |   | a11  a12  a13  a14 |
    | b21  b22  b23  b24 |   | a21  a22  a23  a24 |
    | a31  a32  a33  a34 | = | a31  a32  a33  a34 |
    | a41  a42  a43  a44 |   | a41  a42  a43  a44 |

where b2i = a2i + g1 a1i + g3 a3i + g4 a4i, i = 1, 2, 3, 4; g1, g3, g4 arbitrary.

4.6 If the ith row (column) equals (a constant times) the jth row (column) of a matrix, its determinant is equal to zero (i ≠ j).
4.7 If, in a matrix A, each element of a row (column) is multiplied by a constant g, the determinant is multiplied by g.
4.8 |gA| = gⁿ|A|, assuming that A is of order (n × n).
4.9 The cofactor of a square matrix A, cofij(A), is the determinant of the matrix obtained by striking the ith row and jth column of A, with positive (negative) sign if i + j is even (odd):

Example

    cof23 |  2  4  3 |       |  2  4 |
          |  6  1  5 |  = −  | −2  1 |  = −(2 + 8) = −10.
          | −2  1  3 |

4.10 (Laplace development)

    |A| = ai1 cofi1(A) + ai2 cofi2(A) + ··· + ain cofin(A)
        = a1j cof1j(A) + a2j cof2j(A) + ··· + anj cofnj(A)

for any row i or any column j.
4.11 Numerical evaluation of the determinant of a symmetric matrix. (Note: If A is nonsymmetric, form A′A or AA′ by rule 3.18, obtain its determinant, and take the square root.) "Forward Doolittle Scheme" ("left side"): Let p11 = a11, p12 = a12 = a21, . . . p1n = a1n, and build the scheme

    p11  p12  p13  ···  p1n
    1    u12  u13  ···  u1n
         a22  a23  ···  a2n
         p22  p23  ···  p2n
         1    u23  ···  u2n
              a33  ···  a3n
              p33  ···  p3n
              1    ···  u3n
                   ···  ann
                        pnn
                        1

where

    u1i = p1i/p11                                          i = 1, 2, . . . n
    p2i = a2i − u12 p1i                                    i = 2, 3, . . . n
    u2i = p2i/p22
    p3i = a3i − u13 p1i − u23 p2i                          i = 3, 4, . . . n
    u3i = p3i/p33
    pki = aki − u1k p1i − u2k p2i − ··· − uk−1,k pk−1,i    i = k, k + 1, . . . n;  k = 2, 3, . . . n
    uki = pki/pkk

If, at some stage, pkk = 0, reordering of rows and columns may be required. If the matrix is positive-definite (see 8.16) (always true for AA′ or A′A; see rule 10.24), none of the pkk will be zero. The pii are called pivots. Then

    |A| = Π(i=1..n) pii.

Further, if A is partitioned

    A = | A11   A12 |
        | A12′  A22 |
where A11 is of order (k × k), then

    |A11| = Π(i=1..k) pii.

(Numerical examples: see 6.14.)

D.5 Singularity and Rank

5.1 A matrix A is called singular if there exists a vector x ≠ 0 such that Ax = 0 or A′x = 0. Note x ≠ 0 if a single element of x is unequal to 0. If a matrix is not singular, it is called nonsingular.
5.2 If a matrix A1 can be formed by selection of r rows and columns of A such that A1x ≠ 0 and A1′x ≠ 0 for every x ≠ 0, and if addition of an (r + 1)st row and column would produce a singular matrix, r is called the rank of A.
5.3 If A has rank r and if A1 is a nonsingular submatrix consisting of r rows and columns of A, then A1 is called a basis of A.
5.4a The determinant of a square singular matrix is 0.
5.4b The determinant of a nonsingular matrix is ≠ 0.
5.5 rank(AB) ≤ min[rank(A), rank(B)].
5.6 rank(AA′) = rank(A′A) = rank(A).
5.7 |A′A| = |AA′| = |A|² if A is square.
5.8 |A′A| = |AA′| ≥ 0 for every A with real elements.

D.6 Inversion

Regular case, nonsingular matrices

6.1 If A is square and nonsingular (|A| ≠ 0), there exists a unique matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I.
6.2 (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹ (provided that all inverses exist).
6.3 (A⁻¹)′ = (A′)⁻¹.
6.4 Ax = b is a system of linear equations. If A is square and nonsingular, there exists a unique solution

    x = A⁻¹b.
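Rules 5.4a, 5.6, and 6.4 can be checked numerically. A NumPy sketch (the right-hand side b is an illustrative placeholder):

```python
import numpy as np

# rule 5.6: rank(A'A) = rank(A); rule 5.4a: |A| = 0 for a singular A
A = np.array([[2, 4,  6],
              [1, 3,  7],
              [3, 7, 13]], dtype=float)   # row 3 = row 1 + row 2, so rank 2
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T @ A) == 2
assert np.isclose(np.linalg.det(A), 0.0)

# rule 6.4: a nonsingular system Ax = b has the unique solution x = A^-1 b
B = np.array([[25, 30, 10],
              [30, 40,  6],
              [10,  6, 17]], dtype=float)
b = np.array([1.0, 2.0, 3.0])             # illustrative right-hand side
x = np.linalg.solve(B, b)
assert np.allclose(B @ x, b)
```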
Example

    A = | 2  4   6 |
        | 1  3   7 |
        | 3  7  13 |

Note that

    [−1, −1, 1] | 2  4   6 |
                | 1  3   7 |  = [0  0  0]
                | 3  7  13 |

and

    [−1, −1, 1] | 2  4 |
                | 1  3 |  = [0  0]
                | 3  7 |

but

    [x1  x2] | 2  4 |  ≠  [0  0]
             | 1  3 |

for any arbitrary [x1, x2] ≠ [0, 0]. Hence, the matrix has rank 2.

6.5 (gA)⁻¹ = (1/g)A⁻¹.
6.6 |A⁻¹| = 1/|A|.
6.7 Dw⁻¹ = D1/w, where D is a diagonal matrix.
6.8 If

    A = B + uv′,

then

    A⁻¹ = B⁻¹ − λyz′

where

    y = B⁻¹u,   z′ = v′B⁻¹,   and   λ = 1/(1 + z′u).

Example 6.8.1

    A = | 4   2   4   5 |
        | 3   9  12  15 |
        | 2   4  11  10 |
        | 1   2   4  10 |

This matrix can be written as

    | 3  0  0  0 |   | 1 |
    | 0  3  0  0 | + | 3 | [1  2  4  5]  =  B + uv′
    | 0  0  3  0 |   | 2 |
    | 0  0  0  5 |   | 1 |
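Rule 6.8 (a rank-one update, often called the Sherman-Morrison formula) can be verified on this decomposition with a short NumPy sketch:

```python
import numpy as np

# Example 6.8.1: A = B + u v' with B diagonal
B = np.diag([3.0, 3.0, 3.0, 5.0])
u = np.array([1.0, 3.0, 2.0, 1.0])
v = np.array([1.0, 2.0, 4.0, 5.0])
A = B + np.outer(u, v)

Binv = np.diag(1.0 / np.diag(B))
y = Binv @ u                    # y = B^-1 u
z = v @ Binv                    # z' = v' B^-1
lam = 1.0 / (1.0 + z @ u)       # lambda = 1/(1 + z'u); z'u = 6 here
assert np.isclose(lam, 1 / 7)

Ainv = Binv - lam * np.outer(y, z)   # rule 6.8: A^-1 = B^-1 - lambda y z'
assert np.allclose(Ainv @ A, np.eye(4))
```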
    B⁻¹ = | 1/3   0    0    0  |
          | 0    1/3   0    0  |
          | 0     0   1/3   0  |
          | 0     0    0   1/5 |

    y = B⁻¹u = | 1/3 |         z′ = v′B⁻¹ = [1/3  2/3  4/3  1]
               |  1  |
               | 2/3 |
               | 1/5 |

    z′u = (1/3)·1 + (2/3)·3 + (4/3)·2 + 1·1 = 6,   λ = 1/7

    A⁻¹ = B⁻¹ − (1/7) | 1/3 | [1/3  2/3  4/3  1]
                      |  1  |
                      | 2/3 |
                      | 1/5 |

        = (1/315) | 100  −10  −20  −15 |
                  | −15   75  −60  −45 |
                  | −10  −20   65  −30 |
                  |  −3   −6  −12   54 |

(This rule is especially useful if all off-diagonal elements are equal; then u = kj and v′ = j′ and B is diagonal.)

6.9 Let B (elements bij) have a known inverse B⁻¹ (elements b^ij). Let A = B except for one element ars = brs + k. Then the elements of A⁻¹ are

    a^ij = b^ij − k b^ir b^sj / (1 + k b^sr).

6.10 (Partitioning) Let

    A = | B  C |
        | D  E |

where B is of order (p × p) and E of order (q × q) (the letters in parentheses denote the order of the submatrices). Let B⁻¹ and E⁻¹ exist. Then

    A⁻¹ = | X  Y |
          | Z  U |

where

    X = (B − CE⁻¹D)⁻¹
    U = (E − DB⁻¹C)⁻¹
    Y = −B⁻¹CU
    Z = −E⁻¹DX.

6.11 (Partitioning of determinants) Let

    |A| = | B  C |     (structure as in 6.10).
          | D  E |

Then

    |A| = |E| · |B − CE⁻¹D| = |B| · |E − DB⁻¹C|.

6.12 Let A = B + UV, where B (n × n) has an inverse, U is of order (n × k) with k usually very small, and V is of order (k × n) (the special case k = 1 is treated in 6.8). Then

    A⁻¹ = B⁻¹ − YLZ

where

    Y = B⁻¹U   (n × k)
    Z = VB⁻¹   (k × n)
    L = [I + ZU]⁻¹   (k × k).

6.13 Let aij denote the elements of A and a^ij those of A⁻¹. Then

    a^ij = cofji(A)/|A|

where cof is the cofactor defined in 4.9.
6.14 "Doolittle" method of inverting symmetric matrices (see also 4.11). Let p11 = a11, p12 = a12 = a21, . . . p1n = a1n = an1.

Forward Solution

    p11  p12  p13  ···  p1n  |  1
    1    u12  u13  ···  u1n  |  u1I
         a22  a23  ···  a2n  |  0     1
         p22  p23  ···  p2n  |  p2I   p2II
         1    u23  ···  u2n  |  u2I   u2II
              a33  ···  a3n  |  0     0      1
              p33  ···  p3n  |  p3I   p3II   p3III
              1    ···  u3n  |  u3I   u3II   u3III
                        ···
                        ann  |  0     0      0     ···  1
                        pnn  |  pnI   pnII   pnIII ···  pnN
                        1    |  unI   unII   unIII ···  unN
The recursions of 4.11 are carried across the appended Roman-numeral columns:

    u1i = p1i/p11                                          i = 1, 2, . . . n, I
    p2i = a2i − u12 p1i                                    i = 2, 3, . . . n, I, II
    u2i = p2i/p22
    p3i = a3i − u13 p1i − u23 p2i                          i = 3, 4, . . . n, I, II, III
    u3i = p3i/p33
    pki = aki − u1k p1i − u2k p2i − ··· − uk−1,k pk−1,i    i = k, k + 1, . . . n, I, II, . . . K
    uki = pki/pkk                                          k = 2, 3, . . . n

(j refers to Arabic, J refers to Roman numerals.)

Backward Solution

The elements aij of A⁻¹ are

    anj = unJ                                              j = 1, 2, . . . n;  J = I, II, . . . N
    an−1,j = un−1,J − un−1,n anj                           j = 1, 2, . . . (n − 1);  J = I, II, . . . (N − 1)
    an−2,j = un−2,J − un−2,n anj − un−2,n−1 an−1,j         j = 1, 2, . . . (n − 2);  J = I, II, . . . (N − 2)
    an−k,j = un−k,J − un−k,n anj − un−k,n−1 an−1,j − ··· − un−k,n−k+1 an−k+1,j
                                                           j = 1, 2, . . . (n − k);  J = I, II, . . . (N − k)

for k = 1, 2, . . . (n − 1), and aji = aij.

Numerical Example 6.14.1

Invert the matrix

    A = | 25  30  10 |
        | 30  40   6 |
        | 10   6  17 |

Doolittle pattern (left side | right side):

    a1:  25   30    10   |   1      0      0
    u1:   1    1.2   0.4 |   0.04   0      0
    a2:        40    6   |   0      1      0
    p2:         4   −6   |  −1.2    1
    u2:         1   −1.5 |  −0.3    0.25
    a3:              17  |   0      0      1
    p3:               4  |  −2.2    1.5    1
    u3:               1  |  −0.55   0.375  0.25

Enter row a1; elements in u1 = elements in a1 divided by a11 (= 25). Enter row a2:

    p22 = 40 − 1.2 · 30 = 4
    p23 = 6 − 1.2 · 10 = −6
    p2I = 0 − 1.2 · 1 = −1.2
    p2II = 1

Elements in u2 = elements in p2 divided by p22 (= 4). Enter row a3:

    p33 = 17 − 0.4 · 10 − (−1.5)(−6) = 4
    p3I = 0 − 0.4 · 1 − (−1.5)(−1.2) = −2.2
    p3II = 0 − (−1.5) · 1 = 1.5
    p3III = 1

Elements in u3 = elements in p3 divided by p33 (= 4). Copy the right-hand side of the last (third) u-row as the last row of the inverse:

    a31 = −0.55,  a32 = 0.375,  a33 = 0.25.

Then

    a21 = −0.3 − (−1.5)(−0.55) = −1.125
    a22 = 0.25 − (−1.5)(0.375) = 0.8125
    a23 = 0 − (−1.5)(0.25) = 0.375   (check against a32)

and

    a11 = 0.04 − 0.4(−0.55) − 1.2(−1.125) = 1.61
    a12 = 0 − 0.4(0.375) − 1.2(0.8125) = −1.125   (check against a21)
    a13 = 0 − 0.4(0.25) − 1.2(0.375) = −0.55   (check against a31)

Hence

    A⁻¹ = |  1.61   −1.125  −0.55  |
          | −1.125   0.8125  0.375 |
          | −0.55    0.375   0.25  |

6.15 A matrix is called orthogonal if A′ = A⁻¹ (or AA′ = I).

D.7 Traces

7.1 If A is a square matrix, then the trace of A is tr A = Σi aii, i.e., the sum of the diagonal elements.
7.2 If A is of order (m × k) and B of order (k × m), then tr(AB) = tr(BA).
7.3 If A is of order (m × k), B of order (k × r), and C of order (r × m), then

    tr(ABC) = tr(BCA) = tr(CAB).

7.3a If b is a column vector and c′ a row vector, then tr(Abc′) = tr(bc′A) = c′Ab, since the trace of a scalar is the scalar.
7.4 tr(A + gB) = tr A + g tr B, where g is a scalar.
7.5 tr (EL)ij A = tr A (EL)ij = aji, where (EL)ij is an elementary matrix as defined in 3.4.
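The inverse obtained in Example 6.14.1, the pivot-product determinant of 4.11, and the cyclic trace rule 7.2 can all be checked numerically. A NumPy sketch (the rectangular factors B and C are illustrative):

```python
import numpy as np

A = np.array([[25, 30, 10],
              [30, 40,  6],
              [10,  6, 17]], dtype=float)

# rule 4.11: determinant = product of the Doolittle pivots 25, 4, 4
assert np.isclose(np.linalg.det(A), 25 * 4 * 4)

# the inverse obtained in Example 6.14.1
expected = np.array([[ 1.61,  -1.125, -0.55 ],
                     [-1.125,  0.8125, 0.375],
                     [-0.55,   0.375,  0.25 ]])
assert np.allclose(np.linalg.inv(A), expected)

# rule 7.2: tr(AB) = tr(BA) even for rectangular factors
B = np.arange(6.0).reshape(3, 2)
C = np.arange(6.0).reshape(2, 3)
assert np.isclose(np.trace(B @ C), np.trace(C @ B))
```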
7.6 tr (EL)ij A (EL)rs B = ajr bsi. (These rules are useful in matrix differentiation.)
7.7 The trace of the second order of a square matrix A is the sum of the determinants of all (n choose 2) matrices of order (2 × 2) that can be formed by intersecting rows i and j with columns i and j (i < j):

    tr2 A = | a11  a12 | + | a11  a13 | + ··· + | a11  a1n | + | a22  a23 | + ··· + | an−1,n−1  an−1,n |
            | a21  a22 |   | a31  a33 |         | an1  ann |   | a32  a33 |         | an,n−1    ann    |

7.8 Similarly, the trace of the kth order is

    trk A = Σ | ai1i1  ai1i2  ···  ai1ik |
              | ai2i1  ai2i2  ···  ai2ik |
              |  ···                     |
              | aiki1  aiki2  ···  aikik |

where the sum extends over all combinations of n elements taken k at a time in order i1 < i2 < ··· < ik.

Example

    A = | 25  30  10 |
        | 30  40   6 |
        | 10   6  17 |

    tr A = 25 + 40 + 17 = 82
    tr2 A = 1069
    tr3 A = |A| = 25 · 4 · 4 = 400

(cf. 6.14 and the procedure stated in 4.11). Hence the characteristic equation is

    λ³ − 82λ² + 1069λ − 400 = 0.

The solutions (by Newton iteration) are

    λ1 = 65.86108,   λ2 = 15.75339,   λ3 = 0.38553.

These are the characteristic roots of A.
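The characteristic roots above can be reproduced directly with a NumPy eigenvalue call:

```python
import numpy as np

A = np.array([[25, 30, 10],
              [30, 40,  6],
              [10,  6, 17]], dtype=float)

tr1 = np.trace(A)            # 82
tr3 = np.linalg.det(A)       # 400 (= tr3 A, rule 7.10)
roots = np.sort(np.linalg.eigvalsh(A))[::-1]

assert tr1 == 82 and np.isclose(tr3, 400)
assert np.allclose(roots, [65.86108, 15.75339, 0.38553], atol=1e-3)
assert np.isclose(roots.sum(), tr1)     # rule 8.7: sum of roots = tr A
assert np.isclose(roots.prod(), tr3)
```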
7.9 Rules 7.2 and 7.3 (cyclic exchange) are valid for traces of kth order.
7.10 trn A = |A| if A is of order (n × n).

8.4 ch(A + gI) = g + ch(A).
8.5 ch(AB) = ch(BA), except that AB or BA may have additional roots equal to zero.
8.6 ch(A⁻¹) = 1/ch(A).
8.7 If λ1, λ2, . . . , λn are the roots of A, then Σi λi = tr A.

9.3b If m > n, then A^(−1) is of order (m × n) and AA^(−1) = I (n × n). Then A^(−1) is called an inverse to the right. In this case,
A^(−1)A ≠ I.

9.3c For a square, singular matrix, AA^(−1) ≠ I and A^(−1)A ≠ I.

Example 9.3.1

    A = | 3 |
        | 2 |
        | 1 |

The row vector [1/3  0  0] is an inverse from the left. The row vector

    [x   y   (1 − 3x − 2y)]

is a conditional inverse of the above matrix A for any values of x and y. It is called the generalized inverse of A.

Example 9.3.2

    A = | 1  2  3 |
        | 2  5  6 |
        | 3  7  9 |

A conditional inverse is

    A^(−1) = |  5  −2  0 |
             | −2   1  0 |
             |  0   0  0 |

Here it was obtained by inversion of the basis (the 2 × 2 matrix in the upper left-hand corner) and replacement of the other elements by zeros.

9.4 A square matrix A is called idempotent if AA = A² = A.
9.5 AA^(−1) and A^(−1)A are idempotent.
9.6 All characteristic roots of idempotent matrices are either zero or one.
9.7 A system of linear equations (m equations in n unknowns) Ax = b is called consistent if there exists some solution x that satisfies the equation system.

Example 9.7.1 The system

    x + y = 2
    2x + 2y = 4

is consistent.

Example 9.7.2 The system

    x + y = 2
    2x + 2y = 5

is inconsistent, for no pair of values (x, y) will satisfy this system.

9.8 If, in a system of equations (rectangular or square) Ax = b, AA^(−1)b = b for some conditional inverse A^(−1), then AA^(−1)b = b for every conditional inverse of A, and Ax = b is consistent. Conversely, if AA^(−1)b ≠ b for some conditional inverse A^(−1), then AA^(−1)b ≠ b for every conditional inverse of A, and Ax = b is inconsistent.
9.9 If Ax = b is consistent, then x = A^(−1)b is a solution (generally a different one for each A^(−1)).
9.10 Let y (p × 1) be a set of linear functions of the solutions x (n × 1) of a consistent system of equations Ax = b, given by the relation y = Cx. Then y = Cx is called unique if the same values of y will result regardless of which solution x is used.
Example 9.10.1

    3x + 4y + 5z = 22
    x + y + z = 6

is a consistent system. One solution would be

    x = 3,   y = 2,   z = 1.

Another solution is

    x = 2,   y = 4,   z = 0.

The linear function

    [7  9  11] | x |
               | y |  = u
               | z |

(7x + 9y + 11z = u) will have the same value (50) regardless of which of the two (or any other) solutions is substituted. Thus, u is unique.
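Rules 9.8 through 9.10 can be illustrated with the Moore-Penrose pseudoinverse, which is one particular conditional inverse. A NumPy sketch:

```python
import numpy as np

# Example 9.10.1: a consistent rectangular system
A = np.array([[3.0, 4.0, 5.0],
              [1.0, 1.0, 1.0]])
b = np.array([22.0, 6.0])

Aplus = np.linalg.pinv(A)              # one conditional inverse of A
assert np.allclose(A @ Aplus @ b, b)   # rule 9.8: the system is consistent

x = Aplus @ b                          # rule 9.9: one particular solution
assert np.allclose(A @ x, b)

# u = 7x + 9y + 11z is unique: it equals 50 for every solution
c = np.array([7.0, 9.0, 11.0])
for s in ([3.0, 2.0, 1.0], [2.0, 4.0, 0.0], x):
    assert np.isclose(c @ np.asarray(s), 50.0)

# Example 9.7.2 revisited: an inconsistent system fails the test of 9.8
A2 = np.array([[1.0, 1.0], [2.0, 2.0]])
b2 = np.array([2.0, 5.0])
assert not np.allclose(A2 @ np.linalg.pinv(A2) @ b2, b2)
```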
9.11 Let Ax = b be a consistent system of equations. For Cx = y to be a unique linear combination of the solution x, it is necessary and sufficient that CA^(−1)A = C. If this relation holds for some A^(−1), it will hold for every conditional inverse of A. If it is violated for some A^(−1), it will be violated for every A^(−1), and y will be nonunique.
9.12 Let A be of rank r and select r rows and r columns that form a basis of A. Then a conditional inverse of A can be obtained as follows: Invert the (r × r) matrix, place the inverse (without transposing) into the r rows corresponding to the column numbers and the r columns corresponding to the row numbers of the basis, and place zeros into all remaining elements. Thus, if A is of order (5 × 4) and rank 3, and if rows 1, 2, 4 and columns 2, 3, 4 are selected as a basis, A^(−1), of order (4 × 5), will contain the inverse elements of the basis in rows 2, 3, 4 and columns 1, 2, 4, and zeros elsewhere. (See Example 9.3.2.)
9.13 If A is a square, singular matrix of order (n × n) and rank r, let M be a matrix of order [n × (n − r)] and K another matrix of order [(n − r) × n], chosen in such a way that A + MK is nonsingular. Then (A + MK)⁻¹ is a conditional inverse of A.
Example 9.13.1

    A = |  3  −1  −1  −1 |
        | −1   3  −1  −1 |
        | −1  −1   3  −1 |
        | −1  −1  −1   3 |

is of order (4 × 4) and rank 3. Take M = j (column vector of ones) and K = j′ (row vector of ones). Then A + MK = A + jj′ = 4I. Hence (1/4)I is a conditional inverse of A.

9.14 The "Doolittle" method (see 6.14) can be employed to obtain a conditional inverse of a symmetric matrix. If, at any stage, the leading element of the p-row is zero, that cycle is disregarded.

Example 9.14.1

Invert, conditionally, the matrix

    A = |  4   2  −2   4 |
        |  2  17  11   6 |
        | −2  11  10   1 |
        |  4   6   1  30 |

Doolittle pattern (left side | right side):

    a1:  4   2     −2    4    |   1         0       0   0
    u1:  1   0.5   −0.5  1    |   0.25      0       0   0
    a2:      17    11    6    |   0         1       0   0
    p2:      16    12    4    |  −0.5       1
    u2:       1    0.75  0.25 |  −0.03125   0.0625
    a3:            10    1    |   0         0       1   0
    p3:             0         |   (cycle disregarded, since p33 = 0)
    a4:                  30   |   0         0       0   1
    p4:                  25   |  −0.875    −0.25    0   1
    u4:                   1   |  −0.035    −0.01    0   0.04

The backward solution yields

    A^(−1) = |  0.29625  −0.0225  0  −0.035 |
             | −0.0225    0.065   0  −0.01  |
             |  0         0       0   0     |
             | −0.035    −0.01    0   0.04  |

D.10 Matrix Differentiation

10.1a If the elements of a matrix Y (m × n) are functions of a scalar x, the expression ∂Y/∂x denotes a matrix of order (m × n) with elements ∂yij/∂x.
10.1b If the elements of a column (row) vector y (y′) are functions of a scalar x, the expression ∂y/∂x (∂y′/∂x) denotes a column (row) vector with elements ∂yi/∂x.
10.2a If y is a scalar function of m × n variables xij, arranged into a matrix X, the expression ∂y/∂X denotes a matrix with elements ∂y/∂xij. (Note: Partial differentiation is performed with respect to the element in row i and column j of X. If the same x-variable occurs in another place, as, e.g., in a symmetric matrix, differentiation with respect to the distinct (repeated) variable is performed in two stages.)

Example 10.2.1 If y = j′Xj (sum of all elements of a square matrix), ∂y/∂X is a matrix of ones. If X is symmetric, one can introduce a new notation xij = xji = zij. Then

    ∂y/∂zij = (∂y/∂xij)(∂xij/∂zij) + (∂y/∂xji)(∂xji/∂zij) = 1 + 1 = 2   (if i ≠ j)
            = 1   (if i = j)

10.2b If y is a scalar function of n variables xi, arranged into a column (row) vector x (x′), the expression ∂y/∂x (∂y/∂x′) denotes a column (row) vector with elements ∂y/∂xi.
10.3 If y is a column vector with m elements, each a function of n variables xi arranged into a row vector x′, the expression ∂y/∂x′ denotes a matrix with m rows and n columns, with elements ∂yi/∂xj.
10.4 ∂Y/∂yij = (EL)ij (see definition of (EL) in 3.4).
10.5 ∂UV/∂x = (∂U/∂x)V + U(∂V/∂x).
10.6 ∂AY/∂x = A(∂Y/∂x) (if elements of A are not functions of x).
10.7 ∂Y′/∂yij = (EL)ji.
10.8 ∂A′YA/∂x = A′(∂Y/∂x)A.
10.9 ∂Y′AY/∂x = (∂Y′/∂x)AY + Y′A(∂Y/∂x).
10.10 ∂a′x/∂x = a.
10.11 ∂x′x/∂x = 2x.
10.12 ∂x′Ax/∂x = Ax + A′x.
10.13 (Chain Rule No. 1) ∂y/∂x′ = (∂y/∂z′)(∂z/∂x′).
10.14 ∂Ax/∂x′ = A.
10.15 ∂ tr X/∂X = I.
10.16 ∂ tr AX/∂X = ∂ tr XA/∂X = A′.
10.17 ∂ tr AXB/∂X = A′B′.
10.18 ∂ tr X′AX/∂X = AX + A′X.
10.19 ∂ log |X|/∂X = (X′)⁻¹ (log to base e).
10.20 ∂Y⁻¹/∂x = −Y⁻¹(∂Y/∂x)Y⁻¹.
10.21 (Chain Rule No. 2) ∂y/∂x = tr (∂y/∂Z)(∂Z′/∂x), where y and x are scalars. The scalar y is a function of m × n variables zij, and each of the zij is a function of x.

Example 10.21.1 Obtain ∂ log |R − FF′|/∂F, where R is symmetric. By Chain Rule No. 2:

    ∂ log |R − FF′|/∂fij
      = tr [∂ log |R − FF′|/∂(R − FF′)][∂(R − FF′)/∂fij]     (since R and FF′ are symmetric)
      = tr (R − FF′)⁻¹[∂(R − FF′)/∂fij]                      (by 10.19)
      = −tr (R − FF′)⁻¹[(∂F/∂fij)F′ + F(∂F′/∂fij)]           (by 10.5)
      = −tr (R − FF′)⁻¹[(EL)ij F′ + F(EL)ji]                 (by 10.4 and 10.7)
      = −tr (EL)ij F′(R − FF′)⁻¹ − tr (EL)ji (R − FF′)⁻¹F    (by 7.3)
      = −[F′(R − FF′)⁻¹]ji − [(R − FF′)⁻¹F]ij                (by 7.5; [ ]ij denotes the (i, j) element of the matrix in brackets)
      = −2[(R − FF′)⁻¹F]ij                                   (since R − FF′ is symmetric).

Hence, by definition 10.2a,

    ∂ log |R − FF′|/∂F = −2(R − FF′)⁻¹F.

10.22 |∂y/∂x′| = J(y; x) is called the Jacobian or functional determinant used in variable transformation of multiple integrals. Formally, if y is a column vector with m elements, each a function of m variables xi arranged into a row vector x′, then

    dx1 dx2 . . . dxm = |∂y/∂x′|⁻¹ dy1 dy2 . . . dym.

10.23 For a scalar y (a function of m variables xi) to attain a stationary value, it is necessary that

    ∂y/∂x = 0.

10.24 For a stationary value to be a minimum (maximum), it is necessary that ∂(∂y/∂x)/∂x′ be a positive-definite (negative-definite) matrix for the value of x satisfying 10.23.

Example 10.24.1 Find the values of b that minimize u = x′x (the sum of squares of the xi), where x = y − Ab (with y and A known and fixed).

    ∂u/∂b′ = (∂u/∂x′)(∂x/∂b′)   (by Chain Rule No. 1)
           = −2x′A              (by 10.11 and 10.14)

Hence ∂u/∂b = −2A′x = −2A′(y − Ab), and for a stationary value, by 10.23, it is necessary that

    A′Ab̂ = A′y

where b̂ denotes the values that make u stationary. Now,

    ∂(∂u/∂b)/∂b′ = 2 ∂(A′Ab)/∂b′ = 2A′A.

If A has real elements, and if A′A is nonsingular, then it is positive-definite (since, given an arbitrary real x ≠ 0, x′A′Ax = z′z with z = Ax; thus this is a sum of squares). Hence, b̂ minimizes u.
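Example 10.24.1 is ordinary least squares: the stationary point solves the normal equations A′Ab̂ = A′y. A small numerical sketch (the data are random placeholders, not from the handbook):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))   # known, fixed matrix (placeholder data)
y = rng.normal(size=10)        # known, fixed vector

# stationary value of u = x'x with x = y - Ab:  A'A b_hat = A'y  (by 10.23)
b_hat = np.linalg.solve(A.T @ A, A.T @ y)

# A'A is positive-definite here, so b_hat is the minimum (by 10.24):
u_min = np.sum((y - A @ b_hat) ** 2)
for _ in range(100):
    b = b_hat + rng.normal(scale=0.1, size=3)
    assert np.sum((y - A @ b) ** 2) >= u_min
```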
10.25 (Generalized Newton Iteration) Let x0 be an initial estimate (m elements) of the roots of the m equations

    f(x) = 0
where the m elements of the column vector f are each functions of x1, x2, . . . xm. Then an improved root is

    x1 = x0 − Q0⁻¹ f(x0),

where Q0 is the matrix of derivatives ∂f/∂x′ evaluated at x = x0. The usual procedure consists of evaluating f(x0), then solving Q0u = f(x0) for u. Then x1 = x0 − u.

Example 10.25.1

Solve

    f1(x, y) = x³ − x²y + y² − 3.526 = 0
    f2(x, y) = x³ + y³ − 14.911 = 0

    Q = | 3x² − 2xy   2y − x² |
        | 3x²         3y²     |

Take x0 = 1, y0 = 2:

    f1(x0, y0) = −0.526,   f2(x0, y0) = −5.911

    Q0 = | −1    3 |
         |  3   12 |

    −u + 3v = −0.526
    3u + 12v = −5.911

yields u = −0.55, v = −0.36. Then

    x1 = x0 − u = 1.55,   y1 = y0 − v = 2.36
    f1(x1, y1) = 0.0976,   f2(x1, y1) = 1.9572

    Q1 = | −0.1085    2.3175 |
         |  7.2075   16.7088 |

    −0.1085u + 2.3175v = 0.0976
    7.2075u + 16.7088v = 1.9572

yields u = 0.157, v = 0.049. Then

    x2 = x1 − u = 1.393,   y2 = y1 − v = 2.311
    f1(x2, y2) = 0.03337,   f2(x2, y2) = 0.13443

    Q2 = | −0.61710    2.68155 |
         |  5.82135   16.02216 |

    −0.61710u + 2.68155v = 0.03337
    5.82135u + 16.02216v = 0.13443

yields u = −0.0068, v = 0.0109. Then

    x3 = x2 − u = 1.3998,   y3 = y2 − v = 2.3001.

(The exact roots are x = 1.4 and y = 2.3.)

D.11 Statistical Matrix Forms

11.1 Let E denote the expectation operator, and let y be a set of p random variables. Then E(y) = μ states that E(yi) = μi (i = 1, 2, . . . , p).
11.2 Let var denote variance. Then var(y) = Σ denotes a p × p symmetric matrix whose off-diagonal elements are cov(yi, yj) and whose diagonal elements are var(yi), where cov denotes covariance.
11.3 E(Ay + b) = A E(y) + b = Aμ + b.
11.4 var(Ay + b) = A var(y) A′ = AΣA′.
11.5 cov(y, z′) denotes a matrix with elements cov(yi, zj).
11.6 cov(z, y′) = [cov(y, z′)]′.
11.7 cov(Ay + b, z′C + d′) = A cov(y, z′) C.
11.8 var(y) = E(yy′) − E(y)E(y′);  cov(y, z′) = E(yz′) − E(y)E(z′).
11.9 (Expected "sum of squares") E(y′Qy) = tr[Q var(y)] + E(y′) Q E(y).
11.10 If a matrix Q is symmetric and positive-definite, one can find a lower triangular matrix T (with positive diagonal terms, for uniqueness) such that TT′ = Q. The matrices T and T⁻¹ can be obtained from the Doolittle pattern (6.14) (Gauss elimination or square-root method) as follows: In each cycle, divide the p-row (left- and right-hand side) by √pii (instead of by pii for the u-row). Thus, obtain rows designated as t-rows. The left-hand side (Arabic subscripts) is T′, and the right-hand side (Roman subscripts) is T⁻¹.
11.11 If a coordinate system x is oblique, and if the cosines between reference vectors (scalar products of basis vectors of unit length) are stated in a symmetric matrix Q, then T⁻¹x = y is an orthogonal system, where T is obtained from Q by 11.10.
11.12 The likelihood function of a sample of size n from a multivariate normal distribution (p responses), with common variance-covariance matrix Σ (p × p), and with means or main effects replaced by maximum-likelihood or least-squares estimates, can be written as

    log L = −(np/2) log 2π − (n/2) log |Σ| − (n/2) tr Σ⁻¹S
where Σ (p × p) is the common variance-covariance matrix, S is its maximum-likelihood estimate (the matrix of sums of squares and products due to error, divided by the sample size n), and log is base e.
11.13 If Σ has a structure under a model or null hypothesis, and if elements of Σ are to be estimated by maximum likelihood, two cases can be distinguished: (11.14) Σ⁻¹ has the same structure (intraclass correlation, mixed model, compound symmetry, factor analysis); (11.15) Σ⁻¹ has a different structure (autocorrelation, simplex structure).
11.14 If the structures of Σ and Σ⁻¹ are identical, and if u and v are elements (or functions of elements) of Σ⁻¹, then estimates of Σ can be obtained from the relations (usually requiring Newton iteration; see 10.25):

    ∂ log L/∂u = (n/2) tr A(Σ − S)

where A = ∂Σ⁻¹/∂u is frequently an elementary matrix (see 3.4, especially rules 7.5 and 7.6), and

    ∂² log L/∂u∂v = (n/2) tr (∂A/∂v)(Σ − S) − (n/2) tr AΣBΣ

where B = ∂Σ⁻¹/∂v. These rules are useful to obtain Newton iterations and asymptotic variance-covariance matrices of the estimates. The log is base e.
11.15 If the structures of Σ and Σ⁻¹ are different, then an estimate of Σ can be obtained from the relations

    ∂ log L/∂x = −(n/2) tr A(Σ⁻¹ − Q),

where Q = Σ⁻¹SΣ⁻¹ and A = ∂Σ/∂x (see comments in 11.14), and

    ∂² log L/∂x∂y = −(n/2) tr (∂A/∂y)(Σ⁻¹ − Q) + (n/2) tr AΣ⁻¹B(Σ⁻¹ − Q) − (n/2) tr AQBΣ⁻¹,

where B = ∂Σ/∂y, and x and y are elements (or functions of elements) of Σ. The comments of 11.14 apply, but the iterative procedure is considerably more complex. The log is base e.
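Rules 11.14 and 11.15 lean on the generalized Newton iteration of 10.25. As a minimal numerical sketch, the two-equation system solved step by step in Section D.10 can be iterated in NumPy:

```python
import numpy as np

# Generalized Newton iteration (rule 10.25): x_{k+1} = x_k - Q_k^{-1} f(x_k),
# applied to the two-equation system of the Section D.10 example.
def f(x, y):
    return np.array([x**3 - x**2 * y + y**2 - 3.526,
                     x**3 + y**3 - 14.911])

def Qmat(x, y):
    # matrix of derivatives df/dx' (the Jacobian)
    return np.array([[3 * x**2 - 2 * x * y, 2 * y - x**2],
                     [3 * x**2,             3 * y**2    ]])

x = np.array([1.0, 2.0])                  # initial estimate x0
for _ in range(8):
    u = np.linalg.solve(Qmat(*x), f(*x))  # solve Q u = f for u
    x = x - u                             # improved root
print(x)                                  # approximately [1.4, 2.3]
```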
Appendix E: Vector Analysis

E.1 Definitions
E.2 Vector Algebra
E.3 Vectors in Space
E.4 The Scalar, Dot, or Inner Product of Two Vectors V1 and V2
E.5 The Vector or Cross Product of Vectors V1 and V2
E.6 Scalar Triple Product
E.7 Vector Triple Product
E.8 Geometry of the Plane, Straight Line, and Sphere
E.9 Differentiation of Vectors
E.10 Geometry of Curves in Space
E.11 Differential Operators—Rectangular Coordinates
E.12 Transformation of Integrals
E.13 Gauss' Theorem (Green's Theorem)
E.14 Stokes' Theorem
E.15 Green's Theorem
E.1 Definitions

Any quantity that is completely determined by its magnitude is called a scalar. Examples are mass, density, temperature, etc. Any quantity that is completely determined by its magnitude and direction is called a vector. Examples are velocity, acceleration, force, etc. A vector quantity is represented by a directed line segment, the length of which represents the magnitude of the vector. A vector quantity is usually denoted by a boldface letter such as V. Two vectors V1 and V2 are equal to one another if they have equal magnitudes and act in the same direction. A negative vector, written −V, is one that acts in the opposite direction to V but is of equal magnitude. If we represent the magnitude of V by v, we write |V| = v. A vector parallel to V, but with magnitude equal to the reciprocal of the magnitude of V, is written V⁻¹ or 1/V. The unit vector V/v (v ≠ 0) is the vector that has the same direction as V but a magnitude of unity (sometimes represented as V0 or v̂).

E.2 Vector Algebra

The vector sum of V1 and V2 is represented by V1 + V2. The vector sum of V1 and −V2, or the difference of the vector V2 from V1, is represented by V1 − V2. If r is a scalar, then rV = Vr and represents a vector r times the magnitude of V, in the same direction as V if r is positive and in the opposite direction if r is negative. If r and s are scalars and V1, V2, V3 are vectors, the following rules of scalars and vectors hold:

    V1 + V2 = V2 + V1
    (r + s)V1 = rV1 + sV1;   r(V1 + V2) = rV1 + rV2
    V1 + (V2 + V3) = (V1 + V2) + V3 = V1 + V2 + V3.

E.3 Vectors in Space

A plane is described by two distinct vectors V1 and V2. Should these vectors not intersect one another, then one is displaced parallel to itself until they do (Figure E.1). Any other vector V lying in this plane is given by

    V = rV1 + sV2.

A position vector specifies the position in space of a point relative to a fixed origin. If, therefore, V1 and V2 are the position vectors of the points A and B relative to the origin O, then any point P on the line AB has a position vector V given by

    V = rV1 + (1 − r)V2.

The scalar r can be taken as the parametric representation of P, since r = 0 implies P = B and r = 1 implies P = A (Figure E.2). If P divides the line AB in the ratio r:s, then

    V = [r/(r + s)]V1 + [s/(r + s)]V2.

The vectors V1, V2, V3, . . . , Vn are said to be linearly dependent if there exist scalars r1, r2, r3, . . . , rn, not all zero, such that

    r1V1 + r2V2 + ··· + rnVn = 0.

A vector V is linearly dependent on the set of vectors V1, V2, V3, . . . , Vn if

    V = r1V1 + r2V2 + r3V3 + ··· + rnVn.

Three vectors are linearly dependent if and only if they are coplanar.
All points in space can be uniquely determined by linear dependence on three base vectors, i.e., three vectors any one of which is linearly independent of the other two. The simplest set of base vectors is the set of unit vectors along the coordinate axes Ox, Oy, and Oz. These are usually designated by i, j, and k, respectively. If V is a vector in space and a, b, and c are the respective magnitudes of the projections of the vector along the axes, then

V = ai + bj + ck,
v = √(a² + b² + c²),

and the direction cosines of V are

cos α = a/v, cos β = b/v, cos γ = c/v.

The law of addition yields

V1 + V2 = (a1 + a2)i + (b1 + b2)j + (c1 + c2)k.

FIGURE E.1 Two vectors V1 and V2 defining a plane.

FIGURE E.2 The line AB parametrized by r: P = B at r = 0 and P = A at r = 1.

E.4 The Scalar, Dot, or Inner Product of Two Vectors V1 and V2

This product is represented as V1 · V2 and is defined to be equal to v1v2 cos θ, where θ is the angle from V1 to V2, i.e.,

V1 · V2 = v1v2 cos θ.

The following rules apply for this product:

V1 · V2 = a1a2 + b1b2 + c1c2 = V2 · V1;

it should be noted that scalar multiplication is commutative. Also,

(V1 + V2) · V3 = V1 · V3 + V2 · V3
V1 · (V2 + V3) = V1 · V2 + V1 · V3.

If V1 is perpendicular to V2, then V1 · V2 = 0, and if V1 is parallel to V2 (say V1 = rV2), then V1 · V2 = v1v2 = rv2². In particular,

i · i = j · j = k · k = 1

and

i · j = j · k = k · i = 0.
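As a small numeric illustration of the dot product, magnitude, and direction cosines just defined, the following Python sketch may help (the helper names are ours, not the handbook's):

```python
import math

# V = ai + bj + ck represented as a tuple (a, b, c).
def dot(u, v):
    """Scalar (dot) product: V1 . V2 = a1*a2 + b1*b2 + c1*c2."""
    return sum(ui * vi for ui, vi in zip(u, v))

def magnitude(v):
    """v = sqrt(a^2 + b^2 + c^2)."""
    return math.sqrt(dot(v, v))

def direction_cosines(v):
    """cos(alpha) = a/v, cos(beta) = b/v, cos(gamma) = c/v."""
    m = magnitude(v)
    return tuple(c / m for c in v)

V1 = (1.0, 2.0, 2.0)                  # magnitude 3
cosines = direction_cosines(V1)

# Direction cosines always satisfy cos^2(alpha) + cos^2(beta) + cos^2(gamma) = 1:
check = sum(c * c for c in cosines)

# Angle between two vectors, from V1 . V2 = v1 v2 cos(theta):
V2 = (2.0, 0.0, 0.0)
theta = math.acos(dot(V1, V2) / (magnitude(V1) * magnitude(V2)))
```

The identity cos²α + cos²β + cos²γ = 1 follows directly from v² = a² + b² + c².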
E.5 The Vector or Cross Product of Vectors V1 and V2

This product is represented as V1 × V2 and is defined to be equal to v1v2(sin θ)1, where θ is the angle from V1 to V2 and 1 is a unit vector perpendicular to the plane of V1 and V2, so directed that a right-hand screw driven in the direction of 1 would carry V1 into V2, i.e.,

V1 × V2 = v1v2(sin θ)1

and

tan θ = |V1 × V2| / (V1 · V2).

The following rules apply for vector products:

V1 × V2 = −V2 × V1
V1 × (V2 + V3) = V1 × V2 + V1 × V3
(V1 + V2) × V3 = V1 × V3 + V2 × V3
V1 × (V2 × V3) = V2(V3 · V1) − V3(V1 · V2)
i × i = j × j = k × k = 0 (zero vector)
i × j = k, j × k = i, k × i = j.
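These rules can be checked numerically; the sketch below (component formulas follow the determinant expansion given in the next section, helper names ours) verifies anticommutativity, the unit-vector products, and the tan θ relation:

```python
import math

def cross(u, v):
    """V1 x V2 for 3-component tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

V1, V2 = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
c12 = cross(V1, V2)
c21 = cross(V2, V1)
anti = tuple(a + b for a, b in zip(c12, c21))   # V1 x V2 + V2 x V1 = 0

# tan(theta) = |V1 x V2| / (V1 . V2):
tan_theta = math.sqrt(dot(c12, c12)) / dot(V1, V2)
```

Note that V1 × V2 is perpendicular to both factors, so dot(c12, V1) and dot(c12, V2) both vanish.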
If V1 = a1i + b1j + c1k, V2 = a2i + b2j + c2k, V3 = a3i + b3j + c3k, then

            | i   j   k  |
V1 × V2 =   | a1  b1  c1 | = (b1c2 − b2c1)i + (c1a2 − c2a1)j + (a1b2 − a2b1)k.
            | a2  b2  c2 |

It should be noted that since V1 × V2 = −V2 × V1, the vector product is not commutative.

E.6 Scalar Triple Product

There is only one possible interpretation of the expression V1 · V2 × V3, and that is V1 · (V2 × V3), which is obviously a scalar. Further,

                                                   | a1  b1  c1 |
V1 · (V2 × V3) = (V1 × V2) · V3 = V2 · (V3 × V1) = | a2  b2  c2 | = v1v2v3 cos φ sin θ,
                                                   | a3  b3  c3 |

where θ is the angle between V2 and V3, and φ is the angle between V1 and the normal to the plane of V2 and V3. This product is called the scalar triple product and is written as [V1V2V3]. The determinant indicates that it can be considered as the volume of the parallelepiped whose three determining edges are V1, V2, and V3. It also follows that cyclic permutation of the subscripts does not change the value of the scalar triple product, so that

[V1V2V3] = [V2V3V1] = [V3V1V2],

but

[V1V2V3] = −[V2V1V3], etc.,

and

[V1V1V2] ≡ 0, etc.

Given three noncoplanar reference vectors V1, V2, and V3, the reciprocal system is given by V1*, V2*, and V3*, where

1 = V1 · V1* = V2 · V2* = V3 · V3*,
0 = V1 · V2* = V1 · V3* = V2 · V1*, etc.,

and

V1* = (V2 × V3)/[V1V2V3], V2* = (V3 × V1)/[V1V2V3], V3* = (V1 × V2)/[V1V2V3].

The system i, j, k is its own reciprocal.

E.7 Vector Triple Product

The product V1 × (V2 × V3) defines the vector triple product. Obviously, in this case, the brackets are vital to the definition:

V1 × (V2 × V3) = (V1 · V3)V2 − (V1 · V2)V3,

i.e., it is a vector, perpendicular to V1, lying in the plane of V2, V3. Similarly,

(V1 × V2) × V3 = (V1 · V3)V2 − (V2 · V3)V1.

Also,

V1 × (V2 × V3) + V2 × (V3 × V1) + V3 × (V1 × V2) ≡ 0.

If V1 × (V2 × V3) = (V1 × V2) × V3, then V1, V2, V3 form an orthogonal set. Thus, i, j, k form an orthogonal set.

E.8 Geometry of the Plane, Straight Line, and Sphere

The position vectors of the fixed points A, B, C, D relative to O are V1, V2, V3, V4, and the position vector of the variable point P is V. The vector form of the equation of the straight line through A parallel to V2 is

V = V1 + rV2

or

(V − V1) = rV2 or (V − V1) × V2 = 0,

while that of the plane through A perpendicular to V2 is

(V − V1) · V2 = 0.

The equation of the line AB is

V = rV1 + (1 − r)V2,

and those of the bisectors of the angles between V1 and V2 are

V = r(V1/v1 + V2/v2)

or

V = r(v̂1 + v̂2).
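Membership in the line and plane just described can be tested directly from the cross- and dot-product conditions; the points and helper names in this sketch are ours:

```python
def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

V1 = (1.0, 0.0, 2.0)    # position vector of A
V2 = (0.0, 3.0, 4.0)    # direction (for the line) / normal (for the plane)

# A point on the line through A parallel to V2 (taking r = 2):
P_line = tuple(a + 2.0 * d for a, d in zip(V1, V2))
on_line = cross(sub(P_line, V1), V2)      # (V - V1) x V2 = 0 on the line

# A point chosen to satisfy (V - V1) . V2 = 0, i.e., lying in the plane
# through A perpendicular to V2:
P_plane = (1.0, 4.0, -1.0)
in_plane = dot(sub(P_plane, V1), V2)
```

The cross-product condition vanishes exactly when (V − V1) is parallel to V2, and the dot-product condition when (V − V1) is perpendicular to V2.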
The perpendicular from C to the line through A parallel to V2 has as its equation

V = V1 − V3 − [v̂2 · (V1 − V3)]v̂2.

The condition for the intersection of the two lines,

V = V1 + rV3

and

V = V2 + sV4,

is

[(V1 − V2)V3V4] = 0.

The common perpendicular to the above two lines is the line of intersection of the two planes

[(V − V1)V3(V3 × V4)] = 0

and

[(V − V2)V4(V3 × V4)] = 0,

and the length of this perpendicular is

[(V1 − V2)V3V4] / |V3 × V4|.

In general, the vector equation

V · V2 = r

defines the plane that is perpendicular to V2, and the perpendicular distance from A to this plane is

(r − V1 · V2)/v2.

The distance from A, measured along a line parallel to V3, is

(r − V1 · V2)/(V2 · v̂3) or (r − V1 · V2)/(v2 cos θ),

where θ is the angle between V2 and V3. (If this plane contains the point C, then r = V3 · V2, and if it passes through the origin, then r = 0.)

Given two planes

V · V1 = r

and

V · V2 = s,

any plane through the line of intersection of these two planes is given by

V · (V1 + λV2) = r + λs,

where λ is a scalar parameter. In particular, λ = ±v1/v2 yields the equations of the two planes bisecting the angle between the given planes.

The plane through A parallel to the plane of V2, V3 is

V = V1 + rV2 + sV3

or

(V − V1) · V2 × V3 = 0,

so that the expansion in rectangular Cartesian coordinates (V = xi + yj + zk) yields

| (x − a1)  (y − b1)  (z − c1) |
|  a2        b2        c2      | = 0,
|  a3        b3        c3      |

which is obviously the usual linear equation in x, y, and z.

The plane through AB parallel to V3 is given by

[(V − V1)(V1 − V2)V3] = 0

or

[VV2V3] − [VV1V3] − [V1V2V3] = 0.

The plane through the three points A, B, and C is

V = V1 + s(V2 − V1) + t(V3 − V1)

or

V = rV1 + sV2 + tV3 (r + s + t ≡ 1)

or

[(V − V1)(V1 − V2)(V2 − V3)] = 0.

The equation of the line perpendicular to the plane ABC is

V = r(V1 × V2 + V2 × V3 + V3 × V1),

and the distance of the plane from the origin is

[V1V2V3] / |(V2 − V1) × (V3 − V1)|.
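The distance of the plane through three points from the origin, [V1V2V3] / |(V2 − V1) × (V3 − V1)|, can be checked numerically; in this sketch (points and helper names ours) the three points lie in the plane z = 2, so the distance should be 2:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def triple(u, v, w):
    """Scalar triple product [UVW] = U . (V x W)."""
    return dot(u, cross(v, w))

# Three points on the plane z = 2:
V1, V2, V3 = (1.0, 0.0, 2.0), (0.0, 1.0, 2.0), (1.0, 1.0, 2.0)

n = cross(sub(V2, V1), sub(V3, V1))     # normal to the plane ABC
distance = abs(triple(V1, V2, V3)) / math.sqrt(dot(n, n))
```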
Equivalently, the plane through the three points A, B, and C satisfies

[VV1V2] + [VV2V3] + [VV3V1] − [V1V2V3] = 0.

For four points A, B, C, D to be coplanar, there must exist scalars r, s, t, u, not all zero, such that

rV1 + sV2 + tV3 + uV4 ≡ 0 and r + s + t + u ≡ 0.

The following formulae relate to a sphere when the vectors are taken to lie in three-dimensional space, and to a circle when the space is two-dimensional. For a circle in three dimensions, take the intersection of the sphere with a plane.

The equation of a sphere with center O and radius OA is

V · V = v1²

or

(V − V1) · (V + V1) = 0,

while that of a sphere with center B and radius v1 is

(V − V2) · (V − V2) = v1²

or

V · (V − 2V2) = v1² − v2².

If the above sphere passes through the origin, then

V · (V − 2V2) = 0.

Note that in two-dimensional polar coordinates this is simply

r = 2a cos θ,

while in three-dimensional Cartesian coordinates it is

x² + y² + z² − 2(a2x + b2y + c2z) = 0.

The equation of a sphere having the points A and B as the extremities of a diameter is

(V − V1) · (V − V2) = 0.

The square of the length of the tangent from C to the sphere with center B and radius v1 is given by

(V3 − V2) · (V3 − V2) − v1².

The condition that the plane V · V3 = s is tangential to the sphere (V − V2) · (V − V2) = v1² is

(s − V3 · V2)(s − V3 · V2) = v1²v3².

The equation of the tangent plane at D, on the surface of the sphere (V − V2) · (V − V2) = v1², is

(V − V4) · (V4 − V2) = 0

or

V · V4 − V2 · (V + V4) = v1² − v2².

The condition that the two circles (V − V2) · (V − V2) = v1² and (V − V4) · (V − V4) = v3² intersect orthogonally is clearly

(V2 − V4) · (V2 − V4) = v1² + v3².

The polar plane of D with respect to the circle (V − V2) · (V − V2) = v1² is

V · V4 − V2 · (V + V4) = v1² − v2².

Any sphere through the intersection of the two spheres (V − V2) · (V − V2) = v1² and (V − V4) · (V − V4) = v3² is given by

(V − V2) · (V − V2) + λ(V − V4) · (V − V4) = v1² + λv3²,

while the radical plane of two such spheres is

V · (V4 − V2) = (1/2)(v1² − v2² − v3² + v4²).

E.9 Differentiation of Vectors

If V1 = a1i + b1j + c1k and V2 = a2i + b2j + c2k, and if V1 and V2 are functions of the scalar t, then

d(V1 + V2 + ···)/dt = dV1/dt + dV2/dt + ···,

where

dV1/dt = (da1/dt)i + (db1/dt)j + (dc1/dt)k, etc.,

and

d(V1 · V2)/dt = (dV1/dt) · V2 + V1 · (dV2/dt),
d(V1 × V2)/dt = (dV1/dt) × V2 + V1 × (dV2/dt),
V · dV/dt = v dv/dt.

In particular, if V is a vector of constant length, then the right-hand side of the last equation is identically zero, showing that V is perpendicular to its derivative.
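The last remark, that a constant-length vector is perpendicular to its derivative, is easy to confirm with a finite-difference sketch (the particular curve, a unit circle, is our choice):

```python
import math

def V(t):
    """A vector of constant length: |V(t)| = 1 for all t."""
    return (math.cos(t), math.sin(t), 0.0)

def dV_dt(t, h=1e-6):
    """Central finite-difference approximation to dV/dt."""
    return tuple((a - b) / (2.0 * h) for a, b in zip(V(t + h), V(t - h)))

t0 = 0.7
v = V(t0)
dv = dV_dt(t0)
perp = sum(a * b for a, b in zip(v, dv))    # V . dV/dt, should be ~0
```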
The derivatives of the triple products are

d[V1V2V3]/dt = [(dV1/dt)V2V3] + [V1(dV2/dt)V3] + [V1V2(dV3/dt)]

and

d{V1 × (V2 × V3)}/dt = (dV1/dt) × (V2 × V3) + V1 × ((dV2/dt) × V3) + V1 × (V2 × (dV3/dt)).

E.10 Geometry of Curves in Space

s: the length of arc, measured from some fixed point on the curve (Figure E.3).
V1: the position vector of the point A on the curve.
V1 + δV1: the position vector of the point P in the neighborhood of A.
t̂: the unit tangent to the curve at the point A, measured in the direction of s increasing.

The normal plane is the plane that is perpendicular to the unit tangent. The principal normal is defined as the intersection of the normal plane with the plane defined by V1 and V1 + δV1 in the limit as δV1 → 0.

n̂: the unit (principal) normal at the point A. The plane defined by t̂ and n̂ is called the osculating plane (alternatively, plane of curvature or local plane).
ρ: the radius of curvature at A.
δθ: the angle subtended at the origin by δV1.
κ: the curvature, κ = dθ/ds = 1/ρ.
b̂: the unit binormal, i.e., the unit vector at the point A that is parallel to t̂ × n̂.
λ: the torsion of the curve at A.

Frenet's formulae:

dt̂/ds = κn̂
dn̂/ds = −κt̂ + λb̂
db̂/ds = −λn̂.

FIGURE E.3 A point A on a space curve, with unit tangent t̂, principal normal n̂, binormal b̂, a neighboring point P at V1 + δV1, radius of curvature ρ, and arc length s measured from s = 0.

The following formulae are also applicable:

Unit tangent:

t̂ = dV1/ds.

Equation of the tangent:

(V − V1) × t̂ = 0 or V = V1 + qt̂.

Unit normal:

n̂ = (1/κ) d²V1/ds².

Equation of the normal plane:

(V − V1) · t̂ = 0.

Equation of the normal:

(V − V1) × n̂ = 0 or V = V1 + rn̂.

Unit binormal:

b̂ = t̂ × n̂.

Equation of the binormal:

(V − V1) × b̂ = 0 or V = V1 + ub̂
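For a concrete curve, the curvature and torsion of the circular helix r(t) = (a cos t, a sin t, bt) can be computed from the parameter-t forms κ = |r′ × r″|/|r′|³ and λ = [r′ r″ r‴]/|r′ × r″|², which are equivalent to the arc-length Frenet formulae above; the classical closed forms κ = a/(a² + b²) and λ = b/(a² + b²) serve as the check (helix parameters are our choice):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

a, b, t = 3.0, 4.0, 0.5
r1 = (-a*math.sin(t),  a*math.cos(t), b)     # r'(t)
r2 = (-a*math.cos(t), -a*math.sin(t), 0.0)   # r''(t)
r3 = ( a*math.sin(t), -a*math.cos(t), 0.0)   # r'''(t)

c = cross(r1, r2)
kappa = math.sqrt(dot(c, c)) / dot(r1, r1) ** 1.5   # curvature
lam = dot(c, r3) / dot(c, c)                        # torsion
```

For a = 3, b = 4 both values are independent of t, as expected for a helix.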
or

V = V1 + w(dV1/ds × d²V1/ds²).

Equation of the osculating plane:

[(V − V1)t̂n̂] = 0

or

(V − V1) · (dV1/ds × d²V1/ds²) = 0.

A geodetic line on a surface is a curve, the osculating plane of which is everywhere normal to the surface. The differential equation of the geodetic is

n̂ · (dV1 × d²V1) = 0.

E.11 Differential Operators—Rectangular Coordinates

dS = (∂S/∂x)dx + (∂S/∂y)dy + (∂S/∂z)dz.

By definition,

∇ ≡ del ≡ i ∂/∂x + j ∂/∂y + k ∂/∂z

and

∇² ≡ Laplacian ≡ ∂²/∂x² + ∂²/∂y² + ∂²/∂z².

If S is a scalar function, then

∇S ≡ grad S = (∂S/∂x)i + (∂S/∂y)j + (∂S/∂z)k.

Grad S defines both the direction and magnitude of the maximum rate of increase of S at any point. Hence the name gradient and also its vectorial nature. ∇S is independent of the choice of rectangular coordinates:

∇S = (∂S/∂n)n̂,

where n̂ is the unit normal to the surface S = constant, in the direction of S increasing.

FIGURE E.4 Level surfaces S and S + dS, with position vector V, increment dV, and unit normal n̂.

The total derivative of S at a point having the position vector V is given by (Figure E.4)

dS = (∂S/∂n)n̂ · dV = dV · ∇S,

and the directional derivative of S in the direction of U is

U · ∇S = U · (∇S) = (U · ∇)S.

Similarly, the directional derivative of the vector V in the direction of U is

(U · ∇)V.

The distributive law holds for finding a gradient. Thus, if S and T are scalar functions,

∇(S + T) = ∇S + ∇T.

The associative law becomes the rule for differentiating a product:

∇(ST) = S∇T + T∇S.

If V is a vector function with the magnitudes of the components parallel to the three coordinate axes Vx, Vy, Vz, then

∇ · V ≡ div V = ∂Vx/∂x + ∂Vy/∂y + ∂Vz/∂z.

The divergence obeys the distributive law. Thus, if V and U are vector functions, then

∇ · (V + U) = ∇ · V + ∇ · U
∇ · (SV) = (∇S) · V + S(∇ · V)
∇ · (U × V) = V · (∇ × U) − U · (∇ × V).

As with the gradient of a scalar, the divergence of a vector is invariant under a transformation from one set of rectangular coordinates to another.

∇ × V ≡ curl V (sometimes ∇ ∧ V or rot V)
      = (∂Vz/∂y − ∂Vy/∂z)i + (∂Vx/∂z − ∂Vz/∂x)j + (∂Vy/∂x − ∂Vx/∂y)k

        | i      j      k    |
      = | ∂/∂x   ∂/∂y   ∂/∂z |.
        | Vx     Vy     Vz   |
The curl (or rotation) of a vector is a vector that is invariant under a transformation from one set of rectangular coordinates to another, and

∇ × (U + V) = ∇ × U + ∇ × V
∇ × (SV) = (∇S) × V + S(∇ × V)
∇ × (U × V) = (V · ∇)U − (U · ∇)V + U(∇ · V) − V(∇ · U)
grad(U · V) = ∇(U · V) = (V · ∇)U + (U · ∇)V + V × (∇ × U) + U × (∇ × V).

If V = Vxi + Vyj + Vzk, then

∇ · V = (∇Vx) · i + (∇Vy) · j + (∇Vz) · k

and

∇ × V = (∇Vx) × i + (∇Vy) × j + (∇Vz) × k.

The operator ∇ can be used more than once. The possibilities where ∇ is used twice are

∇ · (∇u) ≡ div grad u
∇ × (∇u) ≡ curl grad u
∇(∇ · V) ≡ grad div V
∇ · (∇ × V) ≡ div curl V
∇ × (∇ × V) ≡ curl curl V.

In orthogonal curvilinear coordinates (u, v, w), the surface PRS is u = const., and the face of the curvilinear figure immediately opposite this is u + du = const., etc. In terms of these surface constants,

P = P(u, v, w), Q = Q(u + du, v, w), PQ = h1 du
R = R(u, v + dv, w), PR = h2 dv
S = S(u, v, w + dw), PS = h3 dw,

where h1, h2, and h3 are functions of u, v, and w.

In rectangular Cartesians i, j, k:

h1 = 1, h2 = 1, h3 = 1
(â/h1) ∂/∂u = i ∂/∂x, (b̂/h2) ∂/∂v = j ∂/∂y, (ĉ/h3) ∂/∂w = k ∂/∂z.

In cylindrical coordinates r̂, φ̂, k̂:

h1 = 1, h2 = r, h3 = 1
(â/h1) ∂/∂u = r̂ ∂/∂r, (b̂/h2) ∂/∂v = (φ̂/r) ∂/∂φ, (ĉ/h3) ∂/∂w = k̂ ∂/∂z.

In spherical coordinates r̂, θ̂, φ̂:

h1 = 1, h2 = r, h3 = r sin θ
(â/h1) ∂/∂u = r̂ ∂/∂r, (b̂/h2) ∂/∂v = (θ̂/r) ∂/∂θ, (ĉ/h3) ∂/∂w = (φ̂/(r sin θ)) ∂/∂φ.

The general expressions for grad, div, and curl, together with those for ∇² and the directional derivative, are in orthogonal curvilinear coordinates given by

∇S = (â/h1) ∂S/∂u + (b̂/h2) ∂S/∂v + (ĉ/h3) ∂S/∂w

(V · ∇)S = (V1/h1) ∂S/∂u + (V2/h2) ∂S/∂v + (V3/h3) ∂S/∂w

∇ · V = [1/(h1h2h3)] [∂(h2h3V1)/∂u + ∂(h3h1V2)/∂v + ∂(h1h2V3)/∂w]

∇ × V = [â/(h2h3)] [∂(h3V3)/∂v − ∂(h2V2)/∂w]
      + [b̂/(h3h1)] [∂(h1V1)/∂w − ∂(h3V3)/∂u]
      + [ĉ/(h1h2)] [∂(h2V2)/∂u − ∂(h1V1)/∂v]

∇²S = [1/(h1h2h3)] {∂/∂u[(h2h3/h1) ∂S/∂u] + ∂/∂v[(h3h1/h2) ∂S/∂v] + ∂/∂w[(h1h2/h3) ∂S/∂w]}.

E.12 Transformation of Integrals

s: the distance along some curve C in space, measured from some fixed point
S: a surface area
V: a volume contained by a specified surface
t̂: the unit tangent to C at the point P
n̂: the unit outward-pointing normal
F: some vector function
ds: the vector element of curve (= t̂ ds)
dS: the vector element of surface (= n̂ dS).

Then (Table E.1)

∫(C) F · t̂ ds = ∫(C) F · ds,

and when F = ∇φ,

∫(C) (∇φ) · t̂ ds = ∫(C) dφ.
TABLE E.1 Formulas of Vector Analysis

Rectangular coordinates:

∇f = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k

∇ · A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z

        | i      j      k    |
∇ × A = | ∂/∂x   ∂/∂y   ∂/∂z |
        | Ax     Ay     Az   |

∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²

Cylindrical coordinates (conversion: x = r cos φ, y = r sin φ, z = z):

∇f = (∂f/∂r)r̂ + (1/r)(∂f/∂φ)φ̂ + (∂f/∂z)k̂

∇ · A = (1/r) ∂(rAr)/∂r + (1/r) ∂Aφ/∂φ + ∂Az/∂z

              | r̂      rφ̂     k̂    |
∇ × A = (1/r) | ∂/∂r   ∂/∂φ   ∂/∂z |
              | Ar     rAφ    Az   |

∇²f = (1/r) ∂/∂r(r ∂f/∂r) + (1/r²) ∂²f/∂φ² + ∂²f/∂z²

Spherical coordinates (conversion: x = r cos φ sin θ, y = r sin φ sin θ, z = r cos θ):

∇f = (∂f/∂r)r̂ + (1/r)(∂f/∂θ)θ̂ + [1/(r sin θ)](∂f/∂φ)φ̂

∇ · A = (1/r²) ∂(r²Ar)/∂r + [1/(r sin θ)] ∂(Aθ sin θ)/∂θ + [1/(r sin θ)] ∂Aφ/∂φ

                       | r̂      rθ̂     r sin θ φ̂  |
∇ × A = [1/(r² sin θ)] | ∂/∂r   ∂/∂θ   ∂/∂φ       |
                       | Ar     rAθ    rAφ sin θ  |

∇²f = (1/r²) ∂/∂r(r² ∂f/∂r) + [1/(r² sin θ)] ∂/∂θ(sin θ ∂f/∂θ) + [1/(r² sin²θ)] ∂²f/∂φ²
E.13 Gauss' Theorem (Green's Theorem)

When S defines a closed region having a volume V,

∫∫∫(V) (∇ · F) dV = ∫∫(S) (F · n̂) dS = ∫∫(S) F · dS;

also

∫∫∫(V) (∇φ) dV = ∫∫(S) φ n̂ dS

and

∫∫∫(V) (∇ × F) dV = ∫∫(S) (n̂ × F) dS.

E.14 Stokes' Theorem

When C is closed and bounds the open surface S,

∫∫(S) n̂ · (∇ × F) dS = ∫(C) F · ds

and

∫∫(S) (n̂ × ∇φ) dS = ∫(C) φ ds.

E.15 Green's Theorem

∫∫∫(V) (∇φ · ∇u) dV = ∫∫(S) φ n̂ · (∇u) dS − ∫∫∫(V) φ(∇²u) dV
                    = ∫∫(S) u n̂ · (∇φ) dS − ∫∫∫(V) u(∇²φ) dV.
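Gauss' theorem is easy to verify numerically on a simple region; the sketch below uses our own example field F = (x², y², z²) on the unit cube, where div F = 2x + 2y + 2z and only the faces x = 1, y = 1, z = 1 contribute to the outward flux:

```python
# Midpoint-rule volume integral of div F over the cube [0, 1]^3.
n = 40
h = 1.0 / n
mids = [(i + 0.5) * h for i in range(n)]

vol = sum(2.0 * (x + y + z) * h**3
          for x in mids for y in mids for z in mids)

# Surface integral of F . n over the six faces: on x = 0 the integrand x^2
# vanishes (likewise y = 0 and z = 0), while on x = 1 we have F . n = 1,
# so each of the three contributing faces has flux exactly 1:
flux = 3.0
```

The midpoint rule is exact for the linear integrand here, so the two sides agree to rounding error.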
Appendix F: Algebra Formulas and Coordinate Systems

F.1 Arithmetic Progression
F.2 Geometric Progression
F.3 Harmonic Progression
F.4 Factorials
F.5 Permutations
F.6 Combinations
F.7 Quadratic Equations
F.8 Cubic Equations
F.9 Trigonometric Solution of the Cubic Equation
F.10 Quartic Equation
F.11 Partial Fractions
F.12 Polar Coordinates in a Plane
F.13 Rectangular Coordinates in Space

F.1 Arithmetic Progression*

An arithmetic progression is a sequence of numbers such that each number differs from the previous number by a constant amount, called the common difference. If a1 is the first term, an the nth term, d the common difference, n the number of terms, and sn the sum of n terms, then

an = a1 + (n − 1)d,
sn = (n/2)(a1 + an),
sn = (n/2)[2a1 + (n − 1)d].

The arithmetic mean between a and b is given by (a + b)/2.

* It is customary to represent an by l in a finite progression and refer to it as the last term.

F.2 Geometric Progression

A geometric progression is a sequence of numbers such that each number bears a constant ratio, called the common ratio, to the previous number. If a1 is the first term, an the nth term, r the common ratio, n the number of terms, and sn the sum of n terms, then

an = a1 r^(n−1),
sn = a1(1 − r^n)/(1 − r) = a1(r^n − 1)/(r − 1) = (a1 − r·an)/(1 − r) = (r·an − a1)/(r − 1)   (r ≠ 1).

If |r| < 1, then the sum of an infinite geometric progression converges to the limiting value

s∞ = lim(n→∞) a1(1 − r^n)/(1 − r) = a1/(1 − r).

The geometric mean between a and b is given by √(ab).

F.3 Harmonic Progression

A sequence of numbers whose reciprocals form an arithmetic progression is called a harmonic progression. Thus

1/a1, 1/(a1 + d), 1/(a1 + 2d), ..., 1/(a1 + (n − 1)d), ...,

where

1/an = 1/(a1 + (n − 1)d),

forms a harmonic progression. The harmonic mean between a and b is given by 2ab/(a + b).

If A, G, H, respectively, represent the arithmetic mean, geometric mean, and harmonic mean between a and b, then G² = AH.

F.4 Factorials

n! = n(n − 1)(n − 2) ··· (2)(1); approximately, n! ≈ e^(−n) n^n √(2πn) (Stirling's formula).
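The progression sums, the relation G² = AH, and Stirling's approximation can all be checked in a few lines (the particular numbers are our choices):

```python
import math

# Arithmetic progression: a1 = 3, d = 2, n = 10 terms.
a1, d, n = 3.0, 2.0, 10
an = a1 + (n - 1) * d
s_arith = (n / 2) * (a1 + an)

# Geometric progression with common ratio r = 1/2, plus the infinite sum.
r = 0.5
s_geom = a1 * (1 - r**n) / (1 - r)
s_inf = a1 / (1 - r)                 # limit for |r| < 1

# Means of a = 4 and b = 9, checking G^2 = A*H.
a, b = 4.0, 9.0
A = (a + b) / 2
G = math.sqrt(a * b)
H = 2 * a * b / (a + b)

# Stirling's approximation to n!.
stirling = math.exp(-n) * n**n * math.sqrt(2 * math.pi * n)
```

For n = 10 the Stirling estimate is already within about 1% of 10! = 3,628,800.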
F.5 Permutations

If M = nPr = Pn:r denotes the number of permutations of n distinct things taken r at a time,

M = n(n − 1)(n − 2) ··· (n − r + 1) = n!/(n − r)!.

F.6 Combinations

If M = nCr = Cn:r = (n over r) denotes the number of combinations of n distinct things taken r at a time,

M = n(n − 1)(n − 2) ··· (n − r + 1)/r! = n!/[r!(n − r)!].

By definition, (n over 0) = 1.

F.7 Quadratic Equations

Any quadratic equation may be reduced to the form

ax² + bx + c = 0.

Then

x = [−b ± √(b² − 4ac)]/(2a).

If a, b, and c are real, then:

If b² − 4ac is positive, the roots are real and unequal.
If b² − 4ac is zero, the roots are real and equal.
If b² − 4ac is negative, the roots are imaginary and unequal.

F.8 Cubic Equations

A cubic equation, y³ + py² + qy + r = 0, may be reduced to the form

x³ + ax + b = 0

by substituting for y the value x − p/3. Here

a = (1/3)(3q − p²) and b = (1/27)(2p³ − 9pq + 27r).

For solution, let

A = ∛[−b/2 + √(b²/4 + a³/27)], B = ∛[−b/2 − √(b²/4 + a³/27)];

then the values of x will be given by

x = A + B, −(A + B)/2 + [(A − B)/2]√(−3), −(A + B)/2 − [(A − B)/2]√(−3).

If p, q, r are real, then:

If b²/4 + a³/27 > 0, there will be one real root and two conjugate imaginary roots.
If b²/4 + a³/27 = 0, there will be three real roots of which at least two are equal.
If b²/4 + a³/27 < 0, there will be three real and unequal roots.

F.9 Trigonometric Solution of the Cubic Equation

The form x³ + ax + b = 0 with ab ≠ 0 can always be solved by transforming it to the trigonometric identity

4 cos³θ − 3 cos θ − cos(3θ) ≡ 0.

Let x = m cos θ; then

x³ + ax + b ≡ m³ cos³θ + am cos θ + b ≡ 4 cos³θ − 3 cos θ − cos(3θ) ≡ 0.

Hence

4/m³ = −3/(am) = −cos(3θ)/b,

from which it follows that

m = 2√(−a/3), cos(3θ) = 3b/(am).

Any solution θ1 which satisfies cos(3θ) = 3b/(am) will also have the solutions θ1 + 2π/3 and θ1 + 4π/3. The roots of the cubic x³ + ax + b = 0 are therefore

2√(−a/3) cos θ1, 2√(−a/3) cos(θ1 + 2π/3), 2√(−a/3) cos(θ1 + 4π/3).
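In the three-real-root case (b²/4 + a³/27 < 0) the steps above translate directly into code; this sketch (function name ours) solves x³ − 7x + 6 = (x − 1)(x − 2)(x + 3) = 0:

```python
import math

def cubic_roots_trig(a, b):
    """Real roots of x^3 + a*x + b = 0 when b^2/4 + a^3/27 < 0,
    via x = m*cos(theta) with m = 2*sqrt(-a/3), cos(3*theta) = 3b/(a*m)."""
    m = 2.0 * math.sqrt(-a / 3.0)
    theta1 = math.acos(3.0 * b / (a * m)) / 3.0
    return [m * math.cos(theta1 + k * 2.0 * math.pi / 3.0) for k in range(3)]

roots = sorted(cubic_roots_trig(-7.0, 6.0))
```

Here b²/4 + a³/27 = 9 − 343/27 < 0, so all three roots are real, and the routine recovers −3, 1, and 2.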
Example Where Hyperbolic Functions Are Necessary for Solution with the Latter Procedure

The roots of the equation x³ − x + 2 = 0 may be found as follows. Here

a = −1, b = 2, m = 2√(1/3) = 1.155,
cos(3θ) = 3b/(am) = 6/(−1.155) = −5.196,
−cos(3θ) = cos(3θ − π) = cosh[i(3θ − π)] = 5.196.

Using hyperbolic function tables for cosh[i(3θ − π)] = 5.196, it is found that

i(3θ − π) = ±2.332.

Thus

3θ − π = ∓i(2.332)
3θ = π − i(2.332)
θ1 = π/3 − i(0.777)
θ1 + 2π/3 = π − i(0.777)
θ1 + 4π/3 = 5π/3 − i(0.777)

cos θ1 = cos[π/3 − i(0.777)]
       = cos(π/3) cos[i(0.777)] + sin(π/3) sin[i(0.777)]
       = cos(π/3)(cosh 0.777) + i sin(π/3)(sinh 0.777)
       = (0.5)(1.317) + i(0.866)(0.858) = 0.659 + i(0.743).

Note that cos m = cosh(im) and sin m = −i sinh(im).

Similarly,

cos(θ1 + 2π/3) = cos[π − i(0.777)]
              = (cos π)(cosh 0.777) + i(sin π)(sinh 0.777) = −1.317,

and

cos(θ1 + 4π/3) = cos[5π/3 − i(0.777)]
              = cos(5π/3)(cosh 0.777) + i sin(5π/3)(sinh 0.777)
              = (0.5)(1.317) − i(0.866)(0.858) = 0.659 − i(0.743).

The required roots are

1.155[0.659 + i(0.743)] = 0.760 + i(0.858)
(1.155)(−1.317) = −1.520
1.155[0.659 − i(0.743)] = 0.760 − i(0.858).

F.10 Quartic Equation

A quartic equation,

x⁴ + ax³ + bx² + cx + d = 0,

has the resolvent cubic equation

y³ − by² + (ac − 4d)y − a²d + 4bd − c² = 0.

Let y be any root of this equation, and

R = √(a²/4 − b + y).

If R ≠ 0, then let

D = √[3a²/4 − R² − 2b + (4ab − 8c − a³)/(4R)]

and

E = √[3a²/4 − R² − 2b − (4ab − 8c − a³)/(4R)].

If R = 0, then let

D = √[3a²/4 − 2b + 2√(y² − 4d)]

and

E = √[3a²/4 − 2b − 2√(y² − 4d)].

Then the four roots of the original equation are given by

x = −a/4 + R/2 ± D/2

and

x = −a/4 − R/2 ± E/2.
F.11 Partial Fractions

This section applies only to rational algebraic fractions with numerator of lower degree than the denominator. Improper fractions can be reduced to proper fractions by long division. Every fraction may be expressed as the sum of component fractions whose denominators are factors of the denominator of the original fraction. Let N(x) denote the numerator, a polynomial of the form

n0 + n1x + n2x² + ··· + ni x^i.
1. Nonrepeated Linear Factors

N(x)/[(x − a)G(x)] = A/(x − a) + F(x)/G(x),
A = [N(x)/G(x)] at x = a.

F(x) is determined by methods discussed in the following sections.

Example

(x² + 3)/[x(x − 2)(x² + 2x + 4)] = A/x + B/(x − 2) + F(x)/(x² + 2x + 4)
A = [(x² + 3)/((x − 2)(x² + 2x + 4))] at x = 0 = −3/8
B = [(x² + 3)/(x(x² + 2x + 4))] at x = 2 = (4 + 3)/[2(4 + 4 + 4)] = 7/24

2. Repeated Linear Factors

N(x)/[x^m G(x)] = A0/x^m + A1/x^(m−1) + ··· + A(m−1)/x + F(x)/G(x),

with

F(x) = f0 + f1x + f2x² + ···, G(x) = g0 + g1x + g2x² + ···,

A0 = n0/g0,
A1 = (n1 − A0g1)/g0,
A2 = (n2 − A0g2 − A1g1)/g0.

General term:*

Ak = (1/g0)[nk − Σ(i = 0 to k−1) Ai g(k−i)].

m = 1: f0 = n1 − A0g1, f1 = n2 − A0g2, fj = n(j+1) − A0g(j+1)
m = 2: f0 = n2 − [A0g2 + A1g1], f1 = n3 − [A0g3 + A1g2], fj = n(j+2) − [A0g(j+2) + A1g(j+1)]
m = 3: f0 = n3 − [A0g3 + A1g2 + A2g1], f1 = n4 − [A0g4 + A1g3 + A2g2], fj = n(j+3) − [A0g(j+3) + A1g(j+2) + A2g(j+1)]

any m: fj = n(m+j) − Σ(i = 0 to m−1) Ai g(m+j−i).

* Note: If G(x) contains linear factors, F(x) may be determined by Method 1.

Example

(x² + 1)/[x³(x² − 3x + 6)] = A0/x³ + A1/x² + A2/x + (f1x + f0)/(x² − 3x + 6)

A0 = 1/6,
A1 = [0 − (1/6)(−3)]/6 = 1/12,
A2 = [1 − (1/6)(1) − (1/12)(−3)]/6 = 13/72,

m = 3:
f0 = 0 − [(1/6)(0) + (1/12)(1) + (13/72)(−3)] = 11/24
f1 = 0 − [(1/6)(0) + (1/12)(0) + (13/72)(1)] = −13/72

3. Repeated Linear Factors (continued)

Change N(x)/[(x − a)^m G(x)] to the form N0(y)/[y^m G0(y)] by the substitution x = y + a. Resolve into partial fractions in terms of y as described in Method 2. Then express in terms of x by the substitution y = x − a.

Example

(x − 3)/[(x − 2)²(x² + x + 1)]. Let x − 2 = y, x = y + 2; then

(y + 2 − 3)/(y²[(y + 2)² + (y + 2) + 1]) = (y − 1)/[y²(y² + 5y + 7)]
                                        = A0/y² + A1/y + (f1y + f0)/(y² + 5y + 7)

A0 = −1/7, A1 = [1 − (−1/7)(5)]/7 = 12/49,

m = 2:
f0 = 0 − [(−1/7)(1) + (12/49)(5)] = −53/49
f1 = 0 − [(−1/7)(0) + (12/49)(1)] = −12/49

Hence

(y − 1)/[y²(y² + 5y + 7)] = −1/(7y²) + 12/(49y) − (12y + 53)/[49(y² + 5y + 7)].

Let y = x − 2; then

(x − 3)/[(x − 2)²(x² + x + 1)] = −1/[7(x − 2)²] + 12/[49(x − 2)] − (12x + 29)/[49(x² + x + 1)].
4. Repeated Linear Factors

Alternative method of determining coefficients:

N(x)/[(x − a)^m G(x)] = A0/(x − a)^m + ··· + Ak/(x − a)^(m−k) + ··· + A(m−1)/(x − a) + F(x)/G(x),

Ak = (1/k!)[Dx^k (N(x)/G(x))] at x = a,

where Dx^k is the differentiating operator, and the derivative of zero order is defined as Dx^0 u = u.

5. Factors of Higher Degree

Factors of higher degree have the corresponding numerators indicated:

N(x)/[(x² + h1x + h0)G(x)] = (a1x + a0)/(x² + h1x + h0) + F(x)/G(x)

N(x)/[(x² + h1x + h0)²G(x)] = (a1x + a0)/(x² + h1x + h0)² + (b1x + b0)/(x² + h1x + h0) + F(x)/G(x)

N(x)/[(x³ + h2x² + h1x + h0)G(x)] = (a2x² + a1x + a0)/(x³ + h2x² + h1x + h0) + F(x)/G(x)

...

Problems of this type are determined first by solving for the coefficients due to linear factors as shown above, and then determining the remaining coefficients by the general methods given below.

6. General Methods for Evaluating Coefficients

1. N(x)/D(x) = N(x)/[G(x)H(x)L(x)···] = A(x)/G(x) + B(x)/H(x) + C(x)/L(x) + ···

Multiply both sides of the equation by D(x) to clear fractions. Then collect terms, equate like powers of x, and solve the resulting simultaneous equations for the unknown coefficients.

2. Clear fractions as above. Then let x assume certain convenient values (x = 1, 0, −1, ...). Solve the resulting equations for the unknown coefficients.

3. N(x)/[G(x)H(x)] = A(x)/G(x) + B(x)/H(x). Then

N(x)/[G(x)H(x)] − A(x)/G(x) = B(x)/H(x).

If A(x) can be determined, such as by Method 1, then B(x) can be found as above.

F.12 Polar Coordinates in a Plane

In a plane, let OX (called the initial line) be a fixed ray radiating from point O (called the pole or origin). Then any point P, other than O, in the plane is located by the angle θ (called the vectorial angle), measured from OX to the line determined by O and P, and the distance r (called the radius vector) from O to P, where θ is taken as positive if measured counterclockwise and negative if measured clockwise, and r is taken as positive if measured along the terminal side of angle θ and negative if measured along the terminal side of θ produced through the pole. Such an ordered pair of numbers, (r, θ), is called polar coordinates of the point P. The polar coordinates of the pole O are taken as (0, θ), where θ is arbitrary. It follows that, for a given initial line and pole, each point of the plane has infinitely many polar coordinates.

Example

Some polar coordinates of P are (2, 60°), (2, 420°), (2, −300°), (−2, 240°), (−2, −120°).

(Figure: the point P = (2, 60°) relative to the pole O and initial line OX.)

Points

Distance between P1 and P2:

√[r1² + r2² − 2r1r2 cos(θ1 − θ2)].

Points P1, P2, P3 are collinear if and only if

r2r3 sin(θ3 − θ2) + r3r1 sin(θ1 − θ3) + r1r2 sin(θ2 − θ1) = 0.

Polygonal Areas

Area of triangle P1P2P3:

(1/2)[r1r2 sin(θ2 − θ1) + r2r3 sin(θ3 − θ2) + r3r1 sin(θ1 − θ3)].
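The polar distance and triangle-area formulas can be cross-checked against rectangular coordinates; the points in this sketch are our own:

```python
import math

def to_xy(r, th):
    """Polar (r, theta) to rectangular (x, y)."""
    return (r * math.cos(th), r * math.sin(th))

P = [(2.0, 0.0), (3.0, math.pi / 3), (1.5, 3 * math.pi / 4)]
(r1, t1), (r2, t2), (r3, t3) = P

# Polar distance and signed triangle area:
d12 = math.sqrt(r1*r1 + r2*r2 - 2*r1*r2*math.cos(t1 - t2))
area = 0.5 * (r1*r2*math.sin(t2 - t1)
              + r2*r3*math.sin(t3 - t2)
              + r3*r1*math.sin(t1 - t3))

# Rectangular cross-check (Euclidean distance and shoelace formula):
(x1, y1), (x2, y2), (x3, y3) = (to_xy(*p) for p in P)
d12_xy = math.hypot(x2 - x1, y2 - y1)
area_xy = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
```

Both the distance and the signed area agree with their rectangular counterparts, including the counterclockwise/clockwise sign convention noted below for polygons.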
Area of polygon P1P2···Pn:

(1/2)[r1r2 sin(θ2 − θ1) + r2r3 sin(θ3 − θ2) + ··· + r(n−1)rn sin(θn − θ(n−1)) + rnr1 sin(θ1 − θn)].

The area is positive or negative according as P1P2···Pn is a counterclockwise or clockwise polygon.

(Figure: a point P with rectangular coordinates (x, y) and polar coordinates (r, θ) relative to the pole O and initial line OX, with axes X'X and Y'Y.)

Straight Lines

Let p = distance of the line from O, ω = counterclockwise angle from OX to the perpendicular through O to the line.

Normal form: r cos(θ − ω) = p.
Two-point form: r[r1 sin(θ − θ1) − r2 sin(θ − θ2)] = r1r2 sin(θ2 − θ1).

Circles

Center at pole, radius a: r = a.
Center at (a, 0) and passing through the pole: r = 2a cos θ.
Center at (a, π/2) and passing through the pole: r = 2a sin θ.
Center (h, α), radius a: r² − 2hr cos(θ − α) + h² − a² = 0.

Conics

Let 2p = distance from directrix to focus, e = eccentricity.

Focus at pole, directrix to left of pole: r = 2ep/(1 − e cos θ).
Focus at pole, directrix to right of pole: r = 2ep/(1 + e cos θ).
Focus at pole, directrix below pole: r = 2ep/(1 − e sin θ).
Focus at pole, directrix above pole: r = 2ep/(1 + e sin θ).
Parabola with vertex at pole, directrix to left of pole: r = 4p cos θ/sin²θ.
Ellipse with center at pole, semiaxes a and b horizontal and vertical, respectively: r² = a²b²/(a² sin²θ + b² cos²θ).
Hyperbola with center at pole, semiaxes a and b horizontal and vertical, respectively: r² = a²b²/(b² cos²θ − a² sin²θ).

Relations between Rectangular and Polar Coordinates

Let the positive x-axis coincide with the initial line and let r be nonnegative. Then

x = r cos θ, y = r sin θ,
r = √(x² + y²), θ = arctan(y/x),
sin θ = y/√(x² + y²), cos θ = x/√(x² + y²).

F.13 Rectangular Coordinates in Space

Rectangular (Cartesian) Coordinates

Let X'X, Y'Y, Z'Z (called the x-axis, the y-axis, and the z-axis, respectively) be three mutually perpendicular lines in space intersecting in a point O (called the origin), forming in this way three mutually perpendicular planes XOY, XOZ, YOZ (called the xy-plane, the xz-plane, and the yz-plane, respectively). Then any point P of space is located by its signed distances x, y, z from the yz-plane, the xz-plane, and the xy-plane, respectively, where x and y are the rectangular coordinates, with respect to the axes X'X and Y'Y, of the orthogonal projection P' of P on the xy-plane (here taken horizontally), and z is taken as positive above and negative below the xy-plane. The ordered triple of numbers (x, y, z) is called rectangular coordinates of the point P.

(Figure: the point P(x, y, z) referred to the mutually perpendicular axes X'X, Y'Y, Z'Z through the origin O.)
Points
Let P1(x1, y1, z1) and P2(x2, y2, z2) be any two points.
Distance between P1 and P2: √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)
Point dividing P1P2 in ratio r/s: ((rx2 + sx1)/(r + s), (ry2 + sy1)/(r + s), (rz2 + sz1)/(r + s))
Midpoint of P1P2: ((x1 + x2)/2, (y1 + y2)/2, (z1 + z2)/2)
Points P1, P2, P3 are collinear if and only if
x2 − x1 : y2 − y1 : z2 − z1 = x3 − x1 : y3 − y1 : z3 − z1.
Points P1, P2, P3, P4 are coplanar if and only if

|x1  y1  z1  1|
|x2  y2  z2  1|
|x3  y3  z3  1| = 0.
|x4  y4  z4  1|

Area of triangle P1P2P3 (here |p q; r s; …| denotes a determinant whose rows are separated by semicolons):

(1/2) √( |y1 z1 1; y2 z2 1; y3 z3 1|² + |z1 x1 1; z2 x2 1; z3 x3 1|² + |x1 y1 1; x2 y2 1; x3 y3 1|² )

Volume of tetrahedron P1P2P3P4: the absolute value of

(1/6) |x1 y1 z1 1; x2 y2 z2 1; x3 y3 z3 1; x4 y4 z4 1|

Direction Numbers and Direction Cosines
Let α, β, γ (called direction angles) be the angles that P1P2, or any line parallel to P1P2, makes with the x-, y-, and z-axes, respectively. Let d = distance between P1 and P2.
Direction cosines of P1P2:
cos α = (x2 − x1)/d, cos β = (y2 − y1)/d, cos γ = (z2 − z1)/d
cos² α + cos² β + cos² γ = 1
If a, b, c are direction numbers of P1P2, then:
a : b : c = x2 − x1 : y2 − y1 : z2 − z1 = cos α : cos β : cos γ
cos α = a/√(a² + b² + c²), cos β = b/√(a² + b² + c²), cos γ = c/√(a² + b² + c²)

Angle between two lines with direction angles α1, β1, γ1 and α2, β2, γ2:
cos θ = cos α1 cos α2 + cos β1 cos β2 + cos γ1 cos γ2
For parallel lines: α1 = α2, β1 = β2, γ1 = γ2
For perpendicular lines: cos α1 cos α2 + cos β1 cos β2 + cos γ1 cos γ2 = 0

Angle between two lines with directions (a1, b1, c1) and (a2, b2, c2):
cos θ = (a1a2 + b1b2 + c1c2) / ( √(a1² + b1² + c1²) √(a2² + b2² + c2²) )
sin θ = √( (b1c2 − c1b2)² + (c1a2 − a1c2)² + (a1b2 − b1a2)² ) / ( √(a1² + b1² + c1²) √(a2² + b2² + c2²) )
For parallel lines: a1 : b1 : c1 = a2 : b2 : c2
For perpendicular lines: a1a2 + b1b2 + c1c2 = 0

The direction (b1c2 − c1b2, c1a2 − a1c2, a1b2 − b1a2) is perpendicular to both directions (a1, b1, c1) and (a2, b2, c2). The directions (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) are parallel to a common plane if and only if
|a1 b1 c1; a2 b2 c2; a3 b3 c3| = 0.

Straight Lines
Point direction form: (x − x1)/a = (y − y1)/b = (z − z1)/c
Two-point form: (x − x1)/(x2 − x1) = (y − y1)/(y2 − y1) = (z − z1)/(z2 − z1)
Parametric form: x = x1 + ta, y = y1 + tb, z = z1 + tc
General form (intersection of two planes):
A1x + B1y + C1z + D1 = 0, A2x + B2y + C2z + D2 = 0
Direction of line: (B1C2 − C1B2, C1A2 − A1C2, A1B2 − B1A2)
Projection of segment P1P2 on direction (a, b, c):
( (x2 − x1)a + (y2 − y1)b + (z2 − z1)c ) / √(a² + b² + c²)
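The direction-cosine and angle formulas above can be sketched in a few lines (an illustration under the section's definitions; the function names are my own):

```python
import math

def direction_cosines(p1, p2):
    # cos(alpha) = (x2 - x1)/d, cos(beta) = (y2 - y1)/d, cos(gamma) = (z2 - z1)/d,
    # where d is the distance between P1 and P2
    d = math.dist(p1, p2)
    return tuple((b - a) / d for a, b in zip(p1, p2))

def angle_between_directions(d1, d2):
    # cos(theta) = (a1 a2 + b1 b2 + c1 c2) / (|d1| |d2|)
    dot = sum(u * v for u, v in zip(d1, d2))
    n1 = math.sqrt(sum(u * u for u in d1))
    n2 = math.sqrt(sum(v * v for v in d2))
    return math.acos(dot / (n1 * n2))
```

By construction the returned cosines satisfy cos² α + cos² β + cos² γ = 1.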
Distance from point P0 to line through P1 in direction (a, b, c), where |p q; r s| denotes the 2×2 determinant ps − qr:

√( ( |y0 − y1  z0 − z1; b  c|² + |z0 − z1  x0 − x1; c  a|² + |x0 − x1  y0 − y1; a  b|² ) / (a² + b² + c²) )

Distance between the line through P1 in direction (a1, b1, c1) and the line through P2 in direction (a2, b2, c2): the absolute value of

|x2 − x1  y2 − y1  z2 − z1|
|  a1       b1       c1   |
|  a2       b2       c2   |

divided by

√( |b1 c1; b2 c2|² + |c1 a1; c2 a2|² + |a1 b1; a2 b2|² )

The line through P1 in direction (a1, b1, c1) and the line through P2 in direction (a2, b2, c2) intersect if and only if

|x2 − x1  y2 − y1  z2 − z1|
|  a1       b1       c1   | = 0.
|  a2       b2       c2   |
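The two distance formulas above are the determinant forms of the usual cross-product expressions; a sketch under that reading (helper names are mine, and the line–line formula assumes the directions are not parallel):

```python
import math

def cross(u, v):
    # (b1c2 - c1b2, c1a2 - a1c2, a1b2 - b1a2): perpendicular to both directions
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def point_line_distance(p0, p1, d):
    # |(P0 - P1) x d| / |d|
    w = tuple(a - b for a, b in zip(p0, p1))
    return math.sqrt(sum(c * c for c in cross(w, d))) / math.sqrt(sum(c * c for c in d))

def line_line_distance(p1, d1, p2, d2):
    # |(P2 - P1) . (d1 x d2)| / |d1 x d2|
    n = cross(d1, d2)
    w = tuple(a - b for a, b in zip(p2, p1))
    return abs(sum(a * b for a, b in zip(w, n))) / math.sqrt(sum(c * c for c in n))
```

Expanding the cross and triple products recovers the 2×2 and 3×3 determinants in the formulas above term by term.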
Planes
General form: Ax + By + Cz + D = 0
Direction of normal: (A, B, C)
Perpendicular to yz-plane: By + Cz + D = 0
Perpendicular to xz-plane: Ax + Cz + D = 0
Perpendicular to xy-plane: Ax + By + D = 0
Perpendicular to x-axis: Ax + D = 0
Perpendicular to y-axis: By + D = 0
Perpendicular to z-axis: Cz + D = 0
Intercept form: x/a + y/b + z/c = 1

Plane through point P1 and perpendicular to direction (a, b, c):
a(x − x1) + b(y − y1) + c(z − z1) = 0

Plane through point P1 and parallel to directions (a1, b1, c1) and (a2, b2, c2):

|x − x1  y − y1  z − z1|
|  a1      b1      c1  | = 0
|  a2      b2      c2  |

Plane through points P1 and P2 parallel to direction (a, b, c):

|x − x1    y − y1    z − z1 |
|x2 − x1   y2 − y1   z2 − z1| = 0
|   a         b         c   |

Three-point form:

|x   y   z   1|
|x1  y1  z1  1|
|x2  y2  z2  1| = 0
|x3  y3  z3  1|

or

|x − x1    y − y1    z − z1 |
|x2 − x1   y2 − y1   z2 − z1| = 0
|x3 − x1   y3 − y1   z3 − z1|

Normal form (p = distance from origin to plane; α, β, γ are direction angles of the perpendicular to the plane from the origin):
x cos α + y cos β + z cos γ = p
To reduce Ax + By + Cz + D = 0 to normal form, divide by ±√(A² + B² + C²), where the sign of the radical is chosen opposite to the sign of D when D ≠ 0, the same as the sign of C when D = 0 and C ≠ 0, and the same as the sign of B when C = D = 0.

Distance from point P1 to plane Ax + By + Cz + D = 0:
|Ax1 + By1 + Cz1 + D| / √(A² + B² + C²)

Angle θ between planes A1x + B1y + C1z + D1 = 0 and A2x + B2y + C2z + D2 = 0:
cos θ = (A1A2 + B1B2 + C1C2) / ( √(A1² + B1² + C1²) √(A2² + B2² + C2²) )
Planes parallel: A1 : B1 : C1 = A2 : B2 : C2
Planes perpendicular: A1A2 + B1B2 + C1C2 = 0

Spheres
Center at origin, radius r: x² + y² + z² = r²
Center at (g, h, k), radius r: (x − g)² + (y − h)² + (z − k)² = r²
General form:
Ax² + Ay² + Az² + Dx + Ey + Fz + M = 0, A ≠ 0,
or x² + y² + z² + 2dx + 2ey + 2fz + m = 0, with
Center: (−d, −e, −f); Radius: r = √(d² + e² + f² − m)
Sphere on P1P2 as diameter:
(x − x1)(x − x2) + (y − y1)(y − y2) + (z − z1)(z − z2) = 0
Four-point form:

|x²  + y²  + z²    x   y   z   1|
|x1² + y1² + z1²   x1  y1  z1  1|
|x2² + y2² + z2²   x2  y2  z2  1| = 0
|x3² + y3² + z3²   x3  y3  z3  1|
|x4² + y4² + z4²   x4  y4  z4  1|
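The point-to-plane distance and the sphere center/radius formulas above are easy to check numerically; a minimal sketch (function names are mine):

```python
import math

def point_plane_distance(p, plane):
    # |A x1 + B y1 + C z1 + D| / sqrt(A^2 + B^2 + C^2)
    A, B, C, D = plane
    return abs(A * p[0] + B * p[1] + C * p[2] + D) / math.sqrt(A * A + B * B + C * C)

def sphere_center_radius(d, e, f, m):
    # x^2 + y^2 + z^2 + 2dx + 2ey + 2fz + m = 0
    # -> center (-d, -e, -f), radius sqrt(d^2 + e^2 + f^2 - m)
    return (-d, -e, -f), math.sqrt(d * d + e * e + f * f - m)
```

For instance, x² + y² + z² − 2x − 3 = 0 (so d = −1, e = f = 0, m = −3) is the sphere of radius 2 centered at (1, 0, 0).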
The 17 Quadric Surfaces in Standard Form
1. Real ellipsoid: x²/a² + y²/b² + z²/c² = 1
2. Imaginary ellipsoid: x²/a² + y²/b² + z²/c² = −1
3. Hyperboloid of one sheet: x²/a² + y²/b² − z²/c² = 1
4. Hyperboloid of two sheets: x²/a² + y²/b² − z²/c² = −1
5. Real quadric cone: x²/a² + y²/b² − z²/c² = 0
6. Imaginary quadric cone: x²/a² + y²/b² + z²/c² = 0
7. Elliptic paraboloid: x²/a² + y²/b² + 2z = 0
8. Hyperbolic paraboloid: x²/a² − y²/b² + 2z = 0
9. Real elliptic cylinder: x²/a² + y²/b² = 1
10. Imaginary elliptic cylinder: x²/a² + y²/b² = −1
11. Hyperbolic cylinder: x²/a² − y²/b² = 1
12. Real intersecting planes: x²/a² − y²/b² = 0
13. Imaginary intersecting planes: x²/a² + y²/b² = 0
14. Parabolic cylinder: x² + 2rz = 0
15. Real parallel planes: x² = a²
16. Imaginary parallel planes: x² = −a²
17. Coincident planes: x² = 0

Cylindrical and Conical Surfaces
Any equation in just two of the variables x, y, z represents a cylindrical surface whose elements are parallel to the axis of the missing variable. Any equation homogeneous in the variables x, y, z represents a conical surface whose vertex is at the origin.

General Equation of Second Degree
The nature of the graph of the general quadratic equation in x, y, z,

ax² + by² + cz² + 2fyz + 2gzx + 2hxy + 2px + 2qy + 2rz + d = 0,

is described in the following table in terms of r3, r4, Δ, k1, k2, k3, where

    |a  h  g|          |a  h  g  p|
e = |h  b  f|,     E = |h  b  f  q|
    |g  f  c|          |g  f  c  r|
                       |p  q  r  d|

r3 = rank e, r4 = rank E, Δ = determinant of E, and k1, k2, k3 are the roots of

|a − x    h       g  |
|  h    b − x     f  | = 0.
|  g      f     c − x|

Case  r3  r4  Sign of Δ  Nonzero k's same sign?  Quadric surface
 1    3   4      −             Yes               Real ellipsoid
 2    3   4      +             Yes               Imaginary ellipsoid
 3    3   4      +             No                Hyperboloid of one sheet
 4    3   4      −             No                Hyperboloid of two sheets
 5    3   3                    No                Real quadric cone
 6    3   3                    Yes               Imaginary quadric cone
 7    2   4      −             Yes               Elliptic paraboloid
 8    2   4      +             No                Hyperbolic paraboloid
 9    2   3                    Yes               Real elliptic cylinder
10    2   3                    Yes               Imaginary elliptic cylinder
11    2   3                    No                Hyperbolic cylinder
12    2   2                    No                Real intersecting planes
13    2   2                    Yes               Imaginary intersecting planes
14    1   3                                      Parabolic cylinder
15    1   2                                      Real parallel planes
16    1   2                                      Imaginary parallel planes
17    1   1                                      Coincident planes

(A blank in the "Sign of Δ" column means Δ = 0.)

Transformation of Coordinates
To transform an equation of a surface from an old system of rectangular coordinates (x, y, z) to a new system of rectangular coordinates (x′, y′, z′), substitute for each old variable in the equation of the surface its expression in terms of the new variables.

Translation:
x = x′ + h, y = y′ + k, z = z′ + l
The new axes are parallel to the old axes, and the coordinates of the new origin in terms of the old system are (h, k, l).

Rotation about the origin:
x = l1x′ + l2y′ + l3z′
y = m1x′ + m2y′ + m3z′
z = n1x′ + n2y′ + n3z′
The new origin coincides with the old origin, and the x′-axis, y′-axis, and z′-axis have direction cosines (l1, m1, n1), (l2, m2, n2), (l3, m3, n3), respectively, with respect to the old system of axes.
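The translation and rotation rules combine into a single substitution: new coordinates are obtained by shifting the origin and then applying the direction-cosine rows. A minimal sketch (the helper name and the example rotation are my own, assuming the rows are supplied as (l_i, m_i, n_i)):

```python
def transform(point, origin, rows):
    # New coordinates (x', y', z') of a point: translate the origin to `origin`,
    # then rotate; `rows` holds the direction cosines (l_i, m_i, n_i) of the
    # new x'-, y'-, z'-axes with respect to the old system.
    shifted = tuple(p - o for p, o in zip(point, origin))
    return tuple(sum(r * s for r, s in zip(row, shifted)) for row in rows)

# Example: rotation of the axes by 90 degrees about the z-axis; the new x'-axis
# points along the old y-axis, the new y'-axis along the old negative x-axis.
rows = ((0, 1, 0), (-1, 0, 0), (0, 0, 1))
```

With these rows, the point (1, 2, 3) in the old system has new coordinates (2, −1, 3).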
Conversely,
x′ = l1x + m1y + n1z
y′ = l2x + m2y + n2z
z′ = l3x + m3y + n3z

Cylindrical Coordinates
If (r, θ, z) are the cylindrical coordinates and (x, y, z) the rectangular coordinates of a point P, then
x = r cos θ, y = r sin θ, z = z,
r = √(x² + y²), θ = arctan(y/x), z = z.

[Figure: point P with cylindrical coordinates (r, θ, z) relative to the axes X, Y, Z.]

Spherical Coordinates
If (r, θ, φ) are the spherical coordinates and (x, y, z) the rectangular coordinates of a point P, then
x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ,
r = √(x² + y² + z²), θ = arccos( z/√(x² + y² + z²) ), φ = arctan(y/x).

[Figure: point P with spherical coordinates (r, θ, φ) relative to the axes X, Y, Z.]
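The cylindrical and spherical conversions above can be sketched as follows (an illustration with my own function names; atan2 again supplies the quadrant that arctan(y/x) alone does not):

```python
import math

def to_cylindrical(x, y, z):
    # r = sqrt(x^2 + y^2), theta = arctan(y/x), z unchanged
    return math.hypot(x, y), math.atan2(y, x), z

def to_spherical(x, y, z):
    # r = sqrt(x^2 + y^2 + z^2), theta = arccos(z/r), phi = arctan(y/x)
    r = math.sqrt(x * x + y * y + z * z)
    return r, math.acos(z / r), math.atan2(y, x)

def spherical_to_rect(r, theta, phi):
    # x = r sin(theta) cos(phi), y = r sin(theta) sin(phi), z = r cos(theta)
    s = math.sin(theta)
    return r * s * math.cos(phi), r * s * math.sin(phi), r * math.cos(theta)
```

Round-tripping a point through to_spherical and spherical_to_rect recovers it to within floating-point error.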
Index

A
Abel transformation, 9-14 Algebra formulas and coordinate systems arithmetic progression, F-1 combinations, F-2 common difference, F-1 common ratio, F-1 cubic equations, F-2–F-3 factorials, F-1 geometric progression, F-1 harmonic progression, F-1 partial fractions coefficients, F-5 expression, F-3 higher degree factors, F-5 nonrepeated linear factors, F-4 repeated linear factors, F-4–F-5 permutations, F-2 polar coordinates, F-5–F-6 quadratic equation, F-2 quartic equation, F-3 rectangular (Cartesian) coordinates, F-6 cylindrical and conical surfaces, F-9 cylindrical coordinates, F-10 direction numbers and cosines, F-7 general quadratic equation, F-9 planes, F-8 points, F-7 quadric surfaces, F-9 spheres, F-8 spherical coordinates, F-10 straight lines, F-7–F-8 transformation, F-9–F-10 Algebraic reconstruction techniques (ART), 8-34 Altes Q distribution, 13-8–13-9 Analog all-pass filters, 7-62–7-63 Analog chirp Fourier transform, 18-6 Asymptotic sequence, 1-53 Attenuated Radon transform, 8-41 Autocorrelation sequence, 6-6
B

Backprojection definition, 8-31 filter, 8-32–8-33 filtered projection convolution methods, 8-31–8-32 feature space function, 8-31 frequency space implementation, 8-32 inverse Fourier transform, 8-31 Band-limited functions, 2-45 sampling theorem, 2-43–2-44 truncated sampling reconstruction, 2-44 Band-pass Hilbert transformers, 7-68–7-71 Bandwidth theorem absolutely integrable functions, 2-17–2-18 finite energy functions, 2-18–2-19 Bedrosian's theorem, 7-69, 7-85 Bertrand Pk distributions, 13-10 Bessel functions Bessel's equations solution, 2-32–2-33 Chebyshev polynomial, 7-30–7-31 definition, 1-35 finite Hankel transforms, 11-2, 11-5 Fourier Bessel series, 1-39 Fourier cosine transform (FCT), 3-9 Fourier sine transform (FST), 3-16 frequency domain differentiation, 7-30 Hankel transform elementary properties, 9-1–9-2 finite interval, 9-11–9-13 infinite interval, 9-13–9-15 integral order Bessel functions, 2-34 integral representation, 1-37 inverse Fourier transformation, 7-31 nonintegral order Bessel functions, 2-35 power series representation, 7-28, 7-31–7-32 properties, 1-41–1-43 recurrence relation, 1-36 second order Bessel differential equation, 7-28 waveforms, 7-29–7-30 zero-order Bessel function, 2-33–2-34
Bessel’s equality, 2-15 Bessel’s inequality Hartley series, 4-12 signal orthogonality, 1-20 Beta function, 12-34 Bidimensional empirical mode decomposition (BEMD) applications, 20-9 boundary effects, 20-8 extrema detection, 20-7–20-8 intermittency, 20-7 scattered data interpolation (SDI), 20-7–20-8 sifting process and stopping criterion, 20-7 texture image analysis, 20-6–20-7 types, 20-7 Bilateral Laplace transform, see Two-sided Laplace transform Binomial distribution, 2-56–2-57 Biorthogonal filter bank, 10-32–10-33 Boundary value problems, 11-10–11-11 Bounded input bounded output (BIBO) stability, 1-17 Broad-band modulation index, 12-29 Bromwich contour branch points and cuts analytic and single valued function, A-21 evaluation, A-22–A-25 singularity, A-22 finite number of poles, A-19–A-21 rational transforms, A-19 Butterworth IIR Hilbert transformers, 7-77
C Cauchy principal values, 2-7 Cauchy–Riemann equations, 7-4 Cauchy–Schwarz inequality, 1-20 Cauchy second integral theorem, 5-17 Cauchy’s residue theorem, 6-24 Central-slice theorem, 8-2, 8-6–8-7
IN-2 Chebyshev polynomials Bessel function, 7-30–7-31 circular harmonic decomposition, 8-35 Fourier cosine transform (FCT), 3-8 Fourier sine transform (FST), 3-15 functions and formulas, 8-43–8-44 integral order Bessel functions, 2-34 modified Clenshaw–Curtis quadrature method, 9-11–9-12 orthogonal functions, unit disk, 8-39 signal orthogonality, 1-34–1-35 Chirp signal analysis, 18-1–18-2; see also Discrete chirp-Fourier transform (DCFT) Choi–Williams exponential distribution (CWD), 13-29–13-30 Circular harmonic decomposition Chebyshev polynomial, 8-35 extension, higher dimensions, 8-35 Fourier series, 8-34 three dimensions, 8-35–8-36 Classical time–frequency representations (TFRs) Altes Q distribution, 13-8–13-9 Bertrand Pk distributions, 13-10 quadratic time–frequency representation band-limited cosine, 13-13 interference geometry, 13-11–13-12 multicomponent signal, 13-11 nonlinear operation, 13-10 short-time Fourier transform (STFT), 13-4–13-5 TFR warping, 13-9–13-10 Wigner distribution and Woodward ambiguity function Dirac function, 13-6 Fourier transform, 13-7–13-8 Gaussian signal, 13-6–13-7 quantum mechanics, 13-5 signal operations, 13-7 Complex variable functions analytic continuation, A-11 Bromwich contour branch points and cuts, A-21–A-25 finite number of poles, A-19–A-21 definite integral evaluation infinite integrals, sines and cosines, A-26–A-28 miscellaneous integrals, A-28–A-31 periodic functions (0 to 2p), A-25 1 to þ1, A-26 derivative of analytic function, A-5–A-6 integration Cauchy first integral theorem, A-4 Cauchy second integral theorem, A-5 contour integral, A-4 contour transformation, A-18–A-19 definition, A-2–A-3 evaluation theorem, A-16–A-18 logarithmic derivative, A-32–A-33 path, complex plane, A-3
Index Laurent’s theorem, A-6–A-7 power series convergence, A-10–A-11 Maclaurin series, A-9 negative power series, A-10 positive power series, A-9 Taylor series, A-9 principal value integral, A-31–A-32 residue theory, A-14–A-16 sequences and series, A-8–A-9 single=multiple valued functions, A-1–A-2 singularities at 1, A-14 branch points, A-12–A-13 definition, A-11–A-12 essential and nonessential singularity, A-11, A-13–A-14 isolated and nonisolated singularity, A-12 phase change, A-12–A-13 poles, A-12 removable singularity, A-12 Taylor’s theorem, A-6 Computational algorithms discrete sine and cosine transforms (DST and DCT) algorithms decimation-in-frequency algorithms, 3-30–3-31 decimation-in-time algorithms, 3-28–3-30 Fourier cosine and sine transform (FCT and FST) algorithms, 3-28 Conditional inverses, D-9–D-11 Cosine-and-sine (cas) function, 4-2–4-3 Cramer–Rao bounds, velocity estimation broad-band modulation index, 12-29 Doppler compression, 12-28 Fisher information matrix, 12-28–12-29 maximum-likelihood estimation, 12-28 Parseval’s formula, 12-28 variance, 12-30 zero mean white Gaussian noise, 12-29 Cross-power spectrum, 7-53 Cyclic convolution, 4-19–4-20
D Daubechies basis maximum flatness filter, 10-45–10-46 time domain, 10-46–10-47 Definite integrals, C-1–C-8 impulse response, A-31 infinite integrals, A-26 miscellaneous integrals, A-28–A-31 periodic functions (0 to 2p), A–25 1 to þ1, A-26 Delta functions periodic arrays, 2-26 regular arrays, 2-24–2-25 variables and derivatives, 2-27–2-28 Determinants, D-3–D-4
2-D DFT tensor algorithm algorithm steps, 19-16 modified tensor algorithms, 19-20–19-21 n-dimensional DFT, 19-25–19-26 N, power of two, 19-18 N, prime, 19-16 recursive tensor algorithm multiplication operation, 19-21–19-22 N ¼ L1L2, 19-23 N1 3 N2, 19-23–19-25 N, power of odd prime, 19-22–19-23 tensor image transform, 19-16 Differentiating Hilbert transformers cascade connection, 7-77 coefficients, 7-79 definition, 7-77 Fourier series, 7-78–7-79 impulse response, 7-78 transfer function, 7-77–7-78 Digital filters finite impulse response (FIR) filters, 6-28–6-29 infinite impulse response (IIR) filters, 6-27–6-28 Digital Hilbert transformers antisymmetric sequence, 7-71 frequency-independent group delay, 7-72 magnitude and phase function, 7-71–7-72 transfer function, 7-70–7-71 Digital phase splitters, 7-75 Dirac delta function, see Impulse delta function Dirac function, 13-6 Direct matrix factorization decimation-in-frequency algorithms, 3-30–3-31 decimation-in-time algorithms, 3-28–3-30 Discrete chirp-Fourier transform (DCFT) analog chirp-Fourier transform, 18-6 chirp rate estimation, 18-1–18-2 frequency matching, 18-1 magnitude, 18-2 multiple component chirp signals, 18-5–18-6 numerical simulations four chirp components, 18-9, 18-13–18-14 multiple chirp components, 18-10 three chirp components, 18-8, 18-12–18-13 two chirp components, 18-7, 18-11–18-12 single component chirp signal DFT, 18-2–18-3 inverse DCFT (IDCFT), 18-2 linear chirp signal, 18-4–18-5 sidelobe magnitude vs. estimation performance, 18-3–18-4 2-D Discrete cosine transform (DCT) fast 1-D cosine transforms, 19-38 modified DCT, 19-39 N=2-point DCT, 19-36–19-37 tensor representation, 19-35
Index Discrete Fourier transforms (DFT), 2-41, 6-26–6-27 basic properties, 17-6 circular convolution and correlation, 17-7–17-8 circular time and frequency shift, 17-8 complex conjugate, 17-8 definition, 17-5 functions, 17-9–17-12 imaginary sequences, 17-7 linear transformation, 17-5–17-6 Parseval’s theorem, 17-9 product, 17-8 properties, 17-9 real-valued sequences, 17-7 SAR and ISAR imaging, 18-1 spectrum estimation, 18-1 symmetry properties, 17-6–17-7 time reversal, 17-8 Discrete Hartley transforms (DHT), 4-17 2-D DHT representation 3-D image-signals, 19-28–19-29 index mapping, 19-27 spectral information, frequency-points, 19-28 splitting-signals, 19-26 tensor algorithm, 19-26–19-27 3-D DHT tensor representation, 19-29–19-31 n-dimensional DHT, 19-31–19-32 2-D Discrete hexagonal Fourier transform (DHFT) calculation, frequency-points, 19-57–19-59 complexity, 19-55–19-56 definition, 19-56–19-57 paired representation, 19-59–19-62 8 3 4-point DHFT, 19-59 rectangular and hexagonal lattice, 19-56 regular hexagonal tessellation, 19-55 Discrete Hilbert transformation (DHT), 6-26 bilinear transformation, 7-59–7-60 circular convolution, 7-56 complex analytic discrete sequence, 7-58 discrete one-sided convolution, 7-55 exponential kernels, 7-54 impulse response, 7-55–7-56 inverse transformation, 7-54 linearity, 7-57–7-58 Parseval’s theorem, 7-56–7-57 shifting property, 7-57 transfer function, 7-55–7-56 Discrete, linear, and time-invariant (DLTI) system, 7-43–7-44 Discrete Mellin transform dilatocylce function Z(n) with ratio Q, 12-25 n and b sample connection, 12-26–12-27 number of samples, 12-27 periodize MS(b), period 1=ln q, 12-25–12-26 Discrete periodic Radon transform, 8-42–8-43
Discrete-time Fourier transforms (DTFT) approximated continuous-time Fourier transforms, 17-5 definition, 17-1–17-2 discrete-time signals, 17-4 finite sequences, 17-4 LTI discrete system frequency response, 17-5 properties, 17-2–17-3 smearing effect, 17-4 Discrete wavelet transform timescale space lattices, 10-14–10-15 wavelet frame, 10-15–10-16 Distortion power, 7-53 Doppler compression, 12-28
E Electrical power, Hilbert transforms complex power notion, 7-51 instantaneous power, 7-50 power notion generalization Budeanu’s vs. Fryze’s definitions, 7-52 finite average power, 7-52–7-54 in-phase component, 7-51–7-52 nonsinusoidal periodic waveform, 7-51 quadrature component, 7-52 reactive power, 7-51–7-52 quadrature instantaneous power, 7-50 voltage and current harmonic waveforms, 7-50–7-51 Empirical mode decomposition (EMD) drawbacks, 20-4–20-5 end effects, 20-6 extrema envelope, 20-1 vs. Fourier and wavelet transform, 20-4 Hilbert transform, 20-5–20-6 intermittency, 20-2–20-3 pictorial depiction, 20-2 recent developments, 20-6 sifting process, 20-2–20-3 stopping criteria, 20-2, 20-3 without extrema envelopes, 20-8 Euler–Cauchy differential equation, 12-13 Euler’s equation, 7-88 Euler’s transformation, 9-13
F Fast Hartley transform (FHT) applications, 4-18 cyclic convolution, 4-19–4-20 DHTand IDHT, 4-17 FFT algorithm, 4-17 frequency domain, 4-19–4-20 nontrivial computation, 4-21 periodic load current, 4-20 program, 4-28–4-31 quasiperiodic transient inputs, 4-19 time-domain convolution, 4-18–4-19 transform domain, 4-19 zero padding, 4-20
Fast lapped transform (FLT), 15-22–15-23, 15-24 Filter bank biorthogonal filter bank, 10-32–10-33 FIR filter bank, 10-27–10-28 orthonormal filter bank, 10-30–10-31 perfect reconstruction aliasing cancellation, 10-29 down-sampling, 10-28 modulation matrix, 10-29–10-30 up-sampling, 10-29 time domain, orthonormal filters, 10-31–10-32 Filtered backprojection algorithm, 8-31 Final value theorem, 5-9 Finite energy functions bandwidth theorem, 2-18–2-19 square integrable function, 2-18 Finite Hankel transforms, 9-9–9-10 definition, 11-1–11-2 long circular cylinder temperature distribution, 11-3, 11-5 unsteady viscous flow, 11-3–11-4 vibrations, circular membrane, 11-4 operational properties, 11-2–11-3 Finite impulse response (FIR) Hilbert transformers equiripple function, 7-74–7-75 impulse responses, 7-73 normalized dimensionless pass-band, 7-74 structure, 7-72 transfer function, 7-73–7-74 Z-transform, 7-73 Finite Sturm–Liouville transform, 11-11 Fisher information matrix, 12-28–12-29 Fourier–Bessel series, 9-9, 11-1 Fourier cosine transform (FCT) algebraic functions, 3-5–3-6, 3-32 Bessel functions, 3-9 complementary error function, 3-8 convolution property, 3-10 cosine integral function, 3-9 definitions, 3-1–3-2 differentiation-in-t, 3-9–3-10 differentiation-in-v, 3-10 exponential and logarithmic functions, 3-6–3-7, 3-32 exponential integral function, 3-9 orthogonal polynomials, 3-8 properties and operational rules, 3-2–3-5 real data sequence, 3-28 shift-in-t, shift-in-v and kernel product property, 3-10 sine integral function, 3-8 trigonometric functions, 3-7, 3-32 Fourier inverse transform, 2-2 Fourier sine transform (FST) algebraic functions, 3-13–3-14, 3-33 Bessel functions, 3-16 complementary error function, 3-15–3-16 cosine integral function, 3-16 definitions, 3-11
IN-4 exponential and logarithmic functions, 3-14, 3-33 exponential integral function, 3-16 orthogonal polynomials, 3-15 properties and operational rules, 3-11–3-13 real data sequence, 3-28 sine integral function, 3-16 trigonometric function, 3-14–3-15, 3-33 Fourier transforms circularly symmetric functions and Hankel transform, 2-38–2-39 definitions Cauchy principal values, 2-7 generalized transforms, 2-3–2-4 notation, and terminology, 2-2 residue theorem, 2-4 discrete Fourier transform, 2-41 functions, 2-65–2-66 absolutely integrable functions, 2-16–2-18 band-limited functions, 2-20 Bessel functions, 2-32–2-35 causal functions, 2-30 with finite duration, 2-20 finite energy functions, 2-18–2-19 on finite intervals, 2-32 finite power functions, 2-20–2-22 on half-line, 2-31–2-32 negative powers and step functions, 2-28 periodic arrays, delta functions, 2-26 periodic functions, 2-22–2-24 rational functions, 2-29 real=imaginary valued even=odd functions, 2-16 regular arrays, delta functions, 2-24–2-25 square integrable functions, 2-18 variables and derivatives, 2-27–2-28 fundamental Fourier identities, 2-65 general identities and relations bandwidth theorem, 2-15 Bessel’s equality, 2-15 conjugation, 2-8–2-9 correlation, 2-11 differentiation and multiplication, 2-11–2-12 integration, 2-13–2-14 invertibility, 2-8 linearity, 2-9 modulation, 2-10 moments, 2-13 near-equivalence, 2-8 Parseval’s equality, 2-14–2-15 products and convolution, 2-10 scaling, 2-9 translation and multiplication, 2-9–2-10 graphical representations, 2-67–2-75 half-line sine and cosine transforms, 2-40 Hankel transform, 9-2–9-3 Hartley transform, 4-5 Laplace transform, 2-42–2-43
Index linear systems casual systems, 2-52 complex exponentials and periodic functions, 2-51 correlation, 2-60 differential equations, 2-52 linear shift invariant systems, 2-49 modulation and demodulation, 2-54–2-55 random signals, 2-60–2-61 reality and stability, 2-50 RLC circuits, 2-54 Mellin transform, 12-3 multidimensional Fourier transforms, 2-35–2-36 one-dimensional spectral representations, 13-2–13-3 partial differential equations half-infinite rod, 2-64 infinite rod, 2-63–2-64 one-dimensional heat equation, 2-62 Radon and Abel transforms, 8-7–8-8 random variables correlation, 2-60 multiple random process and independence, 2-57 probability and statistics, 2-55–2-56 random signals and stationary random signals, 2-59 sums of random processes, 2-58–2-59 sampled signal reconstruction band-limited functions, 2-43–2-45 finite duration functions, 2-47 fundamental sampling formulas and Poisson’s formula, 2-47–2-48 separable functions, multidimensional transform, 2-36–2-37 signal analysis properties, 13-19–13-20 Wigner distribution, 13-7–13-8 Z-transform, 6-36 Fourier transform tensor representation 2-D directional images 2D IDTF, image-signal processing, 19-13–19-14 frequency point location, 19-11–19-12 image reconstruction, 19-15–19-16 point disposition, 19-12–19-13 superposition, 19-15 splitting-signal, 19-10–19-11 Fractional Fourier transform (FRT) applications communications, 14-21 optics and wave propagation, 14-21–14-22 quantum mechanics, 14-22 signal and image processing, 14-21 ath order, 14-1 definition differential equation, 14-3–14-4 eigenvalues and eigenfunctions, 14-2–14-3 harmonic oscillation, 14-3
linear integral transforms, 14-2 square-integrable function, 14-3 digital computation, 14-19–14-20 discrete transform, 14-18–14-19 domains frequency domain, 14-4 phase space, 14-4–14-5 space or time domain, 14-4 time-frequency domain, 14-4 Wigner distribution, 14-5–14-6 dual operators chirp multiplication and chirp convolution operator, 14-9 differentiation and multiplication operator, 14-8 discretization and periodization operator, 14-9 phase shift operator and translation operator, 14-8 scaling operator, 14-9 filtering convolution and multiplication operations, 14-17 cost-accuracy trade-off, 14-16 filter function, 14-14 generalized filtering configurations, 14-17 multistage and multichannel configuration, 14-15–14-16 optimal filter, 14-15 signal recovery, 14-15 fractional Fourier domain decomposition (FFDD), 14-17–14-18 functions, 14-6 linear canonical transforms (LCT) basic properties, 14-13 composite transform, 14-11 decompositions, 14-13 noncommutative group sets, 14-11–14-12 operational properties, 14-14 Wigner distribution, 14-12–14-13 magnitude, 14-2 operational properties, 14-7–14-8 properties, 14-6–14-7 singular-value decomposition (SVD), 14-17–14-18 -time-order and space-order representations polar time-order representation, 14-9–14-10 rectangular time-order representation, 14-9 Wigner distribution and ambiguity function, 14-10–14-11 zeroth-order, 14-1 Frequency matching, 18-1
G Gamma function, 12-4, 12-33–12-34 Gaussian distribution, 3-6 Gauss–Jacobi rules, 9-11, 9-14
Index Gauss quadrature formulas, 9-11 Gauss’ theorem, E-9 Gegenbauer polynomials, 1-34 Gegenbauer transform, 8-35 application, 11-16 definition, 11-14 operational properties, 11-14–11-15 Generalized lapped orthogonal transform (GenLOT), 15-19 definition, 15-17–15-18 degrees of freedom, 15-18 implementation, 15-18 inverse and forward transform, 15-20–15-21 nonlinear unconstrained optimization, 15-18, 15-20 Geometric Dirac comb, 12-21–12-22 Gram–Schmidt orthonormalization process, 1-21 Green’s theorem, E-9
H Hankel transform, 8-29–8-30 applications electrified disc, 9-6–9-7 electrostatic problem, 9-8–9-9 heat conduction, 9-8 Laplace equation, 9-8 Bessel functions elementary properties, 9-1–9-2 finite interval, 9-11–9-13 infinite interval, 9-13–9-15 definition, 9-2 finite Hankel transform, 9-9–9-10 Fourier transform, 9-2–9-3 Fourier transforms, 2-38–2-39 numerical integration methods, 9-10–9-11 properties, 9-3–9-4 Radon and Abel transforms, 8-29–8-30 Weber’s integral theorem, 9-10 Hartley oscillator, 4-1 Hartley transform bus voltages, 4-24–4-25 cas function, 4-2–4-3 vs. classical complex-valued fast Fourier transform (FFT), 4-1 complex and real Mellin transforms, 4-7 definition, 4-1 Dirichlet conditions, 4-5 distribution network model, 4-21–4-22 electrostatic coupling, 4-22 elementary properties autocorrelation, 4-9 convolution, 4-8–4-9 function shift=delay and reversal, 4-8 linearity, 4-7 modulation, 4-8 nth derivative of a function, 4-9 power spectrum and phase, 4-8 product, 4-9 scaling=similarity, 4-8
energy signals, 4-26 engineering signals, 4-26–4-28 even and odd function, 4-4 expression, 4-2 fast Hartley transform (FHT) applications, 4-18 cyclic convolution, 4-19–4-20 DHTand IDHT, 4-17 FFT algorithm, 4-17 frequency domain, 4-19–4-20 nontrivial computation, 4-21 periodic load current, 4-20 program, 4-28–4-31 quasiperiodic transient inputs, 4-19 time-domain convolution, 4-18–4-19 transform domain, 4-19 zero padding, 4-20 Fourier magnitude, 4-22–4-23 Fourier transforms, 4-5 greyqui hoy, 4-2 Hermitian symmetry, 4-5 Hilbert transforms, 4-6 impedance frequency components, 4-22 inverse Hartley transform, 4-2–4-3 Laplace transforms, 4-6 multiple dimensions, 4-10 nonsinusoidal waveform propagation, 4-21 power signals, 4-26 predictor–corrector method, 4-23 real Fourier transform (RFT), 4-6–4-7 scaling coefficient, 4-3 self-inverse property, 4-3 sine and cosine transforms, 4-5 systems analysis, Hartley series Bessel’s inequality, 4-12 electric power quality assessment, 4-15–4-17 finality of coefficients, 4-10, 4-12 impulse function, 4-10 linear system response problem, 4-14 orthogonal basis function, 4-13 orthonormal set, 4-12–4-13 Parseval’s equality, 4-12 Riemann–Lebesgue lemma, 4-11–4-12 transfer function methodology, 4-15 truncation approximation, 4-11 transfer impedance, 4-23–4-24 transient=aperiodic excitations, 4-24–4-25 trigonometric properties and functions, 4-3 Heaviside expansion theorem, 5-12 Hermite polynomials, 8-17–8-18 definition, 1-28–1-29 Fourier cosine transform (FCT), 3-8 Fourier sine transform (FST), 3-15 functions and formulas, 8-44–8-45 integral representation and equation, 1-29 orthogonality relation, 1-29–1-30 properties, 1-31 recurrence relation, 1-29
Hermite transforms
  basic operational properties, 11-23
    even functions, 11-27
    generalized convolution, 11-25
    Laguerre transformation, 11-26
    odd functions, 11-26
    recurrence relations, 11-24–11-25
  definition, 11-21–11-22
Hermitian symmetry, 4-5
Hierarchical lapped transform (HLT)
  filter banks and discrete wavelets, 15-10–15-11
  time–frequency diagram, 15-11
  tree-structured transform
    basis functions, 15-13
    M-band LT, tree nodes and branches, 15-12
    tree and TF diagram, 15-12
  variable-length LT
    cascading effect, 15-13–15-14
    factorization, 15-14
High-pass filters, 10-22
Hilbert–Huang transform method (HHT)
  bidimensional empirical mode decomposition (BEMD)
    applications, 20-9
    boundary effects, 20-8
    extrema detection, 20-7–20-8
    intermittency, 20-7
    scattered data interpolation (SDI), 20-7–20-8
    sifting process and stopping criterion, 20-7
    texture image analysis, 20-6–20-7
    types, 20-7
  empirical mode decomposition (EMD)
    drawbacks, 20-4–20-5
    end effects, 20-6
    extrema envelope, 20-1
    vs. Fourier and wavelet transform, 20-4
    Hilbert transform, 20-5–20-6
    intermittency, 20-2–20-3
    pictorial depiction, 20-2
    recent developments, 20-6
    sifting process, 20-2–20-3
    stopping criteria, 20-2, 20-3
    without extrema envelopes, 20-8
  global health monitoring technique, 20-8–20-9
  recommendations, 20-9–20-10
Hilbert space, 12-18
Hilbert transformers
  analog all-pass filters, 7-62–7-63
  band-pass Hilbert transformers, 7-68–7-71
  delay, phase distortions, and equalization, 7-66–7-67
  design methods, 7-72
  differentiating Hilbert transformers, 7-77–7-79
  digital Hilbert transformers, 7-70–7-72
  digital phase splitters, 7-75
  finite impulse response (FIR), 7-72–7-75
  infinite impulse response (IIR), 7-75–7-77
  phase-splitter Hilbert transformers, 7-61–7-62
  SSB filtering, 7-70
  tapped delay-line filters, 7-67
Hilbert transforms
  analytic functions
    Cauchy–Riemann equations, 7-4
    complex function, 7-3–7-4
    Euler's formula, 7-3, 7-5
  analytic signal
    instantaneous amplitude, complex phase, and complex frequency, 7-32–7-34
    integration, 7-23, 7-25
    multiplication, 7-28
    wavelets, 7-99
  autoconvolution and energy equality, 7-18–7-19
  Bessel functions
    Chebyshev polynomial, 7-30–7-31
    Fourier series, 7-29
    frequency domain differentiation, 7-30
    integral form, 7-28
    inverse Fourier transformation, 7-31
    power series representation, 7-28, 7-31–7-32
    second order Bessel differential equation, 7-28
    waveforms, 7-29–7-30
  Clifford analytic signal, 7-98–7-99
  definitions, 7-2–7-3
  differentiation, 7-20
  discrete Hilbert transformation (DHT)
    bilinear transformation, 7-59–7-60
    circular convolution, 7-56
    complex analytic discrete sequence, 7-58
    discrete one-sided convolution, 7-55
    exponential kernels, 7-54
    impulse response, 7-55–7-56
    inverse transformation, 7-54
    linearity, 7-57–7-58
    Parseval's theorem, 7-56–7-57
    shifting property, 7-57
    transfer function, 7-55–7-56
  distribution, 7-9–7-10
  electrical power
    complex power notion, 7-51
    instantaneous power, 7-50
    power notion generalization, 7-51–7-54
    quadrature instantaneous power, 7-50
    voltage and current harmonic waveforms, 7-50–7-51
  Fourier transforms, 2-30
  Hartley transform, 4-6
  Hermite polynomials and functions
    Fourier image, 7-23
    Gaussian Fourier pair, 7-21
    orthogonal function, 7-22–7-23
    recursion formula, 7-22
    waveforms, 7-22
    weighting function, 7-23
  Hilbert–Huang transform, 20-5–20-6
  Hilbert pairs, 7-14–7-17
  iteration, 7-14
  linear systems, Kramers–Kronig relations
    amplitude-phase relations, DLTI systems, 7-43–7-44
    causality, 7-42
    linear macroscopic continuous media, 7-44–7-45
    linear, time-invariant (LTI) systems, 7-41–7-42
    minimum phase property, 7-42–7-44
    physical realizability, transfer functions, 7-42
    signal delay, Hilbertian sense, 7-45
  modulation theory
    compatible single side-band modulation, 7-38–7-39, 7-41
    generalized single side-band modulation, 7-37–7-38
    harmonic carrier, modulation function, 7-35–7-37
  monogenic 2-D signal
    quaternionic Fourier transformation (QFT), 7-97
    quaternion-valued function, 7-96
    Riesz transform, 7-96–7-97
    spherical coordinates representation, 7-97–7-98
  multidimensional complex signals
    conjugate 2-D complex signals, 7-90
    definition, 7-89
    2-D modulation theory, 7-92–7-93
    Euler's equation, 7-88
    labeling orthants, 7-94
    local amplitudes, phases, and complex frequencies, 7-91
    real and complex notation, 7-91–7-92
  multidimensional Hilbert transformations
    2-D Hilbert transformation, 7-81, 7-84
    evenness and oddness, 7-79–7-80
    partial Hilbert transformation, 7-81
    separable functions, 7-82
    spectral description, 7-81–7-82
    Stark's extension, Bedrosian's theorem, 7-85
  one-sided spectrum
    even and odd term, 7-5
    Gaussian pulse and Fourier image, 7-8–7-9
    Hartley transforms, 7-6–7-7
    linear two-port network, 7-5
    mean value, 7-8
    periodic cosine signal, 7-7
    two-sided symmetric unipolar square pulse, 7-7–7-8
  periodic signals
    cotangent Hilbert transformations, 7-12–7-13
    generating function, 7-10
    periodic function, Fourier series, 7-10–7-11
    time domain, 7-11–7-12
  properties, 7-17–7-18
  quaternionic 2-D signals
    Hermitian symmetry, 2-D Fourier spectrum, 7-95–7-96
    quaternionic spectral analysis, 7-95
    quaternion numbers and quaternion-valued functions, 7-94–7-95
  sampling theory
    band-pass signals, 7-48–7-49
    delta sampling sequence, 7-46
    interpolatory expansion, 7-48
    low-pass band-limited spectrum, 7-47
    low-pass sampled signal, 7-48–7-49
    noncausal impulse response, 7-47
    periodic sequence, 7-46
    transfer function, 7-47–7-48
  signal multiplication, nonoverlapping spectra, 7-27
  Wigner distribution, 7-98, 13-5
I
Impulse delta function, 1-4
Infinite impulse response (IIR) Hilbert transformers, 7-75–7-77
Initial value theorem, 5-9
Integral order Bessel functions, 2-34
Integration
  Cauchy first integral theorem, A-4
  Cauchy second integral theorem, A-5
  contour integral, A-4
  contour transformation, A-18–A-19
  definition, A-2–A-3
  evaluation theorem, A-16–A-18
  path, A-3
Interpolatory functions, 7-48
Inverse discrete Hartley transform (IDHT), 4-17
Inverse finite Hankel transform, 11-2
Inverse Fourier transform, 8-31
Inverse Hartley transform, 4-2–4-3
Inverse Hermite transform, 11-22
Inverse Jacobi transform, 11-12, 11-14
Inverse Laguerre transform, 11-16
Inverse Laplace transform
  definition, 5-10
  Heaviside expansion theorem, 5-12
  partial fraction expansion, 5-10–5-11
  proper fraction, 5-11
Inverse Legendre transform, 11-6
Inverse Z-transform
  one-sided Z-transform
    integration, 6-9
    irrational function, 6-10
    partial fraction expansion, 6-8
    power series method, 6-7
    simple and multiple poles, 6-9
  two-sided Z-transform
    integral inversion formula, 6-24
    partial fraction expansion, 6-22
    power series expansion, 6-21
Isotropic Hilbert transform, see Riesz transforms
J
Jacobi transforms
  definition, 11-12
  generalized heat conduction problem, 11-13–11-14
  operational properties, 11-13
K
Karhunen–Loeve transform (KLT)
  auto-covariance matrix
    data vector, 3-19
    signal vector, 3-20
  diagonalization, 3-19–3-20
  discrete cosine transform (DCT), 3-20
  discrete sine transform (DST), 3-20–3-21
  Markov-1 signal, 3-20
  nonsingular symmetric Toeplitz matrix, 3-19
  residual correlation, 3-21
  signal decorrelation, 3-1
  variance distribution, 3-20–3-21
Kelvin functions, 9-15
L
Laguerre polynomials
  Fourier cosine transform (FCT), 3-8
  Fourier sine transform (FST), 3-15
  generating function, 1-30
  Leibniz formula, 1-31
  orthogonality, 1-32–1-33
  properties, 1-33–1-34
  Radon and Abel transforms, 8-18–8-19
  recurrence relations, 1-31–1-32
  Rodrigues formula, 1-31
Laguerre transforms
  definition, 11-16
  diffusion equation, 11-21
  heat conduction problem, 11-20–11-21
  operational properties, 11-18–11-20
Laplace equation, 9-8
Laplace transforms
  Dirichlet conditions, 5-1
  Fourier cosine transform (FCT)
    exponential and logarithmic functions, 3-6
    trigonometric functions, 3-7
  Fourier sine transform (FST)
    exponential and logarithmic functions, 3-14
    trigonometric functions, 3-15
  Hartley transform, 4-6
  inverse Laplace transform
    definition, 5-10
    Heaviside expansion theorem, 5-12
    partial fraction expansion, 5-10–5-11
    proper fraction, 5-11
  inversion integral
    Cauchy second integral theorem, 5-17
    region of convergence, 5-17
    transformable function, 5-20
  Laplace transform pairs, 5-29–5-42
  Mellin transform, 12-3
  ordinary linear equations, constant coefficients, 5-13
  partial differential equations, 5-20
  path of integration, 5-2
  piecewise continuous function, 5-1
  properties, 5-42–5-43
    complex integration, 5-5
    complex translation, 5-6–5-7
    convolution, 5-7
    differentiation, 5-3–5-4
    final value theorem, 5-9
    frequency convolution, s-plane, 5-8
    initial value theorem, 5-9
    integration, 5-4–5-5
    linearity, 5-3
    multiplication, 5-5
    time delay, real translation, 5-6
  two-sided Laplace transform, 5-27–5-28
  Z-transform, 6-34–6-36
Lapped transforms (LT)
  block transforms, 15-2, 15-4
  vs. DCT, 15-4–15-5, 15-6
  DCT-based image compression, 15-1
  discrete MIMO linear systems, 15-3–15-4
  discrete transform factorization, 15-2–15-3
  extended lapped transforms (ELT), 15-2
  factorization, 15-9–15-20
  fast lapped transform (FLT), 15-22–15-23, 15-24
  finite-length signals
    distorted sample recovery, 15-26–15-27
    overall transform, 15-25–15-26
    symmetric extensions, 15-27–15-28
  general factorization (GLBT), 15-21–15-22
  generalized LOT (GenLOT), 15-19
    definition, 15-17–15-18
    degrees of freedom, 15-18
    implementation, 15-18
    inverse and forward transform, 15-20–15-21
    nonlinear unconstrained optimization, 15-18, 15-20
  hierarchical lapped transform (HLT)
    filter banks and discrete wavelets, 15-10–15-11
    time–frequency diagram, 15-11
    tree-structured transform, 15-12–15-13
    variable-length LT, 15-13–15-14
  image and signal compression, 15-4–15-5, 15-6
  lapped biorthogonal transform (LBT)
    factorization, 15-16
    frequency basis, 15-15–15-16
    vs. LOT, 15-16–15-17
  lapped orthogonal transform (LOT), 15-2, 15-14–15-15
  modulated LT/extended lapped transform (ELT)
    cosine sequence phase, 15-23
    lattice-style algorithm, 15-24
    plane rotation stages, 15-24–15-25
  multi-input multi-output (MIMO) system, 15-8–15-9
  nonorthogonal lapped transforms, 15-8
  orthogonal lapped transforms
    matrix, 15-5–15-6
    notation, 15-7–15-8
    PR property, 15-6
    signal sampling, 15-6–15-7
    subband signal variance, 15-7
    vectors, 15-6–15-7
  symmetric basis, 15-14
  symmetric delay factorization (SDF), 15-20
Laurent's theorem, A-6–A-7
Legendre polynomials
  associated Legendre polynomials, 1-24
  complete orthonormal system, 1-22–1-23
  definition, 1-21
  Fourier cosine transform (FCT), 3-8
  Fourier sine transform (FST), 3-15
  properties, 1-26–1-27
  Rodrigues and recursive formula, 1-22
  Schläfli's integral formula, 1-22
Legendre transforms
  boundary value problems, 11-10–11-11
  definition, 11-5–11-6
  operational properties, 11-7–11-10
Linear canonical transforms (LCT)
  basic properties, 14-13
  composite transform, 14-11
  decompositions, 14-13
  noncommutative group sets, 14-11–14-12
  operational properties, 14-14
  Wigner distribution, 14-12–14-13
Linear discrete system analysis
  causality, 6-26
  discrete Fourier transform (DFT), 6-26–6-27
  frequency characteristics, 6-26
  Paley–Wiener theorem, 6-26
  stability, 6-25–6-26
  transfer function, 6-25
Linear discrete-time filters, 6-32–6-33
Linear-phase filters, 10-28
Linear systems
  causal systems, 2-52
  complex exponentials and periodic functions, 2-51
  correlation, 2-60
  differential equations, 2-52
  Kramers–Kronig relations
    amplitude-phase relations, DLTI systems, 7-43–7-44
    causality, 7-42
    linear macroscopic continuous media, 7-44–7-45
    linear, time-invariant (LTI) systems, 7-41–7-42
    minimum phase property, 7-42–7-44
    physical realizability, transfer functions, 7-42
    signal delay, Hilbertian sense, 7-45
  linear shift invariant systems, 2-49
  modulation and demodulation, 2-54–2-55
  random signals, 2-60–2-61
  reality and stability, 2-50
  RLC circuits, 2-54
Linz's method, 9-14
Low-pass filters, 2-49–2-50, 10-22
M
Matrices and determinants
  addition, subtraction and multiplication, D-2
  characteristic roots and vectors, D-8–D-9
  conditional inverses, D-9–D-11
  definition, D-1–D-2
  determinants, D-3–D-5
  inversion, D-5–D-7
  matrix differentiation, D-11–D-13
  recognition rules and special forms, D-2–D-3
  singularity and rank, D-5
  statistical matrix forms, D-13–D-14
  traces, D-7–D-8
Matrix differentiation, D-11–D-13
Matrix inversion, D-5–D-7
McCully's theorem, 11-20
Mellin transforms, 4-7
  beta function, 12-34
  Cauchy's theorem, 12-1, 12-3
  definition, 12-2
  discretization and fast computation
    arithmetic sampling, Mellin space, 12-24
    discrete Mellin transform, 12-25–12-27
    geometric sampling, original space, 12-24–12-25
    interpolation formula, v-space, 12-27
  distribution transformation, 12-5–12-6, 12-21
  gamma function, 12-33–12-34
  hyperbolic class, 13-34
  inversion formula, 12-3–12-5
  Laplace and Fourier transformations, 12-3
  multiplicative convolution, 12-8–12-9
  practical inversion
    inversion integral, 12-9
    polar coordinates, 12-10–12-11
    Slater's theorem, 12-10
  products and convolutions transformation, 12-22–12-23
  properties, 12-7–12-8, 12-35–12-36
  psi function, 12-34
  Riemann's zeta function, 12-34
  signal analysis
    affine time–frequency distributions, 12-32–12-33
    Cramer–Rao bounds, velocity estimation, 12-28–12-30
    dual Mellin variable interpretation, 12-30–12-31
    numerical computation, 12-27
    wavelet transform, 12-31–12-32
  standard applications
    asymptotic expansion, integrals, 12-15
    convolution equations, 12-12–12-14
    harmonic sums, asymptotic behavior, 12-16–12-17
    integral computation, 12-12
    potential problem, wedge, 12-14–12-15
    summation of series, 12-11–12-12
    transformation construction, 12-17–12-20
  uncertainty relations, 12-20–12-21
Mexican-hat wavelet, 10-13
Minimum phase transfer function, 7-42
Minkowski inequality, 1-20
Mixed time–frequency signal transformations
  affine class, 13-31
    alternative formulation, 13-30, 13-32–13-33
    expression, 13-30
    kernels, 13-32
  affine-Cohen subclass, 13-33
  classical time–frequency representations (TFRs)
    Altes Q distribution, 13-8–13-9
    Bertrand Pk distributions, 13-10
    quadratic time–frequency representations, 13-10–13-13
    short-time Fourier transform (STFT), 13-4–13-5
    TFR warping, 13-9–13-10
    Wigner distribution and Woodward ambiguity function, 13-5–13-8
  Cohen's class, 13-27
    alternative formulations, 13-23, 13-29
    expression, 13-23
    implementation consideration, 13-29
    kernel constraints, 13-26, 13-29–13-30
    shift covariant class, 13-28
  covariance properties
    convolution covariance, 13-16
    frequency-shift covariance, 13-14
    hyperbolic time-shift covariance, 13-15–13-16
    modulation covariance, 13-16
    scale covariance, 13-15
    time-shift covariance, 13-15
  hyperbolic class, 13-35
    affine-hyperbolic subclass, 13-37
    expression, 13-33
    kernels, 13-36–13-37
    Mellin transformation, 13-34
    quadratic signal product, 13-33
  inner products, 13-22–13-23
  kth power class, 13-38–13-39
    central member, 13-38
    expression, 13-37
    inverse phase function, 13-38
    kernels, 13-39–13-40
    power AF, 13-38
    signal product, 13-37
    signal warping, 13-38
  musical score, 13-1
  one-dimensional spectral representations
    Fourier transform, 13-2–13-3
    instantaneous frequency and group delay, 13-3–13-4
  signal analysis properties
    finite frequency support, 13-19
    finite time support, 13-18
    Fourier transform, 13-19–13-20
    group delay, 13-19
    instantaneous frequency, 13-19
  signal localization
    chirp convolution, 13-21
    chirp multiplication, 13-21–13-22
    frequency localization, 13-20
    hyperbolic chirp localization, 13-21
    linear chirp localization, 13-20–13-21
    time localization, 13-20
  statistical energy density distribution properties
    energy preservation, 13-17–13-18
    frequency marginal preservation, 13-17
    frequency moment preservation, 13-18
    positivity, 13-17
    real, 13-16–13-17
    time marginal preservation, 13-17
    time moment preservation, 13-18
Modified Clenshaw–Curtis (MCC) quadrature method, 9-11–9-13
Modified Schur–Cohn criterion, 6-25
Modulated lapped transform (MLT)/extended lapped transform (ELT)
  cosine sequence phase, 15-23
  lattice-style algorithm, 15-24
  plane rotation stages, 15-24–15-25
Monogenic 2-D signal
  isotropic Hilbert transform, 7-97
  quaternionic Fourier transformation (QFT), 7-97
  quaternion-valued function, 7-96
  Riesz transform, 7-96–7-97
  spherical coordinates representation, 7-97–7-98
Multidimensional complex signals
  conjugate 2-D complex signals, 7-90
  definition, 7-89
  2-D modulation theory, 7-92–7-93
  Euler's equation, 7-88
  labeling orthants, 7-94
  local amplitudes, phases, and complex frequencies, 7-91
  real and complex notation, 7-91–7-92
Multidimensional discrete unitary transforms
  butterfly operation, 19-2–19-4
  calculation, 19-1–19-2
  data processing, 19-1
  2-D DCT
    fast 1-D cosine transforms, 19-38
    modified DCT, 19-39–19-41
    N/2-point DCT, 19-36–19-37
    tensor representation, 19-35
  2-D DFT, hexagonal lattice
    calculation, frequency-points, 19-57–19-59
    complexity, 19-55–19-56
    definition, 19-56–19-57
    paired representation, 19-59–19-62
    8 × 4-point DHFT, 19-59
    rectangular lattice, 19-56
    regular hexagonal tessellation, 19-55
  2-D DFT tensor algorithm
    modified tensor algorithms, 19-20–19-21
    n-dimensional DFT, 19-25–19-26
    N, power of two, 19-18–19-20
    N, prime, 19-16–19-18
    recursive tensor algorithm, 19-21–19-25
  discrete Hartley transforms
    2-D DHT representation, 19-26–19-29
    3-D DHT tensor representation, 19-29–19-31
    n-dimensional DHT, 19-31–19-32
  3-D paired representation
    conventional and tensor representation, 19-41–19-42
    2D-to-3D paired transform, 19-43–19-44
    N, power of odd prime, 19-49–19-50
    N, power of two, 19-44–19-49
    set-frequency characteristics, 19-50–19-55
    signal processing, 19-42–19-43
    splitting-signals, 19-42, 19-43
  2-D shifted DFT (SDFT)
    components, 19-32
    L1L2 × L1L2-point SDFT, 19-35
    L^r × L^r-point SDFT, 19-34–19-35
    2^r × 2^r-point SDFT, 19-34
    tensor representation, 19-32–19-33
  Fourier transform tensor representation
    2-D directional images, 19-11–19-16
    splitting-signal, 19-10–19-11
  nontraditional representation
    image enhancement method, 19-6–19-7
    image processing diagram, 19-5–19-6
    spectral information processing, 19-5
  Nussbaumer algorithm, 19-4
  paired transform–based algorithms
    2-D DHT calculation, 19-62–19-63
    2-D discrete cosine transform, 19-63–19-64
    2-D discrete Hadamard transform, 19-64–19-67
  partitioning
    basis function, 19-7–19-8
    covering, cyclic groups, 19-8–19-9
    frequency-points, 19-8–19-9
    fundamental period, 19-7
    lattice torus, 19-9
    linear transformation, 19-7
    s-splitting, 19-8
    tensor representation, 19-8
    transformation orders, 19-7
  polynomial transforms, 19-4–19-5
  row–column algorithm, 19-2, 19-3
  vector–radix algorithm, 19-2–19-4
Multidimensional Fourier transforms, 2-35–2-36
Multidimensional Hilbert transformations
  2-D Hilbert transformations, 7-81, 7-84
  evenness and oddness, 7-79–7-80
  partial Hilbert transformation, 7-81
  separable functions, 7-82
  spectral description, 7-81–7-82
  Stark's extension, Bedrosian's theorem, 7-85
Multiresolution signal analysis
  Gaussian pyramid, 10-16
  Laplacian pyramid, 10-16–10-17
  orthonormal wavelet transform, 10-18–10-19
  power system signal, 10-50
  scale and resolution, 10-18
  subband coding, 10-17–10-18
Multiresolution wavelet analysis
  band-pass filters, 10-9
  constant fidelity analysis, 10-9–10-10
  multiresolution filter bank, 10-9
  scale and resolution, 10-10
  time and frequency domain localization, 10-8
N
Neumann series, 9-12
Nonintegral order Bessel functions, 2-35
Nonsingular symmetric Toeplitz matrix, 3-19
Normal distribution, 2-56
Nyquist interval, 1-49
Nyquist rate, 7-47
O
Oliver's algorithm, 9-13
One-sided Z-transform
  complex conjugate signal, 6-4–6-5
  convolution, 6-3
  correlation, 6-6
  discrete functions, 6-1
  final value, 6-4
  initial value, 6-3–6-4
  integration, 6-9
  irrational function, 6-10
  (nT)^k multiplication, 6-4
  linearity, 6-2
  n and nT multiplication, 6-3
  parameters, 6-7
  Parseval's theorem, 6-6
  partial fraction expansion, 6-8
  periodic sequence, 6-2
  power series method, 6-7
  product transform, 6-5
  shifting property, 6-2
  simple and multiple poles, 6-9
  time scaling, 6-2
Optimum linear filters, 6-33–6-34
Orthogonal Bessel functions, 8-39
Orthonormal filter bank, 10-30–10-31
Orthonormal wavelet transform
  biorthogonal wavelet basis, 10-26–10-27
  multiresolution signal analysis basis, 10-18–10-19
  orthonormal basis, 10-19–10-20
  orthonormal subspaces, 10-20–10-21
  reconstruction, 10-25–10-26
  wavelet series decomposition, 10-23–10-24
    Haar wavelets, 10-24–10-25
    low-pass and high-pass filters, 10-22
    orthonormal projections, 10-21–10-22
    recursive projections, 10-22–10-23
P
3-D Paired representation
  conventional and tensor representation, 19-41–19-42
  2D-to-3D paired transform, 19-43–19-44
  N, power of odd prime, 19-49–19-50
  N, power of two
    basis paired functions, 19-45
    concept, 19-48–19-49
    generator and orbits of points, 19-44
    set of triplets, 19-47
    transformation, 19-46
  set-frequency characteristics
    direction-image component, 19-50
    inverse 2-D DFT, 19-51
    resolution map, 19-53–19-55
    series images, 19-52–19-53
    splitting-signal, 19-51–19-52
  signal processing, 19-42–19-43
  splitting-signals, 19-42, 19-43
Paired transform–based algorithms
  2-D DHT calculation, 19-62–19-63
  2-D discrete cosine transform, 19-63–19-64
  2-D discrete Hadamard transform (DHdT)
    basis functions, 19-64
    1-D DHdT, 19-64, 19-65–19-66
    properties, 19-64–19-65
Paley–Wiener theorem, 6-26
Parks–McClellan algorithm, 7-74
Parseval's equality
  Bessel's inequality, 4-12
  Fourier transforms, 2-14–2-15
Parseval's theorem
  definition, 7-18
  discrete Hilbert transformation (DHT), 7-56–7-57
  one-sided Z-transform, 6-6
  two-sided Z-transform, 6-20
Partial differential equations, 5-20
  half-infinite rod, boundary value problem, 2-64
  infinite rod
    heat sources and sinks, 2-63–2-64
    initial value problem, 2-63
  one-dimensional heat equation, 2-62
Phase-splitter Hilbert transformers, 7-61–7-62
Piecewise continuous function, 5-1
Poincaré sense asymptotic sequence, 1-53
Poisson integral formula, 11-11
Power series
  convergence, A-10–A-11
  Maclaurin series, A-9
  negative power series, A-10
  positive power series, A-9
  Taylor series, A-9
Principal value integral, A-31–A-32
Psi function, 12-34
Q
Quadratic time–frequency representation
  band-limited cosine, 13-13
  interference geometry, 13-11–13-12
  multicomponent signal, 13-11
  nonlinear operation, 13-10
Quadrature filter, see Hilbert transformers
R
Radon and Abel transforms
  Abel transform pairs, 8-24
  circular harmonic decomposition
    Chebyshev polynomial, 8-35
    extension, higher dimensions, 8-35
    Fourier series, 8-34
    three dimensions, 8-35–8-36
  component and matrix notations, 8-3
  definitions
    central-slice theorem, 8-6–8-7
    feature, Radon, and Fourier space, 8-3
    three and higher dimensions, 8-6
    two dimensions, 8-3–8-5
  derivatives, 8-15–8-16
  discrete periodic Radon transform, 8-42–8-43
  elliptic integral, 8-46
  fractional integrals, 8-25
  functions and formulas
    Chebyshev polynomials, 8-43–8-44
    Hermite polynomials, 8-44–8-45
    selected integral formulas, 8-45
    Zernike polynomials, 8-45
  generalizations and wavelets, 8-41–8-42
  Hankel transform, 8-29–8-30
  integral equation, 8-2
  inversion
    backprojection, 8-31–8-33
    direct Fourier method, 8-33–8-34
    ill-posed problem, 8-30
    indeterminacy theorem, 8-30
    iterative and algebraic reconstruction techniques, 8-34
    three dimensions, 8-20
    two dimensions, 8-19–8-20
  Laguerre polynomials, 8-18–8-19
  linear transformations, 8-9–8-10
  orthogonal functions, unit disk
    Chebyshev polynomials, 8-39
    Fourier space, 8-38
    orthogonal Bessel functions, 8-39
    Radon space, evaluation, 8-38
    Zernike polynomials, 8-36–8-37
  Parseval relation, 8-39
  properties
    convolution, 8-9
    differentiation, 8-8–8-9
    Fourier transform, 8-7–8-8
    linearity, 8-8
    similarity, symmetry and shifting, 8-8
  reconstruction problem, 8-1–8-2
  rotationally symmetric function, 8-28
  sinc function, 8-46
  singular integral equations, 8-21–8-22
  spherical symmetry, 8-30
  tautochrone problem generalization, 8-27
Radon–Wigner transform, 8-42
Random signal, 1-1
Real Fourier transform (RFT), 4-6–4-7
Reduced interference distributions (RID), 13-29
Residue theory, A-14–A-16
Riemann–Lebesgue lemma, 4-11–4-12
Riemann's zeta function, 12-34
Riesz–Fischer theorem, 1-20
Riesz transform, 7-96–7-97
RLC circuits, 2-54
S
Sampled signal reconstruction
  band-limited functions, 2-45
    sampling theorem, 2-43–2-44
    truncated sampling reconstruction, 2-44
  finite duration functions, 2-47
  fundamental sampling formulas and Poisson's formula, 2-47–2-48
Sampling theorem
  aliasing, 1-49
  delta sampling representation, 1-50–1-51
  finite energy function, 1-49
  frequency sampling, 1-51
  Nyquist interval, 1-49
  Papoulis extensions, 1-52
  rectangular pulse train, 1-51
Schläfli's integral formula, 1-22
Schwarz's inequality
  finite power function, 2-21
  stationary random signal and independence correlation, 2-60
Series and summations
  binomial, B-1
  exponential, B-2
  hyperbolic and inverse hyperbolic, B-3–B-4
  logarithmic, B-2
  Maclaurin, B-2
  reversion, B-1
  series, B-1
  Taylor, B-1–B-2
  trigonometric, B-2–B-3
2-D Shifted DFT (SDFT)
  components, 19-32
  L1L2 × L1L2-point SDFT, 19-35
  L^r × L^r-point SDFT, 19-34–19-35
  2^r × 2^r-point SDFT, 19-34
  tensor representation, 19-32–19-33
Short-time Fourier transform (STFT)
  classical time–frequency representation, 13-4–13-5
  definition, 10-3–10-4
  discrete short-time Fourier transform, 10-5
  Gabor function, 10-4
  Gaussian window, 10-4–10-5
  inverse short-time Fourier transform, 10-4
  regular lattice, 10-5
  time and frequency resolution, 10-4
  uncertainty principle, 10-4
Signals and systems
  asymptotic series
    asymptotic approximation, 1-53
    asymptotic power series, 1-53–1-55
    definition, 1-52–1-53
    Poincaré sense asymptotic sequence, 1-53
  Bessel functions
    definition, 1-35
    Fourier Bessel series, 1-39
    integral representation, 1-37
    nonintegral order, 1-36
    properties, 1-41–1-43
    recurrence relation, 1-36
  beta function, 1-12–1-13
  Chebyshev polynomials, 1-34–1-35
  classification, 1-1
  complete orthonormal set, 1-21
  convolution
    definition, 1-13–1-14
    harmonic inputs, 1-18
    impulse response, 1-15
    nonanticipative convolution system, 1-15
    properties, 1-16–1-18
    stability, 1-17
  correlation, 1-19
  delta function
    definition, 1-4
    properties, 1-6–1-7, 1-9
  distance function, 1-20
  distributions
    definition, 1-5–1-6
    generalized limit, 1-7–1-8
    testing function, 1-4–1-5
  energy and power signals, 1-4
  functions (signals), variables, and point sets, 1-1–1-2
  gamma function
    definition, 1-10
    integral expressions, 1-11
    properties and specific evaluations, 1-11–1-12
  Hermite polynomials
    definition, 1-28–1-29
    integral representation and equation, 1-29
    orthogonality relation, 1-29–1-30
    properties, 1-31
    recurrence relation, 1-29
  inner product, 1-19
  Laguerre polynomials
    generating function, 1-30
    Leibniz formula, 1-31
    orthogonality, 1-32–1-33
    properties, 1-33–1-34
    recurrence relations, 1-31–1-32
    Rodrigues formula, 1-31
  Legendre polynomials
    associated Legendre polynomials, 1-24
    complete orthonormal system, 1-22–1-23
    definition, 1-21
    properties, 1-26–1-27
    Rodrigues and recursive formula, 1-22
    Schläfli's integral formula, 1-22
  limits and continuous functions, 1-2–1-3
  normalization, 1-20
  quadratically integrable functions, 1-19–1-20
  series approximation, 1-21
  signal sampling
    aliasing, 1-49
    delta sampling representation, 1-50–1-51
    finite energy function, 1-49
    Fourier transform, 1-48–1-49
    frequency sampling, 1-51
    Nyquist interval, 1-49
    Papoulis extensions, 1-52
    rectangular pulse train, 1-51
    values and interval, 1-47–1-48
  Zernike polynomials
    definition, 1-40
    piecewise continuous function, 1-45
    radial polynomials, 1-40, 1-45–1-46
    Zernike moments, 1-45, 1-47–1-48
Sine and cosine transforms
  cepstral analysis, speech processing, 3-23
  computational algorithms
    decimation-in-frequency algorithms, 3-30–3-31
    decimation-in-time algorithms, 3-28–3-30
    fast Fourier transform (FFT), 3-28
  data compression, 3-23–3-24
  differential equations
    one-dimensional boundary value problem, 3-21–3-22
    time-dependent one-dimensional boundary value problem, 3-22–3-23
    two-dimensional boundary value problem, 3-22
  discrete sine and cosine transforms (DST and DCT)
    decimation-in-frequency algorithms, 3-30–3-31
    decimation-in-time algorithms, 3-28–3-30
    definitions, 3-17
    Karhunen–Loeve transform (KLT), 3-19–3-21
    properties and operational rules, 3-17–3-19
  Fourier cosine transform (FCT)
    algebraic functions, 3-5–3-6, 3-32
    Bessel functions, 3-9
    complementary error function, 3-8
    convolution property, 3-10
    cosine integral function, 3-9
    definitions, 3-1–3-2
    differentiation-in-t, 3-9–3-10
    differentiation-in-v, 3-10
    exponential and logarithmic functions, 3-6–3-7, 3-32
    exponential integral function, 3-9
    orthogonal polynomials, 3-8
    properties and operational rules, 3-2–3-5
    real data sequence, 3-28
    shift-in-t, shift-in-v and kernel product property, 3-10
    sine integral function, 3-8
    trigonometric functions, 3-7, 3-32
  Fourier sine transform (FST)
    algebraic functions, 3-13–3-14, 3-33
    Bessel functions, 3-16
    complementary error function, 3-15–3-16
    cosine integral function, 3-16
    definitions, 3-11
    exponential and logarithmic functions, 3-14, 3-33
    exponential integral function, 3-16
    orthogonal polynomials, 3-15
    properties and operational rules, 3-11–3-13
    real data sequence, 3-28
    sine integral function, 3-16
    trigonometric function, 3-14–3-15, 3-33
  image compression
    discrete local sine transform (DLS), 3-26
    lapped orthogonal transform (LOT), 3-25
    original vs. reconstructed image, 3-27
    transform domain processing, 3-24
Single side-band (SSB) filtering, 7-70
Singularities
  at infinity, A-15
  branch points, A-12–A-13
  definition, A-11–A-12
  essential and nonessential singularity, A-11, A-13–A-14
  isolated and nonisolated singularity, A-12
  phase change, A-12–A-13
  poles, A-12
  removable singularity, A-12
Singular-value decomposition (SVD), 14-17–14-18
Slater's theorem, 12-10
Square matrix traces, D-7–D-8
Statistical matrix, D-13–D-14
Stokes' theorem, E-9
Struve functions, 9-15
Surface acoustic wave (SAW) filter, 7-70–7-71
Systems analysis, Hartley series
  Bessel's inequality, 4-12
  electric power quality assessment, 4-15–4-17
  finality of coefficients, 4-10, 4-12
  impulse function, 4-10
  linear system response problem, 4-14
  orthogonal basis function, 4-13
  orthonormal set, 4-12–4-13
  Parseval's equality, 4-12
  Riemann–Lebesgue lemma, 4-11–4-12
  transfer function methodology, 4-15
  truncation approximation, 4-11
T
Tapped delay-line filters, 7-67
Taylor's theorem, A-6
Tempered generalized functions, 2-4
Three-term recurrence formula, 1-36
Transversal filter structure, 6-29
Two-sided Laplace transform, 5-27–5-28
Two-sided Z-transform
  complex conjugate signal, 6-21
  convolution, 6-17
  correlation, 6-18
  discrete representation, 6-11
  e^(anT) multiplication, 6-18
  frequency translation, 6-18
  integral inversion formula, 6-24
  linearity, 6-16
  nT multiplication, 6-17
  Parseval's theorem, 6-20
  partial fraction expansion, 6-22
  power series expansion, 6-21
  product, 6-18–6-20
  region of convergence (ROC), 6-14–6-16
  shifting and scaling, 6-16
  time reversal, 6-16–6-17
U
Unitarity property, see Parseval's formula
V
Vector analysis
  cross product, E-2–E-3
  definition, E-1
  differential operators, rectangular coordinates, E-7–E-8
  differentiation, E-5–E-6
  formulas, E-9
  Gauss' theorem, E-9
  geometry
    curves in space, E-6–E-7
    sphere, E-5
    straight line and plane, E-3–E-4
  Green's theorem, E-9
  scalar, dot, and inner product, E-2
  scalar triple product, E-3
  Stokes' theorem, E-9
  transformation of integrals, E-8–E-9
  triple product, E-3
  vector algebra, E-1
W
Wavelet theory
  orthogonal filters, time domain, 10-37–10-38
  orthonormality, 10-34–10-35
  regularity
    construction of scaling function, 10-40
    convergence, wavelet reconstruction, 10-39–10-40
    quadrature mirror filter, 10-40–10-41
    smoothness measure, 10-39
    time domain, 10-41–10-42
  two scale relations, frequency domain
    cross-filter orthogonality, 10-36
    Fourier transforms, 10-35
    Haar's basis, 10-37
    paraunitary matrix, 10-36–10-37
  wavelet and subband filters, 10-38–10-39
Wavelet transform
  ambiguity function, 10-6
  B-spline basis, 10-42–10-43
  constant-Q analysis, 10-1
  continuous wavelet transform, 10-2–10-3
  Daubechies basis
    maximum flatness filter, 10-45–10-46
    time domain, 10-46–10-47
  discrete wavelet transform
    time-scale space lattices, 10-14–10-15
    wavelet frame, 10-15–10-16
  fast wavelet transform, 10-47–10-49
  filter bank
    biorthogonal filter bank, 10-32–10-33
    FIR filter bank, 10-27–10-28
    orthonormal filter bank, 10-30–10-31
    perfect reconstruction, 10-28–10-30
    time domain, orthonormal filters, 10-31–10-32
  Gabor wavelet, 10-13–10-14
  Gaussian pyramid, 10-16
  Gaussian wavelet, 10-13
  Haar wavelet, 10-12–10-13
  image compression, 10-53
  image edge detection
    edge detectors, 10-51–10-52
    multiscale edges, 10-52–10-53
    two-dimensional wavelet transform, 10-52
  Laplacian pyramid, 10-16–10-17
  Lemarie and Battle wavelet bases, 10-43–10-45
  Mellin transform, 12-31–12-32
  Mexican-hat wavelet, 10-13
  orthonormal wavelet transform
    biorthogonal wavelet basis, 10-26–10-27
    multiresolution signal analysis basis, 10-18–10-19
    orthonormal basis, 10-19–10-20
    orthonormal subspaces, 10-20–10-21
    reconstruction, 10-25–10-26
    wavelet series decomposition, 10-21–10-25
  power system signal, 10-50
  properties
    admissible condition, 10-6–10-7
    linear transform property, 10-11–10-12
    multiresolution wavelet analysis, 10-8–10-10
    regularity, 10-7–10-8
  scale and resolution, 10-18
  scaling function basis, 10-1
  short-time Fourier transform
    definition, 10-3–10-4
    discrete short-time Fourier transform, 10-5
    Gabor function, 10-4
    Gaussian window, 10-4–10-5
    inverse short-time Fourier transform, 10-4
    regular lattice, 10-5
    time and frequency resolution, 10-4
    uncertainty principle, 10-4
  signal detection, 10-50–10-51
  subband coding, 10-17–10-18
  time–frequency space analysis, 10-3
  Wigner distribution functions, 10-5–10-6
Weber's integral theorem, 9-10
Whittaker's interpolatory function, 7-48
Wiener–Khintchine theorem, 2-21
Wigner distribution
  Fourier transforms, 13-7–13-8
  fractional Fourier transform (FRT)
ambiguity function, 14-5–14-6 definition, 14-5 filtering, 14-15 properties, 14-5 Radon transforms and slices, 14-10–14-11 Hilbert transforms, 7-98, 13-5 linear canonical transforms (LCT), 14-12–14-13 Wavelet transform, 10-5–10-6 Woodward ambiguity function Dirac function, 13-6 Fourier transform, 13-7–13-8 Gaussian signal, 13-6–13-7 quantum mechanics, 13-5 signal operations, 13-7 Wigner–Ville distribution, 13-5, 13-13
Z

Zak transform
    applications
        Gabor expansions, 16-17
        Jacobi theta functions, 16-17
        Suter–Stevens fast Fourier transform algorithm, 16-19
        time–frequency analysis, 16-17–16-18
        weighted fast Fourier transforms, 16-18–16-19
    continuous transform
        algebraic property, 16-6–16-7
        definition, 16-2–16-3
        Fourier transform, 16-9–16-11
        general property, 16-3–16-6
        geometric property, 16-7–16-8
        inverse transform, 16-8–16-9
        multidimensional transform, 16-13–16-14
        multidimensional weighted transform, 16-14
        multidimensional windowed, weighted transform, 16-14
        topological property, 16-7
        translation and modulation operator, 16-11–16-13
    discrete transform
        definition, 16-14
        extensions, 16-15
        inverse transform, 16-14–16-15
        property, 16-14
    finite transform
        definition, 16-15
        extensions, 16-17
        inverse transform, 16-16–16-17
        property, 16-16
    history, 16-1
    linear spaces, 16-2
    notation, 16-1–16-2
Zernike polynomials
    definition, 1-40
    functions and formulas, 8-45
    orthogonal functions, unit disk, 8-36–8-37
    piecewise continuous function, 1-45
    radial polynomials, 1-40, 1-45–1-46
    Zernike moments, 1-45, 1-47–1-48
Zero-order Bessel function, 2-33–2-34
Zero padding, 4-20
Z-transform
    difference equations, constant coefficients, 6-24
    digital filters
        finite impulse response (FIR) filters, 6-28–6-29
        infinite impulse response (IIR) filters, 6-27–6-28
    Fourier transform, 6-36
    inverse transforms, partial fractions, 6-39
    Laplace transform, 6-34–6-36
    linear discrete system analysis
        causality, 6-26
        discrete Fourier transform (DFT), 6-26–6-27
        frequency characteristics, 6-26
        Paley–Wiener theorem, 6-26
        stability, 6-25–6-26
        transfer function, 6-25
    linear, time-invariant, discrete-time, dynamical systems, 6-29
    negative-time sequences, 6-38
    one-sided Z-transform
        complex conjugate signal, 6-4–6-5
        convolution, 6-3
        correlation, 6-6
        discrete functions, 6-1
        final value, 6-4
        initial value, 6-3–6-4
        integration, 6-9
        irrational function, 6-10
        (nT)^k multiplication, 6-4
        linearity, 6-2
        n and nT multiplication, 6-3
        parameters, 6-7
        Parseval's theorem, 6-6
        partial fraction expansion, 6-8
        periodic sequence, 6-2
        power series method, 6-7
        product transform, 6-5
        shifting property, 6-2
        simple and multiple poles, 6-9
        time scaling, 6-2
    positive-time sequences, 6-36–6-38
    random process
        linear discrete-time filters, 6-32–6-33
        optimum linear filtering, 6-33–6-34
        power spectral densities, 6-32
    two-sided Z-transform
        complex conjugate signal, 6-21
        convolution, 6-17
        correlation, 6-18
        discrete representation, 6-11
        e^(anT) multiplication, 6-18
        frequency translation, 6-18
        integral inversion formula, 6-24
        linearity, 6-16
        nT multiplication, 6-17
        Parseval's theorem, 6-20
        partial fraction expansion, 6-22
        power series expansion, 6-21
        product, 6-18–6-20
        region of convergence (ROC), 6-14–6-16
        shifting and scaling, 6-16
        time reversal, 6-16–6-17
Z-transform pairs, 6-39–6-43