HANDBOOK OF COMPUTATIONAL GEOMETRY


HANDBOOK OF COMPUTATIONAL GEOMETRY

Edited by

J.-R. Sack, Carleton University, Ottawa, Canada

J. Urrutia, Universidad Nacional Autónoma de México, México

2000

ELSEVIER
Amsterdam • Lausanne • New York • Oxford • Shannon • Singapore • Tokyo

ELSEVIER SCIENCE B.V., Sara Burgerhartstraat 25, P.O. Box 211, 1000 AE Amsterdam, The Netherlands

© 2000 Elsevier Science B.V. All rights reserved.

This work is protected under copyright by Elsevier Science, and the following terms and conditions apply to its use:

Photocopying. Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use. Permissions may be sought directly from Elsevier Science Rights & Permissions Department, PO Box 800, Oxford OX5 1DX, UK; phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also contact Rights & Permissions directly through Elsevier's home page (http://www.elsevier.nl), selecting first 'Customer Support', then 'General Information', then 'Permissions Query Form'. In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (978) 7508400, fax: (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 171 631 5555; fax: (+44) 171 631 5500. Other countries may have a local reprographic rights agency for payments.

Derivative Works. Tables of contents may be reproduced for internal circulation, but permission of Elsevier Science is required for external resale or distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and translations.

Electronic Storage or Usage. Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or part of a chapter. Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Science Rights & Permissions Department, at the mail, fax and e-mail addresses noted above.

Notice. No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

First edition 2000

Library of Congress Cataloging-in-Publication Data: A catalog record from the Library of Congress has been applied for.

ISBN: 0 444 82537 1

The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

Printed in The Netherlands

Preface

Computational Geometry is a young and vibrant field of Computer Science, born in the early 1970s. Since its beginning, Computational Geometry has attracted the interest of a large number of researchers, due to its numerous and strong interactions with various fields of Science and Engineering, such as Algorithms and Data Structures, Combinatorial Mathematics, Euclidean Geometry, and Optimization. Recently, the demand for efficient geometric computing in a variety of areas of applied sciences, such as Geographical Information Systems, Visualization, Robotics, Computer Graphics, and CAD, has further fueled research in Computational Geometry. This has resulted in a wealth of powerful techniques and results of interest to researchers and practitioners.

The need for a comprehensive source of information on the fundamental techniques and tools developed in Computational Geometry is evident. This handbook will be an important source of information for all of us interested in one way or another in Computational Geometry. Readers whose objective is to use a method for their applications will be able to find and choose among different approaches and techniques to solve a large number of problems, most of them described in a comprehensive and concise manner. Researchers interested in theoretical results will find this handbook an invaluable and comprehensive source of information in which most of the results, tools and techniques available in our field are to be found.

This handbook presents chapters which survey in detail most of the research available to date in this field. The handbook contains survey papers on the following fundamental topics: Arrangements, Voronoi Diagrams, Geometric Data Structures (incl. point location, convex hulls, etc.), Spatial Data Structures, Polygon Decomposition, Randomized Algorithms, Derandomization, Parallel Computational Geometry (deterministic and randomized), Visibility, Art Gallery and Illumination Problems, Closest Point Problems, Link Distance Problems, Similarity of Geometric Objects, Davenport-Schinzel Sequences, and Spanning Trees and Spanners. There are also three chapters devoted to applications of Computational Geometry to other fields of science: Geographical Information Systems, Geometric Shortest Paths and Network Optimization, and Mesh Generation. In addition, there is a chapter devoted to robustness and numerical issues, and chapters on Animation and Graph Drawing.

We would like to thank all the contributors to this handbook for their enthusiasm, as well as express our gratitude to the numerous anonymous referees whose invaluable contributions made this project possible. We are grateful to Arjen Sevenster of Elsevier Science for his enthusiastic support, which made this handbook a reality. Finally, we thank Anil Maheshwari for his help.

J.-R. Sack
J. Urrutia


List of Contributors

Agarwal, P.K., Duke University, Durham, NC (Chs. 1, 2)
Alt, H., Freie Universität Berlin, Berlin (Ch. 3)
Asano, T., Japan Advanced Institute of Science and Technology, Ishikawa (Ch. 19)
Atallah, M.J., Purdue University, West Lafayette, IN (Ch. 4)
Aurenhammer, F., Technische Universität Graz, Graz (Ch. 5)
Bern, M., Xerox Palo Alto Research Center, Palo Alto, CA (Ch. 6)
Chen, D.Z., University of Notre Dame, Notre Dame, IN (Ch. 4)
De Floriani, L., Università di Genova, Genova (Ch. 7)
Djidjev, H.N., University of Warwick, Coventry (Ch. 12)
Dobkin, D.P., Princeton University, Princeton, NJ (Ch. 8)
Eppstein, D., University of California, Irvine, CA (Ch. 9)
Ghosh, S.K., Tata Institute of Fundamental Research, Bombay (Ch. 19)
Goodrich, M.T., Johns Hopkins University, Baltimore, MD (Ch. 10)
Guibas, L.J., Stanford University, Stanford, CA (Ch. 3)
Hausner, A., Princeton University, Princeton, NJ (Ch. 8)
Keil, J.M., University of Saskatchewan, Saskatoon, SK (Ch. 11)
Klein, R., FernUniversität Hagen, Hagen (Ch. 5)
Magillo, P., Università di Genova, Genova (Ch. 7)
Maheshwari, A., Carleton University, Ottawa, ON (Ch. 12)
Matoušek, J., Charles University, Praha (Ch. 13)
Mitchell, J.S.B., State University of New York, Stony Brook, NY (Ch. 15)
Mulmuley, K., The University of Chicago, Chicago, IL, and I.I.T., Bombay (Ch. 16)
Nievergelt, J., ETH Zürich, Zürich (Ch. 17)
Plassmann, P., Pennsylvania State University, University Park, PA (Ch. 6)
Puppo, E., Università di Genova, Genova (Ch. 7)
Ramaiyer, K., Informix Software, Oakland, CA (Ch. 10)
Reif, J.H., Duke University, Durham, NC (Ch. 18)
Sack, J.-R., Carleton University, Ottawa, ON (Ch. 12)
Schirra, S., Max-Planck-Institut für Informatik, Saarbrücken (Ch. 14)
Sen, S., Indian Institute of Technology, New Delhi (Ch. 18)
Sharir, M., Tel Aviv University, Tel Aviv, and New York University, New York, NY (Chs. 1, 2)
Shermer, T.C., Simon Fraser University, Burnaby, BC (Ch. 19)
Smid, M., University of Magdeburg, Magdeburg (Ch. 20)
Tamassia, R., Brown University, Providence, RI (Ch. 21)
Urrutia, J., Universidad Nacional Autónoma de México, México (Ch. 22)
Widmayer, P., ETH Zürich, Zürich (Ch. 17)


Contents

Preface v
List of Contributors vii
1. Davenport-Schinzel sequences and their geometric applications (P.K. Agarwal and M. Sharir) 1
2. Arrangements and their applications (P.K. Agarwal and M. Sharir) 49
3. Discrete geometric shapes: Matching, interpolation, and approximation (H. Alt and L.J. Guibas) 121
4. Deterministic parallel computational geometry (M.J. Atallah and D.Z. Chen) 155
5. Voronoi diagrams (F. Aurenhammer and R. Klein) 201
6. Mesh generation (M. Bern and P. Plassmann) 291
7. Applications of computational geometry to geographic information systems (L. De Floriani, P. Magillo and E. Puppo) 333
8. Making geometry visible: An introduction to the animation of geometric algorithms (A. Hausner and D.P. Dobkin) 389
9. Spanning trees and spanners (D. Eppstein) 425
10. Geometric data structures (M.T. Goodrich and K. Ramaiyer) 463
11. Polygon decomposition (J.M. Keil) 491
12. Link distance problems (A. Maheshwari, J.-R. Sack and H.N. Djidjev) 519
13. Derandomization in computational geometry (J. Matoušek) 559
14. Robustness and precision issues in geometric computation (S. Schirra) 597
15. Geometric shortest paths and network optimization (J.S.B. Mitchell) 633
16. Randomized algorithms in computational geometry (K. Mulmuley) 703
17. Spatial data structures: Concepts and design choices (J. Nievergelt and P. Widmayer) 725
18. Parallel computational geometry: An approach using randomization (J.H. Reif and S. Sen) 765
19. Visibility in the plane (T. Asano, S.K. Ghosh and T.C. Shermer) 829
20. Closest-point problems in computational geometry (M. Smid) 877
21. Graph drawing (R. Tamassia) 937
22. Art gallery and illumination problems (J. Urrutia) 973
Author Index I-1
Subject Index I-35

CHAPTER 1

Davenport-Schinzel Sequences and Their Geometric Applications*

Pankaj K. Agarwal
Center for Geometric Computing, Department of Computer Science, Box 90129, Duke University, Durham, NC 27708-0129, USA
E-mail: [email protected]

Micha Sharir
School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel, and Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
E-mail: [email protected]

Contents
1. Introduction 3
2. Davenport-Schinzel sequences and lower envelopes 4
   2.1. Lower envelopes of totally defined functions 4
   2.2. Lower envelopes of partially defined functions 7
   2.3. Constructing lower envelopes 8
3. Simple bounds and variants 9
4. Sharp upper bounds on λ_s(n) 11
   4.1. Ackermann's function — A review 11
   4.2. The upper bound for λ_3(n) 12
   4.3. Upper bounds on λ_s(n) 16
5. Lower bounds on λ_s(n) 16
6. Davenport-Schinzel sequences and arrangements 20
   6.1. Complexity of a single face 21
   6.2. Computing a single face 23
   6.3. Zones 27
   6.4. Levels in arrangements 28
7. Miscellaneous applications 29
   7.1. Applications of DS(n, 2)-sequences 30
   7.2. Motion planning 31
   7.3. Shortest paths 34
   7.4. Transversals of planar objects 35
   7.5. Dynamic geometry 36
   7.6. Hausdorff distance and Voronoi surfaces 37
   7.7. Visibility problems 38
   7.8. Union of Jordan regions 39
   7.9. Extremal {0,1}-matrices 40
8. Concluding remarks 41
References 41

*Both authors have been supported by a grant from the U.S.-Israeli Binational Science Foundation. Pankaj Agarwal has also been supported by a National Science Foundation Grant CCR-93-01259, by an Army Research Office MURI grant DAAH04-96-1-0013, by a Sloan fellowship, and by an NYI award and matching funds from Xerox Corporation. Micha Sharir has also been supported by NSF Grants CCR-91-22103 and CCR-93-11127, by a Max-Planck Research Award, by the Israel Science Fund administered by the Israeli Academy of Sciences, and by the G.I.F., the German-Israeli Foundation for Scientific Research and Development.

HANDBOOK OF COMPUTATIONAL GEOMETRY
Edited by J.-R. Sack and J. Urrutia
© 1999 Elsevier Science B.V. All rights reserved



Abstract

An (n, s) Davenport-Schinzel sequence, for positive integers n and s, is a sequence composed of n distinct symbols with the properties that no two adjacent elements are equal, and that it does not contain, as a (possibly non-contiguous) subsequence, any alternation a···b···a···b··· of length s + 2 between two distinct symbols a and b. The close relationship between Davenport-Schinzel sequences and the combinatorial structure of lower envelopes of collections of functions makes the sequences very attractive, because a variety of geometric problems can be formulated in terms of lower envelopes. A near-linear bound on the maximum length of Davenport-Schinzel sequences enables us to derive sharp bounds on the combinatorial structure underlying various geometric problems, which in turn yields efficient algorithms for these problems.



1. Introduction

Davenport-Schinzel sequences, introduced by H. Davenport and A. Schinzel in the 1960s, are interesting and powerful combinatorial structures that arise in the analysis and construction of the lower (or upper) envelope of collections of univariate functions, and therefore have applications in a variety of geometric problems that can be reduced to computing such an envelope. In addition, Davenport-Schinzel sequences play a central role in many related geometric problems involving arrangements of curves and surfaces. For these reasons, they have become one of the major tools in the analysis of combinatorial and algorithmic problems in geometry.

DEFINITION 1.1. Let n and s be two positive integers. A sequence U = (u_1, ..., u_m) of integers is an (n, s) Davenport-Schinzel sequence (a DS(n, s)-sequence for short) if it satisfies the following conditions:
(i) 1 ≤ u_i ≤ n for each i ≤ m;
(ii) u_i ≠ u_{i+1} for each i < m; and
(iii) there do not exist s + 2 indices 1 ≤ i_1 < i_2 < ··· < i_{s+2} ≤ m such that u_{i_1} = u_{i_3} = u_{i_5} = ··· = a, u_{i_2} = u_{i_4} = u_{i_6} = ··· = b, and a ≠ b.

4.3. Upper bounds on λ_s(n)

THEOREM 4.5 ([11]). For each s ≥ 1, there exists a polynomial C_s(q) of degree at most s − 1, such that

λ_{2s+2}(n) ≤ n · 2^{(1/s!)α(n)^s + C_s(α(n))}.

5. Lower bounds on λ_s(n)

The first superlinear bound on λ_s(n) was obtained by Hart and Sharir [80], who proved that λ_3(n) = Ω(nα(n)). Their original proof transforms DS(n, 3)-sequences into certain path compression schemes on rooted trees. A more direct proof for the lower bound on λ_3(n) was given by Wiernik and Sharir [154]; they describe an explicit recursive scheme for constructing a DS(n, 3)-sequence of length Ω(nα(n)). See also [100]
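Definition 1.1 is mechanical enough to check directly for small inputs. The following sketch (a Python illustration added here; the function name and the deliberately naive alternation test are ours, chosen for clarity rather than efficiency) verifies the three conditions:

```python
# Illustrative check of Definition 1.1; roughly O(n^2 * m) time.

def is_ds_sequence(u, n, s):
    """Return True if u is a DS(n, s)-sequence over the symbols 1..n."""
    m = len(u)
    if any(not 1 <= x <= n for x in u):              # condition (i)
        return False
    if any(u[i] == u[i + 1] for i in range(m - 1)):  # condition (ii)
        return False
    # Condition (iii): for every ordered pair (a, b), greedily match the
    # pattern a, b, a, b, ...; greedy matching yields the longest alternation.
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            if a == b:
                continue
            length, expect = 0, a
            for x in u:
                if x == expect:
                    length += 1
                    expect = b if expect == a else a
            if length >= s + 2:                      # forbidden alternation
                return False
    return True

# (1 2 1 3 1 3 2) contains the alternation 1,2,1,2 of length 4, so it is a
# DS(3, 3)-sequence but not a DS(3, 2)-sequence.
assert is_ds_sequence([1, 2, 1, 3, 1, 3, 2], n=3, s=3)
assert not is_ds_sequence([1, 2, 1, 3, 1, 3, 2], n=3, s=2)
```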


for another proof of the same lower bound. We sketch Wiernik and Sharir's construction, omitting many details, which can be found in [142,154].

Let {C_k(m)}_{k≥1} be a sequence of functions from ℕ to itself, defined by

C_1(m) = 1, m ≥ 1,
C_k(1) = 2C_{k−1}(2), k ≥ 2,
C_k(m) = C_k(m−1) · C_{k−1}(C_k(m−1)), k ≥ 2, m ≥ 2.

It can be shown that, for all k ≥ 4, m ≥ 1,

A_{k−1}(m) ≤ C_k(m) ≤ A_k(m + 3).   (5.1)

In what follows, let μ = C_k(m − 1), ν = C_{k−1}(C_k(m − 1)), and γ = μ · ν. For each k, m ≥ 1, we construct a sequence S_k(m) that satisfies the following two properties:
(P1) S_k(m) is composed of N_k(m) = m · C_k(m) distinct symbols. These symbols are named (d, l), for d = 1, ..., m and l = 1, ..., γ, and are ordered in lexicographical order, so that (d, l) < (d', l') if l < l', or l = l' and d < d'.
(P2) S_k(m) contains γ fans of size m, where each fan is a contiguous subsequence of the form ⟨(1, l)(2, l) ··· (m, l)⟩, for l = 1, ..., γ. Since fans are pairwise disjoint, by definition, the naming scheme of the symbols of S_k(m) can be interpreted as assigning to each symbol the index l of the fan in which it appears, and its index d within that fan.

The construction of S_k(m) proceeds by double induction on k and m, as follows.
1. k = 1: The sequence is a single fan of size m: S_1(m) = ⟨(1, 1)(2, 1) ··· (m, 1)⟩. Properties (P1) and (P2) clearly hold here (C_1(m) = 1).
2. k = 2: The sequence contains a pair of disjoint fans of size m, with a block of elements following each of these fans. Specifically,
S_2(m) = ⟨(1, 1)(2, 1) ··· (m − 1, 1)(m, 1)(m − 1, 1) ··· (1, 1)(1, 2)(2, 2) ··· (m − 1, 2)(m, 2)(m − 1, 2) ··· (1, 2)⟩.
Indeed, S_2(m) contains C_2(m) = 2 fans and is composed of 2m distinct symbols.
3. k ≥ 3, m = 1: The sequence is identical to the sequence for k' = k − 1 and m' = 2, except for renaming of its symbols and fans: S_{k−1}(2) contains C_{k−1}(2) = ½C_k(1) fans, each of which consists of two symbols; the symbol renaming in S_k(1) causes each of these two elements to become a 1-element fan. Properties (P1) and (P2) clearly hold.
4. The general case k ≥ 3, m > 1:
(i) Generate inductively the sequence S' = S_k(m − 1); by induction, it contains μ fans of size m − 1 each and is composed of (m − 1) · μ symbols.
(ii) Create ν copies of S' whose sets of symbols are pairwise disjoint. For each j ≤ ν, rename the symbols in the jth copy S'_j of S' as (d, i, j), where 1 ≤ d ≤ m − 1 is the index of the symbol in the fan of S'_j containing it, and 1 ≤ i ≤ μ is the index of this fan in S'_j.



Fig. 5. Lower bound construction: merging the subsequences.

(iii) Generate inductively the sequence S* = S_{k−1}(μ) whose set of symbols is disjoint from that of any S'_j; by induction, it contains ν fans of size μ each. Rename the symbols of S* as (m, i, j) (where i is the index of that symbol within its fan, and j is the index of that fan in S*). Duplicate the last element (m, μ, j) in each of the ν fans of S*.
(iv) For each 1 ≤ i ≤ μ, 1 ≤ j ≤ ν, extend the ith fan of S'_j by duplicating its last element (m − 1, i, j), and by inserting the corresponding symbol (m, i, j) of S* between these duplicated appearances of (m − 1, i, j). This process extends the (m − 1)-fans of S'_j into m-fans and adds a new element after each extended fan.
(v) Finally, construct the desired sequence S_k(m) by merging the ν copies S'_j of S' with the sequence S*. This is done by replacing, for each 1 ≤ j ≤ ν, the jth fan of S* by the corresponding copy S'_j, as modified in (iv) above. Note that the duplicated copy of the last element in each fan of S* (formed in step (iii) above) now appears after the copy S'_j that replaces this fan; see Figure 5 for an illustration of this process.

It is easily checked that S_k(m) consists of

N_k(m) = ν(m − 1)μ + μC_{k−1}(μ) = mC_k(m)

symbols, and it can also be shown that S_k(m) is a DS(N_k(m), 3)-sequence satisfying properties (P1) and (P2). If we let σ_k(m) denote the length of S_k(m), then

σ_1(m) = m,
σ_2(m) = 4m − 2,


σ_k(1) = σ_{k−1}(2),
σ_k(m) = νσ_k(m − 1) + σ_{k−1}(μ) + ν(μ + 1).

The third term in the last equation is due to the duplication of the rightmost symbol of each fan of S* and of each S'_j (see Steps 4(iii)-(iv)). Using a double induction on k and m, one can prove that σ_k(m) ≥ (km − 2)C_k(m) + 1.
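To get a feeling for how quickly these quantities grow, one can evaluate the recurrences directly. The sketch below (an illustration added here; names are ours) implements C_k(m), N_k(m) = m·C_k(m), and the length recurrence σ_k(m); since C_k(m) grows like Ackermann's function for k ≥ 4, only tiny arguments are feasible.

```python
from functools import lru_cache

# Illustrative evaluation of the recurrences for C_k(m), N_k(m) and sigma_k(m).

@lru_cache(maxsize=None)
def C(k, m):
    if k == 1:
        return 1
    if m == 1:
        return 2 * C(k - 1, 2)
    return C(k, m - 1) * C(k - 1, C(k, m - 1))

def N(k, m):                       # number of distinct symbols in S_k(m)
    return m * C(k, m)

@lru_cache(maxsize=None)
def sigma(k, m):                   # length of S_k(m)
    if k == 1:
        return m
    if k == 2:
        return 4 * m - 2
    if m == 1:
        return sigma(k - 1, 2)
    mu = C(k, m - 1)
    nu = C(k - 1, mu)
    return nu * sigma(k, m - 1) + sigma(k - 1, mu) + nu * (mu + 1)

# sigma(k, m) >= (k*m - 2) * C(k, m) + 1, i.e. the length is roughly k times
# the number of symbols N(k, m).  For example:
assert C(2, 5) == 2 and C(3, 2) == 8
assert sigma(3, 2) == 36 >= (3 * 2 - 2) * C(3, 2) + 1   # 36 >= 33
```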

THEOREM 5.1 ([80,154]). λ_3(n) = Θ(nα(n)).

PROOF. Choose m_k = C_{k+1}(k − 3). Then

n_k = N_k(m_k) = C_{k+1}(k − 2) ≤ A_{k+1}(k + 1),

where the last inequality follows from (5.1). Therefore α(n_k) ≤ k + 1, and hence

λ_3(n_k) ≥ σ_k(m_k) ≥ kn_k − 2C_k(m_k) ≥ (k − 2)n_k ≥ n_k(α(n_k) − 3).

As shown in [142], this bound can be extended to any integer n, to prove that λ_3(n) = Ω(nα(n)). □

Generalizing the above construction and using induction on s (basically replacing each chain of the sequence S_k(m) by a DS(n, s − 2)-sequence, which, in turn, is constructed recursively), Sharir [138] proved that λ_{2s+1}(n) = Ω(nα(n)^s). Later, Agarwal et al. [11] proved that the upper bounds stated in Theorem 4.5 are almost optimal. In particular, using a rather involved doubly-inductive scheme, they constructed a DS(n, 4)-sequence of length Ω(n · 2^{α(n)}). Then, by recurring on s, they generalized their construction of DS(n, 4)-sequences to higher-order sequences. The following theorem summarizes their result.

THEOREM 5.2 ([11]).

(i) λ_4(n) = Θ(n · 2^{α(n)}).
(ii) For s > 1, there exists a polynomial Q_s(q) of degree at most s − 1, such that

λ_{2s+2}(n) ≥ n · 2^{(1/s!)α(n)^s + Q_s(α(n))}.

OPEN PROBLEM 1. Obtain tight bounds on λ_s(n) for s > 4, especially for odd values of s.

Wiernik and Sharir [154] proved that the DS(n, 3)-sequence S_k(m) constructed above can be realized as the lower envelope sequence of a set of n segments, which leads to the following fairly surprising result:

THEOREM 5.3 ([154]). The lower envelope of n segments can have Ω(nα(n)) breakpoints in the worst case.


Shor [145] gave a simpler example of n segments whose lower envelope also has Ω(nα(n)) breakpoints. These results also yield an Ω(nα(n)) lower bound on many other, unrelated problems, including searching in totally monotone matrices [94] and counting the number of distinct edges in the convex hull of a planar point set as the points are being updated dynamically [151]. Shor has also shown that there exists a set of n degree-4 polynomials whose lower envelope has Ω(nα(n)) breakpoints [146] (which is somewhat weak, because the upper bound for this quantity is λ_4(n) = Θ(n · 2^{α(n)})). We conclude this section by mentioning another open problem, which we believe is one of the most challenging and interesting problems related to Davenport-Schinzel sequences.

OPEN PROBLEM 2. Is there a natural geometric realization of higher-order sequences? For example, can the lower envelope of n conic sections have Ω(n · 2^{α(n)}) breakpoints?
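The lower envelope sequence of a small set of segments, as in Theorem 5.3, can be computed by brute force. The following sketch (an illustration added here, not an algorithm from the text; it uses floating-point arithmetic with a crude tolerance EPS and runs in roughly cubic time, far from the near-linear-time methods discussed in Section 2.3) outputs the left-to-right sequence of segment indices along the envelope, whose length is one more than the number of breakpoints:

```python
# Illustrative brute-force computation of the lower envelope sequence.

def lower_envelope_sequence(segments):
    """segments: list of ((x1, y1), (x2, y2)) with x1 < x2.
    Returns the left-to-right sequence of segment indices on the envelope."""
    EPS = 1e-12

    def y_at(seg, x):
        (x1, y1), (x2, y2) = seg
        if x < x1 - EPS or x > x2 + EPS:
            return None                      # segment undefined at x
        t = (x - x1) / (x2 - x1)
        return y1 + t * (y2 - y1)

    # Candidate breakpoints: segment endpoints and supporting-line crossings.
    xs = {p[0] for s in segments for p in s}
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            (x1, y1), (x2, y2) = segments[i]
            (x3, y3), (x4, y4) = segments[j]
            d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
            if abs(d) > EPS:                 # supporting lines not parallel
                t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
                xs.add(x1 + t * (x2 - x1))
    xs = sorted(xs)

    # The lowest segment cannot change strictly between two consecutive
    # candidates, so one sample point per interval suffices.
    seq = []
    for a, b in zip(xs, xs[1:]):
        mid = (a + b) / 2
        ys = [(y, i) for i, s in enumerate(segments)
              for y in [y_at(s, mid)] if y is not None]
        if ys:
            i = min(ys)[1]
            if not seq or seq[-1] != i:
                seq.append(i)
    return seq

# Two crossing segments yield the envelope sequence [0, 1].
assert lower_envelope_sequence([((0, 0), (4, 4)), ((0, 3), (4, -1))]) == [0, 1]
```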

6. Davenport-Schinzel sequences and arrangements

In this section we consider certain geometric and topological structures induced by a family of arcs in the plane, where Davenport-Schinzel sequences play a major role in their analysis. Specifically, let Γ = {γ_1, ..., γ_n} be a collection of n Jordan arcs in the plane, each pair of which intersect in at most s points, for some fixed constant s.¹

DEFINITION 6.1. The arrangement A(Γ) of Γ is the planar subdivision induced by the arcs of Γ; that is, A(Γ) is a planar map whose vertices are the endpoints of the arcs of Γ and their pairwise intersection points, whose edges are maximal (relatively open) connected portions of the γ_i's that do not contain a vertex, and whose faces are the connected components of ℝ² − ∪Γ. The combinatorial complexity of a face is the number of vertices (or edges) on its boundary, and the combinatorial complexity of A(Γ) is the total complexity of all of its faces.

The maximum combinatorial complexity of A(Γ) is clearly Θ(sn²) = Θ(n²), and A(Γ) can be computed in time O(n² log n), under an appropriate model of computation, using the sweep-line algorithm of Bentley and Ottmann [29]. A slightly faster algorithm, with running time O(nλ_{s+2}(n)), is mentioned in Section 6.3. Many applications, however, need to compute only a small portion of the arrangement, such as a single face, a few faces, or some other substructures that we will consider shortly. Using DS-sequences, one can show that the combinatorial complexity of these substructures is substantially smaller than that of the entire arrangement. This fact is then exploited in the design of efficient algorithms, whose running time is close to the bound on the complexity of the substructures that these algorithms aim to construct. In this section we review combinatorial and algorithmic results related to these substructures, in which DS-sequences play a crucial role.

¹ A Jordan arc is an image of the closed unit interval under a continuous bijective mapping. Similarly, a closed Jordan curve is an image of the unit circle under a similar mapping, and an unbounded Jordan curve is an image of the open unit interval (or of the entire real line) that separates the plane.


Fig. 6. A single face and its associated boundary sequence; all arcs are positively oriented from left to right.

6.1. Complexity of a single face

It is well known that the complexity of a single face in an arrangement of n lines is at most n [124], and a linear bound on the complexity of a face in an arrangement of rays is also known (see Alevizos et al. [14,15]). The result of Wiernik and Sharir [154] on the lower envelopes of segments implies that the unbounded face in an arrangement of n line segments has Ω(nα(n)) vertices in the worst case. A matching upper bound was proved by Pollack et al. [123], which was later extended by Guibas et al. [75] to general Jordan arcs. The case of closed or unbounded Jordan curves was treated in [135].

THEOREM 6.2 ([75,135]). Let Γ be a set of n Jordan arcs in the plane, each pair of which intersect in at most s points, for some fixed constant s. Then the combinatorial complexity of any single face in A(Γ) is O(λ_{s+2}(n)). If each arc in Γ is a Jordan curve (closed or unbounded), then the complexity of a single face is at most λ_s(n).

PROOF (Sketch). We only consider the first part of the theorem; the proof of the second part is simpler, and can be found in [135,142]. Let f be a given face in A(Γ), and let C be a connected component of its boundary. We can assume that C is the only connected component of ∂f; otherwise, we repeat the following analysis for each connected component and sum their complexities. Since each arc appears in at most one connected component, the bound follows.

For each arc γ_i, let u_i and v_i be its endpoints, and let γ_i⁺ (respectively, γ_i⁻) be the directed arc γ_i oriented from u_i to v_i (respectively, from v_i to u_i). Without loss of generality, assume that C is the exterior boundary component of f. Traverse C in counterclockwise direction (so that f lies to our left) and let S = (s_1, s_2, ..., s_t) be the circular sequence of oriented arcs in Γ in the order in which they appear along C


(if C is unbounded, S is a linear, rather than circular, sequence). More precisely, if during our traversal of C we encounter an arc γ_i and follow it in the direction from u_i to v_i (respectively, from v_i to u_i), then we add γ_i⁺ (respectively, γ_i⁻) to S. See Figure 6 for an illustration. Note that in this example both sides of an arc γ_i belong to the outer connected component.

Let ξ_1, ..., ξ_{2n} denote the oriented arcs of Γ. For each ξ_i we denote by |ξ_i| the nonoriented arc γ_j coinciding with ξ_i. For the purpose of the proof, we transform each arc γ_i into a very thin closed Jordan curve γ_i* by taking two nonintersecting copies of γ_i lying very close to one another, and by joining them at their endpoints. This will perturb the face f slightly, but it can always be done in such a way that the combinatorial complexity of C does not decrease. Note that this transformation allows a natural identification of one of the two sides of γ_i* with γ_i⁺ and the other side with γ_i⁻.

It can be shown (see [75,142]) that the portions of each arc ξ_i appear in S in a circular order that is consistent with their order along the oriented ξ_i. In particular, there exists a starting point in S (which depends on ξ_i) so that if we read S in circular order starting from that point, we encounter these portions of ξ_i in their order along ξ_i.

For each directed arc ξ_i, consider the linear sequence V_i of all appearances of ξ_i in S, arranged in the order they appear along ξ_i. Let μ_i and ν_i denote respectively the index in S of the first and of the last element of V_i. Consider S = (s_1, ..., s_t) as a linear, rather than a circular, sequence (this change is not needed if C is unbounded). For each arc ξ_i, if μ_i > ν_i, we split the symbol ξ_i into two distinct symbols ξ_{i,1}, ξ_{i,2}, and replace all appearances of ξ_i in S between the places μ_i and t (respectively, between 1 and ν_i) by ξ_{i,1} (respectively, by ξ_{i,2}). Note that the above claim implies that we can actually split the arc ξ_i into two connected subarcs, so that all appearances of ξ_{i,1} in the resulting sequence represent portions of the first subarc, whereas all appearances of ξ_{i,2} represent portions of the second subarc. This splitting produces a sequence S', of the same length as S, composed of at most 4n symbols.

With all these modifications, one can then prove that S' is a DS(4n, s + 2)-sequence. This is done by showing that each quadruple of the form (a···b···a···b) in S' corresponds, in a unique manner, to an intersection point between the two arcs of Γ that a and b represent. See [75,142] for more details. This completes the proof of the first part of the theorem. □

Theorem 6.2 has the following interesting consequence. Let Γ = {γ_1, ..., γ_n} be a set of n closed Jordan curves, each pair of which intersects in at most s points. Let K = conv(Γ) be the convex hull of the curves in Γ. Divide the boundary of K into a minimum number of subarcs, α_1, α_2, ..., α_m, such that the relative interior of each α_i has a nonempty intersection with exactly one of the curves γ_j. Then the number m of such arcs is at most λ_s(n); see [135] for a proof.

Recently, Arkin et al. [20] showed that the complexity of a single face in an arrangement of line segments with h distinct endpoints is only O(h log h) (even though the number of segments can be Θ(h²)). A matching lower bound was proved by Matoušek and Valtr [106]. The upper bound by Arkin et al. does not extend to general arcs. Har-Peled [78] has also obtained improved bounds on the complexity of a single face in many special cases.


6.2. Computing a single face

Let Γ be a collection of n Jordan arcs, as above, and let x be a point that does not lie on any arc of Γ. We wish to compute the face of A(Γ) that contains x. We assume that each arc in Γ has at most a constant number of points of vertical tangency, so that we can break it into O(1) x-monotone Jordan arcs. We assume a model of computation allowing infinite-precision real arithmetic, in which certain primitive operations involving one or two arcs (e.g., computing the intersection points of a pair of arcs, the points of vertical tangency of an arc, the intersections of an arc with a vertical line, etc.) are assumed to take constant time.

If Γ is a set of n lines, or a set of n rays, then a single face can be computed in time O(n log n). In the case of lines, this is done by dualizing the lines to points and using any optimal convex hull algorithm [124]; the case of rays is somewhat more involved, and is described in [14,15]. However, these techniques do not extend to arrangements of more general Jordan arcs. Pollack et al. [123] presented an O(nα(n) log² n)-time algorithm for computing the unbounded face in certain arrangements of line segments, but the first algorithm that works for general arcs was given by Guibas et al. [75]. Later, several other efficient algorithms, both randomized and deterministic, have been proposed. We first present randomized (Las Vegas) algorithms for computing a single face, and then review the deterministic solution of [75], and mention some other related results. Randomized algorithms have recently been designed for many geometric problems; see, e.g., [45,115,136]. They are often much simpler than their deterministic counterparts, and are sometimes more efficient, as the present case will demonstrate. The efficiency of a Las Vegas randomized algorithm will be measured by its expected running time in the worst case, where the expectation is taken with respect to the internal randomizations performed by the algorithm.

Randomized algorithms. The randomized algorithms that we will describe actually compute the so-called vertical decomposition of f. This decomposition is obtained by drawing a vertical segment from each vertex and from each point of vertical tangency of the boundary of f in both directions, and extending it until it meets another edge of f, or else all the way to ±∞. The vertical decomposition partitions f into "pseudo-trapezoidal" cells, each bounded by at most two arcs of Γ and at most two vertical segments. To simplify the presentation, we will refer to these cells simply as trapezoids; see Figure 7 for an illustration. We first present a rather simple randomized divide-and-conquer algorithm due to Clarkson [42] (see also [142]). The basic idea of the algorithm is as follows: randomly choose a subset Γ_1 ⊂ Γ of half of the arcs, recursively compute the faces containing x in the two subarrangements A(Γ_1) and A(Γ \ Γ_1), and combine the two faces to obtain the face of A(Γ) containing x.

6.3. Zones

The zone of an arc γ_0 in A(Γ), denoted zone(γ_0; Γ), is the collection of faces of A(Γ) that γ_0 meets, and its complexity is the total complexity of these faces.

THEOREM 6.7. Let Γ be a set of n Jordan arcs in the plane, each pair of which intersect in at most s points. The complexity of zone(γ_0; Γ) is O(λ_{s+2}(n)), assuming that γ_0 intersects every arc of Γ in at most some constant number of points.

PROOF. Split every arc γ ∈ Γ into two subarcs at each intersection point of γ and γ_0, and leave sufficiently small gaps between these pieces. In this manner all faces in the zone of γ_0 are merged into one face, at the cost of increasing the number of arcs from n to O(n). Now we can apply Theorem 6.2 to conclude the proof. □

If Γ is a set of n lines and γ_0 is also a line, then after splitting each line of Γ at its intersection point with γ_0 we obtain a collection of 2n rays, and therefore the complexity of the unbounded face is O(n).
In fact, in this case one can show that the edges of the zone form a DS(4n, 2)-sequence, thereby obtaining an upper bound of 8n − 1 on the complexity of the zone. Applying a more careful analysis, Bern et al. [31] proved the following theorem.

THEOREM 6.8 ([31]). The complexity of the zone of a line in an arrangement of n lines is at most 5.5n, and this bound is tight within an additive constant term, in the worst case.

See [14,31,37,56] for other results and applications of zones of arcs. An immediate consequence of Theorem 6.7 is an efficient algorithm for computing the arrangement A(Γ). Suppose we add the arcs of Γ one by one and maintain the arrangement of the arcs added so far. Let Γ_i be the set of arcs added in the first i stages, and let γ_{i+1}


be the next arc to be added. Then in the (i + 1)st stage one has to update only those faces of A(Γ_i) that lie in the zone of γ_{i+1}, and this can easily be done in time proportional to the complexity of the zone; see Edelsbrunner et al. [56] for details. By Theorem 6.7, the total running time of the algorithm is O(nλ_{s+2}(n)), and, by Theorem 6.8, the arrangement of a set of n lines can be computed in O(n²) time. If the arcs of Γ are added in a random order, then the expected running time of the above algorithm is O(n log n + k), where k is the number of vertices in A(Γ) [35,46,115], which is at most quadratic in n. The latter time bound is worst-case optimal.

Theorem 6.7 can also be used to obtain an upper bound on the complexity of any m faces of A(Γ). Specifically, let {f_1, ..., f_m} be a set of m distinct faces in A(Γ), and let n_f denote the number of vertices in a face f of A(Γ). Then, using the Cauchy-Schwarz inequality,

Σ_{i=1}^{m} n_{f_i} ≤ m^{1/2} (Σ_{f ∈ A(Γ)} n_f²)^{1/2},

where k_f denotes the number of arcs in Γ that appear along the boundary of a face f. By Theorem 6.2, n_f = O(λ_{s+2}(k_f)), and since λ_{s+2}(k)/k is nondecreasing in k, this gives n_f² = O((λ_{s+2}(n)/n) · n_f k_f). It is easily verified that

Σ_{f ∈ A(Γ)} n_f k_f ≤ Σ_{γ ∈ Γ} Σ_{f ∈ zone(γ; Γ\{γ})} n_f = O(nλ_{s+2}(n)),

so Σ_{i=1}^{m} n_{f_i} = O(m^{1/2} λ_{s+2}(n)). Hence, we obtain the following result.

THEOREM 6.9 ([56,78]). Let Γ be a set of n arcs satisfying the conditions stated earlier. The maximum number of edges bounding any m distinct faces of A(Γ) is O(m^{1/2} λ_{s+2}(n)).

It should be noted that Theorem 6.9 is weaker than the best bounds known for the complexity of m distinct faces in arrangements of several special types of arcs, such as lines, segments, and circles (see [21,33,43]), but it applies to arrangements of more general arcs.

6.4. Levels in arrangements

Let Γ be a set of n x-monotone, unbounded Jordan curves, each pair of which intersects in at most s points. The level of a point p ∈ ℝ² in A(Γ) is the number of curves of Γ lying strictly below p, and the level of an edge e ∈ A(Γ) is the common level of all the points lying in the relative interior of e. For a nonnegative integer k < n, we define the k-level (respectively, the (≤k)-level) of A(Γ) to be the set of edges of A(Γ) whose level is k (respectively, at most k).

7.3. Shortest paths

In three dimensions, computing a shortest collision-free path between two points p and q amid a collection O of polyhedral obstacles with a total of n edges is considerably harder, and much of the work has therefore focused on approximation. For any ε > 0, one such approximation algorithm constructs a graph G_ε, whose nodes are points in ℝ³, and whose edges connect some pairs of these points by straight segments. The size of G_ε is O(n² log(nρ) + n²λ_s(n)/ε²), where s is a fixed constant and ρ is the ratio of the length of the longest edge in O to the (straight) distance between p and q. The algorithm then reduces the problem to that of constructing a shortest path in G_ε (where the weight of an edge is its Euclidean length), and one can show that the ratio between the length of the path obtained in this manner and the actual collision-free shortest path between p and q is at most 1 + ε. The running time of this algorithm is O((n²λ_s(n)/ε⁴) log(n/ε) + n² log(nρ) log(n log ρ)).

A special case of shortest paths in 3-space, which has been widely studied, is when O consists of a single convex polytope and p, q lie on its surface (see, for example, [38,109,143]). A shortest path on the surface of a convex polytope can be represented by the sequence of edges that it crosses, and we refer to such a sequence of edges as a shortest-path edge sequence. It is known that there are Θ(n⁴) shortest-path edge sequences [111,133]. Agarwal et al. [4] have shown that the exact set of all shortest-path edge sequences can be computed in time O(n^c λ_s(n) log n), for suitable constants c and s, improving a previous algorithm by Schevon and O'Rourke [134]. Baltsan and Sharir [28] considered the special case where O consists of two disjoint convex polytopes (and p and q lie anywhere in the free space). Using Davenport-Schinzel sequences to bound the number of candidate paths that one has to consider, they presented an algorithm with running time O(n³λ₁₀(n) log n) to find an exact collision-free shortest path between p and q.


If the moving object is not a point and the object is allowed to rotate, the problem of computing a shortest path becomes significantly more difficult, even in the planar case. (In fact, even the notion of a shortest path becomes much vaguer now.) Suppose we want to compute an optimal path for moving a line segment γ = pq (allowing both translations and rotations) amid polygonal obstacles with a total of n edges. Assume that the cost of a path is defined as the total distance traveled by one of its endpoints, say p, and restrict the problem further by requiring that p moves along polygonal paths that can bend only at obstacle vertices. This rather restricted version of the problem was studied by Papadimitriou and Silverberg [120], who gave a polynomial-time algorithm for computing a shortest path in the above setting. Sharir [139] improved the running time of their algorithm, using Davenport-Schinzel sequences and planar arrangements.

7.4. Transversals of planar objects

Let S = {S_1, S_2, ..., S_n} be a collection of n compact convex sets in the plane. A line that intersects all sets of S is called a transversal (or a stabber) of S. Note that a line intersects a set if and only if it intersects its convex hull, so convexity is usually not a real restriction. For each set S_i ∈ S, let S_i* denote the set of points dual to the lines that intersect S_i, using a standard duality transform [54]. S_i* is bounded from above by a convex x-monotone curve A_i and from below by a concave x-monotone curve B_i; see Figure 9. The stabbing region of S (or the space of all transversals) is the intersection S* = ∩_{i=1}^{n} S_i*. By definition, S* consists of the points that lie below the lower envelope of the curves A_i and above the upper envelope of the curves B_i.

CHAPTER 2

Arrangements and Their Applications

Linearization. Let Q = {Q_1, ..., Q_n} be a family of d-variate polynomials. Q is said to admit a linearization of dimension k if, for some p > 0, there exists a (d + p)-variate polynomial

g(x, a) = ψ_0(a) + ψ_1(a)φ_1(x) + ψ_2(a)φ_2(x) + ··· + ψ_{k−1}(a)φ_{k−1}(x) + φ_k(x),

for x ∈ ℝ^d, a ∈ ℝ^p, so that, for each 1 ≤ i ≤ n, we have Q_i(x) = g(x, a_i) for some a_i ∈ ℝ^p. Here each ψ_j(a), for 0 ≤ j ≤ k − 1, is a p-variate polynomial, and each φ_j(x), for 1 ≤ j ≤ k, is a d-variate polynomial. It is easily seen that such a polynomial representation always exists: let the φ's be the monomials that appear in at least one of the polynomials of Q, and let ψ_j(a) = a_j (where we think of a as the vector of coefficients of the monomials). We define a transform φ: ℝ^d → ℝ^k that maps each point x ∈ ℝ^d to the point

φ(x) = (φ_1(x), φ_2(x), ..., φ_k(x));


the image φ(ℝ^d) is a d-dimensional algebraic surface Σ in ℝ^k. For each function Q_i(x) = g(x, a_i), we define a k-variate linear function

h_i(y) = ψ_0(a_i) + ψ_1(a_i)y_1 + ··· + ψ_{k−1}(a_i)y_{k−1} + y_k.

Let H = {h_i = 0 | 1 ≤ i ≤ n} be the resulting set of n hyperplanes in ℝ^k. Let ξ be a vertex of L(Γ). If ξ is incident to γ_1, ..., γ_d, then Q_1(ξ) = ··· = Q_d(ξ) = 0 and Q_{d+1}(ξ) σ_{d+1} 0, ..., Q_n(ξ) σ_n 0, where σ_i ∈ {>, <}.
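As a simple worked illustration of this transform (an example added here for concreteness), consider circles in the plane, written as Q_i(x_1, x_2) = (x_1 − a_{i,1})² + (x_2 − a_{i,2})² − a_{i,3}. Expanding,

Q_i(x) = (a_{i,1}² + a_{i,2}² − a_{i,3}) − 2a_{i,1}x_1 − 2a_{i,2}x_2 + (x_1² + x_2²),

so one may take p = k = 3, with ψ_0(a) = a_1² + a_2² − a_3, ψ_1(a) = −2a_1, ψ_2(a) = −2a_2, φ_1(x) = x_1, φ_2(x) = x_2, and φ_3(x) = x_1² + x_2². The transform φ(x) = (x_1, x_2, x_1² + x_2²) lifts the plane onto the standard paraboloid Σ ⊂ ℝ³, and each circle becomes the intersection of Σ with the hyperplane h_i(y) = ψ_0(a_i) + ψ_1(a_i)y_1 + ψ_2(a_i)y_2 + y_3 = 0. Circles therefore admit a linearization of dimension 3; the bound quoted below for single cells in arrangements of spheres follows from the same lifting device in higher dimension.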

4. Single cells

Lower envelopes are closely related to other substructures in arrangements, notably cells and zones. The lower envelope is a portion of the boundary of the bottommost cell of the arrangement, though the worst-case complexity of L(Γ) can be larger than that of the bottommost cell of A(Γ). In two dimensions, it was shown in [206] that the complexity of a single face in an arrangement of n arcs, each pair of which intersect in at most s points, is O(λ_{s+2}(n)), and so has the same asymptotic bound as the complexity of the lower envelope of such a collection of arcs. This suggests that the complexity of a single cell in an arrangement of n surface patches in ℝ^d satisfying Assumptions (A1) and (A2) is close to O(n^{d−1}). The Upper Bound Theorem implies that the complexity of a single cell in an arrangement of hyperplanes in ℝ^d is O(n^{⌊d/2⌋}), and the linearization technique described in Section 3 implies that the complexity of a single cell in an arrangement of n spheres is O(n^{⌈d/2⌉}). However, the lower-bound construction for lower envelopes implies a lower bound of Ω(n^{d−1}α(n)) for the complexity of a single cell in arrangements of simplices.

Fig. 5. A single cell in an arrangement of segments.

Pach and Sharir [295] were the first to prove a subcubic upper bound on the complexity of a single cell in arrangements of triangles in ℝ³. This bound was improved by Aronov and Sharir [52] to O(n^{7/3}), and subsequently to O(n² log n) [54]. The latter approach extends to higher dimensions; that is, the complexity of a single cell in an arrangement of n (d − 1)-simplices in ℝ^d is O(n^{d−1} log n). A simpler proof was given by Tagansky [342]. These approaches, however, do not extend to nonlinear surfaces even in ℝ³. Halperin [211,212] proved near-quadratic bounds on the complexity of a single cell in arrangements of certain classes of n bivariate surface patches, which arise in motion-planning applications. Halperin and Sharir [221] proved a near-quadratic bound on the


complexity of a single cell in an arrangement of the contact surfaces that arise in a rigid motion of a simple polygon amid polygonal obstacles in the plane, i.e., the surfaces that represent the placements of the polygon at which it touches one of the obstacles. The proof borrows ideas from the proof of Theorem 3.1. A near-optimal bound on the complexity of a single cell in the arrangement of an arbitrary collection of surface patches in ℝ³ satisfying Assumptions (A1) and (A2) was finally proved by Halperin and Sharir [219]:

THEOREM 4.1 (Halperin and Sharir [219]). Let Γ be a set of surface patches in ℝ³ satisfying Assumptions (A1) and (A2). The complexity of a single cell in A(Γ) is O(n^{2+ε}), for any ε > 0, where the constant of proportionality depends on ε and on the maximum degree of the surface patches and of their boundaries.

The proof proceeds along the same lines as the proof of Theorem 3.1. However, they establish the following two additional results to "bootstrap" the recurrences that the proof derives. Let C be the cell of A(Γ) whose complexity we want to bound.
(a) There are only O(n²) vertices v of the cell C that are locally x-extreme (that is, there is a neighborhood N of v and a connected component C' of the intersection of N with the interior of C, such that v lies to the left (in the x-direction) of all points of C', or v lies to the right of all these points).
(b) There are only O(n^{2+ε}) vertices on popular faces of C, that is, 2-faces f for which C lies locally near f on both sides of f.
Property (a) is proved by an appropriate decomposition of C into O(n²) subcells, in the style of a Morse decomposition of C (see [281]), so that each subcell has at most two points that are locally x-extreme in C. Property (b) is proved by applying the machinery of the proof of Theorem 3.1, where the quantity to be analyzed is the number of vertices of popular faces of C, rather than all inner vertices. Once these two results are available, the proof of Theorem 3.1 can be carried through, with appropriate modifications, to yield a recurrence for the number of vertices of C, whose solution is O(n^{2+ε}). We refer the reader to the original paper for more details.

To extend the above proof to higher dimensions, appropriate extensions of both properties (a) and (b) have to be established. The extension of (a) requires topological considerations related to Morse theory, and the extension of (b) requires an inductive argument, in which bounds on the number of vertices of popular faces of all dimensions need to be derived, using induction on the dimension of the faces.

Recently, Basu [67] showed that the topological complexity, i.e., the sum of the Betti numbers, of a single cell in an arrangement of n surface patches satisfying Assumptions (A1)-(A2) is O(n^{d−1}). Using this result and the ideas in [219], he showed that the combinatorial complexity of a single cell in an arrangement of n surface patches in ℝ^d satisfying Assumptions (A1) and (A2) is O(n^{d−1+ε}). He also showed that, under certain geometric assumptions on the surface patches, the combinatorial complexity of a single cell is also O(n^{d−1}).

The linearization technique in the previous section can be extended to bound the complexity of a cell as well; namely, one can prove the following.


THEOREM 4.2. Let Γ be a collection of n hypersurfaces in ℝ^d, of constant maximum degree b. If Γ admits a linearization of dimension k, then the combinatorial complexity of a cell of A(Γ) is O(n^{⌊k/2⌋}), where the constant of proportionality depends on k, d, and b.

5. Zones

Let Γ be a set of n surfaces in ℝ^d. The zone of a variety σ (not belonging to Γ), denoted zone(σ; Γ), is defined to be the set of d-dimensional cells in A(Γ) that intersect σ. The complexity of zone(σ; Γ) is defined to be the sum of the complexities of the cells of A(Γ) that belong to zone(σ; Γ), where the complexity of a cell in A(Γ) is the number of faces of all dimensions that are contained in the closure of the cell. The complexity of a zone was first studied by Edelsbrunner et al. [157]; see also [110]. The "classical" zone theorem [142,159] asserts that the maximum complexity of the zone of a hyperplane in an arrangement of n hyperplanes in ℝ^d is O(n^{d−1}), where the constant of proportionality depends on d. The original proof given by Edelsbrunner et al. [157] had some technical problems. A correct, and simpler, proof was given by Edelsbrunner et al. [159]. Their technique is actually quite general and can also be applied to obtain several other interesting combinatorial bounds involving arrangements. For example, the proof by Aronov and Sharir for the complexity of a single cell in arrangements of simplices [54] used a similar approach. Other results based on this technique can be found in [4,50,51]. We therefore describe the technique, as applied in the proof of the zone theorem:

THEOREM 5.1 (Edelsbrunner, Seidel and Sharir [159]). The maximum complexity of the zone of a hyperplane in an arrangement of n hyperplanes in ℝ^d is Θ(n^{d−1}).

This result is easy to prove for d = 2; see Chapter 1. For a set Γ of n hyperplanes in ℝ^d and another hyperplane b, let τ_k(b; Γ) denote the total number of k-faces contained on the boundary of cells in zone(b; Γ); each such k-face is counted once for each cell that it bounds. Let τ_k(n, d) = max τ_k(b; Γ), where the maximum is taken over all hyperplanes b and all sets Γ of n hyperplanes in ℝ^d. The maximum complexity of zone(b; Γ) is at most Σ_{k=0}^{d} τ_k(n, d). Thus the following lemma immediately implies the upper bound in Theorem 5.1.

LEMMA 5.2. For each d and 0 ≤ k ≤ d,

τ_k(n, d) = O(n^{d−1}),

where the constants of proportionality depend on d and k.

PROOF. We use induction on d. As just noted, the claim holds for d = 2. Assume that the claim holds for all d' < d. The induction step, which we omit here, charges the k-faces of the zone to lower-dimensional faces and applies the induction hypothesis within the cross-sections along the hyperplanes of Γ. □

For d = 2, the zone theorem immediately implies that Σ_{C ∈ A(Γ)} |C|² = O(n²), since the complexity of a cell is proportional to the number of lines meeting its boundary. For d ≥ 3, the same application of the zone theorem yields only Σ_C |C| f_C = O(n^d), where f_C is the number of hyperplanes of Γ meeting the boundary of C. Using the same induction scheme as in the proof of Theorem 5.1, Aronov et al. [50] showed that

Σ_{C ∈ A(Γ)} |C|² = O(n^d log^{⌊d/2⌋−1} n).

It is believed that the right bound is O(n^d). Note that such a result does not hold for arrangements of simplices or of surfaces, because the complexity of a single cell can be Ω(n^{d−1}). The zone theorem for hyperplane arrangements can be extended as follows.

THEOREM 5.3 (Aronov, Pellegrini and Sharir [51]). Let Γ be a set of n hyperplanes in ℝ^d, and let σ be a p-dimensional algebraic variety of some fixed degree, or the relative boundary of any convex set with affine dimension p + 1, for 0 ≤ p ≤ d. The complexity of zone(σ; Γ) is O(n^{⌊(d+p)/2⌋} log^β n), where β = d + p (mod 2), and the bound is almost tight (up to the logarithmic factor) in the worst case.

In particular, for p = d − 1, the complexity of the zone is O(n^{d−1} log n), which is almost the same as the complexity of the zone of a hyperplane in such an arrangement.


The proof proceeds along the same lines as the inductive proof of Theorem 5.1. However, when the removal and re-insertion of a hyperplane γ ∈ Γ splits a face f of zone(σ; Γ \ {γ}) into two subfaces, both lying in zone(σ; Γ), the charging scheme used in the proof of Theorem 5.1 becomes inadequate, because f ∩ γ need not belong to the zone of σ ∩ γ in the (d − 1)-dimensional cross-section of A(Γ) along γ. What is true, however, is that f ∩ γ is a face incident to a popular facet of zone(σ; Γ) along γ, that is, a facet g ⊆ γ whose two incident cells both belong to the zone. Thus the induction proceeds not by decreasing the dimension of the arrangement (as was done in the proof of Theorem 5.1), but by reapplying the same machinery to bound the number of vertices of popular facets of the original zone(σ; Γ). This in turn requires similar bounds on the number of vertices of lower-dimensional popular faces. We refer the reader to Aronov et al. [51] for more details.

In general, the zone of a surface in an arrangement of n surfaces in ℝ^d can be transformed into a single cell in another arrangement of O(n) surface patches in ℝ^d. For example, let Γ be a set of n (d − 1)-simplices in ℝ^d, and let σ be a hyperplane. We split each γ ∈ Γ into two polyhedra at the intersection of γ and σ (if the intersection is nonempty), push these two polyhedra slightly away from each other, and, if necessary, retriangulate each polyhedron into a constant number of simplices. In this manner, we obtain a collection Γ' of O(n) simplices, and all cells of the zone of σ in A(Γ) now fuse into a single cell of A(Γ'). Moreover, by the general-position assumption, the complexity of the zone of σ in Γ is easily seen to be dominated by the complexity of the new single cell of A(Γ'). (The same technique has been used earlier in [150] to obtain a near-linear bound on the complexity of the zone of an arc in a two-dimensional arrangement of arcs.) Hence, the following theorem is an easy consequence of the result by Aronov and Sharir [54].

THEOREM 5.4. The complexity of the zone of a hyperplane in an arrangement of n (d − 1)-simplices in ℝ^d is O(n^{d−1} log n).

Using a similar argument, one can prove the following.

THEOREM 5.5 (Basu [67]; Halperin and Sharir [219]). Let Γ be a collection of n surface patches in ℝ^d, satisfying Assumptions (A1) and (A2). The combinatorial complexity of the zone in A(Γ) of an algebraic surface σ of some fixed degree is O(n^{d−1+ε}), for any ε > 0, where the constant of proportionality depends on ε, on the dimension, on the maximum degree of the given surfaces and their boundaries, and on the degree of σ.

Once the bound on the complexity of a single cell in an arrangement of general algebraic surfaces is extended to higher dimensions, it should immediately yield, using the same machinery, a similar bound for the zone of a surface in such an arrangement.

6. Levels

The level of a point p ∈ ℝ^d in an arrangement A(Γ) of a set Γ of surface patches satisfying Assumptions (A1)-(A3) is the number of surfaces of Γ lying vertically below p. For 0 ≤ k < n, the k-level (respectively, the (≤k)-level) of A(Γ) is the set of points on the surfaces of Γ whose level is k (respectively, at most k).

For an arrangement of n planes in ℝ³, successively better bounds on the maximum complexity of the k-level have been obtained. The bound was improved by Aronov et al. [46] and Eppstein [173] to O(n^{8/3} polylog n), and then by Dey and Edelsbrunner [135] to O(n^{8/3}). The best bound known, due to Agarwal et al. [8], is O(n(k + 1)^{5/3}). They also proved a bound on the complexity of the k-level for arrangements of triangles in ℝ³. A nontrivial bound on the complexity of the k-level in an arrangement of n hyperplanes in d > 3 dimensions, of the form O(n^{d−ε_d}), for some constant ε_d that decreases exponentially with d, was obtained in [36,352]. This has later been slightly improved to O(n^{⌊d/2⌋}k^{⌈d/2⌉−ε_d}) in [8]. Table 1 summarizes the known upper bounds on k-levels.
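The level of a point is trivial to evaluate directly from the definition. A minimal sketch (added here for illustration), specialized to lines in the plane rather than general surface patches; the general case only replaces the test "line below point" by "surface vertically below point":

```python
# Illustrative sketch: the level of a point in an arrangement of lines
# y = a*x + b is the number of lines passing strictly below it.

def level(point, lines):
    """point: (x, y); lines: list of (a, b) representing y = a*x + b."""
    x, y = point
    return sum(1 for (a, b) in lines if a * x + b < y)

# With the lines y = 0, y = 1, y = 2, the point (0, 1.5) has level 2,
# so it lies on neither the 0-level nor the 1-level of the arrangement.
assert level((0.0, 1.5), [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]) == 2
```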

7. Many cells and incidences

In the previous two sections we bounded the complexity of families of d-dimensional cells in A(Γ) that satisfied certain conditions (e.g., cells intersected by a surface, or cells of level at most k). We can ask a more general question: What is the complexity of any m distinct cells in A(Γ)? A single cell in an arrangement of lines in the plane can have n


Table 2. Complexity of many cells

Objects                      Complexity                                          Source
Lines in ℝ²                  Θ(m^{2/3}n^{2/3} + n)                               [121]
Segments in ℝ²               O(m^{2/3}n^{2/3} + nα(n) + n log m)                 [48]
Unit circles in ℝ²           O(m^{2/3}n^{2/3}α^{1/3}(n) + n)                     [121]
Circles in ℝ²                O(m^{3/5}n^{4/5}4^{α(n)/5} + n)                     [121]
Arcs in ℝ²                   O(m^{1/2}λ_{s+2}(n))                                [150]
Planes in ℝ³                 O(m^{2/3}n + n²)                                    [7]
Hyperplanes in ℝ^d, d ≥ 4    O(m^{1/2}n^{d/2} log^β n), β = (⌊d/2⌋ − 1)/2        [50]

edges, but can the total complexity of m cells in an arrangement of lines be Ω(mn)? This is certainly false for m = Θ(n²). We can also formulate the above problem as follows: Let P be a set of m points and Γ a set of n surfaces in ℝ^d satisfying Assumptions (A1) and (A2). Define C(P, Γ) to be the set of cells in A(Γ) that contain at least one point of P. Define μ(P, Γ) = Σ_{C ∈ C(P,Γ)} |C| and μ(m, n, G) = max μ(P, Γ), where the maximum is taken over all sets P of m points and over all sets Γ of n surfaces in a given class G.

Let L be the set of all lines in the plane. Canham [89] proved that μ(m, n, L) = O(m² + n). Although this bound is optimal for m ≤ √n, it is weak for larger values of m. Clarkson et al. [121] proved that μ(m, n, L) = Θ(m^{2/3}n^{2/3} + n). Their technique, based on random sampling, is general and constructive. It has led to several important combinatorial and algorithmic results on arrangements [121,204,205]. For example, following a similar, but considerably more involved, approach, Aronov et al. [48] proved that μ(m, n, E) = O(m^{2/3}n^{2/3} + nα(n) + n log m), where E is the set of all line segments in the plane. An improved bound can be attained if the number of vertices in the arrangement of segments is small. Hershberger and Snoeyink [232] proved an O(m^{2/3}n^{2/3} + n) upper bound on the complexity of m distinct cells in arrangements of n segments in the plane, where the segments satisfy certain additional conditions. Although Clarkson et al. [121] proved nontrivial bounds on the complexity of m distinct cells in arrangements of circles (see Table 2 above), no tight bound is known.

OPEN PROBLEM 4. What is the maximum complexity of m distinct cells in an arrangement of n circles in the plane?

Complexity of many cells in hyperplane arrangements in higher dimensions was first studied by Edelsbrunner and Haussler [155], who bounded the maximum number of (d − 1)-dimensional faces in m distinct cells in an arrangement of n hyperplanes in ℝ^d. Refining an argument by Edelsbrunner et al. [153], Agarwal and Aronov [7] improved this bound to O(m^{2/3}n^{d/3} + n^{d−1}), which, by a lower-bound construction of Edelsbrunner and Haussler [155], is tight in the worst case.


Aronov et al. [50] proved that μ(m, n, H) = O(m^{1/2}n^{d/2} log^γ n), where H is the set of all hyperplanes in ℝ^d and γ = (⌊d/2⌋ − 1)/2. They also proved several lower bounds on μ(m, n, H): for odd values of d and m ≤ n, μ(m, n, H) = Θ(mn^{⌊d/2⌋}); for m of the form Θ(n^{d−2k}), where 0 ≤ k ≤ ⌊d/2⌋ is an integer, μ(m, n, H) = Θ(m^{1/2}n^{d/2}); and for arbitrary values of m, μ(m, n, H) = Ω(m^{1/2}n^{d/2−1/4}). Agarwal [4], Guibas et al. [201], and Halperin and Sharir [217] obtained bounds on "special" subsets of cells in hyperplane arrangements.

A problem closely related to, but somewhat simpler than, the many-cells problem is the incidence problem. Here is a simple instance of this problem: Let Γ be a set of n lines and P a set of m points in the plane. Define I(P, Γ) = Σ_{ℓ∈Γ} |P ∩ ℓ|, and set I(m, n) = max I(P, Γ), where the maximum is taken over all sets P of m distinct points and over all sets Γ of n distinct lines in the plane. Of course, this problem is interesting only when the lines of Γ are in highly degenerate position. If n = m² + m + 1, then a finite projective plane of order m has n points and n lines, and each line contains m + 1 = Θ(n^{1/2}) points, so the number of incidences between the n points and the n lines is Θ(n^{3/2}). Szemerédi and Trotter [339] proved that such a construction is infeasible in ℝ². In a subsequent paper, Szemerédi and Trotter [340] proved that I(m, n) = O(m^{2/3}n^{2/3} + m + n). Their proof is, however, quite intricate, and an astronomical constant is hidden in the big-O notation. Their bound is asymptotically tight in the worst case, as shown in [165]. A considerably simpler proof, with a small constant of proportionality in the bound, was given by Clarkson et al. [121], based on the random-sampling technique. In fact, the bound on many cells in arrangements of lines immediately yields a similar bound on I(m, n) [121], but the proof can be somewhat simplified for the incidence problem. Here we present an even more elegant and simpler proof, due to Székely [338], based on Lemma 6.4:

THEOREM 7.1 (Szemerédi and Trotter [340]). Let Γ be a set of n lines and P a set of m points in the plane. Then

I(P, Γ) = O(m^{2/3}n^{2/3} + m + n).

PROOF. We construct a geometric graph G = (V, E) whose vertices are the points of P, connecting two vertices p, q by an edge if p and q are consecutive along a line of Γ. Each edge of G is a portion of a line of Γ, and no two edges overlap; therefore at most n(n − 1)/2 pairs of edges cross each other. Note that I(P, Γ) ≤ |E| + n. If |E| ≤ 4m, there is nothing to prove. Otherwise, by Lemma 6.4,

    n(n − 1)/2 ≥ cr(G) ≥ |E|³/(c·m²),

for an absolute constant c, which implies that I(P, Γ) = O(m^{2/3}n^{2/3} + n).  □
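To spell out the final step, assume (this is the standard form of the crossing lemma, and our only assumption about Lemma 6.4) that cr(G) ≥ |E|³/(64|V|²) whenever |E| ≥ 4|V|. Then

    \frac{|E|^3}{64\,m^2} \le \operatorname{cr}(G) \le \binom{n}{2} \le \frac{n^2}{2}
    \quad\Longrightarrow\quad
    |E| \le (32\, m^2 n^2)^{1/3} = 32^{1/3}\, m^{2/3} n^{2/3},

so I(P, Γ) ≤ |E| + n = O(m^{2/3}n^{2/3} + n) in this case, and I(P, Γ) ≤ 4m + n otherwise; in both cases I(P, Γ) = O(m^{2/3}n^{2/3} + m + n).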

Valtr [348] has studied the incidence problem and its generalization for dense point sets, in which the ratio of the maximum and the minimum distances in P is at most O(√m). The incidence problem has been studied for other curves as well. Of particular interest is the number of incidences between points and unit circles in the plane [121,335], because of its close relationship with the following major open problem in combinatorial geometry,


which was originally introduced by Erdős in 1946 [175]: Let S be a set of n points in the plane. How many pairs of points in S are at distance 1? Spencer et al. [335] proved, by modifying the proof of Szemerédi and Trotter [340], that the number of incidences between m points and n unit circles is O(m^{2/3}n^{2/3} + m + n); the proofs by Clarkson et al. [121] and by Székely [338] have been extended to this case as well. The incidence bound implies that the number of unit distances determined by S is O(n^{4/3}). However, the best known lower bound on the number of unit distances is only n^{1+c/log log n} [175] (see also [294]).

OPEN PROBLEM 5. How many pairs of points in a given planar set of n points are at distance 1?

Füredi [188] showed that if the points of S are in convex position, then the number of pairs at distance 1 is O(n log n); the best known lower bound in this case is 7n − 12, by Edelsbrunner and Hajnal [154]. The best known upper bound on the number of unit distances in ℝ³ is O(n^{3/2}) [121]. If S is a set of n points in ℝ³ such that no four points of S lie on a circle, then the number of pairs of points of S at unit distance is O(n^{4/3}) [204]. We can also state the incidence problem in higher dimensions. If we make no additional assumptions on the points and surfaces, the maximum number of incidences between m points and n planes is obviously mn: take a set of n planes passing through a common line and place m points on this line. Agarwal and Aronov [7] proved that if Γ is a set of n planes and P a set of m points in ℝ³ such that no three points of P are collinear, then I(P, Γ) = O(m^{2/3}n^{2/3} + m + n). Edelsbrunner and Sharir [160] showed that if Γ is a set of n unit spheres in ℝ³ and P a set of m points none of which lies in the interior of any sphere, then I(P, Γ) = O(m^{2/3}n^{2/3} + m + n). See [204,297] for other results on incidences in higher dimensions.
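For experimenting with small configurations (e.g., suitably scaled sections of the integer grid, which realize the superlinear lower bound), a quadratic-time brute-force counter suffices; this sketch is ours, not an algorithm from the literature:

    import math

    def unit_distance_pairs(points, eps=1e-9):
        # Count pairs of points at (approximately) unit distance, O(n^2).
        count = 0
        for i, (x1, y1) in enumerate(points):
            for (x2, y2) in points[i + 1:]:
                if abs(math.hypot(x2 - x1, y2 - y1) - 1.0) < eps:
                    count += 1
        return count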

8. Generalized Voronoi diagrams

An interesting application of the new bounds on the complexity of lower envelopes is to generalized Voronoi diagrams in higher dimensions. Let S be a set of n pairwise-disjoint convex objects in ℝ^d, each of constant description complexity, and let ρ be some metric. For a point x ∈ ℝ^d, let φ(x) denote the set of objects nearest to x, i.e.,

    φ(x) = {s ∈ S | ρ(x, s) ≤ ρ(x, s′) for all s′ ∈ S}.

The Voronoi diagram Vor_ρ(S) of S under the metric ρ (sometimes also simply denoted Vor(S)) is the partition of ℝ^d into maximal connected regions C of various dimensions, so that the set φ(x) is the same for all x ∈ C. Let γ_i be the graph of the function x_{d+1} = ρ(x, s_i), and set Γ = {γ_i | 1 ≤ i ≤ n}. Edelsbrunner and Seidel [158] observed that Vor_ρ(S) is the minimization diagram of Γ. In the classical case, in which ρ is the Euclidean metric and the objects in S are singletons (points), the graphs of these distance functions can be replaced by a collection of n hyperplanes in ℝ^{d+1}, using the linearization technique, without affecting the minimization diagram.
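This linearization is worth spelling out. Since

    \rho(x, s)^2 = \|x\|^2 - 2\langle x, s\rangle + \|s\|^2,

and the term ‖x‖² does not depend on the site s, comparing distances amounts to comparing the affine functions f_s(x) = ‖s‖² − 2⟨x, s⟩; hence Vor(S) is also the minimization diagram of the n hyperplanes x_{d+1} = ‖s‖² − 2⟨x, s⟩, for s ∈ S, in ℝ^{d+1}.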


Hence the maximum possible complexity of Vor(S) is O(n^{⌈d/2⌉}), which actually can be achieved (see, e.g., [248,322]). In more general settings, though, this reduction is not possible. Nevertheless, the bounds on the complexity of lower envelopes imply that, under reasonable assumptions on ρ and on the objects in S, the complexity of the diagram is O(n^{d+ε}), for any ε > 0. While this bound is nontrivial, it is conjectured to be too weak. For example, this bound is near-quadratic for planar Voronoi diagrams, but the complexity of almost every planar Voronoi diagram is only O(n), although there are certain distance functions for which the corresponding planar Voronoi diagram can have quadratic complexity [59]. In three dimensions, the above-mentioned bound for point sites and the Euclidean metric is O(n²). It has been a long-standing open problem to determine whether a similar quadratic or near-quadratic bound holds in ℝ³ for more general objects and metrics (here the new results on lower envelopes only give an upper bound of O(n^{3+ε})). The problem stated above calls for improving this bound by roughly a factor of n. Since we are aiming for a bound that is two orders of magnitude better than the complexity of A(Γ), it appears to be a considerably more difficult problem than that of lower envelopes. The only hope of making progress here is to exploit the special structure of the distance functions ρ(x, s).

Fortunately, some progress on this problem was made recently. It was shown by Chew et al. [115] that the complexity of the Voronoi diagram is O(n²α(n) log n) for the case in which the objects of S are lines in ℝ³ and the metric ρ is a convex distance function induced by a convex polytope with a constant number of facets (see [115] for more details). Note that such a distance function is not necessarily a metric, because it fails to be symmetric if the defining polytope is not centrally symmetric. The L₁ and L∞ metrics are special cases of such distance functions. The best known lower bound for the complexity of the diagram in this special case is Ω(n²α(n)). Dwyer [140] has shown that the expected complexity of the Voronoi diagram of a set of n random lines in ℝ³ is O(n^{3/2}). In another recent paper [80], it is shown that the maximum complexity of the L∞-Voronoi diagram of a set of n points in ℝ³ is Θ(n²). Finally, it is shown in [341] that the complexity of the three-dimensional Voronoi diagram of point sites under a general polyhedral convex distance function (induced by a polytope with O(1) facets) is O(n² log n).

OPEN PROBLEM 6.
(i) Is the complexity of the Voronoi diagram of a set S of n lines under the Euclidean metric in ℝ³ close to n²?
(ii) Is the complexity of the Voronoi diagram of a set S of pairwise-disjoint convex polyhedra in ℝ³, with a total of n vertices, close to n² under a polyhedral convex distance function?

An interesting special case of these problems involves dynamic Voronoi diagrams for moving points in the plane. Let S be a set of n points in the plane, each moving along some line at some fixed velocity. The goal is to bound the number of combinatorial changes of the Euclidean diagram Vor(S) over time. This dynamic Voronoi diagram can easily be transformed into a three-dimensional Voronoi diagram by adding the time t as a third coordinate: the points become lines in ℝ³, and the metric is a distance function induced by a horizontal disk (that is, the distance from a point p(x₀, y₀, t₀) to a line ℓ is the Euclidean distance from p to the point of intersection of ℓ with the horizontal plane t = t₀). Here too the open


problem is to derive a near-quadratic bound on the complexity of the diagram. Cubic or near-cubic bounds are known for this problem, even under more general settings [184,203,328], but subcubic bounds are known only in some very special cases [114].

Next, consider the problem of bounding the complexity of generalized Voronoi diagrams in higher dimensions. As mentioned above, when the objects in S are n points in ℝ^d and the metric is Euclidean, the complexity of Vor(S) is O(n^{⌈d/2⌉}). As d increases, this becomes significantly smaller than the naive O(n^{d+1}) bound, and also smaller than the improved bound, O(n^{d+ε}), obtained by viewing the Voronoi diagram as a lower envelope in ℝ^{d+1}. The same bound of Θ(n^{⌈d/2⌉}) has recently been obtained in [80] for the complexity of the L∞-diagram of n points in ℝ^d (it was also shown that this bound is tight in the worst case). It is thus tempting to conjecture that the maximum complexity of generalized Voronoi diagrams in higher dimensions is close to this bound. Unfortunately, this was recently shown by Aronov [44] to be false, by a construction giving a lower bound of Ω(n^{d−1}). The sites used in this construction are convex polytopes, and the distance is either Euclidean or a polyhedral convex distance function. For d = 3, this lower bound does not contradict the conjecture made above, that the complexity of generalized Voronoi diagrams should be at most near-quadratic in this case. Also, in higher dimensions, the conjecture is still not refuted when the sites are singleton points. Finally, for the general case, the construction by Aronov still leaves a gap of roughly a factor of n between the known upper and lower bounds.

9. Union of geometric objects

Let K₁, ..., K_n be n connected d-dimensional sets in ℝ^d. In this section we study the complexity of the union K = ∪_{i=1}^n K_i. Most of the work to date on this problem has been in two or three dimensions.

Union of planar objects. Let us assume that each K_i is a Jordan region, bounded by a closed Jordan curve γ_i. Kedem et al. [241] proved that if any two boundaries γ_i intersect in at most two points, then ∂K contains at most 6n − 12 intersection points (provided n ≥ 3), and that this bound is tight in the worst case. An immediate corollary of their result is that the number of intersection points on the boundary of the union of a collection of homothets of some fixed convex set is linear, because the boundaries of any two such homothetic copies in general position intersect in at most two points; the bound also holds when the homothets are not in general position. On the other hand, if pairs of boundaries may intersect in four or more points, then ∂K may contain Ω(n²) intersection points in the worst case; see Figure 9. This raises the question of what happens if any two boundaries intersect in at most three points. Notice that in general this question is meaningless, since any two closed curves must intersect in an even number of points (assuming nondegenerate configurations). To make the problem interesting, let Γ be a collection of n Jordan arcs such that both endpoints of each arc γ_i ∈ Γ lie on the x-axis, and let K_i be the region between γ_i and the x-axis. Edelsbrunner et al. [149] have shown that the maximum combinatorial complexity of the union K is Θ(nα(n)). The upper bound requires a rather sophisticated analysis of the topological structure of K, and the lower bound follows from the construction by Wiernik and Sharir for lower envelopes of segments [357].


Fig. 9. Union of Jordan regions.

Next, consider the case in which each K_i is a triangle. If the triangles are arbitrary, then a simple modification of the construction in Figure 9 shows that K may have quadratic complexity in the worst case. However, for the union to have many holes, some of the triangles have to be "thin", that is, some of their angles have to be very small. Matoušek et al. have shown that if the given triangles are all fat, meaning that each of their angles exceeds some fixed constant δ₀, then their union has only a linear number of holes (i.e., connected components of ℝ² \ K), and the complexity of K is O(n log log n); the constants of proportionality depend on δ₀. Efrat et al. proved that the complexity of the union of n fat wedges is O(n). The following problem on the union of fat objects, posed by M. Bern, is still open.

OPEN PROBLEM 7. Let Δ₁, ..., Δ_n be n triangles in the plane, and let a_i denote the aspect ratio of the smallest rectangle containing Δ_i. Suppose Σ_{i=1}^n a_i = O(n). What is the complexity of ∪_{i=1}^n Δ_i?

The notion of fatness extends to more general planar regions, as follows. Let S be a collection of n regions, each of constant description complexity, such that each pair of boundaries intersect in at most some constant number s of points, and suppose that there is a constant α ≥ 1 such that, for each object of S, the ratio between the radii of the smallest enclosing disk and the largest inscribed disk is at most α. Efrat and Sharir [170] showed that the complexity of the union K of such a collection is O(n^{1+ε}), for any ε > 0, where the constant of proportionality depends on ε, s, and α. See [166,167] for slightly improved results and for extensions to other classes of objects. Their proof requires, as an initial but important substep, a bound on the number of regular vertices of the union: these are the vertices of ∂K formed by pairs of boundaries that intersect exactly twice. In fact, the analysis of [170] bounds only the number of regular vertices of the union. Nevertheless, motivated by this problem, Pach and Sharir [296] have shown that, for an arbitrary collection of regions each pair of whose boundaries cross in a constant number of points, the numbers R (resp. I) of regular (resp. irregular) vertices on ∂K satisfy R ≤ 2I + 6n − 12. This inequality has been used in [170] to obtain the bound just mentioned on the complexity of the union of fat regions. Bounds of this kind relating R and I are


interesting in their own right, and some additional results concerning them have recently been obtained by Aronov et al. [49]. First, if there are only regular vertices (i.e., every pair of boundaries intersect at most twice), then the inequality obtained in [296] implies that the complexity of the union is at most 6n − 12, so the result by Pach and Sharir extends the older result of Kedem et al. [241]. In general, though, I can be quadratic, so the above inequality only yields a quadratic upper bound on the number of regular vertices of the union. However, it was shown in [49] that in many cases R is subquadratic. This is the case when the given regions are such that every pair of boundaries cross at most a constant number of times; if in addition all the regions are convex, the upper bound is close to O(n^{3/2}). Aronov and Sharir [55] proved that the complexity of the union of n convex polygons in ℝ² with a total of s vertices is O(n² + sα(n)).

Union in three and higher dimensions. Little is known about the complexity of the union in higher dimensions. It was recently shown in [80] that the maximum complexity of the union of n axis-parallel hypercubes in ℝ^d is Θ(n^{⌈d/2⌉}), and that this improves to Θ(n^{⌊d/2⌋}) if all the hypercubes have the same size. However, the following problem remains open.

OPEN PROBLEM 8. What is the complexity of the union of n congruent cubes in ℝ³?

Aronov and Sharir [53] proved that the complexity of the union of n convex polyhedra in ℝ³ with a total of s faces is O(n³ + sn log² n). The bound was improved by Aronov et al. [57] to O(n³ + sn log s). Unions of objects also arise as subproblems in the study of generalized Voronoi diagrams, as follows. Let S and ρ be as in the previous section (say, for the three-dimensional case). Let K denote the region consisting of all points x ∈ ℝ³ whose smallest distance from a site in S is at most r, for some fixed parameter r > 0. Then K = ∪_{s∈S} B(s, r), where B(s, r) = {x ∈ ℝ³ | ρ(x, s) ≤ r}. We thus face the problem of bounding the combinatorial complexity of the union of n objects in ℝ³ (of some special type). For example, if S is a set of lines and ρ is the Euclidean distance, the objects are n congruent infinite cylinders in ℝ³. In general, if the metric ρ is a distance function induced by some convex body P, the resulting objects are the Minkowski sums s ⊕ (−rP), for s ∈ S, where A ⊕ B = {x + y | x ∈ A, y ∈ B}. Of course, this problem can also be stated in any higher dimension. Since it has been conjectured that the complexity of the whole Voronoi diagram in ℝ³ should be near-quadratic, the same conjecture should apply to the (simpler) structure K (whose boundary can be regarded as a level curve of the diagram at height r; it does indeed correspond to the cross-section at height r of the lower envelope in ℝ⁴ that represents the diagram). Recently, this conjecture was confirmed by Aronov and Sharir [56] in the special case where both P and the objects of S are convex polyhedra. They specialized their analysis of the union of convex polytopes to obtain an improved bound in the special case in which the polyhedra in question are Minkowski sums of the form R_i ⊕ P, where the R_i's are n pairwise-disjoint convex polyhedra, P is a convex polyhedron, and the total number of faces of these Minkowski sums is s. The improved bounds are O(ns log n) and Ω(nsα(n)). If P is a cube, then the complexity of the union of the Minkowski sums is O(n²α(n)) [223].


Agarwal and Sharir [26] showed that if S is a set of n lines and P is a ball in ℝ³, i.e., K is the union of n congruent cylinders, then the complexity of K is O(n^{8/3+ε}), for any ε > 0. This bound was later improved and extended in [27], where it is proved that the complexity of the Minkowski sum of a ball with a set of n triangles in ℝ³ is O(n^{5/2}).

OPEN PROBLEM 9. What is the complexity of the union of n infinite cylinders of different radii in ℝ³?

10. Decomposition of arrangements

Many applications call for decomposing each cell of the arrangement into subcells of constant description complexity; see Sections 12 and 13 for a sample of such applications. In this section we describe a few general schemes that have been proposed for decomposing arrangements.

10.1. Triangulating hyperplane arrangements

Each k-dimensional cell in an arrangement of hyperplanes is a convex polyhedron, so we can triangulate it into k-simplices; if the cell is unbounded, some of the simplices in the triangulation will be unbounded. A commonly used scheme to triangulate a convex polytope P is the so-called bottom-vertex triangulation, denoted P^∇. It recursively triangulates every face of P as follows. An edge is a one-dimensional simplex, so there is nothing to do. Suppose we have triangulated all j-dimensional cells of P for j < k. We now triangulate a k-dimensional cell C as follows. Let v be the vertex of C with the minimum x_d-coordinate. For each (k − 1)-dimensional simplex Δ lying on the boundary of C but not containing v (Δ was constructed while triangulating a (k − 1)-dimensional cell incident to C), we extend Δ to a k-dimensional simplex by taking the convex hull of Δ and v; see Figure 10(i). (Unbounded cells require some care in this definition; see [117].) The number of simplices in P^∇ is proportional to the number of vertices of P. If we want to triangulate the entire arrangement, or more than one of its cells, we compute the bottom-vertex triangulation f^∇ of each face f in increasing order of dimension. Let A^∇(Γ) denote the bottom-vertex triangulation of A(Γ). A useful property of A^∇(Γ) is that each simplex Δ ∈ A^∇(Γ) is defined by a set D(Δ) of at most d(d + 3)/2 hyperplanes of Γ, in the sense that Δ ∈ A^∇(D(Δ)). Moreover, if K(Δ) ⊆ Γ is the subset of hyperplanes intersecting Δ, then, for a subset R ⊆ Γ, we have Δ ∈ A^∇(R) if and only if D(Δ) ⊆ R and K(Δ) ∩ R = ∅. A disadvantage of the bottom-vertex triangulation is that some vertices may acquire a large degree. Methods for obtaining low-degree triangulations in two and three dimensions have been proposed in [137].
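In the plane the construction reduces to a fan. The following minimal Python sketch (our own illustration, assuming a convex polygon given by its vertices in counterclockwise order) triangulates a single convex face from its bottom vertex, mirroring one step of the recursive definition:

    def bottom_vertex_triangulation(polygon):
        # Fan triangulation of a convex polygon from its "bottom" vertex,
        # the vertex with lexicographically smallest (y, x).  Each boundary
        # edge not incident to that vertex v is extended to the triangle
        # conv(edge, v), exactly as in the recursive definition.
        i = min(range(len(polygon)),
                key=lambda j: (polygon[j][1], polygon[j][0]))
        v = polygon[i]
        n = len(polygon)
        tris = []
        for j in range(n):
            a, b = polygon[j], polygon[(j + 1) % n]
            if v in (a, b):
                continue          # skip the two edges incident to v
            tris.append((a, b, v))
        return tris               # n - 2 triangles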

10.2. Vertical decomposition

Unfortunately, the bottom-vertex triangulation scheme does not work for arrangements of surfaces. Collins [124] described a general decomposition scheme, called cylindrical


Fig. 10. (i) Bottom vertex triangulation of a convex polygon; (ii) vertical decomposition of a cell in an arrangement of segments.

algebraic decomposition, that decomposes A(Γ) into (bn)^{2^{O(d)}} cells, each a semialgebraic set of constant description complexity (however, the maximum algebraic degree involved in defining a cell grows exponentially with d) and homeomorphic to a ball of the appropriate dimension. Moreover, his algorithm produces a cell complex, i.e., the closures of any two cells are either disjoint or their intersection is the closure of another, lower-dimensional cell of the decomposition. This bound is quite far from the trivial lower bound of Ω(n^d), the size of the arrangement itself.

A significantly better scheme for decomposing arrangements of general surfaces is their vertical decomposition. Although vertical decompositions of polygons in the plane have been in use for a long time, the scheme was extended to higher dimensions only in the late 1980s. We describe the method briefly. Let C be a d-dimensional cell of A(Γ). For each point x ∈ C, erect the maximal vertical segment through x (in the x_d-direction) contained in the closure of C; the union of these segments decomposes C into "prisms", each bounded from above and from below by portions of two fixed surfaces of Γ, and each prism is then decomposed recursively, one dimension lower, by applying the construction to its projection onto ℝ^{d−1}. The resulting cells have constant description complexity. The best known upper bound on the complexity of the vertical decomposition of A(Γ) is O(n^{2d−4}λ_q(n)), for a constant q depending on d and b [103,330], while the only known lower bound is the trivial Ω(n^d); for d = 3 the two bounds nearly coincide. Improving the upper bound appears to be very challenging. This problem has been open since 1989; it seems difficult enough to preempt, at the present state of knowledge, any specific conjecture on the true maximum complexity of the vertical decomposition of arrangements in d > 3 dimensions.

OPEN PROBLEM 10. What is the complexity of the vertical decomposition of the arrangement of n surfaces in ℝ^d satisfying Assumptions (A1)–(A2)?
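The planar primitive underlying the scheme is easy to implement directly. The following brute-force Python sketch (illustrative only; real implementations use the incremental machinery cited in Section 12) computes, for every vertex of an arrangement of segments, its vertical extensions, i.e. the first segments hit by the upward and downward vertical rays, from which the trapezoids of the two-dimensional vertical decomposition can be read off:

    def _y_at(seg, x):
        # y-coordinate of the (non-vertical) segment at abscissa x,
        # or None if x lies outside the segment's x-span.
        (x1, y1), (x2, y2) = sorted(seg)
        if x1 == x2 or not (x1 <= x <= x2):
            return None
        return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

    def _seg_intersect(s, t):
        # Intersection point of two segments, or None.
        (ax, ay), (bx, by) = s
        (cx, cy), (dx, dy) = t
        rx, ry = bx - ax, by - ay
        qx, qy = dx - cx, dy - cy
        denom = rx * qy - ry * qx
        if denom == 0:
            return None                       # parallel or collinear
        u = ((cx - ax) * qy - (cy - ay) * qx) / denom
        v = ((cx - ax) * ry - (cy - ay) * rx) / denom
        if 0 <= u <= 1 and 0 <= v <= 1:
            return (ax + u * rx, ay + u * ry)
        return None

    def vertical_extensions(segments, eps=1e-9):
        # For each vertex (endpoint or pairwise intersection), find the
        # first segment hit above and below.  Returns a dict mapping
        # vertex -> (y_above, y_below); None means the ray escapes.
        verts = [p for s in segments for p in s]
        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                p = _seg_intersect(segments[i], segments[j])
                if p is not None:
                    verts.append(p)
        ext = {}
        for (x, y) in verts:
            up = down = None
            for s in segments:
                ys = _y_at(s, x)
                if ys is None:
                    continue
                if ys > y + eps and (up is None or ys < up):
                    up = ys
                if ys < y - eps and (down is None or ys > down):
                    down = ys
            ext[(x, y)] = (up, down)
        return ext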

The bound stated above applies to the vertical decomposition of an entire arrangement of surfaces. In many applications, however, one is interested in the vertical decomposition of only a portion of the arrangement, e.g., a single cell, the lower envelope, the zone of some surface, or a specific collection of cells of the arrangement. Since, in general, the complexity of such a portion is known (or conjectured) to be smaller than the complexity of the entire arrangement, one would like to conjecture that a similar phenomenon applies to vertical decompositions. Schwarzkopf and Sharir [320] showed that the complexity of the vertical decomposition of a single cell in an arrangement of n surface patches in ℝ³, as above, is O(n^{2+ε}), for any ε > 0. A similar near-quadratic bound has been obtained by Agarwal et al. [9] for the vertical decomposition of the region enclosed between the lower envelope of one set of bivariate surface patches and the upper envelope of another such set. Another recent result, by Agarwal et al. [14], gives a bound on the complexity of the vertical decomposition of A_{≤k}(Γ), for a set Γ of surfaces in ℝ³, which is only slightly larger than the worst-case complexity of A_{≤k}(Γ).

OPEN PROBLEM 11. What is the complexity of the vertical decomposition of the minimization diagram of n surfaces in ℝ^d satisfying Assumptions (A1)–(A2)?

Agarwal and Sharir [25] proved a near-cubic upper bound on the complexity of the vertical decomposition in the special case in which the surfaces are graphs of trivariate polynomials and the intersection surface of any pair of surfaces is xy-monotone. In fact, their bound holds in a more general setting; see the original paper for details. An interesting special case of vertical decomposition is that of hyperplanes. For such arrangements the vertical decomposition is too cumbersome a construct because, as described above, one can use the bottom-vertex triangulation (or any other triangulation) to decompose the arrangement into Θ(n^d) simplices. Still, it is probably a useful exercise to understand the complexity of the vertical decomposition of an arrangement of n hyperplanes in ℝ^d. A result of Guibas et al. [202] gives an almost tight bound of O(n⁴ log n) for this quantity in ℝ⁴, but nothing significantly better than the general bound is known for d > 4. Another interesting special case is that of triangles in 3-space, studied in [130,342], where almost tight bounds were obtained for the case of a single


Table 3
Combinatorial bounds on the maximum complexity of the vertical decomposition of n surfaces. In the second row, K is the combinatorial complexity of the arrangement

Objects                                                Bound                      Source
Surfaces in ℝ^d, d > 3                                 O(n^{2d−4}λ_q(n))          [103,330]
Triangles in ℝ³                                        O(n²α(n) log n + K)        [130,342]
Surfaces in ℝ³, single cell                            O(n^{2+ε})                 [320]
Triangles in ℝ³, zone w.r.t. an algebraic surface      O(n² log² n)               [342]
Surfaces in ℝ³, (≤k)-level                             O(n^{2+ε}k)                [14]
Hyperplanes in ℝ⁴                                      O(n⁴ log n)                [202]

cell (O(n² log² n)) and for the entire arrangement (O(n²α(n) log n + K), where K is the complexity of the undecomposed arrangement). The first bound is slightly better than the general bound of [320] mentioned above. Tagansky [342] also derives sharp complexity bounds for the vertical decomposition of many cells in an arrangement of simplices, including the case of all nonconvex cells.

10.3. Other decomposition schemes

Linearization, defined in Section 3, can be used to decompose the cells of the arrangement A(Γ) into cells of constant description complexity, as follows. Suppose Γ admits a linearization of dimension k, i.e., there is a transformation φ : ℝ^d → ℝ^k that maps each point x ∈ ℝ^d to a point φ(x) ∈ ℝ^k, each surface γ_i ∈ Γ to a hyperplane h_i ⊂ ℝ^k, and ℝ^d to a d-dimensional surface U ⊂ ℝ^k. Let H = {h_i | 1 ≤ i ≤ n}. We compute the bottom-vertex triangulation A^∇(H) of A(H). For each simplex Δ ∈ A^∇(H), let Δ~ = Δ ∩ U, and let Δ* = φ^{−1}(Δ~) be the back projection of Δ~ onto ℝ^d; Δ* is a semialgebraic cell of constant description complexity. Set Ξ = {Δ* | Δ ∈ A^∇(H)}. Then Ξ is a decomposition of A(Γ) into cells of constant description complexity. If a simplex Δ ∈ A^∇(H) intersects U, then Δ lies in the triangulation of a cell of zone(U; H). Therefore, by Theorem 5.3, |Ξ| = O(n^{⌊(d+k)/2⌋} log^γ n), where γ = (d + k) mod 2. Hence, we can conclude the following.

THEOREM 10.2. Let Γ be a set of hypersurfaces in ℝ^d of degree at most b. If Γ admits a linearization of dimension k, then A(Γ) can be decomposed into O(n^{⌊(d+k)/2⌋} log^γ n) cells of constant description complexity, where γ = (d + k) mod 2.

As shown in Section 3, spheres in ℝ^d admit a linearization of dimension d + 1; therefore the arrangement of n spheres in ℝ^d can be decomposed into O(n^d log n) cells of constant description complexity. Aronov and Sharir [52] proposed another scheme for decomposing arrangements of triangles in ℝ³, combining vertical decomposition and triangulation. They first decompose


each three-dimensional cell of the arrangement into convex polyhedra, using an incremental procedure, and then compute a bottom-vertex triangulation of each such polyhedron. Other specialized decomposition schemes in ℝ³ have been proposed in [221,271].
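As a concrete instance of the linearization used above, circles in the plane (spheres in ℝ², so d = 2) admit a linearization of dimension 3 via the lifting map φ(x, y) = (x, y, x² + y²): each circle becomes a plane, and circle-membership tests become linear. A minimal sketch (function names are ours):

    def lift_point(x, y):
        # Lifting map phi: R^2 -> R^3 onto the paraboloid z = x^2 + y^2.
        return (x, y, x * x + y * y)

    def circle_to_plane(cx, cy, r):
        # The circle (x-cx)^2 + (y-cy)^2 = r^2 becomes the plane
        # z = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2): a point lies on,
        # inside, or outside the circle exactly when its lift lies on,
        # below, or above this plane.  Returns (a, b, c) for z = a x + b y + c.
        return (2 * cx, 2 * cy, r * r - cx * cx - cy * cy)

    def side_of_circle(px, py, circle):
        # Negative inside, zero on, positive outside; evaluated linearly
        # in the lifted coordinates.
        cx, cy, r = circle
        a, b, c = circle_to_plane(cx, cy, r)
        _, _, z = lift_point(px, py)
        return z - (a * px + b * py + c)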

10.4. Cuttings

All the decomposition schemes described in this section decompose ℝ^d into cells of constant description complexity such that each cell lies entirely in a single face of A(Γ). In many applications, however, it suffices to decompose ℝ^d into cells of constant description complexity such that each cell intersects only a few surfaces of Γ. Such a decomposition lies at the heart of divide-and-conquer algorithms for numerous geometric problems. Let Γ be a set of n surfaces in ℝ^d satisfying Assumptions (A1)–(A2). For a parameter r ≤ n, a family Ξ = {Δ₁, ..., Δ_s} of cells of constant description complexity with pairwise-disjoint interiors is called a (1/r)-cutting of A(Γ) if the interior of each cell of Ξ is crossed by at most n/r surfaces of Γ and Ξ covers ℝ^d. If Γ is a set of hyperplanes, then Ξ is typically a set of simplices. Cuttings have led to efficient algorithms for a wide range of geometric problems and to improved bounds for several combinatorial problems. For example, the proof by Clarkson et al. [121] of the bound on the complexity of m distinct cells in arrangements of lines uses cuttings; see the survey papers [3,265] for a sample of applications of cuttings. Clarkson [116] proved that a (1/r)-cutting of size O(r^d log^d r) exists for a set of hyperplanes in ℝ^d. The bound was improved by Chazelle and Friedman [108] to O(r^d); see also [1,259,263]. An easy counting argument shows that this bound is optimal for any nondegenerate arrangement. There has been considerable work on computing optimal (1/r)-cuttings efficiently [1,100,224,259,263]. Chazelle [100] showed that a (1/r)-cutting for a set of n hyperplanes in ℝ^d can be computed in time O(nr^{d−1}). Using Haussler and Welzl's result on ε-nets [227], one can show that if, for every subset R ⊆ Γ, there exists a canonical decomposition of A(R) into at most g(|R|) cells of constant description complexity, then there exists a (1/r)-cutting of A(Γ) of size O(g(r log r)). By the result of Chazelle et al. [103] on the vertical decomposition of A(Γ), there exists a (1/r)-cutting of A(Γ) of size O((r log r)^{2d−3+ε}). On the other hand, if Γ admits a linearization of dimension k, then there exists a (1/r)-cutting of size O((r log r)^{⌊(d+k)/2⌋} log r).
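The random-sampling argument behind these existence results is simple enough to sketch for lines in the plane. The Python code below is an illustration under simplifying assumptions, not an optimal construction: it restricts attention to a bounding box |x| ≤ box, uses a slab decomposition of the sample instead of a triangulation (and therefore produces more than the optimal O(r²) cells), samples O(r log r) lines, and retries until the (1/r)-cutting property holds, which happens with high probability by the ε-net theorem.

    import math
    import random

    def slab_cutting(lines, r, box=1e6, c=4):
        # lines: pairs (a, b) for y = a*x + b.  Cells are returned as
        # (x_left, x_right, floor, ceiling); floor/ceiling are sampled
        # lines, or None on unbounded sides.
        n = len(lines)
        m = min(n, int(c * r * math.log(r + 2)) + 1)
        while True:
            sample = random.sample(lines, m)
            xs = sorted({(b2 - b1) / (a1 - a2)
                         for i, (a1, b1) in enumerate(sample)
                         for (a2, b2) in sample[i + 1:] if a1 != a2})
            bounds = [-box] + [x for x in xs if -box < x < box] + [box]
            cells = []
            for xl, xr in zip(bounds, bounds[1:]):
                xm = 0.5 * (xl + xr)
                order = sorted(sample, key=lambda ln: ln[0] * xm + ln[1])
                walls = [None] + order + [None]
                cells += [(xl, xr, lo, hi) for lo, hi in zip(walls, walls[1:])]
            if all(crossing_number(cell, lines) <= n / r for cell in cells):
                return cells        # the sample succeeded

    def crossing_number(cell, lines):
        # A line misses the open cell iff it stays (weakly) below the
        # floor, or (weakly) above the ceiling, at both slab walls; no
        # sampled line crosses inside a slab, so this test is exact.
        xl, xr, lo, hi = cell
        below = lambda p, q, x: p[0] * x + p[1] <= q[0] * x + q[1]
        cnt = 0
        for ln in lines:
            if ln == lo or ln == hi:
                continue
            misses_low = lo is not None and below(ln, lo, xl) and below(ln, lo, xr)
            misses_high = hi is not None and below(hi, ln, xl) and below(hi, ln, xr)
            if not (misses_low or misses_high):
                cnt += 1
        return cnt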

11. Representation of arrangements

Before we present algorithms for computing arrangements and their substructures, we need to describe how arrangements and their substructures are represented. Planar arrangements of lines can be represented using any standard data structure for planar graphs, such as the quad-edge, winged-edge, and half-edge data structures [207,244,354]. However, representing arrangements in higher dimensions is challenging because the topology of cells may be rather complex. Exactly how an arrangement is represented largely depends on the specific application for which we need to compute it. For


example, representations may range from simply computing a representative point within each cell, or the vertices of the arrangement, to storing various spatial relationships between the cells. We first review representations of hyperplane arrangements and then discuss surface arrangements.

Hyperplane arrangements. A simple way to represent a hyperplane arrangement A(Γ) is by storing its 1-skeleton [141]. That is, we construct a graph (V, E) whose nodes are the vertices of the arrangement, with an edge between two nodes v_i, v_j if they are the endpoints of an edge of the arrangement. Using the 1-skeleton of A(Γ), we can traverse the entire arrangement in a systematic way. The incidence relationships between the various cells of A(Γ) can be represented using a data structure called the incidence graph. A k-dimensional cell C is called a subcell of a (k + 1)-dimensional cell C′ if C lies on the boundary of C′; C′ is called a supercell of C. We assume that the empty set is a (−1)-dimensional cell of A(Γ), which is a subcell of all vertices of A(Γ), and that ℝ^d is a (d + 1)-dimensional cell, which is a supercell of all d-dimensional cells of A(Γ). The incidence graph of A(Γ) has a node for each cell of A(Γ), including the (−1)-dimensional and (d + 1)-dimensional cells, and a (directed) arc from a node C to another node C′ if C is a subcell of C′; see Figure 11. Note that the incidence graph forms a lattice. Many algorithms for computing an arrangement construct its incidence graph.

A disadvantage of 1-skeletons and incidence graphs is that they do not encode ordering information of the cells. For example, in planar arrangements of lines or segments there is a natural circular ordering of the edges incident to a vertex, or of the edges incident to a two-dimensional face. The quad-edge data structure encodes this information for planar arrangements. Dobkin and Laszlo [138] extended the quad-edge data structure to ℝ³, and it was later extended to higher dimensions [83,255,256]. Dobkin et al. [136] described an algorithm for representing a simple polygon by a short Boolean formula, which can be used to store the faces of segment arrangements so as to answer various queries efficiently.

Surface arrangements. Representing arrangements of surface patches is considerably more challenging than representing hyperplane arrangements because of the complex topology that cells in such an arrangement can have. A very simple representation of A(Γ) stores a representative point from each cell of A(Γ), or the vertices of A(Γ). An even coarser representation of arrangements of graphs of polynomials stores all realizable sign sequences; it turns out that this simple representation is sufficient for some applications [35,77]. The notion of 1-skeleton can be generalized to arrangements of surfaces. However, all the connectivity information cannot be encoded by simply storing the vertices and edges of the arrangement. Instead we need a finer one-dimensional structure, known as the roadmap. Roadmaps were originally introduced by Canny [90,92] to determine whether two points lie in the same connected component of a semialgebraic set; see also [195,197,230]. They were subsequently used for computing a semialgebraic description of the connected components of a semialgebraic set [71,94,231]. We can extend the notion of roadmaps to entire arrangements. Roughly speaking, a roadmap R(Γ) of A(Γ) is a one-dimensional semialgebraic set that satisfies the following two conditions.
(R1) For every cell C of A(Γ), C ∩ R(Γ) is nonempty and connected.


(R2) For w ∈ ℝ, let C_w denote the cross-section of a cell C ∈ A(Γ) at the hyperplane x₁ = w. For every w ∈ ℝ and every cell C ∈ A(Γ) with C_w ≠ ∅, every connected component of C_w intersects R(Γ).

We can also define the roadmap of various substructures of arrangements; see [69,90] for details on roadmaps. A roadmap does not represent the "ordering" of cells in the arrangement or the adjacency relationships between the various cells. If we want to encode the adjacency relationships between the higher-dimensional cells of A(Γ), we can compute the vertical decomposition or the cylindrical algebraic decomposition of A(Γ) and compute the adjacency relationships between the cells of the decomposition [43,317]. Brisson [83] describes the cell-tuple data structure, which encodes topological structures, the ordering among cells, the boundaries of cells, and other information for the cells of surface arrangements. Many query-type applications (e.g., point location, ray shooting) call for preprocessing A(Γ) into a data structure so that various queries can be answered efficiently. In such cases, instead of storing the various cells of the arrangement explicitly, we can store the arrangement implicitly, e.g., using cuttings. Chazelle et al. [104] have described how to preprocess arrangements of surfaces for point-location queries; Agarwal et al. [9] have described data structures for storing lower envelopes in ℝ⁴ for point-location queries.
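A minimal rendering of the incidence graph as a data structure (names are ours): each node stores its dimension and links to its subcells and supercells, and boundary traversals become graph searches.

    class Cell:
        # Node of the incidence graph: one cell of the arrangement.
        def __init__(self, dim, data=None):
            self.dim = dim            # from -1 up to d + 1
            self.data = data          # geometric description (opaque here)
            self.subcells = []        # cells on the boundary of this one
            self.supercells = []      # cells having this one on their boundary

    def add_incidence(sub, sup):
        # Record the arc sub -> sup of the incidence graph.
        assert sup.dim == sub.dim + 1
        sub.supercells.append(sup)
        sup.subcells.append(sub)

    def boundary(cell):
        # All faces, of every dimension, on the boundary of `cell`,
        # collected by a graph search over subcell arcs.
        seen, stack = {}, [cell]
        while stack:
            c = stack.pop()
            for f in c.subcells:
                if id(f) not in seen:
                    seen[id(f)] = f
                    stack.append(f)
        return list(seen.values())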

12. Computing arrangements

We now review algorithms for computing the arrangement A(Γ) of a set Γ of n surface patches satisfying Assumptions (A1)–(A2). As in Chapter 1, we need to assume here an appropriate model of computation, in which various primitive operations on a constant number of surfaces can be performed in constant time. We will assume an infinite-precision real-arithmetic model in which the roots of any polynomial of constant degree can be computed exactly in constant time.

Constructing arrangements of hyperplanes and simplices. Edelsbrunner et al. [157] describe an incremental algorithm that computes, in O(n^d) time, the incidence graph of A(Γ) for a set Γ of n hyperplanes in ℝ^d. Roughly speaking, their algorithm adds the hyperplanes of Γ one by one and maintains the incidence graph of the arrangement of the hyperplanes added so far. Let Γ_i be the set of hyperplanes added in the first i stages, and let γ_{i+1} be the next hyperplane to be added. In the (i + 1)st stage, the algorithm traces γ_{i+1} through A(Γ_i). If a k-face f of A(Γ_i) does not intersect γ_{i+1}, then f remains a face of A(Γ_{i+1}). If f intersects γ_{i+1}, then f ∈ zone(γ_{i+1}; Γ_i), and f is split into two k-faces f⁺, f⁻, lying in the two open halfspaces bounded by γ_{i+1}, and a (k − 1)-face f′ = f ∩ γ_{i+1}. The algorithm therefore checks, for each face of zone(γ_{i+1}; Γ_i), whether it intersects γ_{i+1}; for each intersecting face it adds the corresponding nodes to the incidence graph and updates its arcs. The (i + 1)st stage can be completed in time proportional to the complexity of zone(γ_{i+1}; Γ_i), which is O(i^{d−1}); see [142,157]. Hence, the overall running time of the algorithm is O(n^d). A drawback of the algorithm just described is that it requires O(n^d) "working" storage, because it has to maintain the entire arrangement constructed so far in order to determine



Fig. 11. (i) Incidence graph of the arrangement of two lines. (ii) Adding a new line: incremental changes in the incidence graph as the vertex v, the edge e, and the face f′ are added.

which of the cells intersect the new hyperplane. An interesting question is whether A(Γ) can be computed using only O(n) working storage. Edelsbrunner and Guibas [148] proposed the topological-sweep algorithm, which constructs an arrangement of n lines in O(n²) time using O(n) working storage. Their algorithm, a generalization of the sweep-line algorithm of Bentley and Ottmann [72], sweeps the plane with a pseudo-line. The algorithm of Edelsbrunner and Guibas can be extended to enumerate all vertices of an arrangement of n hyperplanes in ℝ^d in O(n^d) time using O(n) space; see [41,58,161] for other topological-sweep algorithms. Avis and Fukuda [61] developed an algorithm that enumerates, in O(n²k) time and O(n) space, all k vertices of the arrangement of a set Γ of n hyperplanes in ℝ^d in which every vertex is incident to d hyperplanes; their algorithm is useful when Γ contains many parallel hyperplanes. See also [62,186] for some related results.

Using the random-sampling technique, Clarkson and Shor [120] developed an O(n log n + k) expected-time algorithm for constructing the arrangement of a set Γ of n line segments in the plane, where k is the number of vertices of A(Γ); see also [284,285]. Chazelle and Edelsbrunner [102] developed a deterministic algorithm that constructs A(Γ) in time O(n log n + k), using O(n + k) storage; the space complexity was improved to O(n), without affecting the asymptotic running time, by Balaban [63]. If Γ is a set of n triangles in ℝ³, A(Γ) can be constructed in O(n² log n + k) expected time using a randomized incremental algorithm [106,330]. De Berg et al. [130] proposed a deterministic algorithm with O(n²α(n) log n + k log n) running time for computing A(Γ).
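For lines in the plane, the 1-skeleton that these algorithms maintain is easy to build by brute force; the following Python sketch (ours) computes, for each line, the sorted sequence of its intersections with the other lines, in O(n² log n) time overall. The incremental algorithm above achieves O(n²) by tracing the zone of each new line instead of sorting.

    def one_skeleton(lines):
        # lines: pairs (a, b) for y = a*x + b, assumed in general position.
        # Returns, for each line i, the sorted list of (x, j): the abscissa
        # of its intersection with line j.  Consecutive entries delimit
        # the edges of the arrangement along line i.
        n = len(lines)
        along = {i: [] for i in range(n)}
        for i in range(n):
            a1, b1 = lines[i]
            for j in range(i):
                a2, b2 = lines[j]
                x = (b2 - b1) / (a1 - a2)
                along[i].append((x, j))
                along[j].append((x, i))
        for i in range(n):
            along[i].sort()
        return along

    def face_count(n):
        # Number of faces of a simple arrangement of n lines, by Euler's
        # relation: 1 + n + n*(n-1)/2.
        return 1 + n + n * (n - 1) // 2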


Chazelle and Friedman [109] described an algorithm that preprocesses a set Γ of n hyperplanes into a data structure of size O(n^d / log^d n) so that a point-location query can be answered in O(log n) time. Their algorithm was later simplified by Matoušek [267] and Chazelle [100]. Mulmuley and Sen [287] developed a randomized dynamic data structure of size O(n^d) for point location in arrangements of hyperplanes that answers a point-location query in O(log n) expected time and supports insertion or deletion of a hyperplane in O(n^{d−1} log n) expected time. Hagerup et al. [210] described a randomized parallel algorithm for constructing an arrangement of hyperplanes in the CRCW model; their algorithm runs in O(log n) time using an expected number of O(n^d / log n) processors. A deterministic algorithm in the CREW model with the same worst-case performance was proposed by Goodrich [193].

There has been some work on constructing arrangements of lines and segments using floating-point (finite-precision) arithmetic. Milenkovic [279] developed a general technique, called double-precision geometry, that can be applied to compute arrangements of lines and segments in the plane. For example, if the coefficients of each line in a set Γ of n lines are represented using at most b bits, then his technique can compute A(Γ) in O(n² log n) time using at most b + 20 bits of precision; a careful implementation of the algorithm by Edelsbrunner et al. [157] requires 3b bits of precision. Because of the finite-precision arithmetic, Milenkovic's technique computes the coordinates of the vertices only approximately, and therefore produces a planar geometric graph that is an arrangement of pseudo-lines. If the approximate arithmetic used by his algorithm makes relative error ε, then the maximum error in the coordinates of the vertices of A(Γ) computed by his algorithm is O(√ε). Fortune and Milenkovic [182] showed that the sweep-line and incremental algorithms can be implemented so that the maximum error in the coordinates of the vertices is at most O(nε); for all practical purposes this approach is better than the one described in [279]. See [196,200,222,278,280,308] for a few additional results on constructing arrangements using floating-point arithmetic.

Constructing arrangements of surfaces. The algorithm by Edelsbrunner et al. [157] for computing hyperplane arrangements can be extended to compute the vertical decomposition of A(Γ) for a set Γ of n arcs in the plane. In the (i + 1)st step, the algorithm traces γ_{i+1} through zone(γ_{i+1}; Γ_i) and updates the trapezoids of the vertical decomposition of A(Γ_i) that intersect γ_{i+1}. The running time of the (i + 1)st stage is O(λ_{s+2}(i)), where s is the maximum number of intersection points between a pair of arcs of Γ; hence the overall running time of the algorithm is O(nλ_{s+2}(n)) [150]. Suppose Γ is a set of arcs in the plane in general position. If the arcs of Γ are added in a random order and a "history dag", as described in Chapter 1, is used to find efficiently the trapezoids of the vertical decomposition of A(Γ_i) that intersect γ_{i+1}, the expected running time of the algorithm improves to O(n log n + k), where k is the number of vertices of A(Γ) [106,330].

Very little is known about computing arrangements of surfaces in higher dimensions. Chazelle et al. [103] have shown that the vertical decomposition of A(Γ) can be computed in randomized expected time O(n^{2d−3+ε}), using the random-sampling technique. The algorithm can be made deterministic without increasing its asymptotic running time, but the deterministic version is considerably more complex.


There has also been some work on computing arrangements in the more realistic model of precise rational arithmetic used in computational real algebraic geometry [76]. Canny [93] described an (nb)^{O(d)}-time algorithm for computing a sample point in each cell of the arrangement of a set of n hypersurfaces in ℝ^d, each of degree at most b. The running time was improved by Basu et al. [70] to n^{d+1}b^{O(d)}. Basu et al. [69] described an n^{d+1}b^{O(d²)}-time algorithm for computing the roadmap of a semialgebraic set defined by n polynomials, each of degree at most b. Although their goal is to compute the roadmap of a semialgebraic set, their algorithm first constructs the roadmap of the entire arrangement of the surfaces defining the set and then outputs the appropriate portion of it.

13. Computing substructures in arrangements

13.1. Lower envelopes

Let Γ be a set of surface patches satisfying Assumptions (A1)–(A2); we want to compute the minimization diagram M(Γ) of Γ. Algorithms for computing the minimization diagram of a set of arcs in the plane were described in Chapter 1; in this chapter we focus on minimization diagrams of sets of surface patches in higher dimensions. There are again several choices, depending on the application, as to what exactly we want to compute. The simplest choice is to compute the vertices or the 1-skeleton of M(Γ). A more difficult task is to compute all the faces of M(Γ) and represent them using one of the mechanisms described in the previous section. Another challenging task, required in many applications, is to store Γ in a data structure so that L_Γ(x), for any point x ∈ ℝ^{d−1}, can be computed efficiently. For collections Γ of surface patches in ℝ³, the minimization diagram M(Γ) is a planar subdivision. In this case the latter two tasks are not significantly harder than the first one, because we can preprocess M(Γ) for point location using any optimal planar point-location algorithm [132].

Several algorithms have been developed for computing the minimization diagram of bivariate (partial) surface patches [21,78,79,128,328,330]. Some of these techniques are randomized, with expected running time O(n^{2+ε}), which is comparable with the maximum complexity of the minimization diagram of bivariate surface patches. The simplest algorithm is probably the deterministic divide-and-conquer algorithm of Agarwal et al. [21]. It partitions Γ into two subsets Γ₁, Γ₂ of roughly equal size, and computes recursively the minimization diagrams M₁, M₂ of Γ₁ and Γ₂, respectively. It then computes the overlay M* of M₁ and M₂. Over each face f of M* there are only (at most) two surface patches that can attain the final envelope (the one attaining L(Γ₁) over f and the one attaining L(Γ₂) over f), so we compute the minimization diagram of these two surface patches over f, replace f by this refined diagram, and repeat this step for all faces of M*. We finally merge any two adjacent faces f, f′ of the resulting subdivision over which the same surface patches attain L(Γ). The cost of this procedure is proportional to the number of faces of M*, which, by the result of Agarwal et al. [21], is O(n^{2+ε}); this implies that the complexity of the divide-and-conquer algorithm is O(n^{2+ε}). If Γ is a set of triangles in ℝ³, the running time of the algorithm is O(n²α(n)) [151].
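The overlay-and-refine idea is easiest to see one dimension down, for the lower envelope of lines: merge the envelopes of the two halves, and over each interval of the overlay of their breakpoints let the two locally attaining lines compete, inserting at most one crossover per interval. A compact Python sketch of ours (distinct lines assumed; quadratic-time merge for clarity):

    def lower_envelope(lines):
        # Minimization diagram of lines y = a*x + b by divide and conquer,
        # returned as breakpoints [(x_from, line), ...]: `line` attains the
        # envelope from x_from up to the next breakpoint.
        if len(lines) == 1:
            return [(float("-inf"), lines[0])]
        mid = len(lines) // 2
        return merge(lower_envelope(lines[:mid]), lower_envelope(lines[mid:]))

    def merge(e1, e2):
        # Overlay the breakpoints of the two sub-envelopes; in each overlay
        # interval exactly two lines compete, and they swap at most once.
        xs = sorted({x for x, _ in e1} | {x for x, _ in e2})
        out = []
        for i, x in enumerate(xs):
            x_next = xs[i + 1] if i + 1 < len(xs) else float("inf")
            for piece in compete(active(e1, x), active(e2, x), x, x_next):
                if not out or out[-1][1] != piece[1]:
                    out.append(piece)
        return out

    def active(env, x):
        # Line attaining envelope `env` immediately to the right of x.
        # (Linear scan for clarity; the true merge walks both lists once.)
        line = None
        for x_from, ln in env:
            if x_from <= x:
                line = ln
        return line

    def compete(l1, l2, x, x_next):
        # Envelope of two lines over the interval [x, x_next).
        (a1, b1), (a2, b2) = l1, l2
        if a1 == a2:
            return [(x, l1 if b1 <= b2 else l2)]
        xc = (b2 - b1) / (a1 - a2)              # crossover abscissa
        if x == float("-inf"):
            first = l1 if a1 > a2 else l2       # steeper slope wins at -oo
        elif a1 * x + b1 != a2 * x + b2:
            first = l1 if a1 * x + b1 < a2 * x + b2 else l2
        else:
            first = l1 if a1 < a2 else l2       # tie at x: flatter wins after
        second = l2 if first is l1 else l1
        if x < xc < x_next:
            return [(x, first), (xc, second)]
        return [(x, first)]

For example, lower_envelope([(1, 0), (-1, 0), (0, -2)]) returns the three pieces of the envelope of y = x, y = −x, and y = −2, with breakpoints at x = −2 and x = 2.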


This divide-and-conquer algorithm can also be used to compute S(Γ, Γ′), the region lying above all surface patches of one collection Γ′ and below all surface patches of another collection Γ, in time O(n^{2+ε}), where n = |Γ| + |Γ′| [21]. A more difficult problem is to devise output-sensitive algorithms for computing M(Γ), whose complexity depends on the actual combinatorial complexity of the envelope. A rather complex algorithm is presented by de Berg [127] for the case of triangles in ℝ³, whose running time is O(n^{1+ε} + n^{2/5+ε}k^{4/5}), where k is the number of vertices of M(Γ); if the triangles of Γ are pairwise disjoint, the running time can be improved further.

The algorithm by Edelsbrunner et al. [151] can be extended to compute, in O(n^{d−1}α(n)) time, all faces of the minimization diagram of (d − 1)-simplices in ℝ^d, for d ≥ 4. However, little is known about computing the minimization diagram of more general surface patches in d ≥ 4 dimensions. Let Γ be a set of surface patches in ℝ^d satisfying Assumptions (A1)–(A2). Agarwal et al. [9] showed that all vertices, edges, and 2-faces of M(Γ) can be computed in randomized expected time O(n^{d−1+ε}). We sketch their algorithm below.

Assume that Γ satisfies Assumptions (A1)–(A5). Fix a (d − 2)-tuple of surface patches, say γ₁, ..., γ_{d−2}, and decompose their common intersection ∩_{i=1}^{d−2} γ_i into smooth, x₁x₂-monotone, connected patches, using a stratification algorithm. Let Π be one such piece. Each surface γ_i, for i ≥ d − 1, intersects Π in a curve ξ_i, which partitions Π into two regions. If we regard each γ_i as the graph of a partially defined (d − 1)-variate function, then we can define K_i ⊆ Π to be the region whose projection onto the hyperplane H : x_d = 0 consists of the points x at which γ_i(x) ≥ γ₁(x) = ··· = γ_{d−2}(x). The intersection Q = ∩_{i≥d−1} K_i is the portion of Π that appears along the lower envelope L(Γ). We repeat this procedure for all patches Π of the intersection ∩_{i=1}^{d−2} γ_i and for all (d − 2)-tuples of surface patches; this gives all the vertices, edges, and 2-faces of L(Γ). Since Π is an x₁x₂-monotone 2-manifold, computing Q is essentially the same as computing the intersection of n − d + 2 planar regions. Q can thus be computed using an appropriate variant of the randomized incremental approach [106,128]: add the curves ξ_i = γ_i ∩ Π one by one in a random order (each ξ_i may consist of O(1) arcs), and maintain the intersection of the regions K_i for the curves added so far. Let Q_r denote this intersection after r curves have been added. We maintain the "vertical decomposition" of Q_r (within Π), representing Q_r as a collection of pseudo-trapezoids, together with additional data structures, including a history dag and a union-find structure, and proceed exactly as in [106,128] (see Chapter 1); we omit the details here. Define the weight of a pseudo-trapezoid τ to be the number of surface patches γ_i, for i ≥ d − 1, whose graphs either cross τ or hide τ completely from the lower envelope (excluding the up to four function graphs whose intersections with Π define τ). The cost of the above procedure, summed over all (d − 2)-tuples of Γ, is proportional to the number of pseudo-trapezoids created during the execution of the algorithm, plus the sum of their weights, plus an overhead term of O(n^{d−1}) needed to prepare the collections of curves ξ_i over all two-dimensional patches Π. Modifying the analysis in the papers cited above, Agarwal et al. prove the following.

THEOREM 13.1 (Agarwal, Aronov and Sharir [9]). Let Γ be a set of n surface patches in ℝ^d satisfying Assumptions (A1)–(A2). The vertices, edges, and 2-faces of M(Γ) can be computed in randomized expected time O(n^{d−1+ε}), for any ε > 0.


For d = 4, the above algorithm can be extended to compute the incidence graph (or the cell-tuple structure) of M(Γ). Their approach, however, falls short of computing such representations for d > 4. Agarwal et al. also show that the three-dimensional point-location algorithm of Preparata and Tamassia [307] can be extended to preprocess a set of trivariate surface patches, in O(n^{3+ε}) time, into a data structure of size O(n^{3+ε}) so that L_Γ(x), for any point x ∈ ℝ³, can be computed in O(log² n) time.

OPEN PROBLEM 12. Let Γ be a set of n surface patches in ℝ^d, for d > 4, satisfying Assumptions (A1)–(A3). How fast can Γ be preprocessed so that L_Γ(x), for a query point x ∈ ℝ^{d−1}, can be computed efficiently?

13.2. Single cells

Computing a single cell in an arrangement of n hyperplanes in ℝ^d is equivalent, by duality, to computing the convex hull of a set of n points in ℝ^d, a widely studied problem; see, e.g., [142,324] for a summary of known results. For d ≥ 4, an O(n^{⌊d/2⌋}) expected-time algorithm for this problem was proposed by Clarkson and Shor [120] (see also [323]), which is optimal in the worst case. By derandomizing this algorithm, Chazelle [101] developed an O(n^{⌊d/2⌋})-time deterministic algorithm. A somewhat simpler algorithm with the same running time was later proposed by Brönnimann et al. [85]. These results imply that the Euclidean Voronoi diagram of a set of n points in ℝ^d can be computed in time O(n^{⌈d/2⌉}). Since the complexity of a single cell may vary between O(1) and Θ(n^{⌊d/2⌋}), output-sensitive algorithms have been developed for computing a single cell in hyperplane arrangements [111,246,321]. For d ≤ 3, Clarkson and Shor [120] gave randomized algorithms with expected time O(n log h), where h is the complexity of the cell, provided that the planes are in general position. Simple deterministic algorithms with the same worst-case bound were developed by Chan [95]. Seidel [321] proposed an algorithm whose running time is O(n² + h log n); the first term can be improved to O(n^{2−2/(⌊d/2⌋+1)} log^{O(1)} n) [266] or to O((nh)^{1−1/(⌊d/2⌋+1)} log^{O(1)} n) [96]. Chan et al. [99] described another output-sensitive algorithm, whose running time is O((n + (nf)^{1−1/(⌊d/2⌋+1)} + f n^{1−2/(⌊d/2⌋+1)}) log^{O(1)} n), where f is the complexity of the cell. Avis et al. [60] described an algorithm that computes, in O(nf) time and O(n) space, all f vertices of a cell in an arrangement of n hyperplanes in ℝ^d; see also [82,185]. All these output-sensitive bounds hold only for simple arrangements; although many of these algorithms can be extended to nonsimple arrangements, their running times then increase.

As mentioned in Chapter 1, Guibas et al. [206] developed an O(λ_{s+2}(n) log² n)-time algorithm for computing a single face in an arrangement of n arcs, each pair of which intersect in at most s points. Later, a randomized algorithm with expected running time O(λ_{s+2}(n) log n) was developed by Chazelle et al. [106]. Since the complexity of the vertical decomposition of a single cell in an arrangement of n surface patches in ℝ³ is O(n^{2+ε}) [320], an application of the random-sampling technique yields an algorithm for computing a single cell in such an arrangement in time O(n^{2+ε}) [320]. If Γ is a set of triangles, the running time can be improved to O(n² log² n) [128]. Halperin [211,212] developed faster algorithms for computing a single cell in arrangements of the "special" classes of bivariate surfaces that arise in motion-planning applications.


13.3. Levels

Constructing the ≤k-level. Let Γ be a set of n arcs in the plane, each pair of which intersect in at most s points. A_{≤k}(Γ) can be computed by a simple divide-and-conquer algorithm, as follows [326]. Partition Γ into two subsets Γ₁, Γ₂, each of size at most ⌈n/2⌉, compute A_{≤k}(Γ₁) and A_{≤k}(Γ₂) recursively, and then use a sweep-line algorithm to compute A_{≤k}(Γ) from A_{≤k}(Γ₁) and A_{≤k}(Γ₂). The time spent in the merge step is proportional to the number of vertices of A_{≤k}(Γ₁) and A_{≤k}(Γ₂), plus the number of intersection points between the edges of the two subdivisions, each of which is a vertex of A(Γ) whose level is at most 2k. Using Theorem 6.1, the total time spent in the merge steps at each level of the recursion is O(λ_{s+2}(n)·k·log n). Hence, the overall running time of the algorithm is O(λ_{s+2}(n)·k·log² n). If we use a randomized incremental algorithm that adds the arcs one by one in a random order and maintains A_{≤k}(Γ_i), where Γ_i is the set of arcs added so far, the expected running time is O(λ_{s+2}(n)·k·log(n/k)); see, e.g., [286]. Everett et al. [178] showed that if Γ is a set of n lines, the expected running time can be improved to O(n log n + nk). Recently, Agarwal et al. [13] gave another randomized incremental algorithm that can compute A_{≤k}(Γ) in expected time O(λ_{s+2}(n)(k + log n)).

In higher dimensions, little is known about computing A_{≤k}(Γ) for collections Γ of surface patches. For d = 3, Mulmuley [286] gave a randomized incremental algorithm for computing the ≤k-level in an arrangement of n planes whose expected running time is O(nk² log(n/k)). The expected running time can be improved to O(n log³ n + nk²) using the algorithm by Agarwal et al. [13]. There are, however, several technical difficulties in extending this approach to computing levels in arrangements of surface patches. Using the random-sampling technique, Agarwal et al. [14] developed an O(n^{2+ε}k) expected-time algorithm for computing A_{≤k}(Γ) for a collection Γ of n surface patches in R³. Their algorithm can be derandomized without affecting the asymptotic running time. For d ≥ 4, Agarwal et al.'s and Mulmuley's algorithms can compute the ≤k-level in arrangements of n hyperplanes in expected time O(n^{⌊d/2⌋} k^{⌈d/2⌉}). These algorithms do not extend to computing the ≤k-level in surface arrangements, because no nontrivial bound is known for the complexity of a triangulation of A_{≤k}(Γ) in four and higher dimensions.

Constructing a single level. Edelsbrunner and Welzl [164] gave an O(n log n + b log² n)-time algorithm to construct the k-level in an arrangement of n lines in the plane, where b is the number of vertices of the k-level. This bound was slightly improved by Cole et al. [123] to O(n log n + b log² k), and then recently by Chan [98] to O(n log n + b log^{1+ε} n), for any ε > 0. Har-Peled [225] gave an O((n + b)α(n) log n) expected-time algorithm. However, these algorithms do not extend to computing the k-level in arrangements of curves. The approach by Agarwal et al. [13] can compute the k-level in an arrangement of lines in randomized expected time O(n log n + nk^{1/3} log^{2/3} n), and it extends to arrangements of curves and to arrangements of hyperplanes. Agarwal and Matoušek [19] describe an output-sensitive algorithm for computing the k-level in an arrangement of planes. The running time of their algorithm, after a slight improvement by Chan [96], is O(n log b + b^{1+ε}), where b is the number of vertices of the k-level. Their algorithm can compute the k-level in an arrangement of hyperplanes in R^d in time O(n log b + (nb)^{1−1/(⌊d/2⌋+1)+ε} + b·n^{1−2/(⌊d/2⌋+1)+ε}). As in the case of single cells, all the output-sensitive algorithms assume that the hyperplanes are in general position.
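To make the notion of a level concrete, the following sketch (our own toy illustration, not one of the algorithms cited above) enumerates the vertices of A_{≤k}(Γ) for lines y = ax + b in general position: it computes all Θ(n²) arrangement vertices and keeps those with at most k lines strictly below them, in O(n³) total time, whereas the algorithms above need only roughly O(n log n + nk) time.

```python
from itertools import combinations

def vertices_at_most_k(lines, k):
    """Naive enumeration of the vertices of the (<=k)-level of an
    arrangement of lines y = a*x + b, assuming general position
    (distinct slopes, no three lines concurrent).

    The level of a point is the number of lines strictly below it.
    """
    verts = []
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        x = (b2 - b1) / (a1 - a2)          # intersection abscissa
        y = a1 * x + b1
        level = sum(1 for (a, b) in lines if a * x + b < y - 1e-9)
        if level <= k:
            verts.append((x, y, level))
    return verts

if __name__ == "__main__":
    lines = [(1.0, 0.0), (-1.0, 2.0), (0.5, -1.0), (-0.3, 0.7)]
    for v in vertices_at_most_k(lines, k=1):
        print(v)
```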


13.4. Marked cells

Let Γ be a set of n lines in the plane and S a set of m points in the plane. Edelsbrunner et al. [152] presented a randomized algorithm, based on the random-sampling technique, for computing C(S, Γ), the set of cells in A(Γ) that contain at least one point of S, whose expected running time is O(m^{2/3−ε}n^{2/3+2ε} + m log n + n log n log m), for any ε > 0. A deterministic algorithm with running time O(m^{2/3}n^{2/3} log^c n + n log³ n + m log n), for a suitable constant c, was developed by Agarwal [2]. However, both algorithms are rather complicated. A simple randomized divide-and-conquer algorithm, with O((m√n + n) log n) expected running time, was recently proposed by Agarwal et al. [20]. Using random sampling, they improved the expected running time to O(m^{2/3}n^{2/3} log^{2/3}(n/√m) + (m + n) log n). If we are interested in computing the incidences between Γ and S, the best known algorithm is by Matoušek, whose expected running time is O(m^{2/3}n^{2/3}·2^{O(log*(m+n))} + (m + n) log(m + n)) [268]. An Ω(m^{2/3}n^{2/3} + (m + n) log(m + n)) lower bound for this problem was proved by Erickson [177] under a restricted model of computation. Matoušek's algorithm can be extended to higher dimensions to count the number of incidences between m points and n hyperplanes in R^d in time O((mn)^{1−1/(d+1)}·2^{O(log*(m+n))} + (m + n) log(m + n)) [268].

The above algorithms can be modified to compute marked cells in arrangements of segments in the plane. The best known randomized algorithm is by Agarwal et al. [20], whose running time is O(m^{2/3}n^{2/3} log²(n/√m) + (m + n log m + nα(n)) log n). Little is known about computing marked cells in arrangements of arcs in the plane. Using a randomized incremental algorithm, C(S, Γ) can be computed in expected time O(λ_{s+2}(n)·√m·log n), where s is the maximum number of intersection points between a pair of arcs in Γ [330]. If Γ is a set of n unit-radius circles and S is a set of m points in the plane, the incidences between Γ and S can be computed using Matoušek's algorithm [268].

Randomized incremental algorithms can be used to construct marked cells in arrangements of hyperplanes in higher dimensions in time close to their worst-case complexity. For example, if Γ is a set of n planes in R³ and S is a set of m points in R³, then the incidence graph of the cells in C(S, Γ) can be computed in expected time O(nm^{2/3} log n) [128]. For d ≥ 4, the expected running time is O(m^{1/2}n^{d/2} log^γ n), where γ = (⌊d/2⌋ − 1)/2. De Berg et al. [133] describe an efficient point-location algorithm for the zone of a k-flat in an arrangement of hyperplanes in R^d. Their algorithm can answer a query in O(log n) time using O(n^{⌊(d+k)/2⌋} log^γ n) space, where γ = (d + k) mod 2.
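For contrast with the subquadratic bounds just quoted, here is a brute-force sketch (an illustration only; the cited algorithms rely on cuttings and random sampling) that counts point-line incidences exactly in O(mn) time, using rational arithmetic so that the incidence test is not corrupted by floating-point error.

```python
from fractions import Fraction

def count_incidences(points, lines):
    """Count incidences between points (px, py) and lines a*x + b*y = c
    by direct O(m n) checking with exact rational arithmetic.

    The algorithms discussed above compute this quantity in roughly
    O((m n)^{2/3}) time; this loop only makes the quantity concrete.
    """
    inc = 0
    for px, py in points:
        for a, b, c in lines:
            if Fraction(a) * px + Fraction(b) * py == c:
                inc += 1
    return inc

print(count_incidences([(0, 0), (1, 1), (2, 1)],
                       [(1, -1, 0), (0, 1, 1)]))  # -> 4
```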

13.5. Union of objects

Let Γ be a set of n semialgebraic, simply connected regions in the plane, each of constant description complexity. The union of Γ can be computed in O(f(n) log² n) time by a divide-and-conquer technique, similar to that described in Section 13.3 for computing A_{≤k}(Γ); here f(m) is the maximum complexity of the union of any subset of Γ of size m. Alternatively, ∪Γ can be computed in O(f(n) log n) expected time using the lazy randomized incremental algorithm by De Berg et al. [128]. As a consequence, the union of n convex fat objects, each of constant description complexity, can be computed in O(n^{1+ε}) time, for any ε > 0; see Section 9.


Aronov et al. [57] modified the approach of Agarwal et al. [9] so that the union of n convex polytopes in R³ with a total of s vertices can be computed in expected time O(sn log n log s + n³). The same approach can be used to compute the union of n congruent cylinders in time O(n^{2+ε}). (Again, consult Section 9 for the corresponding bounds on the complexity of the union.)

Many applications call for computing the volume or surface area of ∪Γ instead of its combinatorial structure. Overmars and Yap [292] showed that the volume of the union of n axis-parallel boxes in R^d can be computed in O(n^{d/2} log n) time. Edelsbrunner [144] gave an elegant formula for the volume and the surface area of the union of n balls in R³, which can be used to compute the volume efficiently.
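As a simple baseline against the Overmars–Yap bound, the sketch below (our own illustration, not their algorithm) computes the volume of a union of boxes in R³ by coordinate compression: it tests each of the O(n³) grid cells induced by the bounding planes against all n boxes, so it runs in O(n⁴) time but is easy to verify.

```python
from itertools import product

def union_volume_3d(boxes):
    """Volume of the union of axis-parallel boxes in R^3 by coordinate
    compression.  Each box is ((x1, x2), (y1, y2), (z1, z2)), x1 < x2.

    The grid cells enumerated here are exactly the cells of the
    arrangement of the boxes' bounding planes; a cell is inside the
    union iff its center is inside some box.
    """
    xs = sorted({v for b in boxes for v in b[0]})
    ys = sorted({v for b in boxes for v in b[1]})
    zs = sorted({v for b in boxes for v in b[2]})
    vol = 0.0
    for i, j, k in product(range(len(xs) - 1), range(len(ys) - 1),
                           range(len(zs) - 1)):
        cx = (xs[i] + xs[i + 1]) / 2       # cell representative point
        cy = (ys[j] + ys[j + 1]) / 2
        cz = (zs[k] + zs[k + 1]) / 2
        if any(b[0][0] <= cx <= b[0][1] and b[1][0] <= cy <= b[1][1]
               and b[2][0] <= cz <= b[2][1] for b in boxes):
            vol += ((xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
                    * (zs[k + 1] - zs[k]))
    return vol

print(union_volume_3d([((0, 2), (0, 2), (0, 2)),
                       ((1, 3), (1, 3), (1, 3))]))  # 8 + 8 - 1 = 15.0
```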

14. Applications

In this section we present a sample of applications of arrangements. We discuss a few specific problems that can be reduced to bounding the complexity of various substructures of arrangements of surfaces, or to computing these substructures. We also mention a few general areas that have motivated several problems involving arrangements and in which arrangements have played an important role.

14.1. Range searching

A typical range-searching problem is defined as follows: Preprocess a set S of n points in R^d, so that all points of S lying in a query region can be reported (or counted) quickly. A special case of range searching is halfspace range searching, in which the query region is a halfspace. Because of its numerous applications, range searching has received much attention during the last twenty years; see [16,269] for recent surveys on range searching and its applications.

If we define the dual of a point p = (a₁, …, a_d) to be the hyperplane p*: x_d = −a₁x₁ − ⋯ − a_{d−1}x_{d−1} + a_d, and the dual of a hyperplane h: x_d = b₁x₁ + ⋯ + b_{d−1}x_{d−1} + b_d to be the point h* = (b₁, …, b_d), then p lies above (resp. below, on) h if and only if the hyperplane p* lies above (resp. below, on) the point h*. Hence, halfspace range searching has the following equivalent "dual" formulation: Preprocess a set Γ of n hyperplanes in R^d so that the hyperplanes of Γ lying below a query point can be reported quickly, or so that the level of a query point can be computed quickly. Using the point-location data structure for hyperplane arrangements given in [100], the level of a query point can be computed in O(log n) time using O(n^d / log^d n) space. This data structure can be modified to report all t hyperplanes lying below a query point in time O(log n + t). Chazelle et al. [110] showed, using results on arrangements, that a two-dimensional halfspace range-reporting query can be answered in O(log n + t) time using O(n) space. In higher dimensions, by constructing (1/r)-cuttings for A_{≤k}(Γ), Matoušek [264] developed a data structure that can answer a halfspace range-reporting query in time O(log n + t) using O(n^{⌊d/2⌋} log^c n) space, for some constant c. He also developed a data structure that can answer a query in time O(n^{1−1/⌊d/2⌋} log^c n + t) using O(n log log n) space [264]. See also [6,112].
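The duality transform above is easy to check numerically. The sketch below (a toy verification of the stated transform, not a range-searching structure) draws random point-hyperplane pairs in R³ and asserts that p lies above h exactly when the dual hyperplane p* passes above the dual point h*, i.e., when h* lies strictly below p*.

```python
import random

def dual_of_point(p):
    """Point p = (a_1, ..., a_d) -> hyperplane
    x_d = -a_1 x_1 - ... - a_{d-1} x_{d-1} + a_d,
    returned as its coefficient vector (c_1, ..., c_{d-1}, c_d)."""
    *a, ad = p
    return [-ai for ai in a] + [ad]

def side(hyp, q):
    """+1 / -1 / 0 if the point q lies above / below / on the
    hyperplane x_d = c_1 x_1 + ... + c_{d-1} x_{d-1} + c_d."""
    *c, cd = hyp
    val = q[-1] - (sum(ci * qi for ci, qi in zip(c, q[:-1])) + cd)
    return (val > 0) - (val < 0)

random.seed(1)
for _ in range(1000):
    p = [random.uniform(-5, 5) for _ in range(3)]
    h = [random.uniform(-5, 5) for _ in range(3)]  # doubles as point h*
    # p above h  <=>  h* below the hyperplane p*
    assert side(h, p) == -side(dual_of_point(p), h)
print("duality verified")
```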


Using linearization, a semialgebraic range-searching query, where one wants to report all points of S lying inside a semialgebraic set of constant description complexity, can be answered efficiently using some of the halfspace range-searching data structures [18,359]. Point location in hyperplane arrangements can also be used for simplex range searching [113], ray shooting [17,18,271], and several other geometric searching problems [29].

14.2. Terrain visualization

Let Σ be a polyhedral terrain in R³ with n edges; that is, Σ is the graph of a continuous piecewise-linear bivariate function, so it intersects each vertical line in exactly one point. The orthographic view of Σ in direction b ∈ S² is the decomposition of Π, a plane normal to the direction b and placed at infinity, into maximal regions so that the rays emerging in direction b from all points in such a region hit the same face of Σ, or none of them hit Σ. The perspective view of Σ from a point a ∈ R³ is the decomposition of S² into maximal connected regions so that, for each region R ⊆ S² and for all points b ∈ R, either the first intersection point of Σ and the ray emanating from a in direction b lies in the same face of Σ (which depends on R), or none of these rays meets Σ. The orthographic (resp. perspective) aspect graph of Σ represents all topologically different orthographic (resp. perspective) views of Σ. For background and a survey of recent research on aspect graphs, see [81]. Here we will show how the complexity bounds for lower envelopes can be used to derive near-optimal bounds on the aspect graphs of polyhedral terrains.

A pair of parallel rays (ρ₁, ρ₂) is called critical if, for each i = 1, 2, the source point of ρ_i lies on an edge a_i of Σ, ρ_i passes through three edges of Σ (including a_i), and ρ_i does not intersect the (open) region lying below Σ. It can be shown that the number of topologically different orthographic views of Σ is O(n³) plus the number of critical pairs of parallel rays. Agarwal and Sharir [23] define, for each pair (a₁, a₂) of edges of Σ, a collection F_{a₁,a₂} of n trivariate functions, so that every pair (ρ₁, ρ₂) of critical rays, where ρ_i emanates from a point on a_i (for i = 1, 2), corresponds to a vertex of M(F_{a₁,a₂}). They also show that the graphs of the functions in F_{a₁,a₂} satisfy Assumptions (A1)-(A2). Using Theorem 3.1 and summing over all pairs of edges of Σ, we can conclude that the number of critical pairs of rays, and thus the number of topologically different orthographic views of Σ, is O(n^{5+ε}). Using a more careful analysis, Halperin and Sharir [218] improved this bound to n⁵·2^{O(√log n)}. De Berg et al. [131] have constructed a terrain for which there are Ω(n⁵α(n)) topologically different orthographic views. If Σ is an arbitrary polyhedral set with n edges, the maximum possible number of topologically different orthographic views of Σ is Θ(n⁶) [304]. De Berg et al. [131] showed that if Σ is a set of k pairwise-disjoint convex polytopes with a total of n vertices, then the number of orthographic views is O(n⁴k²); the best known lower bound does not match this bound.

Agarwal and Sharir extended their approach to bound the number of perspective views of a terrain. They argue that the number of perspective views of Σ is proportional to the number of triples of rays emanating from a common point, each of which passes through three edges of Σ before intersecting the open region lying below Σ.


Following an approach similar to the one sketched above, they reduce the problem to the analysis of lower envelopes of O(n³) families of 5-variate functions, each family consisting of O(n) functions that satisfy Assumptions (A1)-(A2). This leads to an overall bound of O(n^{8+ε}) on the number of topologically different perspective views of Σ. This bound is also known to be almost tight in the worst case, as follows from another lower-bound construction given by De Berg et al. [131]. Again, in contrast, if Σ is an arbitrary polyhedral set with n edges, the maximum possible number of topologically different perspective views of Σ is Θ(n⁹) [304].

14.3. Transversals

Let S be a set of n compact convex sets in R^d. A hyperplane h is called a transversal of S if h intersects every member of S. Let T(S) denote the space of all hyperplane transversals of S. We wish to study the structure of T(S). To facilitate this study, we apply the dual transform described in Section 14.1. Let h: x_d = a₁x₁ + ⋯ + a_{d−1}x_{d−1} + a_d be a hyperplane that intersects a set s ∈ S. Translate h up and down until it becomes tangent to s. Denote the resulting upper and lower tangent hyperplanes by

    x_d = a₁x₁ + ⋯ + a_{d−1}x_{d−1} + U_s(a₁, …, a_{d−1})   and
    x_d = a₁x₁ + ⋯ + a_{d−1}x_{d−1} + L_s(a₁, …, a_{d−1}),

respectively. Then h intersects s if and only if L_s(a₁, …, a_{d−1}) ≤ a_d ≤ U_s(a₁, …, a_{d−1}). Hence, T(S) can be identified with the region S(Γ, Γ′) enclosed between the lower envelope of Γ = {U_s | s ∈ S} and the upper envelope of Γ′ = {L_s | s ∈ S} in R^d. For d = 3, the results in [21] concerning the complexity of the vertical decomposition of S(Γ, Γ′) imply that T(S) can be constructed in O(n^{2+ε}) time. No sharp bounds are known on T(S) in higher dimensions. However, in four dimensions, using the algorithm by Agarwal et al. [9] for point location in the minimization diagram of trivariate functions, we can preprocess S into a data structure of size O(n^{3+ε}) so that we can determine in O(log n) time whether a hyperplane h is a transversal of S.

The problem can be generalized by considering lower-dimensional transversals. For example, in R³ we can also consider the space of all line transversals of S (lines that meet every member of S). By mapping lines in R³ into points in R⁴, and by using an appropriate parametrization of the lines, the space of all line transversals of S can be represented as the region in R⁴ enclosed between the upper envelope and the lower envelope of two respective collections of surfaces. Pellegrini and Shor [302] showed that if S is a set of triangles in R³, then the space of line transversals of S has complexity n³·2^{O(√log n)}. The bound was slightly improved by Agarwal [4] to O(n³ log n). He reduced the problem to bounding the complexity of a family of cells in an arrangement of O(n) hyperplanes in R⁵. Agarwal et al. [10] proved that the complexity of the space of line transversals for a set of n balls in R³ is O(n^{3+ε}). Their argument works even if S is a set of homothets of a convex region of constant description complexity in R³. Smorodinsky et al. [333] showed that n disjoint balls in R^d can be stabbed by a line in Θ(n^{d−1}) different ways.
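For disks in the plane, the tangent-intercept functions U_s and L_s can be written explicitly, which makes the envelope description of T(S) easy to illustrate. In the sketch below (our own specialization to disks, with non-vertical lines parametrized as y = ax + b), the transversals of a fixed slope a form exactly the interval between the upper envelope of the L_s's and the lower envelope of the U_s's.

```python
import math

def transversal_intercepts(disks, slope):
    """For lines y = slope * x + b, return the interval of intercepts b
    for which the line meets every disk (cx, cy, r), or None.

    A line meets a disk iff L_s(a) <= b <= U_s(a), where
      U_s(a) = cy - a*cx + r*sqrt(1 + a^2),
      L_s(a) = cy - a*cx - r*sqrt(1 + a^2),
    so the slope-a transversals lie between the upper envelope of the
    L_s and the lower envelope of the U_s, as described in the text.
    """
    w = math.sqrt(1 + slope * slope)
    lo = max(cy - slope * cx - r * w for cx, cy, r in disks)
    hi = min(cy - slope * cx + r * w for cx, cy, r in disks)
    return (lo, hi) if lo <= hi else None

disks = [(0.0, 0.0, 1.0), (4.0, 0.5, 1.0), (8.0, -0.5, 1.5)]
print(transversal_intercepts(disks, slope=0.0))  # -> (-0.5, 1.0)
```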

14.4. Geometric optimization

In the past few years, many problems in geometric optimization have been attacked by techniques that reduce the problem to constructing and searching in various substructures of surface arrangements. Hence, the area of geometric optimization is a natural extension, and a good application area, of the study of arrangements. See [24] for a recent survey on geometric optimization.

One of the basic techniques for geometric optimization is the parametric-searching technique, originally proposed by Megiddo [275]. This technique reduces the optimization problem to a decision problem, where one needs to compare the optimal value to a given parameter. In most cases, the decision problem is easier to solve than the optimization problem. The parametric-searching technique proceeds by a parallel simulation of a generic version of the decision procedure, with the (unknown) optimum value as an input parameter. In most applications, careful implementation of this technique leads to a solution of the optimization problem whose running time is larger than that of the decision algorithm by only a polylogarithmic factor. See [24] for a more detailed survey of parametric searching and its applications.

Several alternatives to parametric searching have been developed during the past decade. They use randomization [25,97,262], expander graphs [238], and searching in monotone matrices [183]. Like parametric searching, all these techniques are based on the availability of an efficient procedure for the decision problem. When applicable, they lead to algorithms with running times that are similar to, and sometimes slightly better than, those yielded by parametric searching. These methods have been used to solve a wide range of geometric optimization problems, many of which involve arrangements. We mention a sample of such results.

Slope selection. Given a set S of n points in R² and an integer k, find the line with the k-th smallest slope among the lines passing through pairs of points of S. If we dualize the points of S to a set Γ of lines in R², the problem becomes that of computing the k-th leftmost vertex of A(Γ). Cole et al. [122] developed a rather sophisticated O(n log n)-time algorithm for this problem, which is based on parametric searching. (Here the decision problem is to determine whether at most k vertices of A(Γ) lie to the left of a given vertical line; a minimal sketch of this decision step, via inversion counting, is given at the end of this subsection.) A considerably simpler algorithm, based on (1/r)-cuttings, was later proposed by Brönnimann and Chazelle [84]. See also [237,262].

Distance selection. Given a set S of n points in R² and a parameter k ≤ n(n−1)/2, find the k-th largest distance among the points of S [12,238]. The corresponding decision problem reduces to point location in an arrangement of congruent disks in R². Specifically, given a set Γ of m congruent disks in the plane, we wish to count efficiently the number of containments between disks of Γ and points of S. This problem can be solved using parametric searching [12], expander graphs [238], or randomization [262]. The best known deterministic algorithm, given by Katz and Sharir [238], runs in O(n^{4/3} log² n) time.

Segment center. Given a set S of n points in R² and a line segment e, find a placement of e that minimizes the largest distance from the points of S to e [15,169]. The decision problem reduces to determining whether, for two given families F and F′ of bivariate surfaces, S(F, F′), the region lying between L_F and U_{F′}, is empty. Exploiting the special properties of F and F′, Efrat and Sharir [169] show that the complexity of S(F, F′) is O(n log n). They describe an O(n^{1+ε})-time algorithm to determine whether S(F, F′) is empty, which leads to an O(n^{1+ε})-time algorithm for the segment-center problem.

Extremal polygon placement. Given a convex m-gon P and a closed polygonal environment Q with n vertices, find the largest similar copy of P that is fully contained in Q [331]. Here the decision problem is to determine whether P, with a fixed scaling factor, can be placed inside Q; this is a variant of the corresponding motion-planning problem for P inside Q, and is solved by constructing an appropriate representation of the three-dimensional free configuration space, as a collection of cells in a corresponding three-dimensional arrangement of surfaces. The running time of the whole algorithm is only slightly larger than the time needed to solve the fixed-size placement problem. The best running time is O(mn·λ₆(mn)·log mn·log n) [11]; see also [243,331]. If Q is a convex n-gon, the largest similar copy of P that can be placed inside Q can be computed in O(mn² log n) time [5].

Diameter in 3D. Given a set S of n points in R³, determine the maximum distance between a pair of points of S. The problem is reduced to determining whether S lies in the intersection of a given set Γ of n congruent balls. A randomized algorithm with O(n log n) expected time was proposed by Clarkson and Shor [120]. A series of papers [105,272,310,309] describe near-linear-time deterministic algorithms. The best known deterministic algorithm runs in O(n log² n) time [73,309].

Width in 3D. Given a set S of n points in R³, determine the smallest distance between two parallel planes enclosing S between them. This problem has been studied in a series of papers [9,25,105], and the currently best known randomized algorithm computes the width in O(n^{3/2+ε}) expected time [25]. The technique used in attacking the decision problems for this and the two following problems reduces them to point location in the region above the lower envelope of a collection of trivariate functions in R⁴.


Biggest stick in a simple polygon. Compute the longest line segment that can fit inside a given simple polygon with n edges. The current best solution takes O(n^{3/2+ε}) time [25] (see also [9,28]).

Minimum-width annulus. Compute the annulus of smallest width that encloses a given set of n points in the plane. This problem arises in fitting a circle through a set of points in the plane. Again, the current best solution takes O(n^{3/2+ε}) time [25] (see also [9,28]).

Geometric matching. Consider the problem in which we are given two sets S₁, S₂ of n points each in the plane, and we wish to compute a minimum-weight matching in the complete bipartite graph S₁ × S₂, where the weight of an edge (p, q) is the Euclidean distance between p and q. One can also consider the analogous nonbipartite version of the problem, which involves just one set S of 2n points and the complete graph on S. The goal is to exploit the underlying geometric structure of these graphs, to obtain faster algorithms than those available for general abstract graphs. Vaidya [347] showed that both the bipartite and the nonbipartite versions of the problem can be solved in time close to O(n^{5/2}). A fairly sophisticated application of vertical decomposition in three-dimensional arrangements, given in [14], has improved the running time for the bipartite case to O(n^{2+ε}). Recently, Varadarajan [350] proposed an O(n^{3/2} log⁵ n)-time algorithm for the nonbipartite case.

Center point. A center point of a set S of n points in the plane is a point π ∈ R² such that each line ℓ passing through π has the property that at least ⌊n/3⌋ points of S lie in each halfplane bounded by ℓ. It is well known that such a center point always exists [142]. If we dualize S to a set Γ of n lines in the plane, then π*, the line dual to π, lies between A_{⌊n/3⌋}(Γ) and A_{⌈2n/3⌉}(Γ). Cole et al. [123] described an O(n log³ n)-time algorithm for computing a center point of S, using parametric searching. The problem of computing the set of all center points reduces to computing the convex hull of A_k(Γ) for a given k. Matoušek [260] described an O(n log⁴ n)-time algorithm for computing the convex hull of A_k(Γ) for any k ≤ n; recall, in contrast, that the best known upper bound on the complexity of A_k(Γ) is O(n(k + 1)^{1/3}). See [119] for an approximation algorithm.

Ham-sandwich cuts. Let S₁, S₂, …, S_d be d sets of points in R^d, each containing n points, and suppose n is even. A ham-sandwich cut is a hyperplane h so that each open halfspace bounded by h contains at most n/2 points of S_i, for i = 1, …, d. It is known [142,358] that such a cut always exists. Let Γ_i be the set of hyperplanes dual to S_i. Then the problem reduces to computing a vertex of the intersection of A_{n/2}(Γ₁) and A_{n/2}(Γ₂). Megiddo [276] developed a linear-time algorithm for computing a ham-sandwich cut in the plane if S₁ and S₂ can be separated by a line. For arbitrary point sets in the plane, a linear-time algorithm was later developed by Lo et al. [257]. Lo et al. also described an algorithm for computing a ham-sandwich cut in R³ whose running time is O(ψ_{n/2}(n) log² n), where ψ_k(n) is the maximum complexity of the k-level in an arrangement of n lines in the plane. By Dey's result on k-levels [134], the running time of their algorithm is O(n^{4/3} log² n).
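As promised under slope selection, here is its decision procedure in a minimal form (our own rendering of the standard reduction, not the full algorithm of [122]): counting the pairs of points whose connecting line has slope less than t equals counting the vertices of the dual line arrangement to the left of x = t, which reduces to counting inversions between two orders of the points, done in O(n log n) by merge sort. The O(n log n) optimization algorithms wrap parametric search or cuttings around this step.

```python
def count_pairs_slope_below(points, t):
    """Number of pairs of points whose connecting line has slope < t,
    i.e., the number of dual-arrangement vertices left of x = t.

    For x_i < x_j, slope < t  <=>  (y_j - t*x_j) < (y_i - t*x_i),
    so the answer is the inversion count of the projected keys taken
    in x-order.  Assumes distinct x-coordinates (general position).
    """
    keys = [y - t * x for x, y in sorted(points)]

    def sort_count(a):              # merge sort returning inversions
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, cl = sort_count(a[:mid])
        right, cr = sort_count(a[mid:])
        merged, inv, i, j = [], cl + cr, 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:                   # right[j] jumps over left[i:]
                inv += len(left) - i
                merged.append(right[j]); j += 1
        merged += left[i:] + right[j:]
        return merged, inv

    return sort_count(keys)[1]

pts = [(0, 0), (1, 3), (2, 1), (3, 2)]
print(count_pairs_slope_below(pts, t=1.01))  # -> 5
```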


14.5. Robotics

As mentioned in the introduction, motion planning for a robot system has been a major motivation for the study of arrangements. Let B be a robot system with d degrees of freedom, which is allowed to move freely within a given two- or three-dimensional environment cluttered with obstacles. Given two placements I and F of B, determining whether there exists a collision-free path between these placements reduces to determining whether I and F lie in the same cell of the arrangement of the family Γ of "contact surfaces" in R^d, regarded as the configuration space of B (see the introduction for more details). If I and F lie in the same cell, then a path between I and F in R^d that does not intersect any surface of Γ corresponds to a collision-free path of B in the physical environment from I to F. Schwartz and Sharir [317] developed an n^{2^{O(d)}}-time algorithm for this problem. If d is part of the input, the problem was later proved to be PSPACE-complete [91,312]. Canny [90,92] gave an n^{O(d)}-time algorithm to compute the roadmap of a single cell in an arrangement A(Γ) of a set Γ of n surfaces in R^d, provided that the cells in A(Γ) form a Whitney regular stratification of R^d (see [194] for the definition of Whitney stratification). Using a perturbation argument, he showed that his approach can be extended to obtain a Monte Carlo algorithm to determine whether two points lie in the same cell of A(Γ). The algorithm was subsequently extended and improved by many researchers; see [69,195,230]. The best known algorithm, due to Basu et al. [69], can compute the roadmap in time n^{d+1} D^{O(d²)}, where D is the maximum degree of the polynomials defining Γ. Much work has been done on developing efficient algorithms for robots with a small number of degrees of freedom, say, two or three [211,221,242]. The result by Schwarzkopf and Sharir [320] gives an efficient algorithm for computing a collision-free path between two given placements for fairly general robot systems with three degrees of freedom. See [214,318,329] for surveys on motion-planning algorithms.

It is impractical to compute the roadmap, or any other explicit representation, of a single cell in A(Γ) if d is large. A general Monte Carlo algorithm for computing a probabilistic roadmap of a cell in A(Γ) is described by Kavraki et al. [240]. This approach avoids computing the cell explicitly. Instead, it samples a large number of random points in the configuration space, and only those configurations that lie in the free configuration space (FP) are retained (they are called milestones); we also add I and F as milestones. The algorithm then builds a "connectivity graph" whose nodes are these milestones, and whose edges connect pairs of milestones if the line segment joining them in configuration space lies in FP (or if they satisfy some other "local reachability" rule). Various strategies have been proposed for choosing random configurations [40,66,234,239]. The algorithm returns a path from I to F if they lie in the same connected component of the resulting network; note that it may fail to return a collision-free path from I to F even if one exists. This technique has nevertheless been successful in several real-world applications. A schematic sketch of the scheme is given below.
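The following is a self-contained, schematic rendering of the basic probabilistic-roadmap loop. The names and parameters here (is_free, sample, the milestone count, the linking radius, and the probe-based segment test) are our own illustrative choices, not those of [240]:

```python
import math, random

def prm_path(is_free, sample, start, goal, n_samples=300, radius=0.3):
    """Schematic PRM: sample milestones in the free configuration
    space, link nearby milestones whose connecting segment stays free,
    and search the resulting graph for a start-goal connection."""
    def segment_free(p, q, probes=16):
        # probe a few interior points; a common (incomplete) heuristic
        return all(is_free([pi + (qi - pi) * s / probes
                            for pi, qi in zip(p, q)])
                   for s in range(probes + 1))

    nodes = [start, goal] + [q for q in (sample() for _ in range(n_samples))
                             if is_free(q)]
    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if (math.dist(nodes[i], nodes[j]) <= radius
                    and segment_free(nodes[i], nodes[j])):
                adj[i].append(j); adj[j].append(i)
    frontier, seen = [0], {0}           # graph search: start=0, goal=1
    while frontier:
        v = frontier.pop()
        if v == 1:
            return True                 # some milestone path exists
        frontier += [w for w in adj[v] if w not in seen]
        seen.update(adj[v])
    return False                        # may fail even if a path exists

# toy 2D world: free space is the unit square minus a central disk
free = lambda q: (0 <= q[0] <= 1 and 0 <= q[1] <= 1
                  and math.dist(q, (0.5, 0.5)) > 0.2)
print(prm_path(free, lambda: [random.random(), random.random()],
               [0.05, 0.05], [0.95, 0.95]))
```

As the final comment notes, the method is not complete: a False answer does not certify that the two placements lie in different cells of A(Γ).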
Assembly planning is another area in which the theory of arrangements has led to efficient algorithms. An assembly is a collection of objects (called parts) placed rigidly in some specified relative positions so that no two objects overlap. A subassembly of an assembly A is a subset of the objects in A in their relative placements in A. An assembly operation is a motion that merges some subassemblies of A into a new and larger subassembly. An assembly sequence for A is a sequence of assembly operations that starts with the individual parts separated from each other and ends up with the full assembly A. The goal of assembly planning is to compute an assembly sequence for a given assembly.


A classical approach to assembly sequencing is disassembly sequencing, which separates an assembly into its individual parts [233]; the reverse order of a sequence of disassembly operations yields an assembly sequence. Several kinds of motion have been considered in separating the parts of an assembly, including translating a subassembly along a straight line, arbitrary translational motion, rigid motion, etc. A common approach to generating a disassembly sequence is the so-called nondirectional blocking graph approach. It partitions the space of all allowable motions of separation into a finite number of cells so that, within each cell, the set of "blocking relations" between all pairs of parts remains fixed. The problem is then reduced to computing representative points in the cells of the arrangement of a family of surfaces. This approach has been successful in many instances, including polyhedral assemblies with infinitesimal rigid motions [201]; see also [214,215]. Other problems in robotics that have exploited arrangements include fixturing [311], MEMS (micro electro-mechanical systems) [77], path planning with uncertainty [129], and manufacturing [30].

14.6. Molecular modeling

In the introduction, we described the Van der Waals model, in which a molecule M is represented as a collection Γ of spheres in R³. (See [125,145,277] for other geometric models of molecules.) Let Σ = ∂(∪Γ); Σ is called the "surface" of M. Many problems in molecular biology, especially those that study the interaction of a protein with another molecule, involve computing the molecular surface, a portion of the surface (e.g., the so-called active site of a protein), or various features of the molecular surface [147,216,253,351]. We briefly describe two problems in molecular modeling that can be formulated in terms of arrangements.

The chemical behavior of solute molecules in a solution is strongly dependent on the interactions between the solute and solvent molecules. These interactions are critically dependent on those molecular fragments that are accessible to the solvent molecules. Suppose we use the Van der Waals model for the solute molecule and model the solvent by a sphere S. By rolling S over the molecular surface Σ, we obtain a new surface Σ′, traced by the center of the rolling sphere. If we enlarge each sphere of Γ by the radius of S, then Σ′ is the boundary of the union of the enlarged spheres.

As mentioned above, several methods have been proposed to model the surface of a molecule. The best choice of model depends on the chemical problem the molecular surface is supposed to represent. For example, the Van der Waals model represents the space requirement of molecular conformations, while isodensity contours and molecular electrostatic potential contour surfaces [277] are useful in studying molecular interactions. An important problem in molecular modeling is to study the interrelations among various molecular surfaces of the same molecule. For example, let Σ = {Σ₁, …, Σ_m} be a family of molecular surfaces of the same molecule. We may want to compute the arrangement A(Σ), or we may want to compute the subdivision of a surface Σ_t induced by the curves {Σ_j ∩ Σ_t | 1 ≤ j ≤ m, j ≠ t}.

Researchers have also been interested in computing the "connectivity" of a molecule, e.g., computing the voids, tunnels, and pockets of Σ.


A void of Σ is a bounded component of R³ \ (∪Γ); a tunnel is a hole through ∪Γ that is accessible from the outside, i.e., an "inner" part of a non-contractible loop in R³ \ ∪Γ; and a pocket is a depression or cavity on Σ. Pockets are not holes in the topological sense and are not well defined; see [126,147] for some of the definitions proposed so far. Pockets and tunnels are interesting because they are good candidates for binding sites of other molecules. Efficient algorithms have been developed for computing Σ, the connectivity of Σ, and the arrangement A(Γ) [145,216,351]. Halperin and Shelton [222] describe an efficient perturbation scheme for handling degeneracies while constructing A(Γ) or Σ. Some applications require computing the measure of different substructures of A(Γ), including the volume of ∪Γ, the surface area of Σ, or the volume of a void of Σ. Edelsbrunner et al. [146] describe an efficient algorithm for computing these measures; see also [144,145].
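Edelsbrunner's formula [144,146] yields these measures exactly from the arrangement of the balls. As a purely illustrative alternative (our own cross-check, not the cited algorithm), the Monte Carlo sketch below estimates the volume of ∪Γ by sampling a bounding box; its error shrinks only like the inverse square root of the sample count.

```python
import random

def union_volume_mc(balls, samples=200_000, seed=0):
    """Monte Carlo estimate of the volume of a union of balls
    (cx, cy, cz, r) in R^3: sample a bounding box, count hits."""
    rng = random.Random(seed)
    los = [min(b[i] - b[3] for b in balls) for i in range(3)]
    his = [max(b[i] + b[3] for b in balls) for i in range(3)]
    box_vol = 1.0
    for lo, hi in zip(los, his):
        box_vol *= hi - lo
    hits = 0
    for _ in range(samples):
        p = [rng.uniform(lo, hi) for lo, hi in zip(los, his)]
        if any((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
               <= r * r for cx, cy, cz, r in balls):
            hits += 1
    return box_vol * hits / samples

# two unit balls with centers 1 apart; exact union volume is 9*pi/4
print(union_volume_mc([(0, 0, 0, 1), (1, 0, 0, 1)]))  # ~ 7.07
```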

15. Conclusions

In this survey we reviewed a wide range of topics on arrangements of surfaces. We mentioned a few old results, but the emphasis of the survey was on the tremendous progress made in this area during the last fifteen years. We discussed the combinatorial complexity of arrangements and their substructures, representation of arrangements, algorithms for computing arrangements and their substructures, and several geometric problems in which arrangements play pivotal roles. Although the survey covered a broad spectrum of results, many topics on arrangements were either not included or only briefly touched upon. For example, we did not discuss arrangements of pseudo-lines and oriented matroids, we discussed algebraic and topological issues very briefly, and we mentioned a rather short list of applications that have exploited arrangements. There are numerous other sources where more details on arrangements and their applications can be found; see, e.g., the books [74,281,291,294,330] and the survey papers [192,213,283,293].

Acknowledgments

The authors thank Boris Aronov, Saugata Basu, Bernard Chazelle, Herbert Edelsbrunner, Jeff Erickson, Leo Guibas, Dan Halperin, Sariel Har-Peled, Jiří Matoušek, Ricky Pollack, Marie-Françoise Roy, Raimund Seidel, and Emo Welzl for several useful discussions, for valuable comments on an earlier version of the paper, and for pointing out a number of relevant papers.

References

[1] P.K. Agarwal, Partitioning arrangements of lines: I. An efficient deterministic algorithm, Discrete Comput. Geom. 5 (1990), 449-483.
[2] P.K. Agarwal, Partitioning arrangements of lines: II. Applications, Discrete Comput. Geom. 5 (1990), 533-573.
[3] P.K. Agarwal, Geometric partitioning and its applications, Computational Geometry: Papers from the DIMACS Special Year, J.E. Goodman, R. Pollack and W. Steiger, eds, Amer. Math. Soc., Providence, RI (1991), 1-37.


[4] P.K. Agarwal, On stabbing lines for polyhedra in 3d, Comput. Geom. Theory Appl. 4 (1994), 177-189.
[5] P.K. Agarwal, N. Amenta and M. Sharir, Placement of one convex polygon inside another, Discrete Comput. Geom. 19 (1998), 95-104.
[6] P.K. Agarwal, L. Arge, J. Erickson, P.G. Franciosa and J.S. Vitter, Efficient searching with linear constraints, Proc. Annu. ACM Sympos. Principles Database Syst. (1998), 169-178.
[7] P.K. Agarwal and B. Aronov, Counting facets and incidences, Discrete Comput. Geom. 7 (1992), 359-369.
[8] P.K. Agarwal, B. Aronov, T.M. Chan and M. Sharir, On levels in arrangements of lines, segments, planes and triangles, Discrete Comput. Geom. 19 (1998), 315-331.
[9] P.K. Agarwal, B. Aronov and M. Sharir, Computing envelopes in four dimensions with applications, SIAM J. Comput. 26 (1997), 1714-1732.
[10] P.K. Agarwal, B. Aronov and M. Sharir, Line traversals of balls and smallest enclosing cylinders in three dimensions, Discrete Comput. Geom. 21 (1999), 373-388.
[11] P.K. Agarwal, B. Aronov and M. Sharir, Motion planning for a convex polygon in a polygonal environment, Discrete Comput. Geom. 22 (1999), 201-221.
[12] P.K. Agarwal, B. Aronov, M. Sharir and S. Suri, Selecting distances in the plane, Algorithmica 9 (1993), 495-514.
[13] P.K. Agarwal, M. de Berg, J. Matoušek and O. Schwarzkopf, Constructing levels in arrangements and higher order Voronoi diagrams, SIAM J. Comput. 27 (1998), 654-667.
[14] P.K. Agarwal, A. Efrat and M. Sharir, Vertical decomposition of shallow levels in 3-dimensional arrangements and its applications, Proc. 11th Annu. Sympos. Comput. Geom. (1995), 39-50.
[15] P.K. Agarwal, A. Efrat, M. Sharir and S. Toledo, Computing a segment center for a planar point set, J. Algorithms 15 (1993), 314-323.
[16] P.K. Agarwal and J. Erickson, Geometric range searching and its relatives, Advances in Discrete and Computational Geometry, B. Chazelle, J.E. Goodman and R. Pollack, eds, Amer. Math. Soc., Providence, RI (1998), 1-56.
[17] P.K. Agarwal and J. Matoušek, Ray shooting and parametric search, SIAM J. Comput. 22 (1993), 794-806.
[18] P.K. Agarwal and J. Matoušek, On range searching with semialgebraic sets, Discrete Comput. Geom. 11 (1994), 393-418.
[19] P.K. Agarwal and J. Matoušek, Dynamic half-space range reporting and its applications, Algorithmica 13 (1995), 325-345.
[20] P.K. Agarwal, J. Matoušek and O. Schwarzkopf, Computing many faces in arrangements of lines and segments, SIAM J. Comput. 27 (1998), 491-505.
[21] P.K. Agarwal, O. Schwarzkopf and M. Sharir, The overlay of lower envelopes and its applications, Discrete Comput. Geom. 15 (1996), 1-13.
[22] P.K. Agarwal and M. Sharir, Red-blue intersection detection algorithms, with applications to motion planning and collision detection, SIAM J. Comput. 19 (1990), 297-321.
[23] P.K. Agarwal and M. Sharir, On the number of views of polyhedral terrains, Discrete Comput. Geom. 12 (1994), 177-182.
[24] P.K. Agarwal and M. Sharir, Efficient algorithms for geometric optimization, ACM Comput. Surv. 30 (1998), 412-458.
[25] P.K. Agarwal and M. Sharir, Efficient randomized algorithms for some geometric optimization problems, Discrete Comput. Geom. 16 (1996), 317-337.
[26] P.K. Agarwal and M. Sharir, Motion planning of a ball amidst segments in three dimensions, Proc. 10th ACM-SIAM Sympos. Discrete Algorithms (1999).
[27] P.K. Agarwal and M. Sharir, Pipes, cigars, and kreplach: The union of Minkowski sums in three dimensions, Proc. 15th Annu. Sympos. Comput. Geom. (1999), 143-153.
[28] P.K. Agarwal, M. Sharir and S. Toledo, Applications of parametric searching in geometric optimization, J. Algorithms 17 (1994), 292-318.
[29] P.K. Agarwal, M. van Kreveld and M. Overmars, Intersection queries in curved objects, J. Algorithms 15 (1993), 229-266.
[30] H.-K. Ahn, M. de Berg, P. Bose, S.-W. Cheng, D. Halperin, J. Matoušek and O. Schwarzkopf, Separating an object from its cast, Proc. 13th Annu. Sympos. Comput. Geom. (1997), 221-230.


[31] M. Ajtai, V. Chvátal, M. Newborn and E. Szemerédi, Crossing-free subgraphs, Ann. Discrete Math. 12 (1982), 9-12.
[32] G.L. Alexanderson and J.E. Wetzel, Dissections of a plane oval, Amer. Math. Monthly 84 (1977), 442-449.
[33] G.L. Alexanderson and J.E. Wetzel, Simple partitions of space, Math. Mag. 51 (1978), 220-225.
[34] G.L. Alexanderson and J.E. Wetzel, Arrangements of planes in space, Discrete Math. 34 (1981), 219-240.
[35] N. Alon, Tools from higher algebra, Handbook of Combinatorics, R.L. Graham, M. Grötschel and L. Lovász, eds, Elsevier, Amsterdam (1995), 1749-1783.
[36] N. Alon, I. Bárány, Z. Füredi and D. Kleitman, Point selections and weak ε-nets for convex hulls, Combin. Probab. Comput. 1 (1992), 189-200.
[37] N. Alon and E. Győri, The number of small semispaces of a finite set of points in the plane, J. Combin. Theory Ser. A 41 (1986), 154-157.
[38] H. Alt, S. Felsner, F. Hurtado and M. Noy, Point sets with few k-sets, Proc. 14th Annu. Sympos. Comput. Geom. (1998), 200-205.
[39] H. Alt, R. Fleischer, M. Kaufmann, K. Mehlhorn, S. Näher, S. Schirra and C. Uhrig, Approximate motion planning and the complexity of the boundary of the union of simple geometric figures, Algorithmica 8 (1992), 391-406.
[40] N. Amato, O.B. Bayazit, L.K. Dale, C. Jones and D. Vallejo, OBPRM: An obstacle-based PRM for 3D workspaces, Proc. Workshop on Algorithmic Foundations of Robotics, P.K. Agarwal, L.E. Kavraki and M. Mason, eds, A.K. Peters, Wellesley, MA (1998), 155-168.
[41] E.G. Anagnostou, L.J. Guibas and V.G. Polimenis, Topological sweeping in three dimensions, Proc. 1st Annu. SIGAL Internat. Sympos. Algorithms, Lecture Notes in Comput. Sci. 450, Springer-Verlag (1990), 310-317.
[42] A. Andrzejak, B. Aronov, S. Har-Peled, R. Seidel and E. Welzl, Results on k-sets and j-facets via continuous motion arguments, Proc. 14th Annu. Sympos. Comput. Geom. (1998), 192-199.
[43] D.S. Arnon, G.E. Collins and S. McCallum, Cylindrical algebraic decomposition II: The adjacency algorithm for the plane, SIAM J. Comput. 13 (1984), 878-889.
[44] B. Aronov, A lower bound on Voronoi diagram complexity, Unpublished manuscript (1998).
[45] B. Aronov, M. Bern and D. Eppstein, Arrangements of convex polytopes, Unpublished manuscript (1995).
[46] B. Aronov, B. Chazelle, H. Edelsbrunner, L.J. Guibas, M. Sharir and R. Wenger, Points and triangles in the plane and halving planes in space, Discrete Comput. Geom. 6 (1991), 435-442.
[47] B. Aronov and T. Dey, Polytopes in arrangements, Proc. 15th Annu. Sympos. Comput. Geom. (1999), 154-162.
[48] B. Aronov, H. Edelsbrunner, L.J. Guibas and M. Sharir, The number of edges of many faces in a line segment arrangement, Combinatorica 12 (1992), 261-274.
[49] B. Aronov, A. Efrat, D. Halperin and M. Sharir, On the number of regular vertices on the union of Jordan regions, Proc. 6th Scandinavian Workshop on Algorithm Theory (1998), 322-334.
[50] B. Aronov, J. Matoušek and M. Sharir, On the sum of squares of cell complexities in hyperplane arrangements, J. Combin. Theory Ser. A 65 (1994), 311-321.
[51] B. Aronov, M. Pellegrini and M. Sharir, On the zone of a surface in a hyperplane arrangement, Discrete Comput. Geom. 9 (1993), 177-186.
[52] B. Aronov and M. Sharir, Triangles in space or building (and analyzing) castles in the air, Combinatorica 10 (1990), 137-173.
[53] B. Aronov and M. Sharir, The union of convex polyhedra in three dimensions, Proc. 34th Annu. IEEE Sympos. Found. Comput. Sci. (1993), 518-527.
[54] B. Aronov and M. Sharir, Castles in the air revisited, Discrete Comput. Geom. 12 (1994), 119-150.
[55] B. Aronov and M. Sharir, The common exterior of convex polygons in the plane, Comput. Geom. 8 (1997), 139-149.
[56] B. Aronov and M. Sharir, On translational motion planning of a convex polyhedron in 3-space, SIAM J. Comput. 26 (1997), 1785-1803.
[57] B. Aronov, M. Sharir and B. Tagansky, The union of convex polyhedra in three dimensions, SIAM J. Comput. 26 (1997), 1670-1688.
[58] T. Asano, L.J. Guibas and T. Tokuyama, Walking on an arrangement topologically, Internat. J. Comput. Geom. Appl. 4 (1994), 123-151.


[59] F. Aurenhammer, Voronoi diagrams: A survey of a fundamental geometric data structure, ACM Comput. Surv. 23 (1991), 345-405.
[60] D. Avis, D. Bremner and R. Seidel, How good are convex hull algorithms?, Comput. Geom. 7 (1997), 265-302.
[61] D. Avis and K. Fukuda, A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra, Discrete Comput. Geom. 8 (1992), 295-313.
[62] D. Avis and K. Fukuda, Reverse search for enumeration, Discrete Appl. Math. 65 (1996), 21-46.
[63] I.J. Balaban, An optimal algorithm for finding segment intersections, Proc. 11th Annu. ACM Sympos. Comput. Geom. (1995), 211-219.
[64] I. Bárány, Z. Füredi and L. Lovász, On the number of halving planes, Combinatorica 10 (1990), 175-183.
[65] I. Bárány and W. Steiger, On the expected number of k-sets, Discrete Comput. Geom. 11 (1994), 243-263.
[66] J. Barraquand and J.-C. Latombe, Robot motion planning: A distributed representation approach, Internat. J. Robot. Res. 10 (1991), 628-649.
[67] S. Basu, On the combinatorial and topological complexity of a single cell, Proc. 39th Annu. IEEE Sympos. Found. Comput. Sci. (1998), 606-616.
[68] S. Basu, R. Pollack and M.-F. Roy, On the number of cells defined by a family of polynomials on a variety, Mathematika 43 (1994), 120-126.
[69] S. Basu, R. Pollack and M.-F. Roy, Computing roadmaps of semi-algebraic sets, Proc. 28th Annu. ACM Sympos. Theory Comput. (1996), 168-173.
[70] S. Basu, R. Pollack and M.-F. Roy, On computing a set of points meeting every semi-algebraically connected component of a family of polynomials on a variety, J. Complexity 13 (1997), 28-37.
[71] S. Basu, R. Pollack and M.-F. Roy, Complexity of computing semi-algebraic descriptions of the connected components of a semi-algebraic set, Unpublished manuscript (1998).
[72] J.L. Bentley and T.A. Ottmann, Algorithms for reporting and counting geometric intersections, IEEE Trans. Comput. C-28 (1979), 643-647.
[73] S. Bespamyatnikh, An efficient algorithm for the three-dimensional diameter problem, Proc. 9th Annu. ACM-SIAM Sympos. Discrete Algorithms (1998), 137-146.
[74] A. Björner, M. Las Vergnas, N. White, B. Sturmfels and G.M. Ziegler, Oriented Matroids, Cambridge University Press, Cambridge (1993).
[75] A. Björner and G.M. Ziegler, Combinatorial stratification of complex arrangements, J. Amer. Math. Soc. 5 (1992), 105-149.
[76] J. Bochnak, M. Coste and M.-F. Roy, Géométrie Algébrique Réelle, Springer-Verlag, Heidelberg (1987).
[77] K.-F. Böhringer, B. Donald and D. Halperin, The area bisectors of a polygon and force equilibria in programmable vector fields, Proc. 13th Annu. Sympos. Comput. Geom. (1997), 457-459.
[78] J.-D. Boissonnat and K. Dobrindt, Randomized construction of the upper envelope of triangles in R³, Proc. 4th Canad. Conf. Comput. Geom. (1992), 311-315.
[79] J.-D. Boissonnat and K. Dobrindt, On-line construction of the upper envelope of triangles and surface patches in three dimensions, Comput. Geom. 5 (1996), 303-320.
[80] J.-D. Boissonnat, M. Sharir, B. Tagansky and M. Yvinec, Voronoi diagrams in higher dimensions under certain polyhedral distance functions, Discrete Comput. Geom. 19 (1998), 485-519.
[81] K.W. Bowyer and C.R. Dyer, Aspect graphs: An introduction and survey of recent results, Int. J. of Imaging Systems and Technology 2 (1990), 315-328.
[82] D. Bremner, K. Fukuda and M. Marzetta, Primal-dual methods for vertex and facet enumeration, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 49-56.
[83] E. Brisson, Representing geometric structures in d dimensions: Topology and order, Discrete Comput. Geom. 9 (1993), 387-426.
[84] H. Brönnimann and B. Chazelle, Optimal slope selection via cuttings, Proc. 6th Canad. Conf. Comput. Geom. (1994), 99-103.
[85] H. Brönnimann, B. Chazelle and J. Matoušek, Product range spaces, sensitive sampling, and derandomization, Proc. 34th Annu. IEEE Sympos. Found. Comput. Sci. (1993), 400-409.
[86] U.A. Brousseau, A mathematician's progress, Math. Teacher 59 (1966), 722-727.
[87] R.C. Buck, Partition of space, Amer. Math. Monthly 50 (1943), 541-544.


[88] C. Burnikel, K. Mehlhorn and S. Schirra, On degeneracy in geometric computations, Proc. 5th ACM-SIAM Sympos. Discrete Algorithms (1994), 16-23.
[89] R. Canham, A theorem on arrangements of lines in the plane, Israel J. Math. 7 (1969), 393-397.
[90] J. Canny, The Complexity of Robot Motion Planning, MIT Press, Cambridge, MA (1987).
[91] J. Canny, Some algebraic and geometric computations in PSPACE, Proc. 20th Annu. ACM Sympos. Theory Comput. (1988), 460-467.
[92] J. Canny, Computing roadmaps in general semialgebraic sets, Comput. J. 36 (1993), 409-418.
[93] J. Canny, Improved algorithms for sign determination and existential quantifier elimination, Comput. J. 36 (1993), 504-514.
[94] J.F. Canny, D. Grigor'ev and N. Vorobjov, Finding connected components of a semi-algebraic set in subexponential time, Appl. Algebra Eng. Commun. Comput. 2 (1992), 217-238.
[95] T.M. Chan, Optimal output-sensitive convex hull algorithms in two and three dimensions, Discrete Comput. Geom. 16 (1996), 361-368.
[96] T.M. Chan, Output-sensitive results on convex hulls, extreme points and related problems, Discrete Comput. Geom. 16 (1996), 369-387.
[97] T.M. Chan, Geometric applications of a randomized optimization technique, Proc. 14th Annu. Sympos. Comput. Geom. (1998), 269-278.
[98] T. Chan, Remarks on k-level algorithms in the plane, Unpublished manuscript (1999).
[99] T.M. Chan, J. Snoeyink and C.K. Yap, Primal dividing and dual pruning: Output-sensitive construction of 4-d polytopes and 3-d Voronoi diagrams, Discrete Comput. Geom. 18 (1997), 433-454.
[100] B. Chazelle, Cutting hyperplanes for divide-and-conquer, Discrete Comput. Geom. 9 (1993), 145-158.
[101] B. Chazelle, An optimal convex hull algorithm in any fixed dimension, Discrete Comput. Geom. 10 (1993), 377-409.
[102] B. Chazelle and H. Edelsbrunner, An optimal algorithm for intersecting line segments in the plane, J. ACM 39 (1992), 1-54.
[103] B. Chazelle, H. Edelsbrunner, L.J. Guibas and M. Sharir, A singly-exponential stratification scheme for real semi-algebraic varieties and its applications, Proc. 16th Internat. Colloq. Automata Lang. Program., Lecture Notes in Comput. Sci. 372, Springer-Verlag (1989), 179-192.
[104] B. Chazelle, H. Edelsbrunner, L.J. Guibas and M. Sharir, A singly-exponential stratification scheme for real semi-algebraic varieties and its applications, Theoret. Comput. Sci. 84 (1991), 77-105.
[105] B. Chazelle, H. Edelsbrunner, L.J. Guibas and M. Sharir, Diameter, width, closest line pair and parametric searching, Discrete Comput. Geom. 10 (1993), 183-196.
[106] B. Chazelle, H. Edelsbrunner, L.J. Guibas, M. Sharir and J. Snoeyink, Computing a face in an arrangement of line segments and related problems, SIAM J. Comput. 22 (1993), 1286-1302.
[107] B. Chazelle, H. Edelsbrunner, L.J. Guibas, M. Sharir and J. Stolfi, Lines in space: Combinatorics and algorithms, Algorithmica 15 (1996), 428-447.
[108] B. Chazelle and J. Friedman, A deterministic view of random sampling and its use in geometry, Combinatorica 10 (1990), 229-249.
[109] B. Chazelle and J. Friedman, Point location among hyperplanes and unidirectional ray-shooting, Comput. Geom. 4 (1994), 53-62.
[110] B. Chazelle, L.J. Guibas and D.T. Lee, The power of geometric duality, BIT 25 (1985), 76-90.
[111] B. Chazelle and J. Matoušek, Derandomizing an output-sensitive convex hull algorithm in three dimensions, Comput. Geom. 5 (1995), 27-32.
[112] B. Chazelle and F.P. Preparata, Halfspace range search: An algorithmic application of k-sets, Discrete Comput. Geom. 1 (1986), 83-93.
[113] B. Chazelle, M. Sharir and E. Welzl, Quasi-optimal upper bounds for simplex range searching and new zone theorems, Algorithmica 8 (1992), 407-429.
[114] L.P. Chew, Near-quadratic bounds for the L₁ Voronoi diagram of moving points, Comput. Geom. 7 (1997), 73-80.
[115] L.P. Chew, K. Kedem, M. Sharir, B. Tagansky and E. Welzl, Voronoi diagrams of lines in 3-space under polyhedral convex distance functions, J. Algorithms 29 (1998), 238-255.
[116] K.L. Clarkson, New applications of random sampling in computational geometry, Discrete Comput. Geom. 2 (1987), 195-222.
[117] K.L. Clarkson, A randomized algorithm for closest-point queries, SIAM J. Comput. 17 (1988), 830-847.


[118] K. Clarkson, A bound on local minima of arrangements that implies the upper bound theorem, Discrete Comput. Geom. 10 (1993), 427-433.
[119] K. Clarkson, D. Eppstein, G.L. Miller, C. Sturtivant and S.-H. Teng, Approximating center points with iterative Radon points, Internat. J. Comput. Geom. Appl. 6 (1996), 357-377.
[120] K.L. Clarkson and P.W. Shor, Applications of random sampling in computational geometry, II, Discrete Comput. Geom. 4 (1989), 387-421.
[121] K.L. Clarkson, H. Edelsbrunner, L.J. Guibas, M. Sharir and E. Welzl, Combinatorial complexity bounds for arrangements of curves and spheres, Discrete Comput. Geom. 5 (1990), 99-160.
[122] R. Cole, J. Salowe, W. Steiger and E. Szemerédi, An optimal-time algorithm for slope selection, SIAM J. Comput. 18 (1989), 792-810.
[123] R. Cole, M. Sharir and C.K. Yap, On k-hulls and related problems, SIAM J. Comput. 16 (1987), 61-77.
[124] G.E. Collins, Quantifier elimination for real closed fields by cylindrical algebraic decomposition, Proc. 2nd GI Conference on Automata Theory and Formal Languages, Lecture Notes in Comput. Sci. 33, Springer-Verlag, Berlin (1975), 134-183.
[125] M.L. Connolly, Analytical molecular surface calculation, J. Appl. Cryst. 16 (1983), 548-558.
[126] T.H. Connolly, Molecular interstitial skeleton, Computer Chem. 15 (1991), 37-45.
[127] M. de Berg, Ray Shooting, Depth Orders and Hidden Surface Removal, Springer-Verlag, Berlin (1993).
[128] M. de Berg, K. Dobrindt and O. Schwarzkopf, On lazy randomized incremental construction, Discrete Comput. Geom. 14 (1995), 261-286.
[129] M. de Berg, L. Guibas, D. Halperin, M. Overmars, O. Schwarzkopf, M. Sharir and M. Teillaud, Reaching a goal with directional uncertainty, Theoret. Comput. Sci. 140 (1995), 301-317.
[130] M. de Berg, L.J. Guibas and D. Halperin, Vertical decompositions for triangles in 3-space, Discrete Comput. Geom. 15 (1996), 35-61.
[131] M. de Berg, D. Halperin, M. Overmars and M. van Kreveld, Sparse arrangements and the number of views of polyhedral scenes, Internat. J. Comput. Geom. Appl. 7 (1997), 175-195.
[132] M. de Berg, M. van Kreveld, M. Overmars and O. Schwarzkopf, Computational Geometry: Algorithms and Applications, Springer-Verlag, Berlin (1997).
[133] M. de Berg, M. van Kreveld, O. Schwarzkopf and J. Snoeyink, Point location in zones of k-flats in arrangements, Comput. Geom. 6 (1996), 131-143.
[134] T.K. Dey, Improved bounds on planar k-sets and related problems, Discrete Comput. Geom. 19 (1998), 373-382.
[135] T.K. Dey and H. Edelsbrunner, Counting triangle crossings and halving planes, Discrete Comput. Geom. 12 (1994), 281-289.
[136] D.P. Dobkin, L.J. Guibas, J. Hershberger and J. Snoeyink, An efficient algorithm for finding the CSG representation of a simple polygon, Comput. Graph. 22 (1988), 31-40; Proc. SIGGRAPH '88.
[137] D.P. Dobkin and D.G. Kirkpatrick, Fast detection of polyhedral intersection, Theoret. Comput. Sci. 27 (1983), 241-253.
[138] D.P. Dobkin and M.J. Laszlo, Primitives for the manipulation of three-dimensional subdivisions, Algorithmica 4 (1989), 3-32.
[139] D.P. Dobkin and S. Teller, Computer graphics, Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 779-796.
[140] R.A. Dwyer, Voronoi diagrams of random lines and flats, Discrete Comput. Geom. 17 (1997), 123-136.
[141] H. Edelsbrunner, Edge-skeletons in arrangements with applications, Algorithmica 1 (1986), 93-109.
[142] H. Edelsbrunner, Algorithms in Combinatorial Geometry, Springer-Verlag, Heidelberg (1987).
[143] H. Edelsbrunner, The upper envelope of piecewise linear functions: Tight complexity bounds in higher dimensions, Discrete Comput. Geom. 4 (1989), 337-343.
[144] H. Edelsbrunner, The union of balls and its dual shape, Discrete Comput. Geom. 13 (1995), 415-440.
[145] H. Edelsbrunner, Geometry of modeling biomolecules, Proc. Workshop on Algorithmic Foundations of Robotics, P.K. Agarwal, L.E. Kavraki and M. Mason, eds, A.K. Peters, Wellesley, MA (1998), 265-277.
[146] H. Edelsbrunner, M. Facello, P. Fu and J. Liang, Measuring proteins and voids in proteins, Proc. 28th Ann. Hawaii Internat. Conf. System Sciences, Biotechnology Computing, Vol. V (1995), 256-264.


[147] H. Edelsbrunner, M. Facello and J. Liang, On the definition and the construction of pockets in macromolecules, Tech. Report UILU-ENG-95-1736, Department of Computer Science, University of Illinois at Urbana-Champaign (1995).
[148] H. Edelsbrunner and L.J. Guibas, Topologically sweeping an arrangement, J. Comput. Syst. Sci. 38 (1989), 165-194; Corrigendum in 42 (1991), 249-251.
[149] H. Edelsbrunner, L.J. Guibas, J. Hershberger, J. Pach, R. Pollack, R. Seidel, M. Sharir and J. Snoeyink, Arrangements of Jordan arcs with three intersections per pair, Discrete Comput. Geom. 4 (1989), 523-539.
[150] H. Edelsbrunner, L.J. Guibas, J. Pach, R. Pollack, R. Seidel and M. Sharir, Arrangements of curves in the plane: Topology, combinatorics and algorithms, Theoret. Comput. Sci. 92 (1992), 319-336.
[151] H. Edelsbrunner, L.J. Guibas and M. Sharir, The upper envelope of piecewise linear functions: Algorithms and applications, Discrete Comput. Geom. 4 (1989), 311-336.
[152] H. Edelsbrunner, L.J. Guibas and M. Sharir, The complexity and construction of many faces in arrangements of lines and of segments, Discrete Comput. Geom. 5 (1990), 161-196.
[153] H. Edelsbrunner, L.J. Guibas and M. Sharir, The complexity of many cells in arrangements of planes and related problems, Discrete Comput. Geom. 5 (1990), 197-216.
[154] H. Edelsbrunner and P. Hajnal, A lower bound on the number of unit distances between the vertices of a convex polygon, J. Combin. Theory Ser. A 56 (1991), 312-316.
[155] H. Edelsbrunner and D. Haussler, The complexity of cells in three-dimensional arrangements, Discrete Math. 60 (1986), 139-146.
[156] H. Edelsbrunner and E.P. Mücke, Simulation of simplicity: A technique to cope with degenerate cases in geometric algorithms, ACM Trans. Graph. 9 (1990), 66-104.
[157] H. Edelsbrunner, J. O'Rourke and R. Seidel, Constructing arrangements of lines and hyperplanes with applications, SIAM J. Comput. 15 (1986), 341-363.
[158] H. Edelsbrunner and R. Seidel, Voronoi diagrams and arrangements, Discrete Comput. Geom. 1 (1986), 25-44.
[159] H. Edelsbrunner, R. Seidel and M. Sharir, On the zone theorem for hyperplane arrangements, SIAM J. Comput. 22 (1993), 418-429.
[160] H. Edelsbrunner and M. Sharir, A hyperplane incidence problem with applications to counting distances, Applied Geometry and Discrete Mathematics: The Victor Klee Festschrift, P. Gritzman and B. Sturmfels, eds, AMS Press, Providence, RI (1991), 253-263.
[161] H. Edelsbrunner and D.L. Souvaine, Computing median-of-squares regression lines and guided topological sweep, J. Amer. Statist. Assoc. 85 (1990), 115-119.
[162] H. Edelsbrunner, P. Valtr and E. Welzl, Cutting dense point sets in half, Discrete Comput. Geom. 17 (1997), 243-255.
[163] H. Edelsbrunner and E. Welzl, On the number of line separations of a finite set in the plane, J. Combin. Theory Ser. A 40 (1985), 15-29.
[164] H. Edelsbrunner and E. Welzl, Constructing belts in two-dimensional arrangements with applications, SIAM J. Comput. 15 (1986), 271-284.
[165] H. Edelsbrunner and E. Welzl, On the maximal number of edges of many faces in an arrangement, J. Combin. Theory Ser. A 41 (1986), 159-166.
[166] A. Efrat, The complexity of the union of (α, β)-covered objects, Proc. 15th Annu. Sympos. Comput. Geom. (1999), 134-142.
[167] A. Efrat and M. Katz, On the union of κ-curved objects, Proc. 14th Annu. Sympos. Comput. Geom. (1998), 206-213.
[168] A. Efrat, G. Rote and M. Sharir, On the union of fat wedges and separating a collection of segments by a line, Comput. Geom. 3 (1993), 277-288.
[169] A. Efrat and M. Sharir, A near-linear algorithm for the planar segment center problem, Discrete Comput. Geom. 16 (1996), 239-257.
[170] A. Efrat and M. Sharir, On the complexity of the union of fat objects in the plane, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 104-112.
[171] I. Emiris and J. Canny, A general approach to removing degeneracies, SIAM J. Comput. 24 (1995), 650-664.

Arrangements

and their applications

113

[172] I.Z. Emiris, J.F. Canny and R. Seidel, Efficient perturbations for handling geometric degeneracies, Algorithmica 19 (1997), 219-242. [173] D. Eppstein, Improved bounds for intersecting triangles and halving planes, J. Combin. Theory Ser. A 62 (1993), 176-182. [174] D. Eppstein, Geometric lower bounds for parametric optimization problems. Discrete Comput. Geom. 20 (1998), 463^76. [175] P. Erdos, On a set of distances ofn points, Amer. Math. Monthly 53 (1946), 248-250. [176] P. Erdos, L. Lovasz, A. Simmons and E. Straus, Dissection graphs of planar point sets, A Survey of Combinatorial Theory, J.N. Srivastava, ed., North-Holland, Amsterdam, Netherlands (1973), 139-154. [177] J. Erickson, New lower bounds for Hopcroft's problem. Discrete Comput. Geom. 16 (1996), 389^18. [178] H. Everett, J.-M. Robert and M. van Kreveld, An optimal algorithm for the (^k)-levels, with applications to separation and transversal problems, Internat. J. Comput. Geom. Appl. 6 (1996), 247-261. [179] M. Falk and R. Randell, On the homotopy theory in arrangements. Complex Analytic Singularities, NorthHolland, Amsterdam (1987), 101-124. [180] S. Felsner, On the number of arrangements ofpseudolines. Discrete Comput. Geom. 18 (1997), 257-267. [181] S. Fortune, Progress in computational geometry. Directions in Computational Geometry, R. Martin, ed., Information Geometers (1993), 81-128. [182] S. Fortune and V. Milenkovic, Numerical stability of algorithms for line arrangements, Proc. 7th Annu. ACM Sympos. Comput. Geom. (1991), 334-341. [183] G.N. Frederickson and D.B. Johnson, The complexity of selection and ranking inX + Y and matrices with sorted rows and columns, J. Comput. Syst. Sci. 24 (1982), 197-208. [184] J.-J. Fu and R.C.T. Lee, Voronoi diagrams of moving points in the plane, Internat. J. Comput. Geom. Appl. 1 (1991), 23-32. [185] K. Fuioida, T. Liebling and F. Margot, Analysis of backtrack algorithms for listig all vertices and all faces of a convex polyhedron, Comput. Geom. 8 (1997), 1-12. [186] K. Fukuda, S. Saito and A. Tamura, Combinatorial face enumeration in arrangements and oriented matroids. Discrete Appl. Math. 31 (1991), 141-149. [187] K. Fukuda, S. Saito, A. Tamura and T. Tokuyama, Bounding the number of k-faces in arrangements of hyperplanes. Discrete Appl. Math. 31 (1991), 151-165. [188] Z. Fliredi, The maximum number of unit distances in a convex n-gon, J. Combin. Theory Ser. A 55 (1990), 316-320. [189] A. Gabrielov and N. Vorobjov, Complexity of stratification of semi-pfaffian sets. Discrete Comput. Geom. 14 (1995), 71-91. [190] J.E. Goodman and R. Pollack, On the number of k-subsets of a set ofn points in the plane, J. Combin. Theory Ser. A 36 (1984), 101-104. [191] J.E. Goodman and R. Pollack, Semispaces of configurations, cell complexes of arrangements, J. Combin. Theory Ser. A 37 (1984), 257-293. [192] J.E. Goodman and R. Pollack, Allowable sequences and order types in discrete and computational geometry. New Trends in Discrete and Computational Geometry, J. Pach, ed., Springer-Verlag (1993), 103-134. [193] M.T. Goodrich, Constructing arrangements optimally in parallel. Discrete Comput. Geom. 9 (1993), 371385. [194] M. Goresky and R. MacPherson, Stratified Morse Theory, Springer-Verlag, Heidelberg, Germany, 1987. [195] L. Goumay and J.-J. Risler, Construction of roadmaps in semi-algebraic sets, Appl. Algebra Engrg. Comm. Comput. 4 (1993), 239-252. [196] D.H. Greene and F.F. Yao, Finite-resolution computational geometry, Proc. 27th Annu. IEEE Sympos. Found. Comput. Sci. (1986), 143-152. 
[197] D. Grigor'ev and N. Vorobjov, Counting connected components of a semi-algebraic set in subexponential time, Comput. Complexity 2 (1992), 133-186. [198] B. Griinbaum, Arrangements of hyperplanes, Congr. Numer. 3 (1971), 41-106. [199] B. Griinbaum, Arrangements and Spreads, Amer. Math. Soc, Providence, RI (1972). [200] L. Guibas and D. Marimont, Rounding arrangements dynamically, Proc. 11th Annu. ACM Sympos. Comput. Geom. (1995), 190-199.

114

RK. Agarwal and M. Sharir

[201] LJ. Guibas, D. Halperin, H. Hirukawa, J.-C. Latombe and R.H. Wilson, A simple and efficient procedure for assembly partitioning under infinitesimal motions, Proc. IEEE Intemat. Conf. Robot. Autom. (1995), 2553-2560. [202] L.J. Guibas, D. Halperin, J. Matousek and M. Sharir, On vertical decomposition of arrangements ofhyperplanes in four dimensions. Discrete Comput. Geom. 14 (1995), 113-122. [203] L.J. Guibas, J.S.B. Mitchell and T. Roos, Voronoi diagrams of moving points in the plane, Proc. 17th Intemat. Workshop Graph-Theoret. Concepts Comput. Sci., Lecture Notes in Comput. Sci. 570, SpringerVerlag (1991), 113-125. [204] L.J. Guibas, M.H. Overmars and J.-M. Robert, The exact fitting problem for points, Comput. Geom. 6 (1996), 215-230. [205] L.J. Guibas and M. Sharir, Combinatorics and algorithms of arrangements. New Trends in Discrete and Computational Geometry, J. Pach, ed.. Springer-Verlag, Heidelberg, Germany (1993), 9-36. [206] L.J. Guibas, M. Sharir and S. Sifrony, On the general motion planning problem with two degrees of freedom. Discrete Comput. Geom. 4 (1989), 491-521. [207] L.J. Guibas and J. Stolfi, Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams, ACM Trans. Graph. 4 (1985), 74-123. [208] D. Gusfield, Bounds for the parametric spanning tree problem, Proc. West Coast Conf. Combinatorics, Graph Theory and Comput. (1979), 173-181. [209] H. Hadwiger, Eulers Charakteristik und kombinatorische Geometric, J. Reine Angew. Math. 134 (1955), 101-110. [210] T. Hagerup, H. Jung and E. Welzl, Efficient parallel computation of arrangements of hyperplanes in d dimensions, Proc. 2nd ACM Sympos. Parallel Algorithms Architect. (1990), 290-297. [211] D. Halperin, Algorithmic Motion Planning via Arrangements of Curves and of Surfaces, PhD thesis. Computer Science Department, Tel-Aviv University, Tel Aviv (1992). [212] D. Halperin, On the complexity of a single cell in certain arrangements of surfaces related to motion planning. Discrete Comput. Geom. 11 (1994), 1-34. [213] D. Halperin, Arrangements, Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 389-412. [214] D. Halperin, L.E. Kavraki and J.-C. Latombe, Robotics, Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 755-778. [215] D. Halperin, J.-C. Latombe and R.H. Wilson, A general framework for assembly planning: The motion space approach, Unpubhshed manuscript (1997). [216] D. Halperin and M.H. Overmars, Spheres, molecules and hidden surface removal, Comput. Geom. 11 (1998), 83-102. [217] D. Halperin and M. Sharir, On disjoint concave chains in arrangements ofipseudo) lines. Inform. Process. Lett. 40 (1991), 189-192. [218] D. Halperin and M. Sharir, New bounds for lower envelopes in three dimensions, with applications to visibility in terrains. Discrete Comput. Geom. 12 (1994), 313-326. [219] D. Halperin and M. Sharir, Almost tight upper bounds for the single cell and zone problems in three dimensions. Discrete Comput. Geom. 14 (1995), 385^10. [220] D. Halperin and M. Sharir, Arrangements and their applications in robotics: Recent developments, Proc. Workshop on Algorithmic Foundations of Robotics, K. Goldberg, D. Halperin, J.-C. Latombe and R. Wilson, eds, A.K. Peters, Wellesley, MA (1995), 495-511. [221] D. Halperin and M. Sharir, A near-quadratic algorithm for planning the motion of a polygon in a polygonal environment. Discrete Comput. Geom. 
16 (1996), 121-134. [222] D. Halperin and C. Shelton, A perturbation scheme for spherical arrangements with application to molecular modeling, Comput. Geom. 10 (1998), 273-287. [223] D. Halperin and C.-K. Yap, Combinatorial complexity of translating a box in polyhedral 3-space, Comput. Geom. Theory Appl. 9 (1998), 181-196. [224] S. Har-Peled, Constructing cuttings in theory and practice, Proc. 14th Symp. Comput. Geom. (1998), 327-336. [225] S. Har-Peled, Talking a walk in a planar arrangement, Proc. 40th Annual Sympos. Foundations of Comp. Sci. (1999), to appear. [226] J. Harris, Algebraic Geometry (A First Course), Springer-Verlag, Berlin, Germany (1992).

Arrangements

and their applications

115

[227] D. Haussler and E. Welzl, Epsilon-nets and simplex range queries. Discrete Comput. Geom. 2 (1987), 127-151. [228] J. Heintz, T. Reico and M.-F. Roy, Algorithms in real algebraic geometry and applications to computational geometry. Discrete and Computational Geometry: Papers from the DIMACS Special Year, J.E. Goodman, R. Pollack and W. Steiger, eds, AMS Press, Providence, RI (1991), 137-163. [229] J. Heintz, M.-F. Roy and P. Solemo, On the complexity of semialgebraic sets, Proc. IFIP San Francisco (1989), 293-298. [230] J. Heintz, M.-F. Roy and P. Solemo, Single exponential path finding in semi-algebraic sets II: The general case. Algebraic Geometry and its Applications, C. Bajaj, ed.. Springer-Verlag, New York (1993), 467^81. [231] J. Heintz, M.-F. Roy and P. Solemo, Description of the connected components of a semialgebraic set in single exponential time. Discrete Comput. Geom. 11 (1994), 121-140. [232] J.E. Hershberger and J.S. Snoeyink, Erased arrangements of lines and convex decompositions ofpolyhedra, Comput. Geom. 9 (1998), 129-143. [233] L.S. Homem de Mello and A.C. Sanderson, A correct and complete algorithm for the generation of mechanical assembly sequences, IEEE Trans. Robot. Autom. 7 (1991), 228-240. [234] D. Hsu, L.E. Kavraki, J.-C. Latombe, R. Motwani and S. Sorkin, On finding narrow passages with probabilistic roadmap planners, Proc. 1998 Workshop on the Algorithmic Foundations of Robotics, P.K. Agarwal, L.E. Kavraki and M. Mason, eds, A.K. Peters, Wellesley, MA (1998), 141-153. [235] N. Katoh, H. Tamaki and T. Tokuyama, Parametric polymatroid optimization and its geometric applications, Proc. 10th ACM-SIAM Sympos. Discrete Algorithms (1999), 517-526. [236] N. Katoh and T. Tokuyama, Lovdsz lemma for the three-dimensional k-level of concave surfaces and its applications, Proc. 40th Annual Sympos. Foundations of Comp. Sci. (1999), to appear. [237] M.J. Katz and M. Sharir, Optimal slope selection via expanders. Inform. Process. Lett. 47 (1993), 115122. [238] M.J. Katz and M. Sharir, An expander-based approach to geometric optimization, SIAM J. Comput. 26 (1997), 1384-1408. [239] L.E. Kavraki, J.-C. Latombe, R. Motwani and P. Raghavan, Randomized query processing in robot path planning, Proc. 27th Annu. ACM Sympos. Theory Comput. (1995), 353-362. [240] L.E. Kavraki, P. Svestka, J.-C. Latombe and M.H. Overmars, Probabilistic roadmaps for path planning in high dimensional configuration spaces, IEEE Trans. Robot. Autom. 12 (1996), 566-580. [241] K. Kedem, R. Livne, J. Pach and M. Sharir, On the union of Jordan regions and collision-free translational motion amidst polygonal obstacles. Discrete Comput. Geom. 1 (1986), 59-71. [242] K. Kedem and M. Sharir, An efficient motion planning algorithm for a convex rigid polygonal object in 2-dimensional polygonal space. Discrete Comput. Geom. 5 (1990), 43-75. [243] K. Kedem, M. Sharir and S. Toledo, On critical orientations in the Kedem-Sharir motion planning algorithm for a convex polygon in the plane. Discrete Comput. Geom. 17 (1997), 227-240. [244] L, Kettner, Designing a data structure for polyhedral surfaces, Proc. 14th Symp. Comput. Geom. (1998), 146-154. [245] A.G. Khovanskii, Fewnomials, Amer. Math. Soc, Providence, RI (1991). [246] D.G. Kirkpatrick and R. Seidel, The ultimate planar convex hull algorithm!, SIAM J. Comput. 15 (1986), 287-299. [247] M. Klawe, M. Paterson and N. Pippenger, Inversions with ^2^+'^^^+^^°S«) transpositions at the median. Unpublished manuscript (1982). [248] V. 
IGee, On the complexity of d-dimensional Voronoi diagrams. Arch. Math. 34 (1980), 75-80. [249] M. Las Vergnas, Convexity in oriented matroids, J. Comb. Theory, Ser. B 29 (1980), 231-243. [250] J.-C. Latombe, Robot Motion Planning, Kluwer Acad. Publ., Boston (1991). [251] B. Lee and F.M. Richards, The interpretation of protein structure: Estimation of static accessibility, J. Molecular Biology 55 (1971), 379-400. [252] F.T. Leighton, Complexity Issues in VLSI, MIT Press, Cambridge, MA (1983). [253] T. Lengauer, Algorithmic research problems in molecular bioinformatics, Proc. 2nd IEEE Israeli Sympos. Theory of Comput. and Systems (1993), 177-192. [254] D, Leven and M. Sharir, On the number of critical free contacts of a convex polygonal object moving in two-dimensional polygonal space. Discrete Comput. Geom. 2 (1987), 255-270.

116

RK. Agarwal and M. Sharir

[255] P. Lienhardt, Topological models for boundary representation: A comparison with n-dimensional generalized maps, Comput. Aided Design 23 (1991), 59-82. [256] P. Lienhardt, N-dimensional generalized combinatorial maps and cellular quasi-manifolds, Intemat. J. Comput. Geom. Appl. 4 (1994), 275-324. [257] C.-Y. Lo, J. Matousek and W.L. Steiger, Algorithms for ham-sandwich cuts, Discrete Comput. Geom. 11 (1994), 433^52. [258] L. Lovasz, On the number of halving lines, Ann. Uni. Sci. Budapest de Rolando Eotvos Nominatae, Sectio Math. 14(1971), 107-108. [259] J. Matousek, Construction of e-nets. Discrete Comput. Geom. 5 (1990), 427-448. [260] J. Matousek, Computing the center of planar point sets. Computational Geometry: Papers from the DIMACS Special Year, J.E. Goodman, R. Pollack and W. Steiger, eds, Amer. Math. Soc, Providence, RI (1991), 221-230. [261] J. Matousek, Lower bounds on the length of monotone paths in arrangements. Discrete Comput. Geom. 6 (1991), 129-134. [262] J. Matousek, Randomized optimal algorithm for slope selection. Inform. Process. Lett. 39 (1991), 183187. [263] J. Matousek, Efficient partition trees. Discrete Comput. Geom. 8 (1992), 315-334. [264] J. Matousek, Reporting points in halfspaces, Comput. Geom. 2 (1992), 169-186. [265] J. Matousek, Epsilon-nets and computational geometry. New Trends in Discrete and Computational Geometry, J, Pach, ed.. Springer-Verlag, Heidelberg, Germany (1993), 69-89. [266] J. Matousek, Linear optimization queries, J. Algorithms 14 (1993), 432^148. [267] J. Matousek, On vertical ray shooting in arrangements, Comput. Geom. 2 (1993), 279-285. [268] J. Matousek, Range searching with efficient hierarchical cuttings. Discrete Comput. Geom. 10 (1993), 157-182. [269] J. Matousek, Geometric range searching, ACM Comput. Surv. 26 (1994), 421^61. [270] J. Matousek, J. Pach, M. Sharir, S. Sifrony and E. Welzl, Fat triangles determine linearly many holes, SIAM J. Comput. 23 (1994), 154-169. [271] J. Matousek and O. Schwarzkopf, On ray shooting in convex polytopes. Discrete Comput. Geom. 10 (1993), 215-232. [272] J. Matousek and O. Schwarzkopf, A deterministic algorithm for the three-dimensional diameter problem, Comput. Geom. 6 (1996), 253-262. [273] P. McMullen and G.C. Shephard, Convex Polytopes and the Upper Bound Conjecture, Cambridge University Press, Cambridge (1971). [274] J. Mecke, Random tesselations generated by hyperplanes. Stochastic Geometry, Geometric Statistics, Stereology, Teubner, Leipzig (1984), 104-109. [275] N. Megiddo, Applying parallel computation algorithms in the design of serial algorithms, J. ACM 30 (1983), 852-865. [276] N. Megiddo, Partitioning with two lines in the plane, J. Algorithms 6 (1985), 430^33. [277] P.G. Mezey, Molecular surfaces, Reviews in Computational Chemistry, K.B. Lipkowitz and D.B. Boyd, eds. Vol. 1, VCH Publishers (1990). [278] V. Milenkovic, Calculating approximate curve arrangements using rounded arithmetic, Proc. 5th Annu. ACM Sympos. Comput. Geom. (1989), 197-207. [279] V. Milenkovic, Double precision geometry: A general technique for calculating line and segment intersections using rounded arithmetic, Proc. 30th Annu. IEEE Sympos. Found. Comput. Sci. (1989), 500-505. [280] V. Milenkovic, Robust polygon modeling, Comput. Aided Design 25 (1993). [281] J.W. Milnor, Morse Theory, Princeton University Press, Princeton, NJ (1963). [282] J.W. Milnor, On the Betti numbers of real algebraic varieties, Proc. Amer. Math. Soc. 15 (1964), 275-280. [283] B. Mishra, Computational real algebraic geometry. 
Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, PL (1997), 537-558. [284] K. Mulmuley, A fast planar partition algorithm, I, J. Symbolic Comput. 10 (1990), 253-280. [285] K. Mulmuley, A fast planar partition algorithm, II, J. ACM 38 (1991), 74-103. [286] K. Mulmuley, On levels in arrangements and Voronoi diagrams. Discrete Comput. Geom. 6 (1991), 307338.

Arrangements

and their applications

111

[287] K. Mulmuley and S. Sen, Dynamic point location in arrangements of hyperplanes. Discrete Comput. Geom. 8 (1992), 335-360. [288] P. Orlik, Introduction to Arrangements, Amer. Math. Soc, Providence, RI (1989). [289] P. Orlik, Arrangements in topology, Discrete and Computational Geometry: Papers from the DIMACS Special Year, J.E. Goodman, R. Pollack and W. Steiger, eds, AMS Press, Providence, RI (1991), 263-272. [290] P. Orlik and L. Solomon, Combinatorics and topology of complements of hyperplanes. Invent. Math. 59 (1980), 77-94. [291] P. Orlik and H. Terao, Arrangements of Hyperplanes, Springer-Verlag, Berlin, West Germany (1991). [292] M.H. Overmars and C.-K. Yap, New upper bounds in Klee's measure problem, SIAM J. Comput. 20 (1991), 1034-1045. [293] J. Pach, Finite point configurations. Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 3-18. [294] J. Pach and P.K. Agarwal, Combinatorial Geometry, Wiley, New York, NY (1995). [295] J. Pach and M. Sharir, The upper envelope of piecewise linear functions and the boundary of a region enclosed by convex plates: Combinatorial analysis. Discrete Comput. Geom. 4 (1989), 291-309. [296] J. Pach and M. Sharir, On the boundary of the union of planar convex sets. Discrete Comput. Geom. 21 (1999), 321-328. [297] J. Pach and M. Sharir, On the number of incidences between points and curves. Combinatorics, Probability and Computing 7 (1998), 121-127. [298] J. Pach, W. Steiger and E. Szemeredi, An upper bound on the number of planar k-sets. Discrete Comput. Geom. 7 (1992), 109-123. [299] G.W Peck, On k-sets in the plane. Discrete Math. 56 (1985), 73-74. [300] M. Pellegrini, Lower bounds on stabbing lines in 3-space, Comput. Geom. 3 (1993), 53-58. [301] M. Pellegrini, Ray shooting and lines in space. Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 599-614. [302] M. Pellegrini and P. Shor, Finding stabbing lines in 3-space, Discrete Comput. Geom. 8 (1992), 191-208. [303] I.G. Petrovskii and O.A. Oleinik, On the topology of real algebraic surfaces, Isvestia Akad. Nauk SSSR. Ser. Mat. 13 (1949), 389-^02. In Russian. [304] H. Plantinga and C.R. Dyer, Visibility, occlusion and the aspect graph. Internal. J. Comput. Vision 5 (1990), 137-160. [305] R. Pollack and M.F Roy, On the number of cells defined by a set of polynomials, C. R. Acad. Sci. Paris 316 (1993), 573-577. [306] A. Postnikov and R. Stanley, Deformation of Coxeter hyperplane arrangements. Unpublished manuscript (1999). [307] F.P. Preparata and R. Tamassia, Efficient point location in a convex spatial cell-complex, SIAM J. Comput. 21 (1992), 267-280. [308] S. Raab, Controlled perturbation of arrangement of polyhedral surfaces with applications of swept volumes, Proc. 15th Annual Sympos. on Comput. Geom. (1999), 163-172. [309] E. Ramos, Construction of 1-d lower envelopes and applications, Proc. 13th Annu. Sympos. Comput. Geom. (1997), 57-66. [310] E. Ramos, Intersection of unit-balls and diameter of a point set in R , Comput. Geom. 8 (1997), 57-65. [311] A. Rao and K. Goldberg, Placing registration marks, IEEE Transactions on Industrial Electronics 41 (1994). [312] J.H. Reif, Complexity of the generalized movers problem. Planning, Geometry and Complexity of Robot Motion, J. Hopcroft, J. Schwartz and M. Sharir, eds, Ablex Pub. Corp., Norwood, NJ (1987), 267-281. [313] FM. Richards, Areas, volumes, packing and protein structure, Annu. Rev. Biophys. Bioeng. 
6 (1977), 151-176. [314] J. Richter-Gebert and G.M. Ziegler, Oriented matroids. Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 111-132. [315] S. Roberts, On thefiguresformed by the intercepts of a system of straight lines in a plane and an analogous relations in space of three dimensions, Proc. London Math. Soc. 19 (1888), 405^22. [316] R. Schneider, Tessellations generated by hyperplanes. Discrete Comput. Geom. 2 (1987), 223-232. [317] J.T. Schwartz and M. Sharir, On the "piano movers" problem II: General techniques for computing topological properties of real algebraic manifolds. Adv. Appl. Math. 4 (1983), 298-351.

118

RK. Agarwal and M. Sharir

[318] J.T. Schwartz and M. Sharir, Algorithmic motion planning in robotics, Algorithms and Complexity, Handbook of Theoretical Computer Science, J. van Leeuwen, ed., Vol. A, Elsevier, Amsterdam (1990) 391^30. [319] J.T. Schwartz and M. Sharir, On the two-dimensional Davenport-Schinzel problem, J. Symbolic Comput. 10 (1990), 371-393. [320] O. Schwarzkopf and M. Sharir, Vertical decomposition of a single cell in a three-dimensional arrangement of surfaces and its applications. Discrete Comput. Geom. 18 (1997), 269-288. [321] R. Seidel, Constructing higher-dimensional convex hulls at logarithmic cost per face, Proc. 18th Annu. ACM Sympos. Theory Comput. (1986), 404-^13. [322] R. Seidel, Exact upper bounds for the number offaces in d-dimensional Voronoi diagrams, Applied Geometry and Discrete Mathematics: The Victor Klee Festschrift, R Gritzman and B. Sturmfels, eds, AMS Press, Providence, RI (1991), 517-530. [323] R. Seidel, Small-dimensional linear programming and convex hulls made easy. Discrete Comput. Geom. 6 (1991), 423-434. [324] R. Seidel, Convex hull computations. Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 361-376. [325] R. Seidel, The nature and meaning of perturbations in geometric computing. Discrete Comput. Geom. 19 (1998), 1-17. [326] M. Sharir, On k-sets in arrangements of curves and surfaces. Discrete Comput. Geom. 6 (1991), 593-613. [327] M. Sharir, Arrangements of surfaces in higher dimensions: Envelopes, single cells and other recent developments, Proc. 5th Canad. Conf. Comput. Geom. (1993), 181-186. [328] M. Sharir, Almost tight upper bounds for lower envelopes in higher dimensions. Discrete Comput. Geom. 12 (1994), 327-345. [329] M. Sharir, Algorithmic motion planning. Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press LLC, Boca Raton, FL (1997), 733-754. [330] M. Sharir and PK. Agarwal, Davenport-Schinzel Sequences and Their Geometric Applications, Cambridge University Press, New York, NY (1995). [331] M. Sharir and S. Toledo, Extremal polygon containment problems, Comput. Geom. 4 (1994), 99-118. [332] PW. Shor, Stretchability of pseudolines is NP-hard, Applied Geometry and Discrete Mathematics: The Victor Klee Festschrift, P Gritzman and B. Sturmfels, eds, AMS Press (1991), 531-554. [333] S. Smorodinsky, J. Mitchell and M. Sharir, Sharp bounds on geometric premutations ofpairwise disjoint balls in M^, Proc. 15th Annual Sympos. on Comput. Geom. (1999), 400-406. [334] D.M.Y. Sommerville, Analytical Geometry in Three Dimensions, Cambridge University Press, Cambridge (1951). [335] J. Spencer, E. Szemeredi and W.T. Trotter, Unit distances in the Euclidean plane. Graph Theory and Combinatorics, B. Bollobas, ed., Academic Press, New York, NY (1984), 293-303. [336] R. Stanley, Hyperplane arrangements, interval orders, and trees, Proc. Nat. Acad. Sci. 93 (1996), 26202625. [337] J. Steiner, Einige Gesetze liber die Theilung der Ebene und des Raumes, J. Reine Angew. Math. 1 (1826), 349-364. [338] L. Szekely, Crossing numbers and hard Erdos problems in discrete geometry. Combinatorics, Probability and Computing 6 (1997), 353-358. [339] E. Szemeredi and W. Trotter, Jr., A combinatorial distinction between Euclidean and projective planes, European J. Combin. 4 (1983), 385-394. [340] E. Szemeredi and W. Trotter, Jr., Extremal problems in discrete geometry, Combinatorica 3 (1983), 381392. [341] B. 
Tagansky, The Complexity of Substructures in Arrangements of Surfaces, PhD thesis, Tel Aviv University, Tel Aviv (1996). [342] B. Tagansky, A new technique for analyzing substructures in arrangements of piecewise linear surfaces, Discrete Comput. Geom. 16 (1996), 455-479. [343] H. Tamaki and T. Tokuyama, A characterization of planar graphs by pseudo-line arrangements, Proc. 8th Annu. Intemat. Sympos. Algorithms Comput., Lecture Notes in Comput. Sci. 1350, Springer-Verlag (1997), 123-132. [344] H. Tamaki and T. Tokuyama, How to cut pseudo-parabolas into segments. Discrete Comput. Geom. 19 (1998), 265-290.

Arrangements

and their applications

119

[345] R. Thorn, Sur I'homologie des varietes algebriques reelles, Differential and Combinatorial Topology, S.S. Cairns, ed., Princeton Univ. Press, Princeton, NJ (1965). [346] G. Toth, Point sets with many k-sets, in preparation. [347] PM. Vaidya, Geometry helps in matching, SIAM J. Comput. 18 (1989), 1201-1225. [348] P. Valtr, Lines, line-point incidences and crossing families in dense sets, Combinatorica 16 (1996), 269294. [349] M. van Kreveld, On fat partitioning, fat covering and the union size ofpolygons, Comput. Geom. 9 (1998), 197-210. [350] K.R. Varadarajan, A divide-and-conquer algorithm for min-cost perfect matching in the plane, Proc. 39th Annual Sympos. on Foundations of Comp. Sci. (1998), 320-329. [351] A. Varshney, F.P. Brooks, Jr. and W.V. Wright, Computing smooth molecular surfaces, IEEE Comput. Graph. Appl. 15 (1994), 19-25. [352] S. Vrecica and R. Zivaljevic, The colored Tverberg's problem and complexes of injective functions, J. Combin. Theory Sen A 61 (1992), 309-318. [353] H.E. Warren, Lower bound for approximation by nonlinear manifolds. Trans. Amer. Math. Soc. 133 (1968), 167-178. [354] K. Weiler, Edge-based data structures for solid modeling in a curved surface environment, IEEE Comput. Graph. Appl. 5 (1985), 21-40. [355] E. Welzl, More on k-sets of finite sets in the plane. Discrete Comput. Geom. 1 (1986), 95-100. [356] J.E. Wetzel, On the division of the plane by lines, Amer. Math. Monthly 85 (1978), 648-656. [357] A. Wiemik and M. Sharir, Planar realizations of nonlinear Davenport-Schinzel sequences by segments. Discrete Comput. Geom. 3 (1988), 15^7. [358] D.E. Willard, Polygon retrieval, SIAM J. Comput. 11 (1982), 149-165. [359] A.C. Yao and F.F. Yao, A general approach to D-dimensional geometric queries, Proc. 17th Annu. ACM Sympos. Theory Comput. (1985), 163-168. [360] T. Zaslavsky, Facing up to Arrangements: Face-Count Formulas for Partitions of Space by Hyperplanes, Amer. Math. Soc, Providence, RI (1975). [361] T. Zaslavsky, A combinatorial analysis of topological dissections. Adv. Math. 25 (1977), 267-285. [362] G.M. Ziegler, Lectures on Polytopes, Springer-Verlag, Heidelberg, Germany (1994).

This Page Intentionally Left Blank

CHAPTER 3

Discrete Geometric Shapes: Matching, Interpolation, and Approximation*

Helmut Alt
Institut für Informatik, Freie Universität Berlin, Takustraße 9, D-14195 Berlin, Germany

Leonidas J. Guibas
Computer Science Department, Stanford University, Stanford, CA 94305, USA

Contents

1. Introduction
2. Point pattern matching
   2.1. Exact point pattern matching
   2.2. Approximate point pattern matching
3. Matching of curves and areas
   3.1. Optimal matching of line segment patterns
   3.2. Approximate matching
   3.3. Distance functions for non-point objects
4. Shape simplification and approximation
   4.1. Two-dimensional results
   4.2. Three dimensions
5. Shape interpolation
References


Abstract

In this chapter we survey geometric techniques which have been used to measure the similarity or distance between shapes, as well as to approximate shapes, or interpolate between shapes. Shape is a modality which plays a key role in many disciplines, ranging from computer vision to molecular biology. We focus on algorithmic techniques based on computational geometry that have been developed for shape matching, simplification, and morphing.

*Partially supported by Deutsche Forschungsgemeinschaft (DFG), Grant No. Al 253/4-2.



1. Introduction

The matching and analysis of geometric patterns and shapes is of importance in various application areas, in particular in computer vision and pattern recognition, but also in other disciplines concerned with the form of objects such as cartography, molecular biology, and computer animation. The general situation is that we are given two objects A, B and want to know how much they resemble each other. Usually one of the objects may undergo certain transformations like translations, rotations or scalings in order to be matched with the other as well as possible. Variants of this problem include partial matching, i.e. when A resembles only some part of B, and a data structures version where, for a given object A, the most similar one in a fixed preprocessed set of objects has to be found, e.g., in character or traffic sign recognition. Another related problem is that of simplification of objects. Namely, given an object A, find the most simple object A' resembling A within a given tolerance. For example, A could be a smooth curve and A' a polygonal line with as few edges as possible. We also will discuss shape interpolation ("morphing"), a problem that has become very interesting recently, especially in computer animation. The objective is to find for two given shapes A and B a continuous transformation that transforms A into B via natural intermediate shapes.

First, it is necessary to formally define the notions of objects, resemblance, matching, and transformations. Objects are usually finite sets of points ("point patterns") or "shapes" given in two dimensions by polygons. Generalizations to, for example, polyhedral surfaces in three and higher dimensions are possible, but most of the work has concentrated on two or three dimensions. In order to measure "resemblance" various distance functions have been used; in particular, much work has been based on the so-called Hausdorff distance. For two compact subsets A, B of the d-dimensional space ℝ^d, we define the one-sided Hausdorff distance from A to B as

$\tilde\delta_H(A, B) = \max_{a \in A} \min_{b \in B} \|a - b\|,$

where ||·|| is the Euclidean distance in ℝ^d (if not explicitly stated otherwise). The (bidirectional) Hausdorff distance between A and B then is defined as

$\delta_H(A, B) = \max(\tilde\delta_H(A, B), \tilde\delta_H(B, A)).$

The Hausdorff distance simply assigns to each point of one set the distance to its closest point in the other and takes the maximum over all these values. It performs reasonably well in practice but may fail if there is noise in the images. A variant intended to be more robust will be presented in Section 2.2.3.
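To make these definitions concrete, here is a small brute-force sketch (ours, not from the chapter) for finite point sets in the plane; the Voronoi-diagram based methods discussed in Section 2.2.2 compute the same quantities much more efficiently.

```python
# Brute-force Hausdorff distances between finite point sets, O(|A|*|B|).
import math

def directed_hausdorff(A, B):
    """One-sided Hausdorff distance from A to B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Bidirectional Hausdorff distance."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
print(directed_hausdorff(A, B))  # 0.0: every point of A coincides with one of B
print(hausdorff(A, B))           # 2.0: the extra point (0, 2) of B is far from A
```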


What kind of geometric transformations are allowed to match objects A and B depends on the application. The simplest kind are certainly translations. The matching problem usually becomes much more difficult if we allow rotations and translations (these transformations are called rigid motions, or Euclidean transformations). In most cases reflections can be included as well without any further difficulty. Scaling means the linear transformation that "stretches" an object by a certain factor λ about the origin; in two dimensions it is represented by the matrix $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$. We call combinations of translations and scalings homotheties, and combinations of Euclidean transformations and scalings similarities. The most general kind of transformations we will consider are arbitrary affine transformations, which can occur, e.g., in orthographic 2-dimensional projections of 3-dimensional objects. Considerable research on these topics has been done in computational geometry in recent years. This chapter will give a survey of these results.

2. Point pattern matching

In this section we present a variety of geometric techniques for matching point sets exactly or approximately, under some allowed transformation group. We discuss methods of both theoretical and practical interest.

2.1. Exact point pattern matching

A seemingly very natural question is whether two finite sets A, B ⊂ ℝ^d of n points each can be matched exactly by, say, rigid motions, i.e. whether A and B are congruent. Of course, unless we assume that the input consists of points on a grid, this problem is numerically very unstable. Nevertheless, studying it assuming a "real RAM" model of computation gives some insight into the nature of matching problems and may help in designing algorithms for more realistic cases. Furthermore, it could be possible to implement the algorithms for rational inputs directly using arbitrary precision computations as they are provided, for example, by the LEDA library [24], but to our knowledge this possibility has not been investigated in detail yet.

In two dimensions exact point pattern matching can easily be reduced to string matching, as is shown by the following algorithm, which was invented independently by several authors, for example Atkinson [19].
1. Determine the centroids c_A, c_B (i.e. arithmetic means) of the sets A and B, respectively.
2. Determine the polar coordinates of all points in A using c_A as the origin. Then sort A lexicographically with respect to these polar coordinates (angle first, length second), obtaining a sequence (φ_1, r_1), ..., (φ_n, r_n). Let u be the sequence (ψ_1, r_1), ..., (ψ_n, r_n), where ψ_i = φ_i − φ_{(i+1) mod n}. Compute in the same way the corresponding sequence v of the set B.
3. Determine whether v is a cyclic shift of u, i.e. a substring of uu, by some fast string-matching algorithm.
It is easy to see that A and B are congruent exactly if the algorithm gives a positive answer. The running time is O(n log n) because of the sorting in step 2; all other operations take linear time.
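The following Python sketch illustrates this procedure under stated simplifications: it uses floating-point comparisons with a tolerance instead of exact "real RAM" arithmetic, and a quadratic cyclic-shift scan in place of fast string matching, so it demonstrates the idea rather than the O(n log n) bound.

```python
import math

def signature(points):
    """Rigid-motion-invariant sequence of (angle gap to next point, radius)
    pairs around the centroid, sorted by angle."""
    n = len(points)
    cx = sum(x for x, y in points) / n
    cy = sum(y for x, y in points) / n
    polar = sorted((math.atan2(y - cy, x - cx), math.hypot(x - cx, y - cy))
                   for x, y in points)
    return [((polar[i][0] - polar[(i + 1) % n][0]) % (2 * math.pi),
             polar[i][1]) for i in range(n)]

def congruent(A, B, eps=1e-9):
    if len(A) != len(B):
        return False
    u, v = signature(A), signature(B)
    n = len(u)
    # v must be a cyclic shift of u; KMP on the doubled sequence uu would
    # make this linear, as in step 3 above -- here a simple quadratic scan.
    for shift in range(n):
        if all(abs(u[(shift + i) % n][0] - v[i][0]) < eps and
               abs(u[(shift + i) % n][1] - v[i][1]) < eps
               for i in range(n)):
            return True
    return False

print(congruent([(0, 0), (1, 0), (0, 1)],
                [(2, 2), (2, 3), (1, 2)]))  # True: a rotated, translated copy
```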


For exact point pattern matching in three dimensions the following algorithm is given by Alt et al. [13]:
1. Determine the centroid c_A and project all points of A onto the unit sphere around c_A, obtaining a set A' of points on the sphere. Label each point a ∈ A' with the sorted list of distances from c_A of all points that have been mapped onto a.
2. Compute the 3-d convex hull C_A of A'.
3. In addition to the labeling of step 1, attach to each point a ∈ A' an adjacency list of the vertices connected to a by an edge of C_A, sorted in clockwise order (seen from outside). This list should contain all distances of a to adjacent points and all angles between neighboring edges.
4. Execute steps 1–3 with set B, as well.
5. The hulls C_A and C_B can be considered as labeled planar graphs. The point sets A and B are congruent exactly if these graphs are isomorphic. This isomorphism can be decided by a variant of the partition algorithm of Hopcroft (see [10], Section 4.13).
A detailed analysis shows that the running time of this algorithm is O(n log n). Using similar techniques it can be shown that the matching problem in arbitrary dimension d can be reduced to n problems in dimension d − 1. Consequently, the exact point pattern matching problem can be solved for patterns of n points in time O(n log n) in 2 dimensions and in time O(n^{d−2} log n) for arbitrary dimension d ≥ 3. An alternative approach yielding the same bound for dimension 3 was developed by Atkinson [19].

Concerning transformations other than rigid motions, in some cases there are obvious optimal algorithms for exact point pattern matching in arbitrary dimensions. For translations, for example, it suffices to match those two points with the lexicographically smallest coordinate vectors and then to check whether the other points match as well (see the code sketch below). If scaling of the pattern B to be matched is allowed, one can first determine the diameters d_A, d_B of both sets. Their ratio d_A/d_B gives the correct scaling factor for B. Therefore, there is an easy reduction of homotheties to translations and of similarities to rigid motions. Reflections can easily be incorporated by trying to match the set B as well as the set B', which is B reflected through some arbitrary hyperplane, for example, x_1 = 0. Exact point pattern matching under arbitrary affine transformations is considered by Sprinzak and Werman [88]. First the sets A and B are brought into "canonical form" by transforming their second moment matrices into unit matrices. Then it is shown that A and B can be matched under affine transformations exactly if their canonical forms can be matched under rotations. Since the canonical forms can be computed in linear time, the asymptotic time bounds for matching under linear transformations are the same as the ones for rigid motions described above.

A natural generalization of deciding whether two point patterns A and B are congruent is to ask for the largest subsets of both that are, and to find the corresponding rigid motion. Akutsu et al. [20] address this problem and solve it by a voting algorithm. More specifically, for any pair (p, q) ∈ A × B all pairs (r, s) ∈ A × B are determined where the line segments pr and qs have the same length. In this case the rigid motion that maps pr to qs gets a vote. In the end, the rigid motion that obtained the most votes is the one matching the largest subsets.
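The translation case mentioned above is simple enough to state in full; this tiny sketch (ours, not from [13]) assumes exactly representable coordinates:

```python
def translation_congruent(A, B):
    """Exact matching under translation: align the lexicographically
    smallest points; that fixes the only candidate translation."""
    if len(A) != len(B):
        return False
    a0, b0 = min(A), min(B)
    t = (b0[0] - a0[0], b0[1] - a0[1])
    return {(x + t[0], y + t[1]) for x, y in A} == set(B)

print(translation_congruent([(0, 0), (2, 1)], [(5, 5), (7, 6)]))  # True
```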


The running time of the deterministic version of this algorithm is O((λ(n, m) + n²) log n). Here λ(n, m) is a combinatorial geometric quantity related to the number of possible occurrences of a fixed length line segment in a set of points, and it has an upper bound of O(n^{4/3} m^{4/3}). Also, a more efficient Monte Carlo algorithm based on random sampling is given that produces an approximate solution to the problem.
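The voting idea can be illustrated by the following brute-force sketch; it enumerates all pairs of equal-length segments naively in O(n²m²) time and buckets rigid motions by rounded parameters, both simplifications of ours rather than the scheme of [20].

```python
import math
from collections import Counter

def motion_key(p, r, q, s, digits=6):
    """Rigid motion taking p->q and direction pr -> direction qs,
    encoded as rounded (cos, sin, tx, ty) for bucketing."""
    theta = (math.atan2(s[1] - q[1], s[0] - q[0])
             - math.atan2(r[1] - p[1], r[0] - p[0]))
    c, sn = math.cos(theta), math.sin(theta)
    tx = q[0] - (c * p[0] - sn * p[1])
    ty = q[1] - (sn * p[0] + c * p[1])
    return round(c, digits), round(sn, digits), round(tx, digits), round(ty, digits)

def best_motion(A, B, eps=1e-9):
    votes = Counter()
    for p in A:
        for q in B:
            for r in A:
                for s in B:
                    if r == p or s == q:
                        continue
                    if abs(math.dist(p, r) - math.dist(q, s)) < eps:
                        votes[motion_key(p, r, q, s)] += 1
    return votes.most_common(1)[0] if votes else None

A = [(0, 0), (1, 0), (0, 2)]
B = [(5, 5), (5, 6), (3, 5), (9, 9)]   # A rotated by 90 degrees, moved, + noise
key, count = best_motion(A, B)
print(count)  # 6: all ordered pairs from the common 3-point subset agree
```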

2.2. Approximate point pattern matching

More realistic than exact point pattern matching is approximate point pattern matching. Here, given two finite sets of points A, B, the problem is to find a transformation matching each point b ∈ B into the ε-neighborhood (ε ≥ 0) of some point a ∈ A. On the other hand, each point in A should lie in the ε-neighborhood of some transformed point of B. Clearly, there are many variants to this problem. The first distinction we make is whether A and B must have the same number of points and the matching must be a one-to-one mapping, or whether several points in one set may be matched to the same point in the other. Obviously, in the latter case we consider matching with respect to the Hausdorff-distance.

2.2.1. One-to-one matching. Alt et al. [13] give polynomial time algorithms for many variants of one-to-one matching of finite point sets. These variants are obtained by the following characteristics:
• Different types of transformations that are allowed.
• Solving either the decision problem: given ε, is there a matching? or the optimization problem: find the minimal ε allowing a matching.
• A fixed one-to-one mapping between A and B is either already given or one should be found.
• Different metrics, a concept generalized by Arkin et al. [11] to arbitrary "noise regions" around the points.
We will demonstrate the techniques used with the example of solving the decision problem, for a given ε as a Euclidean tolerance, of matching under arbitrary rigid motions without a predetermined one-to-one mapping between the point sets A = {a_1, ..., a_n} and B = {b_1, ..., b_n}. First, it can be shown by an easy geometric argument that if there exists a valid matching of B to A, then there is one where two points b_i, b_j of B are matched exactly to the boundaries of the ε-neighborhoods U_ε(a_k), U_ε(a_l) of two points in A. Consider this configuration for all 4-tuples of points a_k, a_l, b_i, b_j. Mapping b_i, b_j onto the boundaries of U_ε(a_k) and U_ε(a_l), respectively, in general leaves one degree of freedom, which is parametrized by the angle φ ∈ [0, 2π) between the vector b_i − a_k and a horizontal line. Considering any other point b_m ∈ B, m ≠ i, j, for all possible values of φ, that point will trace an algebraic curve C_m (of degree 6, in fact; see Figure 1). Being an algebraic curve of constant degree, any C_m intersects the boundary of any U_ε(a_r) at most a constant number of times, in fact, at most 12 times. So there are at most 6 intervals of the parameter φ where the image of b_m lies inside U_ε(a_r). All interval boundaries of this kind are collected. They partition the parameter space [0, 2π) into O(n²) intervals, so that for all φ in one interval the same points of B are mapped into the same neighborhoods of points of A. All these relationships are represented as edges in a bipartite graph whose two sides of nodes are A and B.


Fig. 1. Curve of point b_m when b_i, b_j are moved on the boundaries of U_ε(a_k), U_ε(a_l).

Clearly, the decision problem has a positive solution exactly if there is some φ for which the corresponding graph has a perfect matching. This is checked by finding the graph for the first subinterval of [0, 2π) and constructing a maximum matching for it. Then, while traversing the subintervals from left to right, the maximum matching is updated until a perfect matching is found or it turns out that none exists. Observe that this procedure is carried out O(n⁴) times, once for each 4-tuple a_k, a_l, b_i, b_j. A detailed analysis shows that the total running time of the algorithm is O(n⁸). In addition, determining the intersection points of the curves of degree 6 with circles could cause nontrivial numerical problems.

However, simpler and faster algorithms were found for easier variants of the one-to-one matching problem. For the case of translations only, Efrat and Itai [39] improve the bounds of [13], using geometric arguments to speed up the bipartite graph matching involved (for fixed sets, they can compute what they call the optimum bottleneck matching in time O(n^{1.5} log n)). Arkin et al. [11] give numerous efficient algorithms, mostly assuming that the ε-neighborhoods or other noise regions of the points are disjoint. For example, the problem considered above is shown to be solvable in O(n⁴ log n) time under this assumption. Also, a generalization from rigid motions to similarity transformations is given in that article. Heffernan and Schirra [59] take an alternative approach to reduce the complexity of the decision problem in point pattern matching, which they call approximate decision algorithms. They only require the algorithm to give a correct answer if the given tolerance ε is not too close to the optimal solution; more precisely, it has to lie outside the interval [ε_opt − α, ε_opt + β] for fixed α, β ≥ 0. This way, using network flow algorithms, they can reduce the running time for solving the problem described above to O(n^{2.5}). Behrends [22] also considers approximate decision algorithms. Assuming in addition that the ε-neighborhoods are disjoint, he obtains a running time of O(n² log n). The best results in the case that the mapping between A and B is predetermined are due to Imai, Sumino, and Imai [66], who analyze the lower envelope of multivariate functions in order to find the optimal solution.
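For a single fixed placement (i.e. within one subinterval), the perfect-matching test used above is plain bipartite matching; the following self-contained sketch (our illustration, not the incremental structure of [13]) decides for a fixed pose whether a one-to-one matching within tolerance ε exists.

```python
import math

def matchable(A, B, eps):
    """Is there a perfect one-to-one matching of B to A with all
    matched distances <= eps?  Kuhn's augmenting paths, O(n^3)."""
    adj = [[j for j, a in enumerate(A) if math.dist(b, a) <= eps] for b in B]
    match_of_a = [None] * len(A)

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match_of_a[j] is None or augment(match_of_a[j], seen):
                    match_of_a[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(len(B)))

A = [(0, 0), (1, 0)]
B = [(0.05, 0.0), (0.95, 0.05)]
print(matchable(A, B, 0.1))    # True
print(matchable(A, B, 0.01))   # False
```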


Fig. 2. Voronoi surface of A.

2.2.2. Point pattern matching with respect to Hausdorff-distance. Now A and B may have different cardinalities; let A = {a_1, ..., a_n} and B = {b_1, ..., b_m}. The Hausdorff-distance between A and B can be computed straightforwardly in time O(nm). It is more efficient to construct the Voronoi diagrams VD(A) and VD(B) and to locate each point of A in VD(B) and vice versa in order to determine its nearest neighbor in the other set. This way the running time can be reduced to O((n + m) log(n + m)) (see [2]). Algorithms for optimally matching A, B under translations for arbitrary L_r-metrics in 2 and 3 dimensions using Voronoi diagrams are given by Huttenlocher, Kedem, and Sharir [56]. The idea of these algorithms is as follows: The Voronoi surface of the set A is the graph of the function

$d(x) = \min_{a \in A} \|x - a\|,$

which assigns to each point x the distance to the nearest point in A. Clearly, d(x) is the lower envelope of all $d_a(x) = \|x - a\|$, where a ∈ A. For example, for L₂ and dimension 2 the graph of d_a(x) is an infinite cone in 3-dimensional space whose apex lies in a (see Figure 2). The graph of d(x) is piecewise composed of these cones, and the projection of the boundaries of these pieces is the Voronoi diagram of A. If B is translated by some vector t, the distance of any b ∈ B to its nearest neighbor in A is

$\delta_b(t) = \min_{a \in A} \|a - (b + t)\| = \min_{a \in A} \|(a - b) - t\| = \min_{x \in A - b} d_x(t),$


so the graph of δ_b is the Voronoi surface of A translated by the vector −b. The directed Hausdorff distance $\tilde\delta_H(B + t, A)$ is the function

$f(t) = \max_{b \in B} \delta_b(t)$

and, consequently, the upper envelope of m Voronoi surfaces, namely those of A − b_1, A − b_2, ..., A − b_m. On the other hand, we consider

$g(t) = \tilde\delta_H(A, B + t).$

Since $g(t) = \tilde\delta_H(A + (-t), B)$, we can define g(t) by upper envelopes like f(t), interchanging the roles of A and B and replacing t by −t. The Hausdorff-distance between A and B + t is then $h(t) = \max(f(t), g(t))$. Again, for L₂ the graph of h is composed of piecewise "conic" segments. We are searching for $\min_t h(t)$. This minimum is found by determining all local minima of h(t). By the bounds on the number of these minima derived in [56], algorithms are obtained for matching 2- and 3-dimensional finite point sets under translations minimizing the Hausdorff distance. In 2 dimensions their running times are O(nm(n + m) log nm) for the L₁- and L∞-metrics and O(nm(n + m) α(nm) log(n + m)) for the other L_r-metrics, r = 2, 3, .... In 3 dimensions, time O((nm)²(n + m)^{1+ε}) is obtained for the L₂-metric.

At first glance it is not clear how the technique described earlier can be generalized from translations to arbitrary rigid motions. However, this is done by Huttenlocher et al. in [54] by considering so-called dynamic Voronoi diagrams. Here it is assumed that we have a point set consisting of k rigid subsets of n points each. Each of the subsets may move according to some continuous function of time. The dynamic Voronoi diagram is the subdivision of the 3-dimensional space-time such that every cross section obtained by fixing some time t equals the Voronoi diagram at time t. The authors investigate how many topological changes the Voronoi diagram can undergo as time passes, which gives upper bounds on the complexity of the dynamic Voronoi diagram. These results are applied to matching under rigid motions by representing the optimal solution as

$D(A, B) = \min_{\theta, x} \delta_H(r_\theta(A), B + x),$

where x ∈ ℝ² is the translation vector and r_θ is the rotation around the origin by angle θ ∈ [0, 2π). For fixed θ we have the situation described before in the case of translations. The (directed) optimal Hausdorff-distance can be determined by finding the minimum of the upper envelope of m Voronoi surfaces, namely the ones of r_θ(A) − b_1, ..., r_θ(A) − b_m. The minimization algorithm keeps track of this for changing values of θ by considering the dynamic Voronoi diagram of these sets, where θ is identified with the time parameter. As a consequence, an optimal match of two point sets under arbitrary rigid motions can be found in time O((m + n)⁶ log(mn)).
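As a purely numerical illustration of what f, g and h measure, the following sketch approximates min_t h(t) for translations by sampling t on a grid; the exact algorithms of [56] instead locate the local minima of the upper envelope combinatorially.

```python
import math

def h(A, B, t):
    """Hausdorff distance between A and B translated by t: max(f(t), g(t))."""
    Bt = [(x + t[0], y + t[1]) for x, y in B]
    f = max(min(math.dist(b, a) for a in A) for b in Bt)   # f(t)
    g = max(min(math.dist(a, b) for b in Bt) for a in A)   # g(t)
    return max(f, g)

A = [(0, 0), (1, 0), (0, 1)]
B = [(5, 5), (6, 5), (5, 6)]          # A translated by (5, 5)
grid = [i / 10.0 for i in range(-70, -30)]
best = min(((dx, dy) for dx in grid for dy in grid), key=lambda t: h(A, B, t))
print(best, h(A, B, best))            # (-5.0, -5.0) 0.0
```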


Matching of point patterns under translations in higher dimensions is investigated by Chew et al. [27]. For the decision problem in the case of the L∞-metric, the space of feasible translations is an intersection of unions of unit boxes. This space is maintained using a modification of the data structure of orthogonal partition trees by Overmars and Yap [79]. This gives algorithms for the decision problem, which are used to solve the optimization problem by parametric search [74,32]. In particular, for the L∞-metric an algorithm of running time O(n^{(4d−2)/3} log² n) is obtained, where n is the number of points in both patterns. For d-dimensional point patterns under the L₂-metric, the matching takes time O(n^{⌈3d/2⌉+1} log² n).

The methods described before are probably quite difficult to implement and numerically unstable due to the necessary computation of intersection points of algebraic surfaces. A much simpler but practically more promising method is given by Goodrich et al. in [48]. For a "pattern" P and a "background" B of m and n points, respectively, it approximates the optimal directed Hausdorff-distance $\min_T \tilde\delta_H(T(P), B)$ up to some constant factor. T ranges over all possible transformations, which are translations in arbitrary dimensions or rigid motions in 2 or 3 dimensions. The approximation factors are between 2 + ε for translations in ℝ^d and 8 + 6ε for rigid motions in ℝ³. The running times are considerably faster than the ones of algorithms computing the optimum exactly. The algorithm for rigid motions essentially works as follows:
1. Fix a pair (p, q) of diametrically opposite points in P.
2. Match the pair by some rigid motion as well as possible to each pair of points of B.
3. For each such rigid motion, determine the distance of the image of each point in P to its nearest neighbor. Choose the match where the maximum of these distances is minimized.
In higher dimensions the nearest neighbor search is done approximately using the data structure of Arya et al. [12].
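A 2-D toy rendering of this alignment-based scheme may clarify it; the error guarantee and running time of [48] depend on details (approximate nearest neighbors, the choice of candidate pairs) not reproduced in this sketch of ours.

```python
# Anchor a diametral pair of P against every ordered pair of B and keep the
# pose with the smallest directed Hausdorff distance; brute-force throughout.
import math
from itertools import combinations, permutations

def approx_min_hausdorff(P, B):
    p, q = max(combinations(P, 2), key=lambda e: math.dist(*e))  # diametral pair
    best = math.inf
    for b1, b2 in permutations(B, 2):
        theta = (math.atan2(b2[1] - b1[1], b2[0] - b1[0])
                 - math.atan2(q[1] - p[1], q[0] - p[0]))
        c, s = math.cos(theta), math.sin(theta)
        tx, ty = b1[0] - (c * p[0] - s * p[1]), b1[1] - (s * p[0] + c * p[1])
        img = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in P]
        d = max(min(math.dist(u, v) for v in B) for u in img)  # directed HD
        best = min(best, d)
    return best

P = [(0, 0), (1, 0)]
B = [(3, 3), (3, 4), (10, 10)]
print(round(approx_min_hausdorff(P, B), 3))  # 0.0: P fits onto (3,3)-(3,4)
```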

2.2.3. Practical variations

Percentile-Based Hausdorff Distance

As was mentioned above, the Hausdorff-distance is probably the most natural function for measuring the distance between sets of points. Furthermore, it can easily be applied to partial matching problems. In fact, suppose that sets A and B are given, where A is a large "image" and B is a "model" of which we want to know whether it matches some part of A. This is the case exactly if there is some allowable transformation T such that the one-way Hausdorff-distance $\tilde\delta_H(T(B), A)$ is small. In fact, many of the matching algorithms with respect to the Hausdorff-distance presented previously can be applied to partial matching as well. An application of this property to the matching of binary images is given by Huttenlocher et al. in [55], where a discrete variant of the Voronoi-diagram approach for matching under translation with respect to the Hausdorff-distance is used. In the same article a modification of the Hausdorff-distance is suggested for the case that it is not necessary to match B completely to some part of A but only at least k of the m points in B. In fact, the distance measure being used is

$h_k(B, A) = \min^k_{b \in B} \min_{a \in A} \|a - b\|,$


where $\min^k$ denotes the k-th smallest rather than the largest value. This percentile definition allows us to overcome the sensitivity of the Hausdorff-distance to outliers, which is very important in practice. The paper [55] is also interesting in that the authors show how to adapt some of the conceptual geometric ideas presented earlier to their rasterized context so as to obtain, after several other optimizations they invented, efficient practical algorithms for the partial Hausdorff matching described above.
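A direct transcription of this percentile distance (our brute-force sketch; a Voronoi diagram or k-d tree would speed up the inner loop):

```python
import math

def partial_hausdorff(B, A, k):
    """k-th smallest nearest-neighbor distance from points of B to A."""
    dists = sorted(min(math.dist(b, a) for a in A) for b in B)
    return dists[k - 1]

A = [(0, 0), (1, 0), (2, 0)]
B = [(0.0, 0.1), (1.0, 0.1), (9.0, 9.0)]   # one outlier
print(partial_hausdorff(B, A, len(B)))      # ~11.4: ordinary directed Hausdorff
print(partial_hausdorff(B, A, 2))           # 0.1: the outlier is ignored
```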

Alignment and Geometric Hashing

A number of other techniques for point pattern matching have been developed and used in computer vision and computational molecular biology. In computer vision the point pattern matching problem arises in the context of model-based recognition [38,21] — in this case we are interested in knowing whether a known object (we will call it the model M) appears in an image of a scene (which we will denote by S). Both the model and the scene are subjected to a feature extraction process whose details need not concern us here. The outcome of this process is to represent both M and S as a collection of geometric objects, most commonly points. The same principle applies to the molecular biology applications — typically molecular docking [77]. In that context we are trying to decide if the pattern, usually a small molecule (the ligand), can sterically fit into a cavity, usually the active site of some protein. Again, through a feature extraction process, both the ligand and the active site can be modeled by point sets. The dimensionalities of the point sets M and S, as well as the transformation group we are allowed to use in matching M to S, are application dependent. In computer vision S is always 2-D while M can be either 2-D or 3-D; both are 3-D in the biology context. To illustrate the methods of alignment and of geometric hashing we use below an example in which both M and S are planar point sets of cardinalities m and s, respectively. In the example we will assume that the allowed transformation group when matching M to S is the group of similarities, i.e. Euclidean transformations and scalings. We are interested in one-way matches from M to S, in the sense of the one-way percentile Hausdorff distance: we will be looking for transformations that place many (most) points of M near points of S. The extension of these ideas to the case of other dimensions and other transformation groups is in general straightforward; the one exception is when the allowed transformations include a (dimension-reducing) projection — about which more below.

In the alignment method [62], two points a and b of M are first chosen to define a reference coordinate frame [a; b] for the model. We can think of the first point as the origin (0,0) and the second point as the unit along the x-axis (1,0). This choice also fixes the y-axis and thus an orthogonal coordinate system in which all points of M can be represented by two real values. Note that this representation of the points in M is invariant under translations, rotations, and scalings. We now align the points a and b of M with two chosen points p and q of S, respectively. Up to a reflection, this fixes a proposed similarity mapping M to S (we will ignore the reflection case in what follows). In order to test the goodness of this proposed transformation, we express all points of S using coordinates in the frame [p; q]. Now that we have a common coordinate system for the two sets, we just check for every point of M to see if there is a point of S nearby (within some preselected error tolerance).


The number of points of M that can thus match in this verification step is the score for the particular transformation we are considering. The alignment method consists of trying in this way to align pairs of points of M with pairs of points of S and in the process discovering those transformations that have the highest matching score. If we could assume that all points of M are present in S, in principle we could get by with matching and verifying a specific pair from M against all its counterparts in S. Because of occlusions, however, this assumption cannot be made in practice, and usually we need to try many pairs of points from M before the correct match is found. Alignment is thus an exhaustive method and its worst-case combinatorial complexity is bad, O(m³s²) — even assuming only a linear O(m) verification cost (O(m log s) would be a more theoretically correct bound). Things get worse as the size of the frames we need to consider increases with higher dimensions or larger transformation groups. Thus in the vision context a lot of attention must be given to the feature extraction process, so that only the most critical and significant features of each of M and S are used during the alignment.

Since we may want to match the same model M into many scenes or, conversely, we may be looking for the presence of any one of a set of models M_1, M_2, ..., M_k in a given scene, it makes sense to try to speed up the matching computation through the use of preprocessing. This leads to the idea of geometric hashing [72,70,71]. Let us describe geometric hashing in the same context as the above alignment problem, but with several models M_1, M_2, ..., M_k. As above, for each model M_i and each frame [a; b] of two points for that model, we calculate coordinates for all other points of M_i in that frame. The novel aspect of geometric hashing is that these coordinates are then used as a key to hash into a global hash table. We record at that hash table entry the model index and frame pair the entry came from. The computation of this hash table completes the preprocessing phase of the algorithm. Note that there is a hash table entry for each triplet (model, frame for that model, other point in that model); multiple triplets may hash to the same table entry, in which case they are linked together in a standard fashion. At recognition time, we choose a frame pair [p; q] in the scene S and compute the coordinates of all points of S in that frame. Using these coordinates as keys, we then hash into the global hash table and 'vote' for each (model, frame) pair stored at that hash table entry. If we were lucky enough to choose two scene points which correspond to two points [a; b] in an instance of some model M_i, we can then expect that the pair (M_i, [a; b]) will get many votes during this process, thus signaling the presence of M_i in the scene and also indicating the matching transformation involved. In general, of course, we cannot expect to be so lucky in choosing p and q the first time around, so we will have to repeat the voting experiment several times. In the worst case, preprocessing for a model M of size m costs O(m³) and the recognition by voting also costs O(s³) (we assume throughout that the cost of accessing the hash table is constant). Note that by appropriately rounding the coordinates used as a key to the hash table we can allow for a certain error tolerance in matching points of M and S.
Also, once some promising (model, frame) pairs have been identified, the votes for the winner actually give us a proposed correspondence between model and scene points. The matching transformation can then be calculated more accurately using a least-squares fit [71]. As was mentioned above for the alignment problem, these ideas also extend to matching point sets in 3-D, as well as to other transformation groups, such as the group of affine maps.
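The least-squares similarity fit mentioned here has a simple closed form when the points are viewed as complex numbers; the following sketch is our own rendering of the standard normal equations, not the specific procedure of [71].

```python
def fit_similarity(model_pts, scene_pts):
    """Least-squares similarity z -> a*z + b (points as complex numbers)
    taking the model points onto the corresponding scene points; a encodes
    rotation and scale, b the translation.  Requires at least two distinct
    model points (otherwise the denominator below vanishes)."""
    m = [complex(x, y) for x, y in model_pts]
    s = [complex(x, y) for x, y in scene_pts]
    n = len(m)
    sm, ss = sum(m), sum(s)
    denom = sum(abs(z) ** 2 for z in m) - abs(sm) ** 2 / n
    a = (sum(sk * mk.conjugate() for mk, sk in zip(m, s))
         - ss * sm.conjugate() / n) / denom
    b = (ss - a * sm) / n
    return a, b
```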


One noteworthy aspect of this in the vision case is the dimension-reducing projection maps that must be allowed in matching (as when M is 3-D but S is 2-D). This makes the problem harder, as the projection map is not invertible and a point in S has an inverse image which is a line in 3-D. This can be handled in geometric hashing by having each point of S, after a frame has been chosen, generate samples along a line of possible matching points from M in 3-D and vote for each of them separately [72]. Geometric hashing has been successfully used both in identifying CAD/CAM models in scenes and in the molecular docking problem [72,70,71,77,78]. Performance in practice tends to be much better than the above combinatorial worst-case bounds would indicate. Recent theoretical studies also suggest that randomization can improve the above bounds for alignment and geometric hashing, especially in cases where the model is not present in the scene, or when the point sets involved have limited 'self-similarity' [65].

3. Matching of curves and areas

Apart from point patterns, research has been done in recent years also on the resemblance of more complex patterns and shapes, mostly in two dimensions. These objects usually are assumed to be given by polygonal chains or by one or more simple polygons representing their boundary. As a measure of their resemblance usually the Hausdorff-distance is used, though some articles are concerned with other variants, as described in Section 2.2.3.

3.1. Optimal matching of line segment patterns

Throughout this section we will assume, if not explicitly stated otherwise, that the input to the algorithms consists of two sets A, B of n and m line segments in two dimensions, respectively. The aim is to find an optimal match between A and B, i.e. a transformation T minimizing the Hausdorff-distance δ_H(A, T(B)). Here, A and B are identified with the sets of points lying on their line segments, and the metric underlying the Hausdorff-distance is L2. Notice that while there is a straightforward O(nm) algorithm for computing the Hausdorff distance between fixed finite point sets A and B, this is no longer the case for sets of line segments. In the case of convex polygons Atallah [18] gave a linear time algorithm. For arbitrary sets of line segments an asymptotically optimal O(n log n) algorithm was given by Alt et al. in [2]. This algorithm is based on the fact that the Hausdorff distance can only occur at endpoints of line segments or at intersection points of line segments of A with the Voronoi diagram of B, or vice versa. Furthermore, for any Voronoi edge this can happen only at the two extreme intersection points with line segments of the other set. These points are then determined by a line sweep algorithm. In [2] it was also observed that the matching problem under translations or rigid motions can be solved in polynomial time. These results are based on the fact that if the transformation has k degrees of freedom (e.g. k = 3 for rigid motions) then in the optimal position the Hausdorff-distance essentially must occur in at least k + 1 different places. More sophisticated techniques leading to asymptotically faster, but probably practically quite complicated algorithms are used by Agarwal et al. [16] for translations and Chew et al. [29] for arbitrary rigid motions. Both articles start with essentially the same idea. They first solve the decision problem of whether for given A, B, and ε > 0 there exists a transformation T such that δ_H(A, T(B)) ≤ ε. Let us consider the one-way Hausdorff-distance in detail; with just a few technical details this can be extended to the two-way Hausdorff-distance. Let C_ε be the disk of radius ε around the origin and A_ε the Minkowski sum A ⊕ C_ε. Clearly, A_ε is the union of so-called "racetracks" ([16]), i.e. rectangles of width 2ε with semidisks of radius ε attached at their ends (see Figure 3).

Fig. 3. The set A_ε.

Now, for a transformation T the one-way Hausdorff-distance δ_H(T(B), A) ≤ ε holds exactly if T(B) ⊆ A_ε. Let us consider the case of translations first. Suppose t is a translation vector not satisfying this inclusion. So we have B + t ⊄ A_ε, in particular t + b_i ⊄ A_ε for some line segment b_i ∈ B. This is equivalent to t ∈ Ā_ε ⊕ (-b_i), where Ā_ε denotes the complement ℝ² \ A_ε of A_ε. Conversely, a translation t moves B into A_ε exactly if t ∉ Ā_ε ⊕ (-b_i) for i = 1, ..., m. The complement of the set Ā_ε ⊕ (-b_i) we will denote by A_i^ε (see Figure 4), so there is a one-way match exactly if the set

    S = A_1^ε ∩ A_2^ε ∩ ... ∩ A_m^ε

is nonempty. Figure 4 shows that in the construction of the sets A_i^ε, i = 1, ..., m, we use the circular arcs and line segments bounding A_ε and, additionally, these curves translated by the vector b_i; altogether there are O(nm) circular arcs and line segments. Each A_i^ε is a union of some of the cells of the arrangement A defined by these curves. The decision problem for translations can be solved by a sweep line algorithm. While sweeping across the arrangement the algorithm computes the depth of the cells, i.e. the number of different A_i^ε that cover a cell. Clearly, S is nonempty exactly if there is a cell of depth m. Since the complexity of the arrangement is O((mn)²), the sweep line algorithm solves the decision problem for translations in O((mn)² log(mn)) time (see [16]). In the case of rigid motions (see [29]) we assume that first the set B is rotated around the origin and then translated in order to match the set A. For each orientation θ ∈ [0, 2π) we consider the sets A_i^ε(θ), which are defined like A_i^ε, only that b_i is rotated by angle θ around the origin. Likewise the arrangement A(θ) depends on the orientation θ.
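For contrast, recall the remark at the beginning of this section that for finite point sets the Hausdorff distance itself is a straightforward O(nm) computation; the arrangement machinery above is needed precisely because A and B are unions of segments and we optimize over translations. A minimal sketch of the point-set case (points as 2-D tuples; names are ours):

```python
import math

def directed_hausdorff(A, B):
    """One-way Hausdorff distance from finite point set A to B: for each
    a in A take the distance to the nearest b in B, and return the
    largest such value.  Brute force, O(|A| * |B|)."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric (two-way) Hausdorff distance."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```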


Fig. 4. The set A_i^ε (shaded).

Fig. 5. Orientations where the arrangement changes: (a) double event; (b) triple event.

A(θ) and the depth of its cells can be determined as described before. Then, while increasing θ, the algorithm keeps track of the changes that occur in the arrangement and of the depth of the newly appearing cells. The arrangement only changes topologically at orientations θ where two sets of the form (a_j ⊕ C_ε) ⊕ (-b_i) touch each other, a so-called "double event" (see Figure 5(a)), or the boundaries of three of them intersect in one point, a "triple event" (see Figure 5(b)). Altogether, there are O(m³n³) events. The algorithm determines them and sorts them in a preprocessing phase. Then it makes use of a suitable data structure in which all local changes in the arrangement due to an event can be processed in constant time. In this manner, starting from A(0), all arrangements are inspected to determine whether for some θ there is a cell in A(θ) of depth m. If so, a positive answer is given to the decision problem. Altogether the algorithm requires O(m³n³ log(mn)) time. In both cases the algorithms for the decision problem can be turned into algorithms for the optimization problem by parametric search (see [74,32]) on the parameter ε. For matching by translation, Agarwal et al. [16] describe a parallel algorithm which first computes the arrangement A. Then it determines the depth of its cells by considering the dual graph, finding an Eulerian path in it, and using that path to traverse the cells systematically. This parallel algorithm is used to direct the parametric search. Altogether, an O((mn)² log³(mn)) algorithm for finding the optimal matching under translations is obtained. For optimal matching by rigid motions, Chew et al. [29] use an EREW-PRAM sorting algorithm for the events to direct the parametric search. Whenever this algorithm attempts to compare two events θ_1(ε), θ_2(ε), the set of critical parameters ε is determined and sorted. Then a binary search is done on these critical parameters, in each step invoking the decision algorithm described before, to determine the interval containing the optimal ε. Altogether an O((mn)³ log²(mn)) algorithm is obtained for optimal matching under rigid motions. With essentially the same ideas and the usage of dynamic Voronoi diagrams, efficient algorithms for point pattern matching are obtained in [29], as was mentioned in Section 2. Huttenlocher et al. [56] also extend the Voronoi diagram approach described in Section 2 to sets of line segments. This method leads to rather complicated surfaces if the Hausdorff-distance with respect to the L2-metric is considered. However, if the underlying metric is L1 or L∞ the situation is simpler and an O((mn)² α(mn)) algorithm can be obtained.
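In practice (as remarked below in the context of the Fréchet distance), the parametric-search step is often replaced by plain binary search on ε, which yields the optimum only up to a prescribed tolerance but is far simpler. A generic sketch of this reduction from optimization to decision; the function names and tolerance are our own choices:

```python
def minimize_eps(decide, lo, hi, tol=1e-6):
    """Approximately minimize eps subject to decide(eps) == True, assuming
    decide is monotone (once true, it stays true as eps grows).
    A practical stand-in for parametric search, accurate to tol."""
    assert decide(hi), "upper bound must be feasible"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if decide(mid):
            hi = mid
        else:
            lo = mid
    return hi
```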

3.2. Approximate matching

As we have seen in the previous sections, the algorithms for finding the optimal solution of the matching problem use quite sophisticated techniques from computational geometry and, therefore, are probably too complicated to implement. Also the asymptotic running times are, although polynomial, rather high. One approach to overcoming these problems is approximation algorithms for the optimization problem. These are algorithms that do not necessarily find the optimal solution, but find one whose distance is within a constant factor c of the optimum. Again, if not explicitly stated otherwise, we will consider matching of sets of line segments with respect to the Hausdorff-distance based on the L2-metric. Approximation algorithms in this context were first considered by Alt et al. [2,3] using so-called reference points. These are points r_A, r_B that are assigned to the sets A and B and have the property that when B is transformed to match A optimally, then the distance of the transformed r_B to r_A is also bounded by a constant factor α times the Hausdorff-distance of the matching. The factor α is called the quality of the reference point. A reference point can be found very easily in the case of translations as allowable transformations. In fact, to a set A assign the point r_A = (x_min(A), y_min(A)), where x_min(A) (resp. y_min(A)) is the lowest x-coordinate (resp. y-coordinate) of any point in A. So r_A is the lower left corner of the smallest axis-parallel rectangle enclosing A. Observe that if in an optimal match A and the translated image B′ of B have Hausdorff-distance δ, then |x_min(A) - x_min(B′)| ≤ δ and |y_min(A) - y_min(B′)| ≤ δ (see Figure 6). Consequently, the distance of r_A and r_{B′}, which is the image of r_B under the translation, is at most √2·δ. So r_A is a reference point for A of quality √2. Now, suppose that instead of finding the optimal translation of B to match A we use just the one that matches r_B to r_A, obtaining an image B″ of B. Then, since B″ is obtained from B′ by a translation by the vector r_A - r_{B′}, we have δ_H(B′, B″) ≤ √2·δ. Consequently

    δ_H(A, B″) ≤ δ_H(A, B′) + δ_H(B′, B″) ≤ (√2 + 1)·δ.

So the Hausdorff-distance of the match found via the reference points is at most a factor √2 + 1 ≈ 2.4 worse than the optimal one. In general, matching with respect to a reference point of quality α for translations yields a match that is at most a factor α + 1 worse than the optimal one. Observe that the approximation algorithm has linear running time, since only the reference points need to be determined. So it is much faster and much simpler than the best known algorithm for finding the optimal match, which has running time O((mn)² log³(mn)) [16]. Reference points for rigid motions are not that easy to find. Obviously the one for translations given above does not work any more. Nor do seemingly obvious choices like the center of gravity of the convex hull of a set A or the center of the smallest enclosing circle. It was shown by Alt et al. [2] that for an arbitrary bounded set A the center of gravity of the boundary of the convex hull is a reference point with respect to rigid motions. However, the upper bound given for the quality of this reference point is rather large; in fact, it is 4π + 4 ≈ 16.6.
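For the translation case above, the whole approximation algorithm is only a few lines; a sketch, with segments given as pairs of endpoint tuples (this representation and the names are our own):

```python
def reference_point(segments):
    """Lower left corner of the smallest axis-parallel rectangle enclosing
    the segments: a reference point of quality sqrt(2) for translations."""
    xs = [x for seg in segments for (x, y) in seg]
    ys = [y for seg in segments for (x, y) in seg]
    return min(xs), min(ys)

def approx_match_translation(A, B):
    """Translate B so that the two reference points coincide; by the
    argument above the resulting Hausdorff distance is at most a factor
    sqrt(2) + 1 away from the optimal translation."""
    (ax, ay), (bx, by) = reference_point(A), reference_point(B)
    tx, ty = ax - bx, ay - by
    moved = [((x1 + tx, y1 + ty), (x2 + tx, y2 + ty))
             for ((x1, y1), (x2, y2)) in B]
    return (tx, ty), moved
```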

Aichholzer et al. in [1] found a better reference point. In fact, imagine that the axis-parallel wedge determining the reference point for translations given above is circled around the set A, always keeping the bounding half-lines tangent to A. Then its apex describes a closed curve. For the "average point" on this curve no particular direction is preferred, so it might be a candidate for a reference point under rigid motions. Formalizing and slightly simplifying this idea, we obtain the following definition for arbitrary bounded sets A:

    s(A) := (1/π) ∫₀^{2π} h_A(φ) (cos φ, sin φ)^T dφ,

where h_A(φ) is the so-called support function of A, which assigns to φ the largest extent of A in direction φ. The point s(A) is the so-called Steiner point of A. The Steiner point is well investigated in the field of convex geometry [85,52] as well as in functional analysis [80]. Using these results it is shown in [1] that the Steiner point is a reference point not only for translations and rigid motions, but even for similarities, and not only for two but for arbitrary dimensions. Its quality is 4/π ≈ 1.27 in two, 1.5 in three, and between √(2/π)·√d and √(2/π)·√d + 1 in d dimensions. Usually the Steiner point is defined for convex bodies only; it can be extended to arbitrary bounded sets by taking the Steiner point of the convex hull. In the case of sets of line segments we obtain convex polygons, for which the Steiner point can easily be computed. In fact, it is the weighted average of the vertices where each vertex is weighted by its exterior angle divided by 2π. Furthermore, Przeslawski and Yost [80] showed that for translations there is no reference point whose quality is better than that of the Steiner point. In the case of translations the usage of a reference point for approximate matching was obvious. In the case of rigid motions first the two reference points are matched by a translation and then the optimal matching is sought under rotations around the common reference point. This is easier than the general optimization problem since the matching of the reference points reduces the number of degrees of freedom by two (in two dimensions). In the case of similarities, figure B is first stretched by the factor d_A/d_B, where d_A and d_B are the diameters of A and B, respectively. Then the algorithm for rigid motions is applied. In [1] it is shown that from reference points of quality α approximation algorithms are obtained yielding a solution within a factor of α + 1 of the optimal one in the case of rigid motions and within a factor of α + 3 of the optimum in the case of similarities. The running times for both approximate matching algorithms are O(nm log(nm) log*(nm)). Finally it should be mentioned that by an idea due to Schirra [82] it is possible to get the approximation constant of reference point based matching with respect to translations or rigid motions arbitrarily close to 1. In fact, suppose that the quality of the reference point is α. This means that in the optimal match the reference point r_B is mapped into the αδ-neighborhood U of r_A. In order to achieve an approximation constant 1 + ε for a given ε, we place onto U a sufficiently small grid so that no point in U has distance greater than εδ from the nearest grid point. Instead of placing r_B onto r_A only, we place it onto each grid point and proceed as described before. Since at some point r_B is placed at a distance of at most εδ from its optimal position, the approximation constant is at most 1 + ε. Notice that for a constant ε only constantly many grid points are considered, so the running time only changes by a constant factor.

Fig. 7. Two curves with small Hausdorff-distance δ.
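Since for sets of line segments the Steiner point is just the exterior-angle-weighted vertex average of the convex hull, it is easy to compute once the hull is available. A sketch for a convex polygon given in counterclockwise order (the hull computation itself is omitted; names are ours):

```python
import math

def steiner_point(hull):
    """Steiner point of a convex polygon: the average of its vertices,
    each weighted by its exterior angle divided by 2*pi (the weights sum
    to 1 because the exterior angles of a convex polygon sum to 2*pi)."""
    n = len(hull)
    sx = sy = 0.0
    for i in range(n):
        px, py = hull[i - 1]                   # previous vertex
        qx, qy = hull[i]                       # current vertex
        rx, ry = hull[(i + 1) % n]             # next vertex
        a_in = math.atan2(qy - py, qx - px)    # incoming edge direction
        a_out = math.atan2(ry - qy, rx - qx)   # outgoing edge direction
        ext = (a_out - a_in) % (2.0 * math.pi) # exterior angle at q
        sx += ext * qx
        sy += ext * qy
    return sx / (2.0 * math.pi), sy / (2.0 * math.pi)
```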

3.3. Distance functions for non-point objects

In some applications, the simplicity of the Hausdorff-distance can be a disadvantage. In fact, when the distance between curves is measured, the Hausdorff-distance may give a wrong picture. Figure 7 shows an example where two curves have a small Hausdorff-distance although they have no resemblance at all. The reason for this problem is that the Hausdorff-distance is only concerned with the point sets but not with the course of the curves. A distance measure that does take the courses of the curves into account can informally be illustrated as follows: Suppose a man is walking his dog; he is walking on one curve, the dog on the other. Both are allowed to control their speed but not to go backward. What is the shortest length of a leash that makes this possible? Formally, this distance measure between two curves in d-dimensional space can be described as follows:

    δ_F(f, g) = inf_{α,β} max_{t ∈ [0,1]} ||f(α(t)) - g(β(t))||,

where f, g : [0,1] → ℝ^d are parameterizations of the two curves and α, β : [0,1] → [0,1] range over all continuous and monotone increasing functions. This distance measure is known under the name Fréchet-distance. The Fréchet-distance seems considerably more difficult to handle than the Hausdorff-distance and no matching algorithms have been developed yet. The following algorithm for measuring the Fréchet-distance between two polygonal chains has been given by Alt and Godau [8,9]. Let P and Q be the given polygonal chains consisting of n and m line segments respectively. First we consider the decision problem, so in addition to P and Q some ε > 0 is given and we want to decide whether δ_F(P, Q) ≤ ε. We first consider the m × n diagram D_ε(P, Q) shown in Figure 8, which indicates by the white area for which points p ∈ P, q ∈ Q we have ||p - q|| ≤ ε. The horizontal direction of the diagram corresponds to the natural parameterization of P and the vertical one to that of Q. One square cell of the diagram corresponds to a pair of edges, one from P and one from Q, and can easily be computed since it is the intersection of the bounding square with an ellipse.
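Anticipating the criterion stated below (δ_F(P, Q) ≤ ε exactly if the white area contains a monotone path from corner to corner of the diagram), the decision procedure can be implemented compactly by propagating reachable intervals cell by cell. The following Python sketch is our own rendering of the Alt-Godau decision algorithm; chains are lists of 2-D points, and exact floating-point comparisons at interval endpoints are glossed over.

```python
import math

def free_interval(p, a, b, eps):
    """Parameters t in [0,1] with |a + t*(b-a) - p| <= eps, or None."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    ex, ey = a[0] - p[0], a[1] - p[1]
    A = dx * dx + dy * dy
    B = 2.0 * (dx * ex + dy * ey)
    C = ex * ex + ey * ey - eps * eps
    if A == 0.0:                                  # degenerate (zero-length) edge
        return (0.0, 1.0) if C <= 0.0 else None
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None
    r = math.sqrt(disc)
    lo = max(0.0, (-B - r) / (2.0 * A))
    hi = min(1.0, (-B + r) / (2.0 * A))
    return (lo, hi) if lo <= hi else None

def frechet_at_most(P, Q, eps):
    """Decision procedure: is delta_F(P, Q) <= eps for polygonal chains
    P, Q (each a list of at least two 2-D points)?  O(nm) time."""
    n, m = len(P) - 1, len(Q) - 1
    if math.dist(P[0], Q[0]) > eps or math.dist(P[-1], Q[-1]) > eps:
        return False
    # Free intervals on the horizontal (B) and vertical (L) cell boundaries.
    B = [[free_interval(Q[j], P[i], P[i + 1], eps) for j in range(m + 1)]
         for i in range(n)]
    L = [[free_interval(P[i], Q[j], Q[j + 1], eps) for j in range(m)]
         for i in range(n + 1)]
    def clip(iv, lo):                             # part of iv at or above lo
        return None if iv is None or iv[1] < lo else (max(iv[0], lo), iv[1])
    BR = [[None] * (m + 1) for _ in range(n)]     # reachable parts of B
    LR = [[None] * m for _ in range(n + 1)]       # reachable parts of L
    for i in range(n):                            # bottom edge of the diagram
        ok = (i == 0) or BR[i - 1][0] == (0.0, 1.0)
        BR[i][0] = B[i][0] if ok and B[i][0] and B[i][0][0] == 0.0 else None
    for j in range(m):                            # left edge of the diagram
        ok = (j == 0) or LR[0][j - 1] == (0.0, 1.0)
        LR[0][j] = L[0][j] if ok and L[0][j] and L[0][j][0] == 0.0 else None
    for i in range(n):
        for j in range(m):
            if BR[i][j] is not None:              # cell entered from below
                LR[i + 1][j] = L[i + 1][j]
                BR[i][j + 1] = clip(B[i][j + 1], BR[i][j][0])
                if LR[i][j] is not None:          # also entered from the left
                    BR[i][j + 1] = B[i][j + 1]
            elif LR[i][j] is not None:            # entered from the left only
                BR[i][j + 1] = B[i][j + 1]
                LR[i + 1][j] = clip(L[i + 1][j], LR[i][j][0])
    top, right = BR[n - 1][m], LR[n][m - 1]
    return (top is not None and top[1] == 1.0) or \
           (right is not None and right[1] == 1.0)
```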


Fig. 8. P, Q, ε and the diagram D_ε(P, Q).


Fig. 9. Turning function of a simple polygon.

Now it follows from the definition that δ_F(P, Q) ≤ ε holds exactly if there is a monotone increasing curve from the lower left to the upper right corner of the diagram. These considerations lead to an algorithm of running time O(mn) for the decision problem. Then Cole's variant of parametric search [33] can be used to obtain an algorithm of running time O(mn log(mn)) to compute the Fréchet-distance between P and Q. In practice, it seems more reasonable to determine δ_F(P, Q) bit by bit using binary search, where in each step the algorithm for the decision problem is applied. Arkin et al. [6] consider a distance function between shapes in two dimensions that is based on the slope of the bounding curves. In fact, for a given shape A some starting point O on the bounding curve is chosen and the curve is then parametrized with the natural parameterization π_A (i.e. parameterization by arc length), normalized so that the parameter interval is [0, 1]. To each parameter t ∈ [0, 1] the angle Θ_A(t) between the counter-clockwise tangent of A in the point π_A(t) and a horizontal line is assigned. Θ_A is called the turning function of A. Θ_A is piecewise constant for simple polygons (see Figure 9); it is invariant under translations and, because of the normalization of the parameterization, under scaling. A rotation of A corresponds simply to a shift in Θ-direction, and a change of the origin O to a shift in t-direction. Now, as a distance measure between shapes A and B the L_p-metric (p ∈ ℕ) between Θ_A and Θ_B is used, i.e. we define

    δ_p(A, B) = ( ∫₀¹ |Θ_A(s) - Θ_B(s)|^p ds )^{1/p}.

Then this distance measure is made invariant under rotations and under the choice of the starting point O:

    d_p(A, B) = min_{θ ∈ ℝ, t ∈ [0,1]} ( ∫₀¹ |Θ_A(s + t) - Θ_B(s) + θ|^p ds )^{1/p}.
It is shown that d_p is a metric for all p ∈ ℕ. An algorithm is given for computing d_2(A, B) where A, B are simple polygons with n and m edges, respectively. It can be shown that the minimum in the definition of d_p can occur at only O(mn) "critical" values of t. By considering partial derivatives with respect to θ it is shown that for any fixed t the optimal θ can easily be computed. Altogether, an O(mn log mn) algorithm is obtained. An extension of this algorithm to deal with scaling as well was given by Cohen and Guibas [28]. The drawback of this distance measure is its sensitivity to noise, especially nonuniformly distributed noise, but it works properly if the curves are sufficiently smooth. A generalization of this distance to a distance measure that is invariant under affine transformations is given by Huttenlocher and Kedem [53]. With respect to this distance, matching of two polygons with n and m vertices under affine transformations is possible in time O(mn log mn). A different idea is to represent shapes by their areas rather than by their boundaries. In this context probably the most natural distance measure between two shapes is the area of their symmetric difference. However, this measure seems to be much more difficult to handle than the Hausdorff-distance. Within computational geometry it was first considered in a paper by Alt et al. [4] in connection with the very special problem of optimally approximating a convex polygon by an axis-parallel rectangle. Meanwhile some more results on the symmetric difference have been obtained. In fact, de Berg et al. in [34] consider matching algorithms maximizing the area of overlap of convex polygons, which is the same as minimizing the symmetric difference. They obtain an algorithm of running time O((n + m) log(n + m)) for translations, where n and m are the numbers of vertices of the two polygons. In addition, it is shown that if just the two centers of gravity are matched by a translation, this yields a position of two convex figures where the area of overlap is within a constant factor of the maximal one. A lower bound of 9/25 and an upper bound of 4/9 are obtained for this constant, so the center of gravity is a reference point with respect to maximizing the overlap under translations. As can easily be seen, this does not directly imply that it is a reference point with respect to the area of the symmetric difference. However, this can be shown as well. In fact, Alt et al. [7] show that if the centers of gravity of two convex figures are matched, the area of the symmetric difference is at most 11/3 times the minimal one. It is demonstrated with an example that this bound is tight. The center of gravity is also a reference point for other sets of transformations such as rigid motions, homotheties, similarities, and arbitrary affine mappings.
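Returning to the turning function: for a simple polygon Θ_A is determined by the edge directions and lengths, and its breakpoints are straightforward to compute. A sketch (our own formulation; the polygon is assumed simple and counterclockwise, and comparing two such functions as in [6] would then reduce to integrating their difference over [0, 1]):

```python
import math

def turning_function(poly):
    """Breakpoints (s, theta) of the piecewise-constant turning function
    of a simple counterclockwise polygon: s is the normalized arc length
    at which an edge starts, theta the cumulative tangent angle there."""
    n = len(poly)
    lengths, angles = [], []
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        lengths.append(math.hypot(x2 - x1, y2 - y1))
        angles.append(math.atan2(y2 - y1, x2 - x1))
    total = sum(lengths)
    steps, s, theta = [], 0.0, angles[0]
    for i in range(n):
        steps.append((s / total, theta))
        s += lengths[i]
        if i + 1 < n:
            # accumulate the signed turn, normalized to [-pi, pi),
            # so that theta stays continuous across edges
            turn = (angles[i + 1] - angles[i] + math.pi) % (2 * math.pi) - math.pi
            theta += turn
    return steps
```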


4. Shape simplification and approximation

In manipulating shape representations, it is often advantageous to reduce the complexity of the representation as much as possible, while still staying geometrically close to the original data. There is a vast literature on shape simplification and approximation, both within computational geometry and in the various applied disciplines where shape representation and manipulation issues arise. In this section we cannot possibly attempt to survey all the approaches taken and the results obtained. We will focus on a small subset of results giving efficient algorithms for shape simplification under precise measures of the approximation involved.

4.1. Two-dimensional results

To focus our attention, let us consider the problem of simplifying a shape represented as a polygonal chain in the plane. Our goal is to find another polygonal chain which has a smaller number of links and which stays close to the original. Let the original chain be C, defined by n vertices v_1, v_2, ..., v_n (we assume in this discussion that C is open), and let the desired approximating chain A be defined by m vertices w_1, w_2, ..., w_m. A number of variations on the problem are possible, depending on exactly how the error, or distance, between C and A is measured, and on whether the w_j's need to be a subset of the v_i's, or can be arbitrary points of the plane. In general we have to solve one of two problems in finding A:

The min-# problem: Find an A which minimizes m (the combinatorial complexity of A), given some a priori bound ε on the maximum allowed distance between C and A, or

The min-ε problem: For a given m, find an A which minimizes the distance ε between C and A.

In order to illustrate some of the ideas involved in solving these problems, let us further assume that both C and A are x-monotone chains — in other words, C and A are piecewise linear continuous functions C(x) and A(x). For such chains a very natural measure of distance is the so-called uniform or Chebyshev metric defined as

    d(A, C) = max_x |A(x) - C(x)|,

where the max is taken over the support on the x-axis of C. Let us consider the case when the vertices of A can be arbitrary points, i.e. need not be vertices of C. Imai and Iri [63,64], and Hakimi and Schmeichel [57], gave optimal O(n) algorithms for the min-# problem in this context. Their methods can be viewed as an extension of the linear-time algorithm of Suri [87] for computing a minimum link path between two given points inside a simple polygon (here the polygon is the chain C appropriately 'fattened' vertically by the allowed error ε) and make crucial use of the concept of weak visibility from an edge inside a simple polygon [17]. Similar ideas were also used by Aggarwal et al. [5] to give an O(n log k) algorithm for finding a convex polygon with the smallest number k of sides nested between two given convex polygons of total complexity n. For the min-ε variant of the chain simplification problem, Hakimi and Schmeichel gave an O(n² log n) algorithm by cleverly limiting the critical values of ε to a set of size O(n²) and then using a certain kind of binary search. More recently, Goodrich [49] used a number of new geometric insights to reduce the set of critical ε values to O(n) and thus obtained an O(n log n) algorithm through several applications of pipelined parametric searching techniques. For a survey of results when A and C are not constrained to be x-monotone, see the paper of Eu and Toussaint [41] and related work by Hershberger and Snoeyink [60], and Guibas et al. [46]. In general, algorithms for the min-# problem have linear complexity, while those for the min-ε problem have quadratic complexity. Next let us consider the problem variant where the vertices of A have to be a subset of the vertices of C. Now we do not require that A or C be x-monotone (so we revert to using the Hausdorff distance function). One of the oldest and most popular algorithms for this problem is the heuristic Douglas-Peucker [37] line simplification algorithm from the Geographical Information Systems community; Hershberger and Snoeyink showed how to implement this algorithm to run in O(n log n) time [58]. A more formal approach to the problem was initiated in the papers by Imai and Iri cited above. For the min-# variant, Imai and Iri reduce the problem to a graph-theoretic problem as follows. A line segment v_i v_j joining vertices v_i and v_j of C is called a shortcut for the subchain v_i, v_{i+1}, ..., v_j of C. A shortcut is allowed if the error it induces is at most the prescribed error ε; the error of the shortcut v_i v_j is defined to be the maximum distance from the segment v_i v_j to a point v_k, where i ≤ k ≤ j. It is easy to see that this is also the Hausdorff distance from v_i v_j to the subchain v_i, v_{i+1}, ..., v_j. Our goal is to replace C by a chain consisting of allowed shortcuts. So we consider a directed acyclic graph G whose nodes V are the vertices v_1, v_2, ..., v_n of C and whose edges E are the pairs (v_i, v_j) if and only if i < j and the shortcut v_i v_j is allowed. A shortest (in terms of the number of edges) path from v_1 to v_n corresponds to a minimum vertex simplification of C; such a path can be found in time linear in the size of G by topological sorting [31]. The size of G is O(n²) and it can be computed by an obvious method in O(n³) time. Thus constructing G is the bottleneck in the computation. Melkman and O'Rourke [75] showed how to reduce the construction time to O(n² log n), and Chan and Chin [26] further reduced it to optimal O(n²). The Chan and Chin algorithm starts from the observation that the error of a shortcut v_i v_j, i < j, is the maximum of the errors of two half-lines: the one starting at v_i and going towards v_j (call it l_ij), and the one starting at v_j and going towards v_i (call it l_ji). To compute the graph G we intersect the graphs G_1 and G_2, where G_1 contains the edge (v_i, v_j) if and only if i < j and the error of l_ij is less than ε, and G_2 contains the edge (v_i, v_j) if and only if i < j and the error of l_ji is less than ε. We show how to compute G_1 in O(n²) time; the computation of G_2 is entirely symmetric by reversing the numbering of the vertices of C. We examine the vertices of G_1 in the sequence v_1, v_2, ..., v_n. When we process vertex v_i, we calculate in turn the errors determined by all half-lines l_ij, where j takes on the values i+1, i+2, ..., n. Let D_k denote the closed disk of radius ε centered at v_k. The error of l_ij is at most ε if and only if l_ij intersects all disks D_k with i ≤ k ≤ j. Thus the algorithm works by maintaining the cone of half-lines from v_i which intersect the disks


Fig. 10. The computation of the allowed shortcuts starting at v_i.

D_{i+1}, D_{i+2}, ..., D_j (which is nothing but the intersection of the corresponding cones for all these disks separately). When we process v_{j+1} it suffices to update this cone, which is a constant time computation. If the cone stays non-empty, then (v_i, v_{j+1}) is in G_1; otherwise we are done with v_i, as all further half-lines will also have an error which is too large. Thus the computation of G_1, and therefore of G and of our desired shortest path, can be done in O(n²) time. Figure 10 illustrates the situation with the disks and the cone of v_i at some intermediate point. Methods based on the Imai-Iri graph construction seem inherently quadratic as, if ε is large enough, the graph will have Ω(n²) allowed shortcuts. However, Varadarajan [89] was able to use graph clique compression techniques such as those proposed by Feder and Motwani [44] to obtain an O(n^{4/3+δ}) algorithm for this min-# problem in the case of x-monotone chains. Varadarajan gave a randomized and a more complex deterministic algorithm for the min-ε version of this problem as well, with the same time bound. In the general (non-x-monotone) case of the above chain simplification problems it is quite possible that A may end up being self-intersecting, even though C itself is simple. This is clearly undesirable in many application contexts. Even worse, one is often simplifying several chains at once, as in the case of boundaries between regions in, say, a geographical map. In this case it is important that the topological structure of the regions be maintained after simplification, so the simplifications of disjoint chains are not allowed to end up crossing each other. Guibas et al. [46] showed that the min-# problem is NP-hard in this case, when the positions of the approximating vertices can be arbitrary. When the vertices of the approximating chain have to be a subset of the original chain and we are in the x-monotone setting, de Berg et al. [35] gave an O(n(n + m) log n) algorithm for the min-# problem for a chain C of n vertices so that the resulting approximating chain A is guaranteed to be simple and to be on the same side of each of m given points as C is. Using this algorithm as a local subroutine, they give a method for polygonal subdivision simplification which is guaranteed to avoid topological inconsistencies (but which need not be globally optimal).
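A direct implementation of the Imai-Iri scheme is short if one settles for the obvious O(n³) construction of G instead of the O(n²) cone-maintenance method just described. A sketch (our own; vertices are 2-D tuples):

```python
import math
from collections import deque

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def min_link_simplify(C, eps):
    """Imai-Iri min-# simplification: mark the allowed shortcuts (those
    within Hausdorff distance eps of the subchain they replace), then
    find a minimum-link path from the first to the last vertex by BFS
    on the resulting DAG."""
    n = len(C)
    allowed = [[all(point_segment_dist(C[k], C[i], C[j]) <= eps
                    for k in range(i + 1, j)) if j > i else False
                for j in range(n)] for i in range(n)]
    prev = [None] * n
    seen = [False] * n
    seen[0] = True
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == n - 1:
            break
        for j in range(i + 1, n):
            if allowed[i][j] and not seen[j]:
                seen[j], prev[j] = True, i
                queue.append(j)
    path, k = [], n - 1
    while k is not None:               # walk back along the BFS parents
        path.append(C[k])
        k = prev[k]
    return path[::-1]
```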


4.2. Three dimensions

Unfortunately the situation is not an equally happy one in three dimensions. Nearly all natural extensions of the above problems are NP-hard, so most of the extant work to date has focused on approximation algorithms. The analog of an x-monotone chain in 3-D is that of a polyhedral terrain, which is just a continuous bivariate (and possibly non-total) function z = T(x, y) which happens to be piecewise linear. The complexity of a terrain can be measured by its total number of vertices, edges, and faces. The numbers of these are linearly related by Euler's relation, so most often we will use the number of faces as our measure of the complexity of a terrain. There is a plethora of techniques in the literature for simplifying polyhedral terrains, by effectively deleting vertices which lie in relatively flat neighborhoods (and retriangulating the hole thus created). Unfortunately not much has been proved about such methods. In fact, Agarwal and Suri [14] have shown that even the simpler problem of deciding the approximability within a vertical tolerance ε of a collection of n isolated points by a polyhedral terrain with at most k faces is NP-hard. Similarly, though more surprisingly, Das and Joseph [36] showed that finding a convex polytope of at most k facets nested between two other convex polytopes P and Q with a total of n facets is also NP-hard, thus settling an old question first posed by Klee. With these results in sight, researchers turned their efforts to approximation algorithms. Mitchell and Suri [76] formalized the nested convex polytope problem as a set-cover problem by considering the regions defined on the outer polytope by tangent planes to the inner polytope. They showed that the greedy method of set covering computes a nested polytope with O(κ log n) facets, where κ is the complexity of an optimal nested polytope. The same approach also works for the approximation by a convex surface of n points themselves sampled from another convex surface. These algorithms run in polynomial time. These results were extended by Clarkson [30], who gave a randomized algorithm with expected time complexity O(κn^{1+δ}) for computing a nested polytope of size O(κ log κ). Brönnimann and Goodrich [23] further improved the set cover algorithm using VC-dimension ideas to obtain a deterministic algorithm that computes a nested polytope whose size is within a constant factor of the optimal one. The set cover formulation, unfortunately, does not work for the terrain approximation problem, as we cannot independently select faces in the approximating surface. Agarwal and Suri in the paper cited above formulate instead the terrain fitting problem as a geometric partitioning problem: first we project all the points to be approximated onto the xy-plane; then we seek a disjoint collection of triangles in the xy-plane which cover all these points and such that each triangle satisfies a certain legality constraint w.r.t. the points it covers. This constraint, which can be formulated as a linear program, is that the triangle can be lifted to 3-D so that it ε-approximates all the points it contains. Agarwal and Suri gave an approximation algorithm which solves this problem and produces a covering set of legitimate triangles whose size is O(κ log κ), where again κ is the minimum number of triangles possible. Unfortunately their algorithm has a very high polynomial complexity.
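The covering step itself is the classical greedy heuristic; the geometric work lies in computing, for each candidate tangent plane, the region of the outer polytope it covers, which we do not attempt here. A generic sketch of the greedy selection (sets as Python sets; names are ours):

```python
def greedy_set_cover(universe, candidates):
    """Pick candidate sets greedily until the universe is covered.
    Returns the indices of the chosen sets; the classical analysis bounds
    their number by O(opt * log |universe|), which is the source of the
    O(kappa log n) facet bound quoted above."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(candidates)),
                   key=lambda i: len(candidates[i] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            raise ValueError("the candidate sets do not cover the universe")
        chosen.append(best)
        uncovered -= gain
    return chosen
```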


5. Shape interpolation

Shape interpolation, more commonly known as morphing, has recently become a topic of active research interest in computer graphics, computer animation and entertainment, solid reconstruction, and image compression. The general problem is that of continuously transforming one geometric shape into another in a way that makes apparent the salient similarities between the two shapes and 'smoothes over' their dissimilarities. Some of the relevant graphics papers are [68,83,67,84]. The morphing transformation may be thought of as taking place either in the time domain, as in animation, or in the space domain, as in surface reconstruction from planar sections. The latter area already has an extensive literature of its own [43], and computational geometric techniques have been used with good results [90,25] (though the problem is not always solvable [50]). In general there are numerous ways to interpolate between two shapes, and little has been said about criteria for comparing the quality of different morphs and about notions of optimality. Often, the class of allowable or desirable morphing transforms is only vaguely specified, with the result that the problem ends up being either under- or over-constrained. In this section we will survey the rather small amount of work in this area which relates to Computational Geometry. Let P and Q be the two shapes we wish to morph. For now we do not specify how the shapes P and Q are described, or what the ambient space is (2-D, 3-D, etc.). Most morphing algorithms operate according to the following paradigm: firstly, relevant 'features' of P and Q are identified and matched pairwise; secondly, smooth motions are planned that will bring these pairs of corresponding features into alignment; and thirdly, the whole morphing transformation is generated, while respecting other global constraints which the shapes P, Q, and their interpolants must satisfy. This paradigm works well when the class of shapes to be morphed consists of fairly similar objects. In cases, however, where we want to be able to morph a great variety of different shapes, the above process is commonly subdivided into a series of stages. The shapes P and Q are first 'canonicalized' by mapping them into their canonical forms κ(P) and κ(Q) respectively — these canonical forms are more standardized and therefore easier to morph to each other. The whole transformation is then done by going from P to κ(P) to κ(Q) to Q. As the above description makes clear, there are numerous connections between the problems of shape matching and shape interpolation, and several of the matching techniques already discussed are applicable to the morphing problem, especially in the feature matching stage. It is not so obvious, but it is equally true, that morphing techniques can also be used for shape comparison problems. In a certain sense, the optimum morph from P to Q is the transformation that distorts P as little as possible in order to make it look like Q. In a morphing algorithm we can assign a notion of 'work' or cost to the distortions the algorithm needs to perform. The minimum work required to morph P into Q can then serve as a measure of the distance from shape P to shape Q. Note that such a distance function based on morphing clearly satisfies the triangle inequality. To make these matters concrete, let us discuss a few simple examples in 2-D and 3-D.

Let P be an open simple polygonal chain of m vertices P = p_1 p_2 ... p_m and Q be an open simple polygonal chain of n vertices Q = q_1 q_2 ... q_n. This problem was considered by Sederberg and Greenwood [83]. According to our paradigm above, we first need to establish a correspondence between the 'features' of P and Q — for polygonal chains the


Fig. 11. An example of matching polygonal chain vertices.

natural notion is that of vertices (though other choices also make sense in applications). Now since m might be different from n, in general this will have to be a many-to-one or one-to-many mapping. But where should these duplicate vertices be added? We can represent all possible ways to pair up vertices of P with vertices of Q in sequence by considering monotone paths in the [1..m] × [1..n] grid. An example is shown in Figure 11, which shows a particular matching between a chain P of 8 vertices and a chain Q of 10 vertices. A diagonal move on the monotone path corresponds to advancing on both P and Q, while a horizontal move corresponds to advancing on Q only (and thus duplicating the corresponding vertex of P). We can choose an optimum correspondence by selecting the path π to be of minimal cost in some appropriate sense. For example, we may want to minimize the sum of the distances of all the corresponding pairs of vertices. Sederberg and Greenwood developed a physics-based measure of the energy required to stretch and bend P into Q once the correspondence π is given. The optimal π under such measures can be computed by classical dynamic programming techniques [31] in time O(mn). Similar ideas have been used to fit polyhedral sleeves to polygonal slices in parallel planes [25]. Once we have the correspondence, we can then move corresponding vertices to each other through linear interpolation. Implicitly, at each time t, this defines an interpolating polygonal chain R_t, and thus our construction of a morph between the polygonal chains is complete. Note also that in order to extend this method to closed polygonal chains we


Fig. 12. Examples of polygonal chain morphs: (a) a good case, (b) a bad case.

must decide first on an appropriate 'origin' for each chain, and this is not a trivial matter. Figure 12 shows some successful and unsuccessful examples of this method, depending on the origin chosen. Note in particular that the interpolating chain R_t can self-intersect, even though neither P nor Q does. Sederberg et al. also proposed another simple method for polygon interpolation based on interpolating side lengths and angles, once a vertex correspondence is established [84] — but now the challenge becomes to get the polygons to 'close up'. Preserving structural properties during a morph, such as the simplicity of a chain in the example above, is a difficult problem. Guibas and Hershberger [45] consider how to morph two parallel polygons to each other while maintaining simplicity. The setting is now that P and Q are two simple polygons in the plane of n sides each, and there is a 1-1 correspondence between the sides of P and Q so that corresponding sides are parallel. The goal is to continuously deform P to Q while at all times the interpolating polygon R_t has its corresponding sides parallel to those of P and Q and stays simple. In this case the very statement of the problem provides the correspondence between features of the polygons. Even so, the two polygons P and Q can look quite different, and the existence of a morph which remains parallel and simple is not obvious; see Figure 13 (a morph between these two spiraling polygons can happen by simulating the way recording tape can move from one reel to another). Guibas and Hershberger showed that this is, nevertheless, always possible and gave an algorithm which uses O(n^{1.585}) primitive operations called 'parallel moves'; this was later improved to O(n log n) by Hershberger and Suri [61]. A parallel move is a translation of a side of a polygon parallel to itself, with appropriate modifications of the polygon at the endpoints of the edge. Guibas and Hershberger first showed that parallel moves can be used to take each polygon to a fractal-like canonical or reduced form in which portions of the polygon's boundary have been shrunk to micro-structures of widely different scales.
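As an illustration of the correspondence step, here is the classical dynamic program over the monotone grid paths described above, using the sum of distances of corresponding vertices as the cost (a simpler stand-in for the stretching and bending energy of Sederberg and Greenwood; the representation and names are ours):

```python
import math

def best_correspondence(P, Q):
    """Minimum-cost monotone path in the [1..m] x [1..n] grid; returns
    the list of index pairs (i, j) pairing vertex P[i] with Q[j].
    Classical dynamic programming, O(mn) time."""
    m, n = len(P), len(Q)
    INF = float('inf')
    D = [[INF] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            c = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                D[i][j] = c
            else:
                best = INF
                if i > 0:
                    best = min(best, D[i - 1][j])
                if j > 0:
                    best = min(best, D[i][j - 1])
                if i > 0 and j > 0:
                    best = min(best, D[i - 1][j - 1])
                D[i][j] = c + best
    pairs, i, j = [], m - 1, n - 1
    while True:                        # walk back along the cheapest moves
        pairs.append((i, j))
        if i == 0 and j == 0:
            break
        moves = []
        if i > 0 and j > 0:
            moves.append((D[i - 1][j - 1], i - 1, j - 1))
        if i > 0:
            moves.append((D[i - 1][j], i - 1, j))
        if j > 0:
            moves.append((D[i][j - 1], i, j - 1))
        _, i, j = min(moves)
    return pairs[::-1]
```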


Fig. 13. These oppositely spiraling parallel polygons are still morphable.

A polygon in this canonical form corresponds, roughly speaking, to a binary tree whose leaf weights are the angles of the original polygon. Once P and Q are in this canonical form, the corresponding trees can be morphed into each other through a series of standard tree rotation transformations; certain validity conditions have to hold throughout this process. These tree rotations can be realized geometrically through parallel moves on the polygons. The fractal-like structure of the canonical form helps in arguing that the translations required to implement particular rotations do not interfere with other parts of the polygon. Clearly the Guibas-Hershberger morph solves only a limited problem, and even for that the canonical form used introduces unnecessarily large distortions into the interpolating shapes. Different polygon morphing techniques were developed by Shapira and Rappoport [86], based on the star-skeleton representation of a polygon (a decomposition of the polygon into star-shaped pieces). Such methods do much better in preserving the metric properties of the polygons, but unfortunately they still do not preserve global constraints, such as simplicity — plus they are expensive, requiring O(n²) time. Another idea for morphing polygons can be based on the compatible triangulations result of Aronov, Seidel, and Souvaine [15]. They showed that P and Q can always be 'compatibly' triangulated by adding O(n²) Steiner points (compatibility means that the triangulations are combinatorially equivalent). The use of conformal mappings has also been suggested. Let us now also look at some work in three dimensions. The only case that has been extensively studied is that of morphing convex polytopes [67]. If P and Q are convex polyhedra, a natural way to construct a matching between their surfaces is to match points on the two polyhedra that admit the same (outward) normal. In general, this will match all points on each face of P to a vertex of Q and vice versa, as well as matching (the points of) certain pairs of edges, one from P and one from Q. If we place the origin at an arbitrary point of space and compute the vector sums of corresponding pairs of points from P and Q, the resulting set of points will form the boundary of another convex polytope, called the Minkowski sum of P and Q and denoted by P ⊕ Q [69,73]. Armed with this concept, we can then morph P to Q by constructing the mixed volume (1 - t)P ⊕ tQ, as t varies in the range 0 ≤ t ≤ 1. This type of morph was exploited by Kaul and Rossignac [68,81]. The same technique works, of course, in 2-D or in dimensions higher than three. A nice way to visualize this morph in 2-D is to think of P and Q as two convex polygons placed on parallel planes in 3-D. One then constructs the convex hull of the union of P and Q by adding a 'sleeve' wrapping around P and Q. The sections of this sleeve by a plane moving parallel to itself from the plane containing P to that containing Q give us the morph. The 'kinetic framework' of [51] allows the extension of this type of morph to general polygons in the plane. Also, since regular subdivisions of the plane or alpha shapes [40] can be viewed as projections of convex polytopes in one dimension higher, the above method also gives us some possibilities for morphing such subdivisions or alpha shapes. Other approaches to morphing 2-D or 3-D shapes are given in [47,67,42].
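In 2-D the intermediate shape (1 - t)P ⊕ tQ of two convex polygons can be built directly with the standard linear-time Minkowski-sum construction: scale both polygons, then merge their edge vectors by polar angle. A sketch (our own; both polygons are lists of vertices in counterclockwise order, and parallel or degenerate edges may produce repeated vertices):

```python
import math

def minkowski_interpolant(P, Q, t):
    """Vertices of (1-t)*P (+) t*Q for convex polygons P, Q given
    counterclockwise: start from the sum of the two bottom-most vertices
    and append the scaled edge vectors sorted by angle."""
    def start_and_edges(R, s):
        k = min(range(len(R)), key=lambda i: (R[i][1], R[i][0]))
        edges = []
        for i in range(len(R)):
            a, b = R[(k + i) % len(R)], R[(k + i + 1) % len(R)]
            edges.append((s * (b[0] - a[0]), s * (b[1] - a[1])))
        return (s * R[k][0], s * R[k][1]), edges
    (px, py), ep = start_and_edges(P, 1.0 - t)
    (qx, qy), eq = start_and_edges(Q, t)
    vecs = sorted(ep + eq,
                  key=lambda v: math.atan2(v[1], v[0]) % (2.0 * math.pi))
    pts, x, y = [], px + qx, py + qy
    for vx, vy in vecs:
        pts.append((x, y))
        x += vx
        y += vy
    return pts
```

Sampling t over [0, 1] then yields the morph.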

References

[1] O. Aichholzer, H. Alt and G. Rote, Matching shapes with a reference point, Internat. J. Comput. Geom. Appl. 7 (1997), 349-363.
[2] H. Alt, B. Behrends and J. Blömer, Approximate matching of polygonal shapes, Proc. 7th Annu. ACM Sympos. Comput. Geom. (1991), 186-193.
[3] H. Alt, B. Behrends and J. Blömer, Approximate matching of polygonal shapes, Ann. Math. Artif. Intell. 13 (1995), 251-266.
[4] H. Alt, J. Blömer, M. Godau and H. Wagener, Approximation of convex polygons, Proc. 17th Internat. Colloq. Automata Lang. Program., Lecture Notes in Comput. Sci. 443, Springer-Verlag (1990), 703-716.
[5] A. Aggarwal, H. Booth, J. O'Rourke, S. Suri and C.K. Yap, Finding minimal convex nested polygons, Inform. Comput. 83 (1) (October 1989), 98-110.
[6] E.M. Arkin, L.P. Chew, D.P. Huttenlocher, K. Kedem and J.S.B. Mitchell, An efficiently computable metric for comparing polygonal shapes, IEEE Trans. Pattern Anal. Mach. Intell. 13 (3) (1991), 209-216.
[7] H. Alt, U. Fuchs, G. Rote and G. Weber, Matching convex shapes with respect to the symmetric difference, Proc. 4th Annual European Symp. on Algorithms (ESA '96), Lecture Notes in Comput. Sci. 1136, Springer-Verlag (1996), 320-333.
[8] H. Alt and M. Godau, Measuring the resemblance of polygonal curves, Proc. 8th Annu. ACM Sympos. Comput. Geom. (1992), 102-109.
[9] H. Alt and M. Godau, Computing the Fréchet distance between two polygonal curves, Internat. J. Comput. Geom. Appl. 5 (1995), 75-91.
[10] A.V. Aho, J.E. Hopcroft and J.D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, MA (1974).
[11] E.M. Arkin, K. Kedem, J.S.B. Mitchell, J. Sprinzak and M. Werman, Matching points into pairwise-disjoint noise regions: Combinatorial bounds and algorithms, ORSA J. Comput. 4 (4) (1992), 375-386.
[12] S. Arya, D.M. Mount, N.S. Netanyahu, R. Silverman and A. Wu, An optimal algorithm for approximate nearest neighbor searching, Proc. 5th ACM-SIAM Sympos. Discrete Algorithms (1994), 573-582.
[13] H. Alt, K. Mehlhorn, H. Wagener and E. Welzl, Congruence, similarity and symmetries of geometric objects, Discrete Comput. Geom. 3 (1988), 237-256.
[14] P.K. Agarwal and S. Suri, Surface approximation and geometric partitions, Proc. 5th ACM-SIAM Sympos. Discrete Algorithms (1994), 24-33.
[15] B. Aronov, R. Seidel and D. Souvaine, On compatible triangulations of simple polygons, Comput. Geom. 3 (1) (1993), 27-35.
[16] P.K. Agarwal, M. Sharir and S. Toledo, Applications of parametric searching in geometric optimization, J. Algorithms 17 (1994), 292-318.
[17] D. Avis and G.T. Toussaint, An optimal algorithm for determining the visibility of a polygon from an edge, IEEE Trans. Comput. C-30 (1981), 910-914.
[18] M.J. Atallah, A linear time algorithm for the Hausdorff distance between convex polygons, Inform. Process. Lett. 17 (1983), 207-209.
[19] M.D. Atkinson, An optimal algorithm for geometrical congruence, J. Algorithms 8 (1987), 159-172.


[20] T. Akutsu, H. Tamaki and T. Tokuyama, Distribution of distances and triangles in a point set and algorithms for computing the largest common point set, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 314-323.
[21] H.S. Baird, Model-Based Image Matching Using Location, Distinguished Dissertation Series, MIT Press (1984).
[22] B. Behrends, Algorithmen zur Erkennung der ε-Kongruenz von Punktmengen und Polygonen, M.S. thesis, Freie Univ. Berlin, Institute for Computer Science (1990).
[23] H. Brönnimann and M.T. Goodrich, Almost optimal set covers in finite VC-dimension, Discrete Comput. Geom. 14 (1995), 263-279.
[24] C. Burnikel, J. Könnemann, K. Mehlhorn, S. Näher, S. Schirra and C. Uhrig, Exact geometric computation in LEDA, Proc. 11th Annu. ACM Sympos. Comput. Geom. (1995), C18-C19.
[25] G. Barequet and M. Sharir, Piecewise-linear interpolation between polygonal slices, Proc. 10th Annu. ACM Sympos. Comput. Geom. (1994), 93-102.
[26] W.S. Chan and F. Chin, Approximation of polygonal curves with minimum number of line segments or minimum error, Internat. J. Comput. Geom. Appl. 6 (1996), 59-77.
[27] L.P. Chew, D. Dor, A. Efrat and K. Kedem, Geometric pattern matching in d-dimensional space, Proc. 2nd Annu. European Sympos. Algorithms, Lecture Notes in Comput. Sci. 979, Springer-Verlag (1995), 264-279.
[28] S.D. Cohen and L.J. Guibas, Partial matching of planar polylines under similarity transformations, Proc. 8th ACM-SIAM Sympos. Discrete Algorithms (January 1997), 777-786.
[29] L.P. Chew, M.T. Goodrich, D.P. Huttenlocher, K. Kedem, J.M. Kleinberg and D. Kravets, Geometric pattern matching under Euclidean motion, Comput. Geom. 7 (1997), 113-124.
[30] K.L. Clarkson, Algorithms for polytope covering and approximation, Proc. 3rd Workshop Algorithms Data Struct., Lecture Notes in Comput. Sci. 709, Springer-Verlag (1993), 246-252.
[31] T.H. Cormen, C.E. Leiserson and R.L. Rivest, Introduction to Algorithms, MIT Press, Cambridge, MA (1990).
[32] R. Cole, Slowing down sorting networks to obtain faster sorting algorithms, Proc. 25th Annu. IEEE Sympos. Found. Comput. Sci. (1984), 255-260.
[33] R. Cole, Slowing down sorting networks to obtain faster sorting algorithms, J. ACM 34 (1987), 200-208.
[34] M. de Berg, O. Devillers, M. van Kreveld, O. Schwarzkopf and M. Teillaud, Computing the maximum overlap of two convex polygons under translation, Proc. 7th Annu. Internat. Sympos. Algorithms Comput. (1996).
[35] M. de Berg, M. van Kreveld and S. Schirra, A new approach to subdivision simplification, Proc. 12th Internat. Sympos. Comput.-Assist. Cartog. (1995), 79-88.
[36] G. Das and D. Joseph, The complexity of minimum convex nested polyhedra, Proc. 2nd Canad. Conf. Comput. Geom. (1990), 296-301.
[37] D.H. Douglas and T.K. Peucker, Algorithms for the reduction of the number of points required to represent a digitized line or its caricature, Canadian Cartographer 10 (2) (December 1973), 112-122.
[38] W.E.L. Grimson, Object Recognition by Computer: The Role of Geometric Constraints, MIT Press (1990).
[39] A. Efrat and A. Itai, Improvements on bottleneck matching and related problems using geometry, Proc. 12th Annu. ACM Sympos. Comput. Geom. (1996), 301-310.
[40] H. Edelsbrunner and E.P. Mücke, Three-dimensional alpha shapes, ACM Trans. Graph. 13 (1) (January 1994), 43-72.
[41] D. Eu and G. Toussaint, On approximating polygonal curves in two and three dimensions, Comput. Vision Graph. Image Process. 56 (1994), 231-246.
[42] H. Edelsbrunner and R. Waupotitsch, A combinatorial approach to cartograms, Proc. 11th Annu. ACM Sympos. Comput. Geom. (1995), 98-108.
[43] H. Fuchs, Z.M. Kedem and S.P. Uselton, Optimal surface reconstruction from planar contours, Comm. ACM 20 (1977), 693-702.
[44] T. Feder and R. Motwani, Clique partitions, graph compression and speeding up algorithms, Proc. 23rd ACM Symp. Theory of Computing (1991), 123-133.
[45] L. Guibas and J. Hershberger, Morphing simple polygons, Proc. 10th Annu. ACM Sympos. Comput. Geom. (1994), 267-276.


[46] L.J. Guibas, J.E. Hershberger, J.S.B. Mitchell and J.S. Snoeyink, Approximating polygons and subdivisions with minimum link paths, Internat. J. Comput. Geom. Appl. 3 (4) (December 1993), 383-415.
[47] A. Glassner, Metamorphosis of polyhedra, Manuscript (1991).
[48] M.T. Goodrich, J.S. Mitchell and M.W. Orletsky, Practical methods for approximate geometric pattern matching under rigid motion, Proc. 10th Annu. ACM Sympos. Comput. Geom. (1994), 103-112.
[49] M.T. Goodrich, Efficient piecewise-linear function approximation using the uniform metric, Discrete Comput. Geom. 14 (1995), 445-462.
[50] C. Gitlin, J. O'Rourke and V. Subramanian, On reconstructing polyhedra from parallel slices, Internat. J. Comput. Geom. Appl. 6 (1) (1996), 103-122.
[51] L.J. Guibas, L. Ramshaw and J. Stolfi, A kinetic framework for computational geometry, Proc. 24th Annu. IEEE Sympos. Found. Comput. Sci. (1983), 100-111.
[52] B. Grünbaum, Convex Polytopes, Wiley, New York, NY (1967).
[53] D.P. Huttenlocher and K. Kedem, Computing the minimum Hausdorff distance for point sets under translation, Proc. 6th Annu. ACM Sympos. Comput. Geom. (1990), 340-349.
[54] D.P. Huttenlocher, K. Kedem and J.M. Kleinberg, On dynamic Voronoi diagrams and the minimum Hausdorff distance for point sets under Euclidean motion in the plane, Proc. 8th Annu. ACM Sympos. Comput. Geom. (1992), 110-120.
[55] D.P. Huttenlocher, G.A. Klanderman and W.J. Rucklidge, Comparing images using the Hausdorff distance, IEEE Trans. Pattern Analysis and Machine Intelligence 15 (1993), 850-863.
[56] D.P. Huttenlocher, K. Kedem and M. Sharir, The upper envelope of Voronoi surfaces and its applications, Discrete Comput. Geom. 9 (1993), 267-291.
[57] S.L. Hakimi and E.F. Schmeichel, Fitting polygonal functions to a set of points in the plane, CVGIP: Graph. Models Image Process. 53 (2) (1991), 132-136.
[58] J. Hershberger and J. Snoeyink, Speeding up the Douglas-Peucker line simplification algorithm, Proc. 5th Intl. Symp. Spatial Data Handling, IGU Commission on GIS (1992), 134-143.
[59] P.J. Heffernan and S. Schirra, Approximate decision algorithms for point set congruence, Comput. Geom. 4 (1994), 137-156.
[60] J. Hershberger and J. Snoeyink, Computing minimum length paths of a given homotopy class, Comput. Geom. 4 (1994), 63-98.
[61] J. Hershberger and S. Suri, Morphing binary trees, Proc. 6th ACM-SIAM Sympos. Discrete Algorithms (1995), 396-404.
[62] D. Huttenlocher and S. Ullman, Recognizing solid objects by alignment with an image, Internat. J. Computer Vision 5 (1990), 195-212.
[63] H. Imai and M. Iri, Computational-geometric methods for polygonal approximations of a curve, Comput. Vision Graph. Image Process. 36 (1986), 31-41.
[64] H. Imai and M. Iri, Polygonal approximations of a curve - formulations and algorithms, Computational Morphology, G.T. Toussaint, ed., North-Holland, Amsterdam, Netherlands (1988), 71-86.
[65] S. Irani and P. Raghavan, Combinatorial and experimental results for randomized point matching algorithms, Proc. 12th Annu. ACM Sympos. Comput. Geom. (1996), 68-77.
[66] K. Imai, S. Sumino and H. Imai, Minimax geometric fitting of two corresponding sets of points, Proc. 5th Annu. ACM Sympos. Comput. Geom. (1989), 266-275.
[67] J. Kent, W. Carlson and R. Parent, Shape transformation for polyhedral objects, Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26 (1992), 47-54.
[68] A. Kaul and J. Rossignac, Solid-interpolating deformations: Construction and animation of PIPs, Proc. Eurographics '91 (1991), 493-505.
[69] J.-C. Latombe, Robot Motion Planning, Kluwer Acad. Publ., Boston (1991).
[70] Y. Lamdan, J.T. Schwartz and H.J. Wolfson, Object recognition by affine invariant matching, Proceedings of Computer Vision and Pattern Recognition (1988), 335-344.
[71] Y. Lamdan, J.T. Schwartz and H.J. Wolfson, On recognition of 3-D objects from 2-D images, Proceedings of the 1988 IEEE International Conference on Robotics and Automation (1988), 1407-1413.
[72] Y. Lamdan and H.J. Wolfson, Geometric hashing: A general and efficient model-based recognition scheme, Second International Conference on Computer Vision (1988), 238-249.
[73] L.A. Lyusternik, Convex Figures and Polyhedra, D.C. Heath, Boston, MA (1966).

Discrete geometric shapes: Matching, interpolation,

and approximation

153

[74] N. Megiddo, Applying parallel computation algorithms in the design of serial algorithms, J. ACM 30 (1983), 852-865. [75] A. Melkman and J. O'Rourke, On polygonal chain approximation. Computational Morphology, G.T. Toussaint, ed., North-Holland, Amsterdam, Netherlands (1988), 87-95. [76] J.S.B. Mitchell and S. Suri, Separation and approximation ofpolyhedral objects, Comput. Geom. 5 (1995), 95-114. [77] R. Norel, D. Fischer, H. Wolfson and R. Nussinov, Molecular surface recognition by a computer visionbased technique. Protein Engineering 7 (1994), 3 9 ^ 6 . [78] R. Norel, S.L. Lin, H. Wolfson and R. Nussinov, Shape complimentarity at protein-protein interfaces, Biopolymers 34 (1994), 933-940. [79] M.H. Overmars and C.-K. Yap, New upper bounds in Klee's measure problem, SIAM J. Comput. 20 (1991), 1034-1045. [80] K. Przeslawski and D. Yost, Continuity properties of selectors and Michael's theorem, Michigan Math. J. 36(1989), 113-134. [81] J. Rossignac and A. Kaul, Agrels and bips: Metamorphosis as a Bezier curve in the space of polyhedra, Eurographics '94 Proceedings, Vol. 13 (1994), 179-184. [82] S. Schirra, Uber die Bitkomplexitdt der s-Kongruenz, M.S. thesis, Univ. des Saarlandes, Computer Science Department (1988). [83] T. Sederberg and E. Greenwood, A physically based approach to 2D shape blending. Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26 (1992), 25-34. [84] T. Sederberg, P. Gao, G. Wang and H. Mu, 2D shape blending: An intrinsic solution to the vertex path problem. Computer Graphics (SIGGRAPH '93 Proceedings), Vol. 27 (1993), 15-18. [85] G.C. Shephard, The Steiner point of a convex poly tope, Canadian J. Math. 18 (1966), 1294-1300. [86] M. Shapira and A. Rappoport, Shape blending using the skeleton representation, IEEE Computer Graphics and Appl. 16 (1995), 44-50. [87] S. Suri, A linear time algorithm for minimum link paths inside a simple polygon, Comput. Vision Graph. Image Process. 35 (1986), 99-110. [88] J. Sprinzak and M. Werman, Affine point matching. Pattern Recogn. Lett. 15 (1994), 337-339. [89] K. Varadarajan, Approximating monotone polygonal curves using the unfirm metric, Proc. 12th ACM Symp. Computational Geometry (1996). [90] E. Welzl and B. Wolfers, Surface reconstruction between simple polygons via angle criteria, J. Symbolic Comput. 17 (1994), 351-369.

This Page Intentionally Left Blank

CHAPTER 4

Deterministic Parallel Computational Geometry

Mikhail J. Atallah*
Department of Computer Sciences, Purdue University, West Lafayette, IN 47907, USA
E-mail: [email protected]

Danny Z. Chen^
Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
E-mail: [email protected]

* Portions of the work of this author were supported by the National Science Foundation under Grant CCR-9202807, and by sponsors of the COAST Laboratory.
^ The work of this author was supported in part by the National Science Foundation under Grant CCR-9623585.

Contents
1. Introduction
2. PRAM models
3. Basic subproblems
3.1. Sorting and merging
3.2. Parallel prefix
3.3. List ranking
3.4. Tree contraction
3.5. Brent's theorem
3.6. Euler tour technique
3.7. Lowest common ancestors (LCA)
4. Inherently sequential geometric problems
4.1. Plane-sweep triangulation
4.2. Weighted planar partitioning
4.3. Visibility layers
4.4. Open problems
5. Parallel divide and conquer
5.1. Two-way divide and conquer
5.2. "Rootish" divide and conquer
5.3. Example: visibility in a polygon
6. Cascading
6.1. A rough sketch of cascading
6.2. Cascading merge sort



7. Fractional cascading in parallel
8. Cascading with labeling functions
8.1. The 3-dimensional maxima problem
8.2. The two-set dominance counting problem
8.3. Other applications of cascading
9. Cascading without concurrent reads
10. Matrix searching techniques
10.1. Row minima
10.2. Tube minima
10.3. Generalized monotone matrices
11. Other useful PRAM techniques
11.1. Geometric hierarchies
11.2. From CREW to EREW
11.3. Array-of-trees
11.4. Deterministic geometric sampling
11.5. Output-sensitive algorithms
11.6. Stratified decomposition trees
11.7. Prune-and-search
12. Further remarks
References


Abstract

We describe general methods for designing deterministic parallel algorithms in computational geometry. We focus on techniques for shared-memory parallel machines, which we describe and illustrate with examples. We also discuss some open problems in this area.


1. Introduction

Many problems in computational geometry come from application areas (such as pattern recognition, computer graphics, operations research, computer-aided design, robotics, etc.) that require real-time speeds. This need for speed makes parallelism a natural candidate for helping achieve the desired performance. For many of these problems, we are already at the limits of what can be achieved through sequential computation. The traditional sequential methods can be inadequate for those applications in which speed is important and that involve a large number of geometric objects. Thus, it is important to study what kinds of speed-ups can be achieved through parallel computing. As an indication of the importance of this research direction, we note that four of the eleven problems used as benchmark problems to evaluate parallel architectures for the DARPA Architecture Workshop Benchmark Study of 1986 were computational geometry problems.

In parallel computation, it is the rule rather than the exception that the known sequential techniques do not translate well into a parallel setting; this is also the case in parallel computational geometry. The difficulty is usually that these techniques use methods which either seem to be inherently sequential, or would result in inefficient parallel implementations. Thus new paradigms are needed for parallel computational geometry.

The goal of this chapter is to give a detailed look at the currently most successful techniques in parallel computational geometry, while simultaneously highlighting some open problems and discussing possible extensions to these techniques. It differs from [33] in that it gives a much more detailed coverage of the shared-memory model at the expense of the coverage of networks of processors. Since it is impossible to describe all the parallel geometric algorithms known, our focus is on general algorithmic techniques rather than on specific problems; no attempt is made to list exhaustively all of the known deterministic parallel complexity bounds for geometric problems. For more discussion of parallel geometric algorithms, the reader is referred to [15,139].

The rest of the chapter is organized as follows. Section 2 briefly reviews the PRAM parallel model and the notion of efficiency in that model. Section 3 reviews basic subproblems that tend to arise in the solutions of geometric problems on the PRAM. Section 4 is about inherently sequential (i.e., non-parallelizable) geometric problems. Section 5 discusses parallel divide-and-conquer techniques. Section 6 discusses the cascading technique, Section 7 parallel fractional cascading, Section 8 cascading with labeling functions, and Section 9 cascading in the EREW model. Section 10 discusses parallel matrix searching techniques, Section 11 discusses a number of other useful PRAM techniques, and Section 12 concludes.

2. PRAM models

The PRAM (Parallel Random Access Machine) has so far been the main vehicle used to study the parallel algorithmics of geometric problems, and hence it is the focus of this chapter. This section briefly reviews the PRAM model and its variants. The PRAM model of parallel computation is the shared-memory model in which the processors operate synchronously [153], as illustrated in Figure 1.

Fig. 1. The PRAM model (processors P1, P2, P3, ... attached to a shared memory of cells).

A step on a PRAM consists of each processor reading the content of a cell in the shared memory, writing data in a cell of the shared memory, or performing a computation within its own registers. Thus all communication is done via the shared memory. There are many variants of the PRAM, differing from one another in the way read and/or write conflicts are treated. The CREW (Concurrent Read Exclusive Write) version of this model allows many processors to simultaneously read the content of a memory location, but forbids any two processors from simultaneously attempting to write in the same memory location (even if they are trying to write the same thing). The CRCW (Concurrent Read Concurrent Write) version of the PRAM differs from the CREW one in that it also allows many processors to write simultaneously in the same memory location: in any such common-write contest, only one processor succeeds, but it is not known in advance which one. (There are other versions of the CRCW PRAM, but we shall not concern ourselves with these here.) The EREW PRAM is the weakest version of the PRAM: it forbids both concurrent reading and concurrent writing.

The PRAM has been extensively used in theoretical studies as a vehicle for designing parallel algorithms. Although it captures important parameters of a parallel computation, the PRAM does not account for communication and synchronization (more recent variations of the model do account for these factors, but we do not discuss them since essentially no parallel geometric algorithms have yet been designed for them). The PRAM is generally considered to be a rather unrealistic model of parallel computation. However, although there are no PRAMs commercially available, algorithms designed for PRAMs can often be efficiently simulated on some of the more realistic parallel models. The PRAM enables the algorithm designer to focus on the structure of the problem itself, without being distracted by architecture-specific issues. Another advantage of the PRAM is that, if one can give strong evidence (in the sense explained in the next paragraph) that a problem has no fast parallel solution on the PRAM, then there is no point in looking for a fast solution to it on more realistic parallel models (since these are weaker than the PRAM).

We now review some basic notions concerning the speed and efficiency of PRAM computations. The time × processor product of a PRAM algorithm is called its work (i.e., the total number of operations performed by that algorithm). A parallel algorithm is said to run


in polylogarithmic time if its time complexity is O(log^k n), where n is the problem size and k is a constant independent of n (i.e., k = O(1)). A problem solvable in polylogarithmic time using a polynomial number of processors is said to be in the class NC. It is strongly believed (but not proved) that not all problems solvable in polynomial time sequentially are solvable in polylogarithmic time using a polynomial number of processors (i.e., it is believed that P ≠ NC). As in the theory of NP-completeness, there is an analogous theory in parallel computation for showing that a particular problem is probably not in NC: by showing that the membership of that problem in NC would imply that P = NC. Such a proof consists of showing that each problem in P admits an NC reduction to the problem at hand (an NC reduction is a reduction that takes polylogarithmic time and uses a polynomial number of processors). Such a problem is said to be P-complete. For a more detailed discussion of the class NC and parallel complexity theory, see (for example) [189] or [156]. A proof establishing P-completeness of a problem is viewed as strong evidence that the problem is "inherently sequential".

Once one has established that a geometric problem is in NC, the next step is to design a PRAM algorithm for it that runs as fast as possible, while being efficient in the sense that it uses as few processors as possible. Ideally, the parallel time complexity of the PRAM algorithm should match the parallel lower bound of the problem (assuming such a lower bound is known), and its work complexity should match the best known sequential time bound of the problem. A parallel lower bound for a geometric problem is usually established by showing that such an algorithm can be used to solve some other (perhaps non-geometric) problem having that lower bound. For example, it is well known [94] that computing the logical OR of n bits has an Ω(log n) time lower bound on a CREW PRAM. This can easily be used to show that detecting whether the boundaries of two convex polygons intersect also has an Ω(log n) time lower bound in that same model, by encoding the n bits whose OR we wish to compute in two concentric regular n-gons such that the i-th bit governs the relative positions of the i-th vertices of the two n-gons. Interestingly, if the word "boundaries" is removed from the previous sentence then the lower bound argument falls apart, and it becomes possible to solve the problem in constant time on a CREW PRAM, even using a sublinear number of processors [43,224].

Before reviewing the techniques that have resulted in many PRAM geometric algorithms that are fast and efficient in the above sense, a word of caution is in order. From a theoretical point of view, the class NC, and the requirement that a "fast" parallel algorithm should run in polylogarithmic time, are eminently reasonable. But from a more practical point of view, not having a polylogarithmic time algorithm does not entirely doom a problem to being "non-parallelizable". One can indeed argue [221] that, e.g., a problem of sequential complexity Θ(n) that is solvable in O(√n) time by using √n processors is "parallelizable" in a very real sense, even if no polylogarithmic time algorithm is known for it.

3. Basic subproblems

This section reviews some basic subproblems that tend to be used as subroutines in the design of PRAM geometric algorithms.


3.1. Sorting and merging

Sorting is probably the most frequently used subroutine in parallel geometric algorithms. Fortunately, for PRAM models we know how to sort n numbers optimally: O(log n) time and n processors on the EREW PRAM [87,12]. Merging on the PRAM is easier than sorting [211,222,54,147]; it takes O(log log n) time with n/log log n processors on the CREW PRAM, and O(log n) time with n/log n processors on the EREW PRAM.
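To illustrate why merging parallelizes so well, here is a minimal sketch, ours rather than the chapter's, written in Python with each marked loop standing in for one synchronous round of independent operations: every element's output position is its own index plus its rank in the other array, obtained by an independent binary search.

```python
from bisect import bisect_left, bisect_right

def cross_rank_merge(a, b):
    """Merge two sorted lists by cross-ranking (a CREW-style sketch).

    Each element's output slot is its index in its own list plus its
    rank in the other list; all slots can be computed independently,
    so on a PRAM each loop below would be one O(log n)-time step.
    """
    out = [None] * (len(a) + len(b))
    for i, x in enumerate(a):                  # independent -> parallel
        out[i + bisect_left(b, x)] = x
    for j, y in enumerate(b):                  # independent -> parallel
        out[j + bisect_right(a, y)] = y
    return out

assert cross_rank_merge([1, 3, 5], [2, 3, 4]) == [1, 2, 3, 3, 4, 5]
```

The bisect_left/bisect_right asymmetry breaks ties, so equal elements from the two lists receive distinct output positions.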

3.2. Parallel prefix

Given an array A of n elements and an associative operation denoted by +, the parallel prefix problem is that of computing an array B of n elements such that B(i) = ∑_{k=1}^{i} A(k). Parallel prefix can be performed in O(log n) time with n/log n processors on an EREW PRAM [159,164], and in O(log n / log log n) time with n log log n / log n processors on a CRCW PRAM [92]. Computing the smallest element in array A is a special case of parallel prefix. In the CRCW model, this can be done faster than general parallel prefix: in O(1) time with n^{1+ε} processors for any positive constant ε or, alternatively, in O(log log n) time with n/log log n processors [211].
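As a concrete illustration, the following sketch, ours and not from the chapter, simulates the standard up-sweep/down-sweep prefix schedule sequentially in Python; each inner loop corresponds to one synchronous EREW step whose iterations touch disjoint cells and could run on separate processors.

```python
def prefix_sums(a, op=lambda x, y: x + y):
    """Inclusive prefix sums via the up-sweep/down-sweep schedule.

    There are O(log n) rounds and O(n) total operations; for simplicity
    this sketch assumes n is a power of two.
    """
    n = len(a)
    assert n & (n - 1) == 0, "sketch assumes n is a power of two"
    b = list(a)
    # Up-sweep: build partial sums over blocks of doubling size.
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):   # independent -> parallel
            b[i] = op(b[i - d], b[i])
        d *= 2
    # Down-sweep: propagate prefixes into the remaining positions.
    d = n // 2
    while d >= 1:
        for i in range(3 * d - 1, n, 2 * d):   # independent -> parallel
            b[i] = op(b[i - d], b[i])
        d //= 2
    return b

print(prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]))  # [3, 4, 8, 9, 14, 23, 25, 31]
```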

3.3. List ranking

List ranking is a more general version of the parallel prefix problem: the elements are given as a linked list, i.e., we are given an array A each entry of which contains an element as well as a pointer to the entry of A containing the predecessor of that element in the linked list. The problem is to compute an array B such that B(i) is the "sum" of the first i elements in the linked list. This problem is considerably harder than parallel prefix, and most tree computations as well as many graph computations reduce, via the Euler tour technique (described below), to solving this problem. EREW PRAM algorithms that run in O(log n) time with n/log n processors are known for list ranking [90,24].
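The optimal algorithms cited above are involved; as a simpler (non-optimal) illustration, here is the classical pointer-jumping scheme in Python, which takes O(log n) rounds but performs O(n log n) work. It computes suffix sums toward the tail of the list; the representation and names are our own choices.

```python
def list_rank(succ, weight):
    """Pointer-jumping list ranking (O(log n) rounds, O(n log n) work).

    succ[i] is the index of the next node, or None at the tail;
    weight[i] is the element at node i.  Returns rank[i], the sum of
    the weights from node i to the tail (inclusive).  Each round reads
    only the previous round's values, mimicking one synchronous step.
    """
    n = len(succ)
    rank, nxt = list(weight), list(succ)
    for _ in range(max(1, n).bit_length()):    # about log2(n) rounds
        new_rank, new_nxt = list(rank), list(nxt)
        for i in range(n):                     # independent -> parallel
            if nxt[i] is not None:
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]
        rank, nxt = new_rank, new_nxt
    return rank

# Chain 0 -> 1 -> 2, unit weights: ranks are the distances to the tail.
assert list_rank([1, 2, None], [1, 1, 1]) == [3, 2, 1]
```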

3.4. Tree contraction

Given a (not necessarily balanced) rooted tree T, the problem is to reduce T to a 3-node tree by a sequence of rake operations. A rake operation can be applied at a leaf v by removing v and the parent of v, and making the sibling of v a child of v's grandparent (note that rake cannot be applied at a leaf whose parent is the root). This is done as follows: number the leaves 1, 2, ..., etc., in left-to-right order, and apply the rake operation first to all the odd-numbered leaves that are left children, then to the other odd-numbered leaves. Renumber the remaining leaves and repeat this operation, until done. The number of iterations performed by such an algorithm is logarithmic because the number of leaves is reduced by half at each iteration. Note that applying the rake to all the odd-numbered leaves at the same time would not work, as can be seen by considering the situation where


v is an odd-numbered left child, w is an odd-numbered right child, and the parent of v is the grandparent of w (what goes wrong in that case is that v wants to remove its parent p and simultaneously w wants its sibling to become a child of p). Tree contraction is an abstraction of many other problems, including that of evaluating an arithmetic expression tree [175]. Many elegant optimal EREW PRAM algorithms for it are known [1,90,128,158], running in O(log n) time with n/log n processors.

3.5. Brent's theorem

This technique is frequently used to reduce the processor complexity of an algorithm without any increase in the time complexity.

THEOREM 3.1 (Brent). Any synchronous parallel algorithm taking time T that consists of a total of W operations can be simulated by P processors in time O((W/P) + T).

There are actually two qualifications to the above Brent's theorem [60] before one can apply it to a PRAM: (i) at the beginning of the i-th parallel step, we must be able to compute the amount of work W_i done by that step, in time O(W_i/P) and with P processors, and (ii) we must know how to assign each processor to its task. Both (i) and (ii) are generally (but not always) easily satisfied in parallel geometric algorithms, so that the hard part is usually achieving W operations in time T.
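As a hedged illustration of the theorem in action (our example, not the chapter's): summing n numbers involves W = O(n) operations and T = O(log n) time, so P processors should achieve O(n/P + log n). The Python sketch below simulates that schedule; the explicit chunking is exactly the processor assignment that qualification (ii) asks for.

```python
import math

def brent_sum(a, p):
    """Sum n numbers with p >= 1 (simulated) processors in O(n/p + log p) steps.

    Phase 1: each processor serially sums a chunk of size ceil(n/p).
    Phase 2: the p partial sums are combined by a balanced binary tree,
    one tree level per parallel step.
    """
    n = len(a)
    chunk = math.ceil(n / p)
    partial = [sum(a[i * chunk:(i + 1) * chunk]) for i in range(p)]
    while len(partial) > 1:                    # one parallel step per level
        partial = [partial[i] + (partial[i + 1] if i + 1 < len(partial) else 0)
                   for i in range(0, len(partial), 2)]
    return partial[0]

assert brent_sum(list(range(10)), 3) == 45
```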

3.6. Euler tour technique

"Wrapping a chain" around a tree defines an Euler tour of that tree. More formally, if T is an undirected tree, the Euler tour of T is obtained in parallel by doing the following for every node v of T: letting w_0, w_1, ..., w_{k-1} be the nodes of T adjacent to v, in the order in which they appear in the adjacency list of v, we set the successor of each (directed)


edge (w_i, v) equal to the (directed) edge (v, w_{(i+1) mod k}). See Figure 2 for an illustration.

Fig. 2. The Euler tour of a tree T.

Most tree computations as well as many graph computations reduce, via the Euler tour technique, to the list ranking problem [216], as the following examples demonstrate.
1. Rooting an undirected tree at a designated node v: create the Euler tour of the tree, "open" the tour at v (thus making the tour an Euler path), then do a parallel prefix along the linked list of arcs described by the successor function (with a weight of 1 for each arc). For each undirected edge [x, y] of the tree, if the prefix sum of the directed edge (x, y) in the Euler tour is less than that of the directed edge (y, x), then set x to be the parent of y. This parallel computation of parents makes the tree rooted at v.
2. Computing post-order numbers of a rooted tree, where the left-to-right ordering of the children of a vertex is the one implied by the Euler path: do a list ranking on the Euler path, where each directed edge (x, y) of the Euler path has a weight of unity if y is the parent of x, and a weight of zero if x is the parent of y.
3. Computing the depth of each node of a rooted tree: do a list ranking on the Euler path, where each directed edge (x, y) of the Euler path has a weight of −1 if y is the parent of x, and a weight of +1 if x is the parent of y.
4. Numbering of descendants of each node: same computation as for post-order numbers, followed by making use of the observation that the number of descendants of v equals the list rank of (v, parent(v)) minus that of (parent(v), v).
These examples of reductions of so many problems to list ranking demonstrate the importance of the list ranking problem. See [216,153] for other examples, and for more details about the above reductions.
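The successor definition translates directly into code. The sketch below is ours, in Python, with the list ranking replaced by a sequential walk along the tour; it builds the successor map and then roots the tree as in reduction 1.

```python
def euler_tour(adj):
    """Build the Euler-tour successor map of an undirected tree.

    adj[v] lists the neighbors w_0, ..., w_{k-1} of v; the successor of
    the directed edge (w_i, v) is (v, w_{(i+1) mod k}).  Each edge's
    successor is set independently: a single O(1)-time parallel step.
    """
    succ = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        for i, w in enumerate(nbrs):
            succ[(w, v)] = (v, nbrs[(i + 1) % k])
    return succ

def root_tree(adj, r):
    """Root the tree at r (reduction 1); assumes the tree has >= 2 nodes."""
    succ = euler_tour(adj)
    start = (r, adj[r][0])            # "open" the tour at r
    order, e = {}, start
    for t in range(len(succ)):        # a PRAM would use list ranking here
        order[e] = t
        e = succ[e]
    parent = {r: None}
    for (x, y) in order:
        if order[(x, y)] < order[(y, x)]:
            parent[y] = x             # y is first reached from x
    return parent

adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
assert root_tree(adj, 'a') == {'a': None, 'b': 'a', 'c': 'b'}
```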

3.7. Lowest common ancestors (LCA)

The problem is to preprocess a rooted tree T so that a lowest common ancestor (LCA) query for any two nodes x, y of T can be answered in constant time by one processor. The preprocessing is to be done in logarithmic time with n/log n EREW PRAM processors. The problem is easy when T is a simple path (list ranking does the job), or when it is a complete binary tree (it is then solved by comparing the binary representations of the inorder numbers of x and y, specifically finding the leftmost bit where they disagree). For a general tree, the problem is reduced to that of range minima: create an Euler tour of the tree, where each node v knows its first and last appearance on that tour as well as its depth (i.e., the distance from the root) in the tree. This reduces the problem of answering an LCA query to determining, in constant sequential time, the smallest entry between two given indices i, j in an array. This last problem is called the range-minima problem. The book [153] contains more details on the reduction of LCA to range minima, a solution to range minima, and references to the relevant literature for this problem.

The above list of basic subproblems is not exhaustive in that (i) many techniques that are basic for general combinatorial problems were omitted (we have focused only on those most relevant to geometric problems rather than to general combinatorial problems), and (ii) among the techniques applicable to geometric problems, we have postponed covering the more specialized ones.
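Returning to the LCA reduction just outlined, here is a sequential Python sketch of it; the recursive traversal and the sparse-table range-minimum structure are our implementation choices, standing in for the parallel Euler-tour and range-minima computations.

```python
def build_lca(adj, root):
    """Euler-tour + range-minima LCA (sequential sketch of the reduction).

    adj maps each node to its neighbor list.  We record the tour, each
    node's first appearance, and depths; a sparse table then answers
    range-minimum queries over the depth array in O(1), so lca() runs
    in constant time after the preprocessing.
    """
    euler, depth, first = [], [], {}

    def dfs(v, parent, d):
        first.setdefault(v, len(euler))
        euler.append(v); depth.append(d)
        for w in adj[v]:
            if w != parent:
                dfs(w, v, d + 1)
                euler.append(v); depth.append(d)

    dfs(root, None, 0)
    m = len(depth)
    table = [list(range(m))]           # table[j][i]: argmin on [i, i+2^j-1]
    j = 1
    while (1 << j) <= m:
        prev, cur = table[-1], []
        for i in range(m - (1 << j) + 1):
            a, b = prev[i], prev[i + (1 << (j - 1))]
            cur.append(a if depth[a] <= depth[b] else b)
        table.append(cur); j += 1

    def lca(x, y):
        i, k = sorted((first[x], first[y]))
        j = (k - i + 1).bit_length() - 1
        a, b = table[j][i], table[j][k - (1 << j) + 1]
        return euler[a if depth[a] <= depth[b] else b]

    return lca

lca = build_lca({0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}, 0)
assert lca(3, 2) == 0 and lca(3, 1) == 1
```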


4. Inherently sequential geometric problems

Most of the problems shown to be P-complete to date are not geometric (most are graph or algebra problems). This is no accident: geometric problems in the plane tend to have enough structure to enable membership in NC. Even the otherwise P-complete problem of linear programming [119,120] is in NC when restricted to the plane. In the rest of this section, we mention the (very few) planar geometric problems that are known to be P-complete, and also a problem that is conjectured to be P-complete. Several of the problems known to be P-complete involve a collection of line segments in the plane. The P-completeness proofs of the three problems in the following subsections were given in [28]; for the third problem see also [149]. The proofs consist of giving NC reductions from the monotone circuit value problem and planar circuit value problem, which are known to be P-complete [129,160,181]. These reductions typically involve the use of geometry to simulate a circuit, by utilizing the relative positions of objects in the plane.

4.1. Plane-sweep triangulation

One is given a simple n-vertex polygon P (which may contain holes) and asked to produce the triangulation that would be constructed by the following sequential algorithm: sweep the plane from left to right with a vertical line L, such that each time L encounters a vertex v of P one draws all diagonals of P from v that do not cross previously drawn diagonals. This problem is a special case of the well-known polygon triangulation problem (see [124,193]), and it clearly has a polynomial time sequential solution.

4.2. Weighted planar partitioning

Suppose one is given a collection of n non-intersecting line segments in the plane, such that each segment s is given a distinct weight w(s), and asked to construct the partitioning of the plane produced by extending the segments in the sorted order of their weights. The extension of a segment "stops" at the first segment (or segment extension) that is "hit" by the extension. This problem has applications to "art gallery problems" [98,186], and is P-complete even if there are only 3 possible orientations for the line segments. It is straightforward to solve it sequentially in O(n log^2 n) time (by using the dynamic point-location data structure of [194]), and in O(n log n) time by a more sophisticated method [98].

4.3. Visibility layers

One is given a collection of n non-intersecting line segments in the plane, and asked to label each segment by its "depth" in terms of the following layering process (which starts with i = 0): find the segments that are (partially) visible from the point (0, +∞), label each such segment as being at depth i, remove each such segment, increment i, and repeat until no segments are left. This is an example of a class of problems in computational geometry known as layering problems or onion peeling problems [63,165,187], and is P-complete even if all the segments are horizontal.


4.4. Open problems

Perhaps the most famous open problem in the area of geometric P-completeness is that of the convex layers problem [63]: given n points in the plane, mark the points on the convex hull of the n points as being layer zero, then remove layer zero and repeat the process, generating layers 1, 2, ..., etc. See Figure 3 for an example.

Fig. 3. The convex layers of a point set in the plane.

Some recent work has shown that a generalization of the convex layers problem, called multi-list ranking [114], is indeed P-complete. Although the results in [114] shed some light on the convex layers problem, the P-completeness of the convex layers problem remains open. In view of the P-completeness of the above-mentioned visibility layers problem, it seems reasonable to conjecture that the convex layers problem is also P-complete; however, not all layering problems are P-complete. For example, the layers of maxima problem (defined analogously to convex layers but with the words "maximal elements" playing the role of "convex hull") is easily shown to be in NC by a straightforward reduction to the computation of longest paths in a directed acyclic graph [31] (each input point is a vertex, and there is a directed edge from point p to point q iff both the x and y coordinates of p are ≤ those of q, respectively).
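For intuition, the layers-of-maxima labeling can be phrased exactly as such a longest-path computation. The following O(n^2) Python sketch is ours (the chapter's point is only that the reduction is in NC); it assumes distinct coordinates.

```python
def layers_of_maxima(points):
    """Label each point with its layer via longest dominating chains.

    layer(p) is the length of the longest chain p = p_0, p_1, ..., p_t
    in which each p_{i+1} dominates p_i in both coordinates; maximal
    points get layer 0.  Assumes distinct x- and y-coordinates.
    """
    pts = sorted(points)                       # increasing x
    n = len(pts)
    layer = [0] * n
    for i in range(n - 1, -1, -1):             # sinks (maxima) first
        for j in range(i + 1, n):
            if pts[j][1] >= pts[i][1]:         # pts[j] dominates pts[i]
                layer[i] = max(layer[i], layer[j] + 1)
    return dict(zip(pts, layer))

print(layers_of_maxima([(0, 0), (1, 2), (2, 1), (3, 3)]))
# {(0, 0): 2, (1, 2): 1, (2, 1): 1, (3, 3): 0}
```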

5. Parallel divide and conquer

As in sequential algorithms, divide and conquer is a useful technique in parallel computation. Parallel divide and conquer comes in many flavors, which we discuss next.

5.1. Two-way divide and conquer

The sequential divide and conquer algorithms that have efficient PRAM implementations are those for which the "conquer" step can be done extremely fast (e.g., in constant time). Take, for example, an O(n log n) time sequential algorithm that works by recursively solving two problems of size n/2 each, and then combining the answers they return in linear time. In order for a PRAM implementation of such an algorithm to run in O(log n) time with n processors, the n processors must be capable of performing the "combine" stage in constant time. For some geometric problems this is indeed possible (e.g., the convex hull


problem [43,224]). The time complexity T(n) and processor complexity P(n) of such a PRAM implementation then obey the recurrences

T(n) ≤ T(n/2) + c_1,
P(n) ≤ max{n, 2P(n/2)},

with boundary conditions T(1) ≤ c_2 and P(1) = 1, where c_1 and c_2 are positive constants. These imply that T(n) = O(log n) and P(n) = n. But for many problems, such an attempt at implementing a sequential algorithm fails because of the impossibility of performing the "conquer" stage in constant time. For these, the next approach often works.

5.2. "Rootish" divide and conquer

By "rootish", we mean partitioning a problem into n^{1/k} subproblems to be solved recursively in parallel, for some positive constant integer k (usually, k = 2). For example, instead of dividing the problem into two subproblems of size n/2 each, we divide it into (say) √n subproblems of size √n each, which we recursively solve in parallel. That the conquer stage takes O(log n) time (assuming it does) causes no harm with this subdivision scheme, since the time and processor recurrences in that case would be

T(n) ≤ T(√n) + c_1 log n,
P(n) ≤ max{n, √n · P(√n)},

with boundary conditions T(1) ≤ c_2 and P(1) = 1, where c_1 and c_2 are positive constants. These imply that T(n) = O(log n) and P(n) = n. The problems that can be solved using the rootish divide and conquer scheme include the convex hull of points [4,42,131], the visibility of non-intersecting planar segments from a point [53], the visibility of a polygonal chain from a point [36], the convex hull of a simple polygon [71], detecting various types of weak visibility of a simple polygon [69,72-74], triangulating useful classes of simple polygons [71,132], and the determination of monotonicity of a simple polygon [76]. This scheme also finds success in other computational models such as the hypercube [32,70]. The scheme is useful in various ways and forms, and sometimes with recurrences very different from the above-mentioned ones, as the next example demonstrates.
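To see why the time recurrence still gives O(log n): each recursion level halves the logarithm of the problem size, so the per-level costs form a geometric series. The following unrolling is a standard calculation, spelled out here for convenience:

```latex
T(n) \le c_1 \log n + T\big(n^{1/2}\big)
     \le c_1 \Big(\log n + \tfrac{1}{2}\log n + \tfrac{1}{4}\log n + \cdots\Big) + c_2
     \le 2 c_1 \log n + c_2 = O(\log n).
```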

5.3. Example: visibility in a polygon

Given a source point q and a simple n-vertex polygonal chain P in the plane, this visibility problem is that of finding all the points of P that are visible from q if P is "opaque". We seek an algorithm that takes O(log n) time and uses O(n/log n) EREW PRAM processors,


which is optimal to within a constant factor because (i) there is an obvious Ω(n) sequential lower bound for the problem, and (ii) an Ω(log n) lower bound on its EREW PRAM time complexity can be obtained by reducing to it the problem of computing the maximum of n entries, a problem with a known logarithmic time lower bound [94]. This is one instance of a problem in which one has to use a hybrid of two-way divide and conquer and rootish divide and conquer, in order to obtain the desired complexity bounds. The recursive procedure we sketch below follows [36]; it takes two input parameters (one of which is the problem size) and uses either fourth-root divide and conquer or two-way divide and conquer, depending on the relative sizes of these two parameters. The role played by the geometry is central to the "combine" step (other algorithms of this kind can be found in [68,71-74,130-132] for solving many problems on polygons and point sets). We call VisChain the recursive procedure for computing the visibility chain of a simple polygonal chain from the given source point q (see Figure 4 for an example of the visibility chain VIS(C) from the point q = (0, ∞)).

Fig. 4. The visibility chain VIS(C) of a polygonal chain C from the point q = (0, ∞).

The procedure is outlined below. Let |C| denote the number of vertices of a polygonal chain C. The initial call to the procedure is VisChain(P, n, log n), where P is a simple polygonal chain and n = |P|.

VisChain(C, m, d)
Input: A simple polygonal chain C, m = |C|, and a positive integer d of our choice.
Output: The visibility chain of C, VIS(C), from the point q.
Step 1. If m ≤ d, then compute VIS(C) with one processor in O(m) time, using any of the known sequential linear-time algorithms.
Step 2. If d …

References

[111] … d-dimensional convex hull of a set of points on a mesh of processors, Proc. Scandinavian Workshop on Algorithm Theory (1988), 154-162.
[112] F. Dehne and I. Stojmenovic, An O(√n) time algorithm for the ECDF searching problem for arbitrary dimensions on a mesh of processors, Inform. Process. Lett. 28 (1988), 67-70.
[113] X. Deng, An optimal parallel algorithm for linear programming in the plane, Inform. Process. Lett. 35 (1990), 213-217.
[114] A. Dessmark, A. Lingas and A. Maheshwari, Multi-list ranking: Complexity and applications, Theoret. Comput. Sci. 141 (1995), 337-350.
[115] O. Devillers and A. Fabri, Scalable algorithms for bichromatic line segment intersection problems on coarse grained multicomputers, Internat. J. Comput. Geom. Appl. 6 (1996), 487-506.
[116] D.P. Dobkin and D.G. Kirkpatrick, Fast detection of polyhedral intersections, Theoret. Comput. Sci. 27 (1983), 241-253.
[117] D.P. Dobkin and D.G. Kirkpatrick, A linear time algorithm for determining the separation of convex polyhedra, J. Algorithms 6 (1985), 381-392.
[118] D.P. Dobkin and D.G. Kirkpatrick, Determining the separation of preprocessed polyhedra — a unified approach, Proc. of the Int. Colloq. on Automata, Lang. and Programming (1990), 154-165.
[119] D.P. Dobkin, R.J. Lipton and S. Reiss, Linear programming is log-space hard for P, Inform. Process. Lett. 9 (1979), 96-97.
[120] D.P. Dobkin and S. Reiss, The complexity of linear programming, Theoret. Comput. Sci. 11 (1980), 1-18.
[121] J.R. Driscoll, N. Sarnak, D.D. Sleator and R.E. Tarjan, Making data structures persistent, Proc. 18th Annual ACM Symp. on Theory of Computing (1986), 109-121.


[122] M.E. Dyer, Linear time algorithms for two- and three-variable linear programs, SIAM J. Comput. 13 (1984), 31-45.
[123] M.E. Dyer, A parallel algorithm for linear programming in fixed dimension, Proc. 11th Annual Symp. on Computational Geometry (1995), 345-349.
[124] H. Edelsbrunner, Algorithms in Combinatorial Geometry, Springer-Verlag, New York (1987).
[125] H. ElGindy and M.T. Goodrich, Parallel algorithms for shortest path problems in polygons, The Visual Computer 3 (1988), 371-378.
[126] G.N. Frederickson and D.B. Johnson, The complexity of selection and ranking in X + Y and matrices with sorted columns, J. Comput. System Sci. 24 (1982), 197-208.
[127] M. Ghouse and M.T. Goodrich, Fast randomized parallel methods for planar convex hull construction, Comput. Geom.: Theory and Applications, to appear.
[128] A. Gibbons and W. Rytter, An optimal parallel algorithm for dynamic expression evaluation and its applications, Proc. Symp. on Found. of Software Technology and Theoretical Comp. Sci., Springer-Verlag (1986), 453-469.
[129] L.M. Goldschlager, The monotone and planar circuit value problems are log space complete for P, SIGACT News 9 (1977), 25-29.
[130] M.T. Goodrich, Efficient parallel techniques for computational geometry, PhD thesis, Department of Computer Sciences, Purdue University (1987).
[131] M.T. Goodrich, Finding the convex hull of a sorted point set in parallel, Inform. Process. Lett. 26 (1987), 173-179.
[132] M.T. Goodrich, Triangulating a polygon in parallel, J. Algorithms 10 (1989), 327-351.
[133] M.T. Goodrich, Intersecting line segments in parallel with an output-sensitive number of processors, SIAM J. Comput. 20 (1991), 737-755.
[134] M.T. Goodrich, Using approximation algorithms to design parallel algorithms that may ignore processor allocation, Proc. 32nd IEEE Symp. on Foundations of Computer Science (1991), 711-722.
[135] M.T. Goodrich, Constructing arrangements optimally in parallel, Discrete Comput. Geom. 9 (1993), 371-385.
[136] M.T. Goodrich, Geometric partitioning made easier, even in parallel, Proc. 9th Annual ACM Symp. Computational Geometry (1993), 73-82.
[137] M.T. Goodrich, Planar separators and parallel polygon triangulation, J. Comput. System Sci. 51 (1995), 374-389.
[138] M.T. Goodrich, Fixed-dimensional parallel linear programming via relative epsilon-approximations, Proc. 7th Annual ACM-SIAM Symp. Discrete Algorithms (1996), 132-141.
[139] M.T. Goodrich, Parallel computational geometry, CRC Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press, Inc. (1997), 669-682.
[140] M.T. Goodrich, M. Ghouse and J. Bright, Sweep methods for parallel computational geometry, Algorithmica 15, 126-153.
[141] M.T. Goodrich, C. O'Dunlaing and C. Yap, Computing the Voronoi diagram of a set of line segments in parallel, Algorithmica 9 (1993), 128-141.
[142] M.T. Goodrich and E.A. Ramos, Bounded-independence derandomization of geometric partitioning with applications to parallel fixed-dimensional linear programming, Discrete Comput. Geom., to appear.
[143] M.T. Goodrich, S.B. Shauck and S. Guha, Parallel methods for visibility and shortest path problems in simple polygons, Algorithmica 8 (1992), 461-486.
[144] S. Guha, Parallel computation of internal and external farthest neighbours in simple polygons, Internat. J. Comput. Geom. Appl. 2 (1992), 175-190.
[145] S. Guha, Optimal mesh computer algorithms for simple polygons, Proc. 7th International Parallel Processing Symp., Newport Beach, California (April 1993), 182-187.
[146] N. Gupta and S. Sen, Faster output-sensitive parallel convex hulls for d ≤ 3: Optimal sublogarithmic algorithms for small outputs, Proc. 12th Annual ACM Symp. Computational Geometry (1996), 176-185.
[147] T. Hagerup and C. Rüb, Optimal merging and sorting on the EREW-PRAM, Inform. Process. Lett. 33 (1989), 181-185.
[148] D. Haussler and E. Welzl, Epsilon-nets and simplex range queries, Discrete Comput. Geom. 2 (1987), 127-151.

198

M.J. Atallah and D.Z. Chen

[149] J. Hershberger, Upper envelope onion peeling, Proc. 2nd Scandinavian Workshop on Algorithm Theory, Springer-Verlag (1990), 368-379.
[150] J. Hershberger, Optimal parallel algorithms for triangulated simple polygons, Proc. 8th Annual ACM Symp. Computational Geometry (1992), 33-42.
[151] J.A. Holey and O.H. Ibarra, Triangulation in a plane and 3-d convex hull on mesh-connected arrays and hypercubes, Tech. Rep., University of Minnesota, Dept. of Computer Science (1990).
[152] J.-W. Hong and H.T. Kung, I/O complexity: The red-blue pebble game, Proc. 13th Annual ACM Symp. on Theory of Computing (1981), 326-333.
[153] J. JaJa, An Introduction to Parallel Algorithms, Addison-Wesley, Reading, MA (1992).
[154] C.S. Jeong and D.T. Lee, Parallel geometric algorithms for a mesh-connected computer, Algorithmica 5 (1990), 155-177.
[155] S.L. Johnsson, Combining parallel and sequential sorting on a Boolean n-cube, Proc. International Conf. on Parallel Processing (1984), 444-448.
[156] R.M. Karp and V. Ramachandran, Parallel algorithms for shared-memory machines, Handbook of Theoretical Computer Science, J. van Leeuwen, ed., Vol. 1, Elsevier Science Publishers (1990).
[157] D.G. Kirkpatrick, Optimal search in planar subdivisions, SIAM J. Comput. 12 (1983), 28-35.
[158] S.R. Kosaraju and A. Delcher, Optimal parallel evaluation of tree-structured computations by raking, Lecture Notes in Comput. Sci. 319: VLSI Algorithms and Architectures, 3rd Aegean Workshop on Computing, Springer-Verlag (1988), 101-110.
[159] C.P. Kruskal, L. Rudolph and M. Snir, The power of parallel prefix, IEEE Trans. Comput. C-34 (1985), 965-968.
[160] C.P. Kruskal, L. Rudolph and M. Snir, A complexity theory of efficient parallel algorithms, Lecture Notes in Comput. Sci. 317, Proc. 15th Coll. on Autom., Lang. and Prog., Springer-Verlag (1988), 333-346.
[161] V. Kumar and V. Singh, Scalability of parallel algorithms for the all-pairs shortest-path problem, J. Parallel Distribut. Comput. 13 (1991), 124-138.
[162] M. Kunde, Optimal sorting on multidimensional mesh-connected computers, Proc. 4th Symp. on Theoretical Aspects of Computer Science, Lecture Notes in Comput. Sci., Springer (1987), 408-419.
[163] H.T. Kung, F. Luccio and F.P. Preparata, On finding the maxima of a set of vectors, J. ACM 22 (4) (1975), 469-476.
[164] R.E. Ladner and M.J. Fischer, Parallel prefix computation, J. ACM 27 (1980), 831-838.
[165] D.T. Lee and F.P. Preparata, Computational geometry — a survey, IEEE Trans. Comput. C-33 (1984), 1072-1101.
[166] D.T. Lee, F.P. Preparata, C.S. Jeong and A.L. Chow, SIMD parallel convex hull algorithms, Tech. Rep. AC-91-02, Northwestern University, Dept. of Electrical Eng. and Computer Science (1991).
[167] F.T. Leighton, An Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes, Morgan Kaufmann Publishers, San Mateo, CA (1992).
[168] C. Levcopoulos, J. Katajainen and A. Lingas, An optimal expected time algorithm for Voronoi diagrams, Proc. 1st Scandinavian Workshop on Algorithm Theory, Springer-Verlag (1988).
[169] Z. Li and V. Milenkovic, Constructing strongly convex hulls using exact or rounded arithmetic, Algorithmica 8 (1992), 345-364.
[170] A. Lingas, A. Maheshwari and J.-R. Sack, Optimal parallel algorithms for rectilinear link distance problems, Algorithmica 14 (1995), 261-289.
[171] P.D. MacKenzie and Q. Stout, Asymptotically efficient hypercube algorithms for computational geometry, Proc. 3rd Symp. on the Frontiers of Massively Parallel Computation (1990), 8-11.
[172] J.M. Marberg and E. Gafni, Sorting in constant number of row and column phases on a mesh, Proc. 24th Annual Allerton Conf. on Communication, Control and Computing, Monticello, Illinois (1986), 603-612.
[173] J. Matousek, Epsilon-nets and computational geometry, New Trends in Discrete and Computational Geometry, J. Pach, ed., Algorithms and Combinatorics, Vol. 10, Springer-Verlag (1993), 69-89.
[174] N. Megiddo, Linear time algorithms for linear programming in R^3 and related problems, SIAM J. Comput. 12 (1983), 759-776.
[175] G.L. Miller and J.H. Reif, Parallel tree contraction and its applications, Proc. 26th IEEE Symp. on Foundations of Comp. Sci. (1985), 478-489.
[176] R. Miller and S.E. Miller, Convexity algorithms for digitized pictures on an Intel iPSC hypercube, Supercomputer J. 31 (VI-3) (1989), 45-53.


[177] R. Miller and Q.F. Stout, Geometric algorithms for digitized pictures on a mesh-connected computer, IEEE Trans. PAMI 7 (1985), 216-228.
[178] R. Miller and Q.F. Stout, Efficient parallel convex hull algorithms, IEEE Trans. Comput. C-37 (1988), 1605-1618.
[179] R. Miller and Q.F. Stout, Mesh computer algorithms for computational geometry, IEEE Trans. Comput. C-38 (1989), 321-340.
[180] R. Miller and Q.F. Stout, Parallel Algorithms for Regular Architectures, The MIT Press, Cambridge, Massachusetts (1991).
[181] S. Miyano, S. Shiraishi and T. Shoudai, A list of P-complete problems, Technical Report RIFIS-TR-CS-17, Kyushu University (1989).
[182] R. Motwani, J. Naor and M. Naor, The probabilistic method yields deterministic parallel algorithms, Proc. 30th Annual IEEE Symp. Found. Comput. Sci. (1989), 8-13.
[183] H. Mueller, Sorting numbers using limited systolic coprocessors, Inform. Process. Lett. 24 (1987), 351-354.
[184] K. Mulmuley, Computational Geometry: An Introduction through Randomized Algorithms, Prentice-Hall, New Jersey (1994).
[185] D. Nassimi and S. Sahni, Data broadcasting in SIMD computers, IEEE Trans. Comput. 30 (1981), 101-106.
[186] J. O'Rourke, Art Gallery Theorems and Algorithms, Oxford University Press (1987).
[187] J. O'Rourke, Computational geometry, Ann. Rev. Comp. Sci. 3 (1988), 389-411.
[188] J. O'Rourke, Computational Geometry in C, Cambridge University Press (1993).
[189] I. Parberry, Parallel Complexity Theory, Pitman, London (1987).
[190] W. Paul, U. Vishkin and H. Wagener, Parallel dictionaries on 2-3 trees, Proc. 10th Coll. on Autom., Lang. and Prog., Lecture Notes in Comput. Sci. 154, Springer, Berlin (1983), 597-609.
[191] C.G. Plaxton, Load balance, selection and sorting on the hypercube, Proc. 1st Annual ACM Symp. on Parallel Algorithms and Architectures (1989), 64-73.
[192] C.G. Plaxton, On the network complexity of selection, Proc. 30th Annual IEEE Symp. on Foundations of Computer Science (1989), 396-401.
[193] F.P. Preparata and M.I. Shamos, Computational Geometry: An Introduction, Springer-Verlag (1985).
[194] F.P. Preparata and R. Tamassia, Fully dynamic techniques for point location and transitive closure in planar structures, Proc. 29th IEEE Symp. on Foundations of Computer Science (1988), 558-567.
[195] R. Raman and U. Vishkin, Optimal parallel algorithms for totally monotone matrix searching, Proc. 5th Annual ACM-SIAM Symp. on Discrete Algorithms (1994), 613-621.
[196] E.A. Ramos, Construction of 1-d lower envelopes and applications, Proc. 13th Annual ACM Symp. Computational Geometry (1997), 57-66.
[197] J.H. Reif and S. Sen, Optimal randomized parallel algorithms for computational geometry, Algorithmica 7 (1992), 91-117.
[198] J.H. Reif and S. Sen, Polling: A new random sampling technique for computational geometry, SIAM J. Comput. 21 (1992), 466-485.
[199] J.H. Reif and S. Sen, Randomized algorithms for binary search and load balancing on fixed connection networks with geometric applications, SIAM J. Comput. 23 (1994), 633-651.
[200] J.H. Reif and Q.F. Stout, Manuscript.
[201] J.H. Reif and L. Valiant, A logarithmic time sort for linear size networks, J. ACM 34 (1987), 60-76.
[202] C. Rüb, Line-segment intersection reporting in parallel, Algorithmica 8 (1992), 119-144.
[203] K.W. Ryu and J. JaJa, Efficient algorithms for list ranking and for solving graph problems on the hypercube, IEEE Trans. Parallel and Distributed Systems 1 (1990), 83-90.
[204] S. Sairam, R. Tamassia and J.S. Vitter, An efficient parallel algorithm for shortest paths in planar layered digraphs, Algorithmica 14 (1995), 322-339.
[205] J.L.C. Sanz and R. Cypher, Data reduction and fast routing: A strategy for efficient algorithms for message-passing parallel computers, Algorithmica 7 (1992), 77-89.
[206] B. Schieber, Computing a minimum-weight k-link path in graphs with the concave Monge property, Proc. 6th Annual ACM-SIAM Symp. on Discrete Algorithms (1995), 405-411.
[207] C.P. Schnorr and A. Shamir, An optimal sorting algorithm for mesh-connected computers, Proc. 18th ACM Symp. on Theory of Computing (1986), 255-261.


[208] S. Sen, A deterministic poly(log log n) time optimal CRCW PRAM algorithm for linear programming in fixed dimension, Technical Report 95-08, Dept. of Computer Science, University of Newcastle (1995).
[209] S. Sen, Parallel multidimensional search using approximation algorithms: With applications to linear programming and related problems, Proc. 8th ACM Symp. Parallel Algorithms and Architectures (1996), 251-260.
[210] M. Sharir and P.K. Agarwal, Davenport-Schinzel Sequences and Their Geometric Applications, Cambridge University Press, New York (1995).
[211] Y. Shiloach and U. Vishkin, Finding the maximum, merging, and sorting in a parallel computation model, J. Algorithms 2 (1981), 88-102.
[212] I. Stojmenovic, Manuscript (1988).
[213] Q.F. Stout, Constant-time geometry on PRAMs, Proc. 1988 Int'l. Conf. on Parallel Computing, Vol. III, IEEE, 104-107.
[214] R. Tamassia and J.S. Vitter, Parallel transitive closure and point location in planar structures, SIAM J. Comput. 20 (1991), 708-725.
[215] R. Tamassia and J.S. Vitter, Optimal cooperative search in fractional cascaded data structures, Algorithmica 15 (1996), 154-171.
[216] R.E. Tarjan and U. Vishkin, Finding biconnected components and computing tree functions in logarithmic parallel time, SIAM J. Comput. 14 (1985), 862-874.
[217] C.D. Thompson and H.T. Kung, Sorting on a mesh-connected parallel computer, Comm. ACM 20 (1977), 263-271.
[218] G.T. Toussaint, Solving geometric problems with rotating calipers, Proc. IEEE MELECON '83, Athens, Greece (May 1983).
[219] J.-J. Tsay, Optimal medium-grained parallel algorithms for geometric problems, Technical Report 942, Dept. of Computer Sciences, Purdue University (1990).
[220] J.-J. Tsay, Parallel algorithms for geometric problems on networks of processors, Proc. 5th IEEE Symp. on Parallel and Distributed Processing, Dallas, Texas (Dec. 1993), 200-207.
[221] P. Vaidya, Personal communication.
[222] L. Valiant, Parallelism in comparison problems, SIAM J. Comput. 4 (1975), 348-355.
[223] V.N. Vapnik and A.Y. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities, Theory Probab. Appl. 16 (1971), 264-280.
[224] H. Wagener, Optimally parallel algorithms for convex hull determination, Manuscript (1985).
[225] H. Wagener, Optimal parallel hull construction for simple polygons in O(log log n) time, Proc. 33rd Annual IEEE Sympos. Found. Comput. Sci. (1992), 593-599.
[226] D.E. Willard and Y.C. Wee, Quasi-valid range querying and its implications for nearest neighbor problems, Proc. 4th Annual ACM Symp. on Computational Geometry (1988), 34-43.
[227] C.K. Yap, Parallel triangulation of a polygon in two calls to the trapezoidal map, Algorithmica 3 (1988), 279-288.

CHAPTER 5

Voronoi Diagrams*

Franz Aurenhammer
Institut für Grundlagen der Informationsverarbeitung, Technische Universität Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria

Rolf Klein
Praktische Informatik VI, FernUniversität Hagen, Informatikzentrum, D-58084 Hagen, Germany

Contents
1. Introduction
2. Definitions and elementary properties
3. Algorithms
3.1. A lower bound
3.2. Incremental construction
3.3. Divide & conquer
3.4. Sweep
3.5. Lifting to 3-space
4. Generalizations and structural properties
4.1. Characterization of Voronoi diagrams
4.2. Optimization properties of Delaunay triangulations
4.3. Higher dimensions, power diagrams, and order-k diagrams
4.4. Generalized sites
4.5. Generalized spaces and distances
4.6. General Voronoi diagrams
5. Geometric applications
5.1. Distance problems
5.2. Subgraphs of Delaunay triangulations
5.3. Geometric clustering
5.4. Motion planning
6. Concluding remarks and open problems
References

* Partially supported by the Deutsche Forschungsgemeinschaft, grant Kl 655 2-2.


1. Introduction

The topic of this chapter, Voronoi diagrams, differs from other areas of computational geometry in that its origin dates back to the 17th century. In his book on the principles of philosophy [87], R. Descartes claims that the solar system consists of vortices. His illustrations show a decomposition of space into convex regions, each consisting of matter revolving round one of the fixed stars; see Figure 1. Even though Descartes has not explicitly defined the extension of these regions, the underlying idea seems to be the following. Let a space M and a set S of sites p in M be given, together with a notion of the influence a site p exerts on a point x of M. Then the region of p consists of all points x for which the influence of p is the strongest, over all s ∈ S.

Fig. 1. Descartes' decomposition of space into vortices.

This concept has independently emerged, and proven useful, in various fields of science. Different names particular to the respective field have been used, such as medial axis transform in biology and physiology, Wigner-Seitz zones in chemistry and physics, domains of action in crystallography, and Thiessen polygons in meteorology and geography. The mathematicians Dirichlet [95] and Voronoi [253,252] were the first to formally introduce this concept. They used it for the study of quadratic forms; here the sites are integer lattice points, and influence is measured by the Euclidean distance. The resulting structure has been called Dirichlet tessellation or Voronoi diagram, which has become its standard name today. Voronoi [253] was the first to consider the dual of this structure, where any two point sites are connected whose regions have a boundary in common. Later, Delaunay [86] obtained the same by defining that two point sites are connected iff (i.e., if and only if) they lie on a circle whose interior contains no point of S. After him, the dual of the Voronoi diagram has been denoted Delaunay tessellation or Delaunay triangulation.

Besides its applications in other fields of science, the Voronoi diagram and its dual can be used for solving numerous, and surprisingly different, geometric problems. Moreover, these structures are very appealing, and a lot of research has been devoted to their study (about one out of 16 papers in computational geometry), ever since Shamos and Hoey [232] introduced them to the field. The reader interested in a complete overview of the existing literature should consult the book by Okabe et al. [210], who list more than 600 papers, and the surveys by Aurenhammer [27], Bernal [39], and Fortune [124]. Also, Chapters 5 and 6 of Preparata and Shamos [215] and Chapter 13 of Edelsbrunner [104] could be consulted.

Within one chapter, we cannot review all known results and applications. Instead, we are trying to highlight the intrinsic potential of Voronoi diagrams, which lies in their structural properties, in the existence of efficient algorithms for their construction, and in their adaptability. We start in Section 2 with a simple case: the Voronoi diagram and the Delaunay triangulation of n points in the plane, under the Euclidean distance. We state elementary structural properties that follow directly from the definitions. Further properties will be revealed in Section 3, where different algorithmic schemes for computing these structures are presented. In Section 4 we complete our presentation of the classical two-dimensional case, and turn to generalizations. Next, in Section 5, important geometric applications of the Voronoi diagram and the Delaunay triangulation are discussed.
The reader who is mainly


interested in these applications can proceed directly to Section 5, after reading Section 2. Finally, Section 6 concludes the chapter and mentions some open problems.

2. Definitions and elementary properties

Throughout this section we denote by S a set of n ≥ 3 point sites p, q, r, ... in the plane. For points p = (p_1, p_2) and x = (x_1, x_2) let

d(p, x) = √((p_1 − x_1)² + (p_2 − x_2)²)

denote their Euclidean distance. By pq we denote the line segment from p to q. The closure of a set A will be denoted by Ā.

DEFINITION 2.1. For p, q ∈ S let

B(p, q) = {x | d(p, x) = d(q, x)}


be the bisector of p and q. B(p, q) is the perpendicular line through the center of the line segment pq. It separates the halfplane

D(p, q) = {x | d(p, x) < d(q, x)}

containing p from the halfplane D(q, p) containing q. We call

VR(p, S) = ⋂_{q ∈ S, q ≠ p} D(p, q)

the Voronoi region of p with respect to S. Finally, the Voronoi diagram of S is defined by

V(S) = ⋃_{p, q ∈ S, p ≠ q} VR(p, S)‾ ∩ VR(q, S)‾,

where the bar denotes closure.

By definition, each Voronoi region VR(p, S) is the intersection of n − 1 open halfplanes containing the site p. Therefore, VR(p, S) is open and convex. Different Voronoi regions are disjoint. The common boundary of two Voronoi regions belongs to V(S) and is called a Voronoi edge, if it contains more than one point. If the Voronoi edge e borders the regions of p and q, then e ⊂ B(p, q) holds. Endpoints of Voronoi edges are called Voronoi vertices; they belong to the common boundary of three or more Voronoi regions.

There is an intuitive way of looking at the Voronoi diagram V(S). Let x be an arbitrary point in the plane. We center a circle, C, at x and let its radius grow, from 0 on. At some stage the expanding circle will, for the first time, hit one or more sites of S. Now there are three different cases.

LEMMA 2.1. If the circle C expanding from x hits exactly one site, p, then x belongs to VR(p, S). If C hits exactly two sites, p and q, then x is an interior point of a Voronoi edge separating the regions of p and q. If C hits three or more sites simultaneously, then x is a Voronoi vertex adjacent to those regions whose sites have been hit.

PROOF. If only site p is hit, then p is the unique element of S closest to x. Consequently, x ∈ D(p, r) holds for each site r ∈ S with r ≠ p. If C hits exactly p and q, then x is contained in each halfplane D(p, r), D(q, r), where r ∉ {p, q}, and in B(p, q), the common boundary of D(p, q) and D(q, p). By Definition 2.1, x belongs to the closure of the regions of both p and q, but of no other site in S. In the third case, the argument is analogous. □

This lemma shows that the Voronoi regions form a decomposition of the plane; see Figure 2. Conversely, if we imagine n circles expanding from the sites at the same speed, the fate of each point x of the plane is determined by those sites whose circles reach x first. This "expanding waves" view has been systematically used by Chew and Drysdale [66] and Thurston [248].
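As a small illustration of Lemma 2.1, the following sketch (plain Python, written for this chapter and not part of any cited implementation) classifies a query point by the set of sites its expanding circle hits first; the tolerance eps stands in for exact arithmetic.

```python
def classify(x, sites, eps=1e-9):
    """Classify point x against the Voronoi diagram of `sites` (Lemma 2.1).

    Returns the sites hit first by a circle expanding from x:
    one site  -> x lies in that site's open Voronoi region,
    two sites -> x lies in the interior of a Voronoi edge,
    three or more -> x is a Voronoi vertex.
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    dmin = min(dist2(x, p) for p in sites)
    # All sites whose distance to x equals the minimum (up to eps).
    return [p for p in sites if dist2(x, p) <= dmin + eps]

sites = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
print(classify((0.2, 0.1), sites))   # one site: inside a region
print(classify((1.0, 0.0), sites))   # two sites: on a Voronoi edge
```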


Fig. 2. A Voronoi diagram of 11 points in the Euclidean plane.

The Voronoi vertices are of degree at least three, by Lemma 2.1. Vertices of degree higher than three do not occur if no four point sites are cocircular. The Voronoi diagram V(S) is disconnected if all point sites are collinear; in this case it consists of parallel lines.

From the Voronoi diagram of S one can easily derive the convex hull of S, i.e. the boundary of the smallest convex set containing S.

LEMMA 2.2. A point p of S lies on the convex hull of S iff its Voronoi region VR(p, S) is unbounded.

PROOF. The Voronoi region of p is unbounded iff there exists some point q ∈ S such that V(S) contains an unbounded piece of B(p, q) as a Voronoi edge. Let x ∈ B(p, q), and let C(x) denote the circle through p and q centered at x, as shown in Figure 3. Point x belongs to V(S) iff C(x) contains no other site. As we move x to the right along B(p, q), the part of C(x) contained in halfplane R keeps growing. If there is another site r in R, it will eventually be reached by C(x), causing the Voronoi edge to end at x. Otherwise, all other sites of S must be contained in the closure of the left halfplane L. Then p and q both lie on the convex hull of S. □

Sometimes it is convenient to imagine a simple closed curve Γ around the "interesting" part of the Voronoi diagram, so large that it intersects only the unbounded Voronoi edges; see Figure 2. While walking along Γ, the vertices of the convex hull of S can be reported in cyclic order. After removing the halflines outside Γ, a connected embedded planar graph with n + 1 faces results. Its faces are the n Voronoi regions and the unbounded face outside Γ. We call this graph the finite Voronoi diagram.
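Lemma 2.2 is easy to check experimentally. The following sketch (our own illustration, assuming NumPy and SciPy are available) compares the sites with unbounded Voronoi regions against the convex hull vertices:

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

rng = np.random.default_rng(0)
pts = rng.random((30, 2))

vor = Voronoi(pts)
# In scipy, a region is unbounded iff its vertex list contains the index -1.
unbounded = {i for i, reg in enumerate(vor.point_region)
             if -1 in vor.regions[reg]}

hull = set(ConvexHull(pts).vertices)
assert unbounded == hull   # Lemma 2.2
print(sorted(unbounded))
```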

Fig. 3. As x moves to the right, the intersection of circle C(x) with the left halfplane L shrinks, while C(x) ∩ R grows.

One virtue of the Voronoi diagram is its small size.

LEMMA 2.3. The Voronoi diagram V(S) has O(n) many edges and vertices. The average number of edges in the boundary of a Voronoi region is less than 6.

PROOF. By the Euler formula (see, e.g. [129]) for planar graphs, the following relation holds for the numbers v, e, f, and c of vertices, edges, faces, and connected components:

v − e + f = 1 + c.

We apply this formula to the finite Voronoi diagram. Each vertex has at least three incident edges; by adding up we obtain e ≥ 3v/2, because each edge is counted twice. Substituting this inequality together with c = 1 and f = n + 1 yields

v ≤ 2n − 2  and  e ≤ 3n − 3.

Adding up the numbers of edges contained in the boundaries of all n + 1 faces results in 2e ≤ 6n − 6 because each edge is again counted twice. Thus, the average number of edges in a region's boundary is bounded by (6n − 6)/(n + 1) < 6. The same bounds apply to V(S). □
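These counts can be observed directly on random instances; a small sketch (our own check, relying on scipy.spatial and counting only the finite vertices and the ridges reported by qhull):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
for n in (10, 100, 1000):
    vor = Voronoi(rng.random((n, 2)))
    v = len(vor.vertices)          # finite Voronoi vertices
    e = len(vor.ridge_points)      # Voronoi edges (bounded or unbounded)
    # Consistent with Lemma 2.3: v <= 2n - 2 and e <= 3n - 3.
    print(n, v, e, v <= 2 * n - 2, e <= 3 * n - 3)
```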


Fig. 4. Voronoi diagram and Delaunay tessellation.

Now we turn to the Delaunay tessellation. In general, a triangulation of S is a planar graph with vertex set S and straight line edges, which is maximal in the sense that no further straight line edge can be added without crossing other edges. Each triangulation of S contains the edges of the convex hull of S. Its bounded faces are triangles, due to maximality. Their number equals 2n − k − 2, where k denotes the size of the convex hull. We call a connected subset of edges of a triangulation a tessellation of S if it contains the edges of the convex hull, and if each point of S has at least two adjacent edges.

DEFINITION 2.2. The Delaunay tessellation DT(S) is obtained by connecting with a line segment any two points p, q of S for which a circle C exists that passes through p and q and does not contain any other site of S in its interior or boundary. The edges of DT(S) are called Delaunay edges.

The following equivalent characterization is a direct consequence of Lemma 2.1.

LEMMA 2.4. Two points of S are joined by a Delaunay edge iff their Voronoi regions are edge-adjacent.

Since each Voronoi region has at least two neighbors, at least two Delaunay edges must emanate from each point of S. By the proof of Lemma 2.2, each edge of the convex hull of S is Delaunay. Finally, two Delaunay edges can only intersect at their endpoints, because they allow for circumcircles whose respective closures do not contain other sites. This shows that DT(S) is in fact a tessellation of S.

Two Voronoi regions can share at most one Voronoi edge, by convexity. Therefore, Lemma 2.4 implies that DT(S) is the graph-theoretical dual of V(S), realized by straight line edges. An example is depicted in Figure 4; the Voronoi diagram V(S) is drawn by solid lines, and DT(S) by dashed lines. Note that a Voronoi vertex (like v) need not be contained in its associated face of DT(S). The sites p, q, r, s are cocircular, giving rise to a Voronoi vertex v of degree 4. Consequently, its corresponding Delaunay face is bordered by four edges. This cannot happen if the points of S are in general position.


THEOREM 2.1. If no four points of S are cocircular then DT(S), the dual of the Voronoi diagram V(S), is a triangulation of S, called the Delaunay triangulation. Three points of S give rise to a Delaunay triangle iff their circumcircle does not contain a point of S in its interior.

3. Algorithms

In this section we present several ways of computing the Voronoi diagram and its dual, the Delaunay tessellation. For simplicity, we assume of the n point sites of S that no four of them are cocircular, and that no three of them are collinear. According to Theorem 2.1 we can then refer to DT(S) as to the Delaunay triangulation. All algorithms presented herein can be made to run without the general position assumption. Also, they can be generalized to metrics other than the Euclidean, and to sites other than points. This will be discussed in Subsections 4.5 and 4.4.

Data structures well suited for working with planar graphs like the Voronoi diagram are the doubly connected edge list, DCEL, by Muller and Preparata [202], and the quad edge structure by Guibas and Stolfi [136]. In either structure, a record is associated with each edge e that stores the following information: the names of the two endpoints of e; references to the edges clockwise or counterclockwise next to e about its endpoints; finally, the names of the faces to the left and to the right of e. The space requirement of both structures is O(n). Either structure allows us to efficiently traverse the edges incident to a given vertex, and the edges bounding a face. The quad edge structure offers the additional advantage of describing, at the same time, a planar graph and its dual, so that it can be used for constructing both the Voronoi diagram and the Delaunay triangulation.

From the DCEL of V(S) we can derive the set of triangles constituting the Delaunay triangulation in linear time. Conversely, from the set of all Delaunay triangles the DCEL of the Voronoi diagram can be constructed in time O(n). Therefore, each algorithm for computing one of the two structures can be used for computing the other one, within O(n) extra time. It is convenient to store structures describing the finite Voronoi diagram, as introduced before Lemma 2.3, so that the convex hull of the point sites can be easily reported by traversing the bounding curve Γ; see Figure 2.

3.1. A lower bound

Before constructing the Voronoi diagram we want to establish a lower bound for its computational complexity. Suppose that n real numbers x_1, ..., x_n are given. From the Voronoi diagram of the point set S = {p_i = (x_i, x_i²) | 1 ≤ i ≤ n} one can derive, in linear time, the vertices of the convex hull of S, in counterclockwise order. From the leftmost point in S on, this vertex sequence contains all points p_i, sorted by increasing values of x_i; see Figure 5(i). This argument due to Shamos [231] shows that constructing the convex hull and, a fortiori, computing the Voronoi diagram, is at least as difficult as sorting n real numbers, which requires Ω(n log n) time in the algebraic computation tree model.
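The reduction from sorting is short enough to run directly; the sketch below (our own illustration, using scipy.spatial.ConvexHull) lifts the reals onto the parabola y = x² and reads the sorted order off the counterclockwise hull:

```python
import numpy as np
from scipy.spatial import ConvexHull

xs = np.array([3.1, -1.0, 2.4, 0.5, 7.2, -4.8])
pts = np.c_[xs, xs ** 2]              # p_i = (x_i, x_i^2) on the parabola

hull = ConvexHull(pts)                # every p_i appears on the hull
v = list(hull.vertices)               # counterclockwise order
i0 = v.index(int(np.argmin(pts[:, 0])))
order = v[i0:] + v[:i0]               # start at the leftmost point

print(xs[order])                      # x-values in increasing order
assert np.all(np.diff(xs[order]) > 0)
```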

Fig. 5. Proving the Ω(n log n) lower bound for constructing the Voronoi diagram (i) by transformation from sorting, and (ii) by transformation from ε-closeness.

However, a fine point is lost in this reduction. After sorting n points by their x-values, their convex hull can be computed in linear time [101], whereas sorting does not help in constructing the Voronoi diagram. The following result has been independently found by Djidjev and Lingas [96] and by Zhu and Mirzaian [262].

THEOREM 3.1. It takes time Ω(n log n) to construct the Voronoi diagram of n points p_1, ..., p_n whose x-coordinates are strictly increasing.

PROOF. By reduction from the ε-closeness problem, which is known to require Ω(n log n) time. Let y_1, ..., y_n be positive real numbers, and let ε > 0. The question is whether there exist i ≠ j such that |y_i − y_j| < ε holds. We form the sequence of points p_i = (iε/n, y_i), 1 ≤ i ≤ n, and compute their Voronoi diagram; see Figure 5(ii). In time O(n), we can determine the Voronoi regions that are intersected by the y-axis, in bottom-up order (such techniques will be detailed in Subsection 3.3). If, for each p_i, its projection onto the y-axis lies in the Voronoi region of p_i then the values y_i are available in sorted order, and we can easily answer the question. Otherwise, there is a point p_i whose projection lies in the region of some other point p_j. Because of

|y_i − y_j| ≤ d((0, y_i), p_j) < d((0, y_i), p_i) = iε/n ≤ ε,

in this case the answer is positive. □

On the other hand, sorting n arbitrary point sites by x-coordinates is not made easier by their Voronoi diagram, as Seidel [225] has shown.


With Definition 2.1 in mind one could think of computing each Voronoi region as the intersection of n − 1 halfplanes. This would take time Θ(n log n) per region, see [215]. In the following subsections we describe various algorithms that compute the whole Voronoi diagram within this time; due to Theorem 3.1, these algorithms are worst-case optimal.

3.2. Incremental construction

A natural idea first studied by Green and Sibson [133] is to construct the Voronoi diagram by incremental insertion, i.e. to obtain V(S) from V(S \ {p}) by inserting the site p. As the region of p can have up to n − 1 edges, for n = |S|, this leads to a runtime of O(n²). Several authors have fine-tuned the technique of inserting Voronoi regions, and efficient and numerically robust implementations are available nowadays; see Ohya et al. [209] and Sugihara and Iri [242]. In fact, runtimes of O(n) can be expected for well distributed sets of sites.

The insertion process is, maybe, better described, and implemented, in the dual environment, for the Delaunay triangulation: construct DT_i = DT({p_1, ..., p_{i−1}, p_i}) by inserting the site p_i into DT_{i−1}. The advantage over a direct construction of V(S) is that Voronoi vertices that appear in intermediate diagrams but not in the final one need not be constructed and stored. We follow Guibas and Stolfi [136] and construct DT_i by exchanging edges, using Lawson's [176] original edge flipping procedure, until all edges invalidated by p_i have been removed.

To this end, it is useful to extend the notion of triangle to the unbounded face of the Delaunay triangulation. If pq is an edge of the convex hull of S we call the supporting halfplane H not containing S an infinite triangle with edge pq. Its circumcircle is H itself, the limit of all circles through p and q whose centers tend to infinity within H; compare Figure 3. As a consequence, each edge of a Delaunay triangulation is now adjacent to two triangles.

Those triangles of DT_{i−1} (finite or infinite) whose circumcircles contain the new site, p_i, are said to be in conflict with p_i. According to Theorem 2.1, they will no longer be Delaunay triangles. Let qr be an edge of DT_{i−1}, and let T(q, r, t) be the triangle adjacent to qr that lies on the other side of qr than p_i; see Figure 6. If its circumcircle C(q, r, t) contains p_i then each circle through q, r contains at least one of p_i, t; see Figure 3 again. Consequently, qr cannot belong to DT_i, due to Definition 2.2. Instead, p_i t will be a new Delaunay edge, because there exists a circle contained in C(q, r, t) that contains only p_i and t in its interior or boundary. This process of replacing edge qr by p_i t is called an edge flip.

The necessary edge flips can be carried out efficiently if we know the triangle T(q, s, r) of DT_{i−1} that contains p_i; see Figure 7. The line segments connecting p_i to q, r, and s will be new Delaunay edges, by the same argument from above. Next, we check if, e.g. edge qr must be flipped. If so, the edges qt and tr are tested, and so on. We continue until no further edge currently forming a triangle with, but not containing, p_i needs to be flipped, and obtain DT_i.

LEMMA 3.1. If the triangle of DT_{i−1} containing p_i is known, the structural work needed for computing DT_i from DT_{i−1} is proportional to the degree d of p_i in DT_i.


Fig. 6. If triangle T(q, r, t) is in conflict with p_i then former Delaunay edge qr must be replaced by p_i t.

Fig. 7. Updating DT_{i−1} after inserting the new site p_i. In (ii) the new Delaunay edges connecting p_i to q, r, s have been added, and edge qr has already been flipped. Two more flips are necessary before the final state shown in (iii) is reached.

PROOF. Continued edge flipping replaces d − 2 conflicting triangles of DT_{i−1} by d new triangles in DT_i that are adjacent to p_i; compare Figure 7. □
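The conflict test at the heart of the flipping procedure is the classical in-circle predicate. A minimal sketch (our own formulation of the standard 3×3 determinant; exact arithmetic issues are ignored here):

```python
def in_circle(a, b, c, d):
    """Return a positive value iff d lies inside the circumcircle of the
    counterclockwise triangle (a, b, c); zero if the four points are
    cocircular. This is the sign of a 3x3 determinant."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
          - (bx * bx + by * by) * (ax * cy - cx * ay)
          + (cx * cx + cy * cy) * (ax * by - bx * ay))

# Triangle T(q, r, t) is in conflict with the new site p_i iff
# in_circle(q, r, t, p_i) > 0; then edge qr is flipped to p_i t.
q, r, t = (0.0, 0.0), (2.0, 0.0), (1.0, 2.0)
print(in_circle(q, r, t, (1.0, 0.5)) > 0)   # True: conflict, flip qr
print(in_circle(q, r, t, (5.0, 5.0)) > 0)   # False: no conflict
```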

Lemma 3.1 yields an obvious O(n²) time algorithm for constructing the Delaunay triangulation of n points: we can determine the triangle of DT_{i−1} containing p_i within linear time, by inspecting all candidates. Moreover, the degree of p_i is trivially bounded by n.

The last argument is quite crude. There can be single vertices in DT_i that do have a high degree, but their average degree is bounded by 6, as Lemma 2.3 and Lemma 2.4 show. This fact calls for randomization. Suppose we pick p_n at random in S, then choose p_{n−1} randomly from S − {p_n}, and so on. The result is a random permutation (p_1, p_2, ..., p_n) of the site set S.


If we insert the sites in this order, each vertex of DT_i has the same chance of being p_i. Consequently, the expected value of the degree of p_i is O(1), and the expected total number of structural changes in the construction of DT_n is only O(n), due to Lemma 3.1.

In order to find the triangle that contains p_i it is sufficient to inspect all triangles that are in conflict with p_i. The following lemma shows that the expected total number of all conflicting triangles so far constructed is only logarithmic.

LEMMA 3.2. For each h < i, let d_h denote the expected number of triangles in DT_h \ DT_{h−1} that are in conflict with p_i. Then,

∑_{h=1}^{i−1} d_h = O(log i).

PROOF. Let C denote the set of triangles of DT_h that are in conflict with p_i. A triangle T ∈ C belongs to DT_h \ DT_{h−1} iff it has p_h as a vertex. As p_h is randomly chosen in DT_h, this happens with probability 3/h. Thus, the expected number of triangles in C \ DT_{h−1} equals 3·|C|/h. Since the expected size of C is less than 6 we have d_h < 18/h, hence

∑_{h=1}^{i−1} d_h < 18 ∑_{h=1}^{i−1} 1/h = O(log i). □

Suppose that T is a triangle of DT_i adjacent to p_i; see Figure 7(iii). Its edge qr is in DT_{i−1} adjacent to two triangles: to its father, F, that has been in conflict with p_i; and to its stepfather, SF, which is still present in DT_i. Any further site in conflict with T must be in conflict with its father or with its stepfather, as illustrated by Figure 8. This property can be exploited for quickly accessing all conflicting triangles.

The Delaunay tree due to Boissonnat and Teillaud [46] is a directed acyclic graph that contains one node for each Delaunay triangle ever created during the incremental construction. Pointers run from fathers and stepfathers to their sons. The triangles of DT_3 are the sons of a dummy root node. When p_{i+1} must be inserted, a Delaunay tree including all triangles up to DT_i is available. We start at its root and descend as long as the current triangle is in conflict with p_{i+1}. The above property guarantees that each conflicting triangle of DT_i will be found. The expected number of steps this search requires is O(log i), due to Lemma 3.2. Once DT_{i+1} has been computed, the Delaunay tree can easily be updated to include the new triangles. Thus, we have the following result.

THEOREM 3.2. The Delaunay triangulation of a set of n points in the plane can be constructed in expected time O(n log n), using expected linear space. The average is taken over the different orders of inserting the n sites.

As a nice feature, the insertion algorithm is on-line. That is, it is capable of constructing DT_i from DT_{i−1} without knowledge of p_{i+1}, ..., p_n. Note also that we did not make any assumptions concerning the distribution of the sites in the plane; the incremental algorithm achieves its O(n log n) time bound for every possible input set.


Fig. 8. The circumcircle of T is contained in the union of the circumcircles of F and SF.

Only under a "poor" insertion order can a quadratic number of structural changes occur, but this is unlikely. Randomized geometric algorithms are presented in more detail in a separate chapter of this book. Though conceptually simple, they tend to be tricky to analyze. Since Clarkson and Shor [74] introduced their technique, many researchers have been working on generalizing and simplifying the methods used. To mention but a few results, Boissonnat et al. [43] and Guibas et al. [135] have refined the methods of storing the past in order to locate new conflicts quickly, Clarkson et al. [73] have generalized and simplified the analytic framework, and Seidel [230] systematically applied the technique of backward analysis first used by Chew [62]. The method in [135] for storing the past is briefly described in Subsection 4.3.3 for constructing a generalized planar Voronoi diagram.

If the set S of sites can be expected to be well distributed in the plane, bucketing techniques for accessing the triangle that contains a new site p_i have been used for speed-up. Joe [152], who implemented Sloan's algorithm [238], and Su and Drysdale [240], who used a variant of Bentley et al.'s spiral search [36], report on fast experimental runtimes. The arising issues of numerical stability have been addressed in Fortune [123], Sugihara [241], and Jünger et al. [154].

A technique similar to incremental insertion is incremental search. It starts with a single Delaunay triangle, and then incrementally discovers new ones, by growing triangles from edges of previously discovered triangles. This basic idea is used, e.g., in Maus [189] and in Dwyer [103]. It leads to efficient expected-time Delaunay algorithms in higher dimensions; see [103]. The paper [240] gives a thorough experimental comparison of available Delaunay triangulation algorithms.


3.3. Divide & conquer

The first deterministic worst-case optimal algorithm for computing the Voronoi diagram has been presented by Shamos and Hoey [232]. In their divide & conquer approach, the set of point sites, S, is split by a dividing line into subsets L and R of about the same sizes. Then, the Voronoi diagrams V(L) and V(R) are computed recursively. The essential part is in finding the split line, and in merging V(L) and V(R), to obtain V(S). If these tasks can be carried out in time O(n) then the overall running time is O(n log n).

During the recursion, vertical or horizontal split lines can be found easily if the sites in S are sorted by their x- and y-coordinates beforehand. The merge step involves computing the set B(L, R) of all Voronoi edges of V(S) that separate regions of sites in L from regions of sites in R. Suppose that the split line is vertical, and that L lies to its left.

LEMMA 3.3. The edges of B(L, R) form a single y-monotone polygonal chain. In V(S), the regions of all sites in L are to the left of B(L, R), whereas the regions of the sites of R are to its right.

PROOF. Let b be an arbitrary edge of B(L, R), and let l ∈ L and r ∈ R be the sites whose regions are adjacent to b. Since l has a smaller x-coordinate than r, b cannot be horizontal, and the region of l must be to its left. □

Thus, V(S) can be obtained by gluing together B(L, R), the part of V(L) to the left of B(L, R), and the part of V(R) to its right; see Figure 9, where V(R) is depicted by dashed lines. The polygonal chain B(L, R) is constructed by finding a starting edge at infinity, and by tracing B(L, R) through V(L) and V(R).

Due to Shamos and Hoey [232], an unbounded starting edge of B(L, R) can be found in O(n) time by determining a line tangent to the convex hulls of L and R, respectively. Here we describe an alternative method by Chew and Drysdale [66] since that method also works for generalized Voronoi diagrams (Subsection 4.5.2). The unbounded regions of V(L) and V(R) are scanned simultaneously in cyclic order. For each non-empty intersection VR(l, L) ∩ VR(r, R), we test if it contains an unbounded piece of B(l, r). If so, this must be an edge of B(L, R), by Definition 2.1. Since B(L, R) has two unbounded edges, by Lemma 3.3, this search will be successful. It takes time |V(L)| + |V(R)| = O(n).

Now we describe how B(L, R) is traced. Suppose that the current edge b of B(L, R) has just entered the region VR(l, L) at point v while running within VR(r, R); see Figure 10. We determine the points v_L and v_R where b leaves the regions of l resp. of r. The point v_L is found by scanning the boundary of VR(l, L) counterclockwise, starting from v. In our example, v_R is closer to v than v_L, so that it must be the endpoint of edge b. From v_R, B(L, R) continues with an edge b_2 separating l and r_2. Now we have to determine the points v_{L,2} and v_{R,2} where b_2 hits the boundaries of the regions of l and r_2. The crucial observation is that v_{L,2} cannot be situated on the boundary segment of VR(l, L) from v to v_L that we have just scanned; this can be inferred from the convexity of VR(l, S). Therefore, we need to scan the boundary of VR(l, L) only from v_L on, in counterclockwise direction.

Fig. 9. Merging V(L) and V(R) into V(S).

The same reasoning applies to V(R); only here, region boundaries are scanned clockwise. Even though the same region might be visited by B(L, R) several times, no part of its boundary is scanned more than once. The edges of V(L) that are scanned all lie to the right of B(L, R). This part of V(L), together with B(L, R), forms a planar graph each of whose faces contains at least one edge of B(L, R) in its boundary. As a consequence of Lemma 2.3, the size of this graph does not exceed the size of B(L, R), times a constant. The same holds for V(R). Therefore, the cost of constructing B(L, R) is bounded by its size, once a starting edge is given. This leads to the following result.

THEOREM 3.3. The divide & conquer algorithm allows the Voronoi diagram of n point sites in the plane to be constructed within time O(n log n) and linear space, in the worst case. Both bounds are optimal.

Of course, the divide & conquer paradigm can also be applied to the computation of the Delaunay triangulation DT(S). Guibas and Stolfi [136] give an implementation that uses the quad-edge data structure and only two geometric primitives, an orientation test and an in-circle test. Fortune [123] showed how to perform these tests accurately with finite precision.


Fig. 10. Computing the chain B(L, R).

Dwyer's implementation [102] uses vertical and horizontal split lines in turn, and Katajainen and Koppinen's [157] merges square buckets in a quad-tree order. Both papers report on favorable results.

Divide & conquer algorithms are candidates allowing for efficient parallelization. Several theoretically efficient algorithms for computing in parallel the Voronoi diagram or the Delaunay triangulation have been proposed. We refer to the recent paper by Blelloch et al. [41] for references and for a practical parallel algorithm for computing DT(S). They highlight an algorithm by Edelsbrunner and Shi [116] that uses the lifting map for S (see Subsection 3.5) to construct a chain of Delaunay edges that divides S. They show experimentally that their implementation is comparable in work to the best sequential algorithms.

3.4. Sweep

The well-known line sweep algorithm by Bentley and Ottmann [34] computes the intersections of n line segments in the plane by moving a vertical line, H, across the plane. The line segments currently intersected by H are stored in bottom-up order. This order must be updated whenever H reaches an endpoint of a line segment, or an intersection point. To discover the intersection points in time, it is sufficient to check, after each update of the order, those pairs of line segments that have just become neighbors on H.

It is tempting to apply the same approach to Voronoi diagrams, by keeping track of the Voronoi edges that are currently intersected by the vertical sweep line. The problem is in discovering new Voronoi regions in time. By the time the sweep line hits a new site it has been intersecting Voronoi edges of its region for a while.


Fig. 11. Voronoi diagrams of the sweep line, H, and of the points to its left.

Fortune [125] was the first to find a way around this difficulty. He suggested a planar transformation under which each point site becomes the leftmost point of its Voronoi region, so that it will be the first point hit during a left-to-right sweep. His transformation does not change the combinatorial structure of the Voronoi diagram. Later, Seidel [228] and Cole [75] have shown how to avoid this transformation. They consider the Voronoi diagram of the point sites to the left of the sweep line H and of H itself, considered an additional site; see Figure 11.

Because the bisector of a line and a non-incident point is a parabola, the boundary of the Voronoi region of H is a connected chain of parabola segments whose top- and bottommost edges tend to infinity. This chain is called the wavefront, W. Let p be a point site to the left of H. Any point to the left of, or on, the parabola B(p, H) is not farther from p than from H; hence, it is a fortiori closer to p than to any site to the right of H. Consequently, as the sweep line moves on to the right, the waves must follow because the sets D(p_i, H) grow. On the other hand, each Voronoi edge to the left of W that currently separates the regions of two point sites p_i, p_j will be (part of) a Voronoi edge in V(S).

During the sweep, there are two types of events that cause the structure of the wavefront to change, namely when a new wave appears in W, or when an old wave disappears.


The first happens each time the sweep line hits a new site, e.g. p_6 in Figure 11. At that very moment B(H, p_6) is a horizontal line through p_6, according to Definition 2.1. A little later, its left halfline unfolds into a parabola that must be inserted into the wavefront by gluing it onto the wave of p_4 (which now contributes two segments to W).

Let p, q be two point sites whose waves are neighbors in W. Their bisector, B(p, q), gives rise to a Voronoi edge to the left of W. Its prolongation into the region of H is called a spike. In Figure 11 spikes are depicted as dashed lines; one can think of them as tracks along which the waves are moving. A wave disappears from W when it arrives at the point where its two adjacent spikes intersect. Its former neighbors become now adjacent in the wavefront. In Figure 11, the wave of p_3 would disappear at point v, if the new site, p_6, did not exist. But after the wave of p_6 has been inserted, there will be a previous event at v′, where the lower part of the wave of p_4 disappears.

While keeping track of the wavefront one can easily maintain the Voronoi diagram of H and of the point sites to its left. As soon as all point sites have been detected and all spike intersections have been processed, V(S) is obtained by removing the wavefront and extending all spikes to infinity. Even though one wave may contribute several segments to the wavefront, the following holds.

LEMMA 3.4. The size of the wavefront is O(n).

PROOF. Since any two parabolic bisectors B(p, H), B(q, H) can cross at most twice, the size of the wavefront is bounded by λ_2(n) = 2n − 1, where λ_s(n) denotes the maximum length of a Davenport-Schinzel sequence over n symbols in which no two symbols appear s times each in alternating positions; see [21]. □

The wavefront can be implemented by a balanced binary tree that stores the segments in bottom-up order. This enables us to insert a wave, or remove a wave segment, in time O(log n). Before the sweep starts, the point sites are sorted by increasing x-coordinates and inserted into an event queue. After each update of the wavefront, newly adjacent spikes are tested for intersection. If they intersect at some point v, we insert into the event queue the time, i.e. the position x of the sweep line, when the wave segment between the two spikes arrives at v. Since the point v is a Voronoi vertex of V(S), there are only O(n) many events caused by spike intersections. In addition, each of the n sites causes an event. For each active spike we need to store only its first intersection event. Thus, the size of the event queue never exceeds O(n). We obtain the following result.

THEOREM 3.4. Plane sweep provides an alternative way of computing the Voronoi diagram of n points in the plane within O(n log n) time and linear space.
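For concreteness, the wave of a site p for sweep line position x = h can be written down directly from Definition 2.1; the following sketch (our own parametrization by the y-coordinate) evaluates a wave and checks the equidistance property:

```python
import math

def wave_x(p, h, y):
    """x-coordinate of the wave (parabola B(p, H)) of site p = (p1, p2)
    at height y, for the sweep line H: x = h.  Derived from
    (x - p1)^2 + (y - p2)^2 = (h - x)^2, assuming p1 < h."""
    p1, p2 = p
    return (h * h - p1 * p1 - (y - p2) ** 2) / (2.0 * (h - p1))

p, h, y = (1.0, 2.0), 4.0, 3.5
x = wave_x(p, h, y)
# A wave point is equidistant from the site and the sweep line:
d_site = math.hypot(x - p[0], y - p[1])
d_line = h - x
assert abs(d_site - d_line) < 1e-12
print(x, d_site, d_line)
```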

McAllister et al. [191] have pointed out a subtle difference between the sweep technique and the two methods mentioned before. The divide & conquer algorithm computes O(n log n) many vertices, even though only a linear number of them appears in the final diagram. The randomized incremental construction method performs an expected O(n log n) number of conflict tests.


Fig. 12. Lifting circles onto the paraboloid.

Both tasks, constructing a Voronoi vertex and testing a subset of sites for conflict, are usually handled by subroutines that deal directly with point coordinates, bisector equations etc. They can become quite costly if we consider sites more general than points, and distance measures more general than the Euclidean distance; see Sections 4.4, 4.5, and 4.6. The sweep algorithm, on the other hand, processes only O(n) many spike events.

3.5. Lifting to 3-space

The following approach employs the powerful method of geometric transformation. Let P = {(x_1, x_2, x_3) | x_1² + x_2² = x_3} denote the paraboloid depicted in Figure 12. For each point x = (x_1, x_2) in the plane, let x′ = (x_1, x_2, x_1² + x_2²) denote its lifted image on P.

LEMMA 3.5. Let C be a circle in the plane. Then C′ is a planar curve on the paraboloid P.

PROOF. Suppose that C is given by the equation

r² = (x_1 − c_1)² + (x_2 − c_2)² = x_1² + x_2² − 2x_1c_1 − 2x_2c_2 + c_1² + c_2².

By substituting x_1² + x_2² = x_3 we obtain

x_3 − 2x_1c_1 − 2x_2c_2 + c_1² + c_2² − r² = 0

for the points of C′. This equation defines a plane in 3-space. □

This lemma has an interesting consequence. By the lower convex hull of a set of points in 3-space we mean that part of the convex hull which is visible from the (x_1, x_2)-plane.


THEOREM 3.5. The Delaunay triangulation of S equals the projection onto the (x_1, x_2)-plane of the lower convex hull of S′.

PROOF. Let p, q, r denote three point sites of S. By Lemma 3.5, the lifted image, C′, of their circumcircle C lies on a plane, E, that cannot be vertical. Under the lifting mapping, the points inside C correspond to the points on the paraboloid P that lie below the plane E. By Theorem 2.1, p, q, r define a triangle of the Delaunay triangulation iff their circumcircle contains no further site. Equivalently, no lifted site s′ is below the plane E that passes through p′, q′, r′. But this means that p′, q′, r′ define a face of the lower convex hull of S′. □

Because there exist O(n log n) time algorithms for computing the convex hull of n points in 3-space, see, e.g. Preparata and Shamos [215], we have obtained another optimal algorithm for the Voronoi diagram. The connection between Voronoi diagrams and convex hulls was first studied by Brown [49] who used the inversion transform. The simpler lifting mapping has been used, e.g., in Edelsbrunner and Seidel [113]. We shall see several applications and generalizations in Subsection 4.3.

In [113] also the following fact is observed. For each point p of S, consider the paraboloid P_p = {(x_1, x_2, x_3) | (x_1 − p_1)² + (x_2 − p_2)² = x_3}. If these paraboloids were opaque, and of pairwise different colors, an observer looking upwards from x_3 = −∞ would see the Voronoi diagram V(S). In fact, the projection x = (x_1, x_2) of a point (x_1, x_2, x_3) ∈ P_p ∩ P_q belongs to B(p, q); and there is no site s closer to x than p and q iff (x_1, x_2, x_3) lies below all paraboloids P_s. Instead of the paraboloids P_p one could use the surfaces {(x_1, x_2, f((x_1 − p_1)² + (x_2 − p_2)²))} generated by any function f that is strictly increasing. For example, f(x) = √x gives rise to cones of slope 45° with apices at the sites. This setting illustrates the concept of circles expanding from the sites at equal speed, as mentioned after the proof of Lemma 2.1. Coordinate x_3 represents time. In order to visualize a Voronoi diagram on a graphic screen one can feed the n surfaces to a z-buffer, and eliminate by brute force those parts not visible from below.

Finally, we would like to mention a nice connection between the two ways of obtaining the Voronoi diagram by means of paraboloids explained above; it goes back to [113]. For a point w = (w_1, w_2, w_3), let ¬w denote its mirror image (w_1, w_2, −w_3). If we apply to 3-space the mapping which sends x to (x_1, x_2, x_3 − x_1² − x_2²) then, for each point p in the plane, the paraboloid P_p corresponds to the tangent plane of the paraboloid ¬P at the point ¬(p′), where p′ denotes the lifted image of p; compare the plane equation derived in the proof of Lemma 3.5, letting c = p and r = 0.
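Theorem 3.5 translates directly into a few lines of code. The sketch below (our own illustration using scipy.spatial; random points are almost surely in general position) lifts the sites onto the paraboloid, takes the convex hull in 3-space, and keeps the downward-facing facets:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(2)
pts = rng.random((25, 2))

lifted = np.c_[pts, (pts ** 2).sum(axis=1)]   # x' = (x1, x2, x1^2 + x2^2)
hull = ConvexHull(lifted)

# hull.equations holds outward facet normals; the lower convex hull
# consists of the facets whose normal points downwards (z-component < 0).
lower = {tuple(sorted(f)) for f, eq in zip(hull.simplices, hull.equations)
         if eq[2] < 0}

delaunay = {tuple(sorted(s)) for s in Delaunay(pts).simplices}
assert lower == delaunay   # Theorem 3.5
print(len(lower), "Delaunay triangles")
```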

4. Generalizations and structural properties

4.1. Characterization of Voronoi diagrams

The process of constructing the Voronoi diagram for n point sites can be seen as an assignment of a planar convex region to each of the sites, according to the nearest-neighbor rule.


We now address the following, in some sense inverse, question: Given a partition of the plane into n convex regions (which are then necessarily polygonal), do there exist sites, one for each region, such that the nearest-neighbor rule is fulfilled? In other words, when is a given convex partition the Voronoi diagram of some set of sites?

Whether a given set of sites induces a given convex partition as its Voronoi diagram is, of course, easy to decide by exploiting symmetry properties among the sites. For the same reason, it is easy to check whether a given triangulation is Delaunay, by exploiting the empty circumcircle property of its triangles, stated in Theorem 2.1. Conditions for a given graph to be isomorphic to the Delaunay triangulation of some set of sites are mentioned, e.g., in the survey article by Fortune [124]. Below we concentrate on the recognition of Voronoi diagrams without knowing the sites. Questions of this kind arise in facility location and in the recognition of biological growth models (as reported, e.g., in Suzuki and Iri [245]) and, in particular, in the so-called gerrymander problem mentioned in Ash and Bolker [20]: When the sites are regarded as polling places and election law requires that each person votes at the respective closest polling place, the election districts form a Voronoi diagram. If the legislature draws the district lines first, how can we tell whether election law is satisfied?

Let R_i and R_j be two of the given regions. Assume that they share a common edge, and let h_{ij} be the line containing that edge. Further, let σ_{ij} denote the reflection at line h_{ij}.

LEMMA 4.1. A convex partition R_1, ..., R_n of the plane defines a Voronoi diagram if and only if there exists a point p_i for each region R_i such that the following holds.
(1) p_i ∈ R_i (containment condition),
(2) σ_{ij}(p_i) = p_j if R_j is adjacent to R_i (reflection condition).

PROOF. If we do have a Voronoi diagram then its defining sites exist and obviously fulfill (1) and (2). To prove the converse, assume that points p_1, ..., p_n fulfilling both conditions exist. Take any region R_i and any point x therein. We show that d(x, p_i) is a minimum. To get a contradiction, suppose p_j, j ≠ i, is closest to x. Consider an edge of R_j that is intersected by the segment from x to p_j, and let R_k be the region adjacent to R_j at that edge; see Figure 13. Note that k = i may happen. By convexity of R_j and by (1), the line h_{jk} separates p_j from x. Hence by (2) we get d(x, p_k) < d(x, p_j), a contradiction. □

4.3.2. Power diagrams. Assume that each point site p ∈ S carries a real-valued weight w(p). The power of a point x with respect to p is given by

pow(x, p) = d(x, p)² − w(p).

If w(p) > 0, the site p can be interpreted as a sphere σ_p with center p and radius √w(p). For a point x outside this sphere we have pow(x, p) > 0, and √pow(x, p) expresses the distance of x to the touching point of a line tangent to the sphere and through x. The locus of equal power with respect to two weighted sites p and q is a hyperplane called the power hyperplane of p and q. Let h(p, q) denote the closed halfspace bounded by this hyperplane and containing the points of less power with respect to p. The power cell of p is given by

cell(p) = ⋂_{q ∈ S \ {p}} h(p, q).
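A minimal sketch of these definitions (plain Python; the helper names pow_to and in_power_cell are our own):

```python
def pow_to(x, p, w):
    """Power of point x with respect to site p carrying weight w:
    pow(x, p) = d(x, p)^2 - w."""
    return sum((xi - pi) ** 2 for xi, pi in zip(x, p)) - w

def in_power_cell(x, p, weighted_sites):
    """True iff x lies in cell(p), i.e. x has no larger power with
    respect to p than with respect to any other weighted site."""
    wp = dict(weighted_sites)[p]
    return all(pow_to(x, p, wp) <= pow_to(x, q, wq)
               for q, wq in weighted_sites if q != p)

sites = [((0.0, 0.0), 1.0), ((3.0, 0.0), 4.0), ((0.0, 3.0), 0.25)]
print(in_power_cell((1.0, 1.0), (3.0, 0.0), sites))
```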


Fig. 18. Power diagram for circles in the plane.

In analogy to the classical Voronoi regions, the power cells define a partition of d-space into convex polyhedra, the so-called power diagram, PD(S), of S. See Figure 18 for a planar example. PD(S) coincides with the Voronoi diagram of S if all weights are the same. In contrast to Voronoi regions, power cells might be empty if general weights are used; see cell(p) in Figure 18.

PD(S) is a face-to-face cell complex in d-space that consists of polyhedral faces of various dimensions j, for 0 ≤ j ≤ d. In the non-degenerate case, exactly d + 1 edges, (d+1 choose 2) facets (faces of dimension d − 1), and d + 1 cells meet at each vertex of PD(S). For storing a d-dimensional cell complex, the cell-tuple structure in Brisson [48] seems appropriate. This data structure represents the incidence and ordering information in a cell complex in a simple uniform way.

When each weighted site p ∈ S is interpreted as the sphere σ_p = (p, √w(p)), we can make the following nice observation. The part of σ_p that contributes to the union of all these spheres, ⋃_{p∈S} σ_p, is just the part of σ_p within cell(p). This means that PD(S) defines


a partition of this union into simply-shaped and algorithmically tractable pieces. Several algorithms concerning the union (and also the intersection) of spheres are based on this partition; see Avis et al. [32], Aurenhammer [24], and Edelsbrunner [106].

Power diagrams (and thus Voronoi diagrams) are, in a strong sense, equivalent to the boundaries of convex polyhedra in one dimension higher. This is a fact with far-ranging implications and has been observed in Brown [49], Klee [165], Edelsbrunner and Seidel [113], Paschinger [213], and Aurenhammer [22]. The power function pow(x, p) can be expressed by the hyperplane

π(p): x_{d+1} = 2xᵀp − pᵀp + w(p)

in (d+1)-space, in the sense that a point x lies in cell(p) of PD(S) iff, at x, π(p) is vertically above all hyperplanes π(q) for q ∈ S \ {p}. Hence PD(S) corresponds, by vertical projection, to the upper envelope of these hyperplanes, which is the surface of a convex polyhedron, Π(S), in (d+1)-space. Conversely, it is not difficult to see that every upper envelope of n non-vertical hyperplanes in (d+1)-space corresponds to the power diagram of n appropriately weighted sites in d-space.

The following upper bound is a direct consequence of the upper bound theorem for convex polyhedra proved in McMullen [193]. The bound is trivially sharp for power diagrams, but is achieved also for Voronoi diagrams, as was shown in Seidel [224].

THEOREM 4.4. Let S be a set of n point sites in d-space. Any power diagram for S, and in particular, the Voronoi diagram for S, realizes at most

f_j = \sum_{i=0}^{b} \binom{i}{i-j}\binom{n-d+i-2}{i} + \sum_{i=0}^{a-1} \binom{d-i+1}{d-i+1-j}\binom{n-d+i-2}{i}

faces of dimension j, where a = ⌈(d+1)/2⌉ and b = ⌊(d+1)/2⌋. The numbers f_j are O(n^⌈d/2⌉), for 0 ≤ j ≤ d − 1.

where a = \^'] and b = [f J. The numbers fj are 0(n^^^),for 0 < j < J — 1. For algorithmic issues, power diagrams can be brought in connection to convex hulls in (d + 1)-space, by exploiting a duality (actually, polarity) between upper envelopes of hyperplanes (or intersections of upper halfspaces) and convex hulls of points. This connection is best described by generalizing the lifting map in Subsection 3.5 to weighted points. A site p e S with weight w(p) is transformed into the point Up) = (\p^ T p -^ w(p)J r ^) in (d -h 1)-space. There is a interelation cdillQd polarity between the transforms k and n. The point k(p) is called the pole of the hyperplane 7t(p) which, in turn, is called the polar hyperplane ofk(p). Polarity defines a one-to-one correspondence between arbitrary points and non-vertical hyperplanes in (d + 1)-space. It is well known that polarity preserves the relative position of points and hyperplanes. To show the connection to convex hulls, consider an arbitrary face / of the polyhedron n(S). Let / be the intersection of m = d — j-\-l hyperplanes 7r(/7i),..., 7t(pm), such


that / is of dimension j . Each point x e f lies on these but above all other hyperplanes n(q) defined by S. Hence the polar hyperplane of x has the points X(pi),..., X{pm) on it and the remaining points X{q) above it. This shows that the points X(/7i),..., X(pm) span a face of dimension d — j of the convex hull of the point set {Xip) \ p e S}. We conclude that each 7-dimensional face of /7(5), and thus of PD(S), is represented by a (J — 7)-dimensional face of this convex hull. This implies a duaUty between power diagrams in J-space and convex hulls in (d + l)-space. In the special case of an unweighted point set 5 in the plane, the parts of the convex hull that are visible from the plane project to the vertices, edges, and triangles of the Delaunay triangulation of S, and we obtain Theorem 3.5 of Section 3.5. A triangulation which can be obtained by projecting a convex hull is called a regular triangulation in Edelsbrunner and Shah [115]. Regular triangulations are just those being dual to planar power diagrams. Once the convex hull of {k(p) \ p e S] has been computed, the faces of PD(S), as well as their incidence and ordering relations, can be obtained in time proportional to the size of PD(S). 4.5. Let Cd-\-\ (n) be the time needed to compute a convex hull ofn points in (d +1)-space. A power diagram (and in particular, the Voronoi diagram) of a given n-point set in d-space can be computed in Cj+i (n) time. THEOREM

Worst-case optimal convex hull algorithms working in general dimensions have been designed by Clarkson and Shor [74], Seidel [229], and Chazelle [58], yielding C_{d+1}(n) = O(n log n + n^⌈d/2⌉). So Theorem 4.5 is asymptotically optimal in the worst case. Note, however, that power diagrams in d-space may as well have a fairly small size, O(n), which emphasizes the use of output-sensitive convex hull algorithms. The algorithm in Seidel [226] achieves C_{d+1}(n) = O(n² + f log f), where f is the total number of faces of the convex hull constructed. The latest achievements are C_4(n) = O((n + f) log² f) in Chan et al. [56] and C_5(n) = O((n + f) log⁴ f) in Amato and Ramos [15].

Space constraints preclude our discussion of power diagrams. Still, some remarks are in order to point out their central role within the context of Voronoi diagrams. For a detailed discussion and references, see Aurenhammer and Imai [30]. The regions of a Voronoi diagram are usually defined by a set of sites and a distance function. If the regions are polyhedral, then any such Voronoi diagram can be shown to be the power diagram of a suitable set of weighted point sites. For instance, this is the case for the furthest site Voronoi diagram, whose regions consist of all points having the same furthest site in the given set. A polyhedral cell complex in d-space is called simple if exactly d + 1 cells meet at each vertex. For example, the Voronoi diagram of a set of point sites in d-space is simple if no d + 2 sites are co-spherical. If d ≥ 3, any simple cell complex can be shown to be a power diagram. The class of power diagrams is closed under taking cross-sections with a hyperplane. That is, the diagram obtained from intersecting a power diagram in d-space with a hyperplane is again a power diagram, in (d − 1)-space. Moreover, the class of power diagrams is closed under the modifications to higher order defined in Subsection 4.3.3.


Several generalized Voronoi diagrams in d-space have an embedding in a power diagram in (d+1)-space, in the sense that they can be obtained by intersecting a power diagram with certain simple geometric objects and then projecting the intersection. For example, the additively weighted Voronoi diagram (i.e., the closest-point Voronoi diagram for spheres, or the Johnson-Mehl model [153]), and the multiplicatively weighted Voronoi diagram (or the Apollonius model) have this property. In all situations mentioned above, a set of weighted sites for the corresponding power diagram can be computed easily. Thus general methods of handling Voronoi diagrams and cell complexes become available. For example, the Voronoi diagram for spheres in 3-space, and the multiplicatively weighted Voronoi diagram in the plane, can both be computed in O(n²) time which is optimal. The latter diagram is investigated in detail in Aurenhammer and Edelsbrunner [28] and in Sakamoto and Takagi [223].

4.3.3. Higher-order Voronoi diagrams and arrangements. Higher-order Voronoi diagrams are natural and useful generalizations of classical Voronoi diagrams. Given a set S of n point sites in d-space, and an integer k between 1 and n − 1, the order-k Voronoi diagram of S, V_k(S), partitions the space into regions such that each point within a fixed region has the same k closest sites. V_1(S) just is the classical Voronoi diagram of S. The regions of V_k(S) are convex polyhedra, as they arise as the intersection of halfspaces bounded by symmetry hyperplanes of the sites. A subset M of k sites in S has a non-empty region in V_k(S) iff there is a sphere that encloses M but no site in S \ M. In fact, the region of M in V_k(S) just is the set of centers of all such spheres. Figure 19 illustrates a planar order-2 Voronoi diagram. Two differences to the classical Voronoi diagram are apparent. A region need not contain its defining sites, and the bisector of two sites may contribute more than one facet. In the extreme case of k = n − 1, the furthest-site Voronoi diagram of S is obtained. It contains, for each site p ∈ S, the region of all points x for which p is the furthest site in S. Exact upper bounds on the size of furthest-site Voronoi diagrams in d-space are derived in Seidel [227].

The family of all higher-order Voronoi diagrams for a given set S of sites in d-space is closely related to an arrangement of hyperplanes in (d+1)-space; see Edelsbrunner and Seidel [113]. We describe this relationship in the more general setting of power diagrams, by defining an order-k power diagram, PD_k(S), for a set S of weighted point sites in an analogous way. See Aurenhammer [22,27] for more details. Recall from Subsection 4.3.2 that the power function with respect to a site p ∈ S can be expressed by a hyperplane π(p) in (d+1)-space. The set of hyperplanes {π(p) | p ∈ S} dissects (d+1)-space into a polyhedral cell complex called an arrangement. Arrangement cells are convex, and can be classified according to their relative position with respect to the hyperplanes in {π(p) | p ∈ S}. A cell C is said to be of level k if exactly k hyperplanes are vertically above C. For example, the upper envelope of {π(p) | p ∈ S} bounds the only cell, Π(S), of level 0. All cells of level 1 share some facet with Π(S), so that their vertical projection gives the (order-1) power diagram PD(S). More generally, the cells of level k project to the regions of PD_k(S), for each k between 1 and n − 1. To see this, let x be a point in some k-level cell C.


Fig. 19. Region of {p, q} in V_2(S).

Then k hyperplanes π(p_1), ..., π(p_k) are above x, and n − k hyperplanes π(p_{k+1}), ..., π(p_n) are below x. That means that, for the vertical projection x′ of x onto d-space, we have pow(x′, p_i) < pow(x′, p_j) for 1 ≤ i ≤ k and k + 1 ≤ j ≤ n. Hence x′ is a point in the region of {p_1, ..., p_k} in PD_k(S). Hyperplane arrangements are well-investigated objects, concerning their combinatorial as well as their algorithmic complexity; see Edelsbrunner et al. [112,114]. We obtain:

THEOREM 4.6. Let S be a set of n (weighted or unweighted) point sites in d-space. The family of all higher-order power diagrams (or Voronoi diagrams) for S realizes a total of O(n^{d+1}) faces, and it can be computed in optimal Θ(n^{d+1}) time.
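The level argument also gives a direct membership test: the order-k region containing a point x′ is determined by the k smallest power values. A sketch (our own illustration):

```python
def order_k_region(x, weighted_sites, k):
    """Return the set M of k sites whose region of PD_k(S) contains x:
    the k sites of smallest power with respect to x (ties ignored)."""
    def power(p, w):
        return sum((xi - pi) ** 2 for xi, pi in zip(x, p)) - w
    ranked = sorted(weighted_sites, key=lambda sw: power(*sw))
    return {p for p, _ in ranked[:k]}

sites = [((0, 0), 0.0), ((4, 0), 0.0), ((0, 4), 0.0), ((4, 4), 0.0)]
print(order_k_region((1.0, 0.9), sites, 2))   # the 2 closest sites
```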

Clarkson and Shor [74] proved that the number of arrangement cells with levels up to a given value of k is O(n^⌊(d+1)/2⌋ k^⌈(d+1)/2⌉). The collection of these cells can be constructed within this time for d ≥ 4, with the algorithm in Mulmuley [203]. A modification by Agarwal et al. [2] achieves roughly O(nk²) time also in 3-space. An output-sensitive construction algorithm is given in Mulmuley [204]. All these results apply to families of order-k diagrams in one dimension lower.


Most practical applications ask for the computation of a single order-k Voronoi diagram V_k(S) in the plane, for a given value of k. (Typically, k does not depend on |S| = n but is a small constant.) As for the classical Voronoi diagram V(S), edges are pieces of perpendicular bisectors of sites. Vertices are centers of circles that pass through three sites. However, these circles are no longer empty; they enclose either k − 1 or k − 2 sites. Lee [181] showed that this diagram has O(k(n − k)) regions, edges, and vertices. It is easy to see that the regions of V_2(S) are in one-to-one correspondence with the edges of V(S). Hence V_2(S) realizes at most 3n − 6 regions in the plane.

Considerable efforts have been made to compute the single planar order-k Voronoi diagram efficiently. Different approaches have been taken in Lee [181], Chazelle and Edelsbrunner [59], Aurenhammer [26], Clarkson [71], and Agarwal et al. [2]. In the last two papers randomized runtimes of O(k n^{1+ε}) and (roughly) O(k(n − k) log n) are achieved, respectively, which is close to optimal. Below we describe a (roughly) O(k²(n − k) log n) time randomized incremental algorithm by Aurenhammer and Schwarzkopf [31], that can be modified to handle arbitrary on-line sequences of site insertions, site deletions, and k-nearest neighbor queries. Though not being most time efficient, the algorithm profits by its simplicity and flexibility.

The heart of the algorithm is a duality transform that relates the diagram V_k(S) to a certain convex hull in 3-space. This transform allows us to insert and also delete sites in a simple fashion by computing convex hulls. Let M ⊂ S be any subset of k sites. M is transformed into a point q(M) in 3-space, by taking the centroid of M and lifting it up vertically. More precisely,

q(M) = (1/k) ∑_{p∈M} (p, pᵀp).

Now consider the set Q_k(S) of all points that can be obtained from S in this way. That is, Q_k(S) = {q(M) | M ⊂ S}.

LEMMA 4.2. The part of the convex hull of Q_k(S) that is visible from the plane is dual to V_k(S).

The lemma can be proved by first mapping each k-subset M of S into a non-vertical plane π(M) in 3-space,

π(M): x_3 = (2/k) ∑_{p∈M} pᵀx − (1/k) ∑_{p∈M} pᵀp,

and then considering the upper envelope Π_k(S) of all these planes. It is not difficult to show that the facets of Π_k(S) project vertically to the regions of V_k(S). The lemma follows from observing the polarity (cf. Subsection 4.3.2) between the planes π(M) and the points q(M).

To construct V_k(S), we could just compute Q_k(S), determine its convex hull, and then dualize its triangles, edges, and vertices that are visible from the plane.


However, Q_k(S) contains a point for each k-subset of S, and thus has cardinality (n choose k) = O(n^k). Only O(k(n − k)) points lie on the convex hull, as V_k(S) has this many regions. We use randomized incremental insertion of sites in order to compute this convex hull efficiently.

Let S = {p_1, ..., p_n}, and let C_i denote the visible part of the convex hull of Q_k({p_1, ..., p_i}), for k + 1 ≤ i ≤ n. Points of Q_k(S) lying on the triangular surface C_i are called corners of C_i. We start by determining C_{k+1}. Q_k({p_1, ..., p_{k+1}}) contains k + 1 points which can be calculated in time O(k), so O(k log k) time suffices. The generic step of the algorithm is the insertion of site p_i into C_{i−1}, for i ≥ k + 2.
(1) Identify all triangles of C_{i−1} which are destroyed by p_i and cut them out. Let B be the set of corners on the boundary of the hole.
(2) Calculate the set P of all new corners created by p_i.
(3) Compute the convex hull of P ∪ B, and fill the hole with the visible part F_i of this convex hull. This gives C_i.
Each triangle Δ of C_{i−1} is dual to a vertex of V_k(S). This vertex is the center of a circle that passes through three sites in S. Δ will be destroyed if this circle encloses p_i. The destroyed triangles of C_{i−1} form a connected surface Σ_{i−1}. Hence, if we know one of them in advance, Σ_{i−1} can be identified in time proportional to the number n_i of its triangles. Moreover, the set P of new corners can be calculated easily from the edges of Σ_{i−1}, as each such edge gives rise to a unique corner.

LEMMA 4.3. Given C_{i−1} we can construct C_i in time O(n_i log n_i), provided we know a triangle of C_{i−1} that is destroyed by p_i.
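For small n and k, Lemma 4.2 can be exercised brute force; the sketch below (our own illustration, exponential in k and meant only as a check) builds Q_k(S) and reports which k-subsets own a region of V_k(S):

```python
from itertools import combinations
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)
pts, k = rng.random((8, 2)), 2

# q(M) = centroid of M, lifted vertically by the mean of |p|^2.
Q, subsets = [], list(combinations(range(len(pts)), k))
for M in subsets:
    sub = pts[list(M)]
    Q.append([*sub.mean(axis=0), (sub ** 2).sum(axis=1).mean()])

hull = ConvexHull(np.array(Q))
visible = {v for f, eq in zip(hull.simplices, hull.equations)
           if eq[2] < 0 for v in f}
# By Lemma 4.2, these k-subsets have non-empty regions in V_k(S).
print([subsets[v] for v in sorted(visible)])
```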

When looking for a starting triangle of Σ_{i−1}, we profit from another nice property of the duality transform: If the vertical projection Δ′ of a triangle Δ of C_{i−1} contains p_i then Δ is destroyed by the insertion of p_i. This leaves us with the problem of locating p_i in the triangulation given by the planar projection of C_{i−1}. In fact, we get the desired point-location structure nearly for free. Adapting a technique used in Guibas et al. [135] for constructing Delaunay triangulations, we do not remove the triangles of C_{i−1} that get destroyed by the insertion of p_i, but mark them as old. When marked old, each triangle gets a pointer to the newly constructed part F_i of C_i. The next site then is located by scanning through the 'construction history' of C_i. The structure for point location within each surface F_j, j ≤ i, which is needed in addition, is a byproduct of the randomized incremental convex hull algorithm in [135], which we used for computing F_j.

In summary, the order-k Voronoi diagram for n sites in the plane can be computed in expected time O(k²(n − k) log n + nk log² n), and optimal O(k(n − k)) deterministic space, by an on-line randomized incremental algorithm. Full details and the following extensions are given in [31]. Deletion of a site can be done in the reverse order of insertion, again by computing a convex hull. The history-based point location structure used by the algorithm can be adapted to support k-nearest neighbor queries (see Subsection 5.1.1). A dynamic data structure is obtained, allowing for insertions and deletions of sites in expected time O(k² log n + k log² n), and k-nearest neighbor queries in expected time O(k log² n). This promises a satisfactory performance for small values of k.


4.4. Generalized sites

It is commonly agreed that most geometric scenarios can be modeled with sufficient accuracy by polygonal objects. Two typical and prominent examples are the description of the workspace of a robot moving in the plane, and the geometric information contained in a geographical map. In both applications, robot motion planning and geographical information systems, the availability of proximity information for the scenario is crucial. This is among the reasons why considerable attention has been paid to the study of Voronoi diagrams for polygonal objects. Still, in some applications the scenario can be modeled more appropriately when curved objects, for instance, circular arcs are also allowed. Many Voronoi diagram algorithms working for line segments can be modified to work for curved objects as well.

4.4.1. Line segment Voronoi diagram and medial axis. Let G be a planar straight line graph on n points in the plane, that is, a set of non-crossing line segments spanned by these points. For instance, G might be a tree, or a collection of disjoint line segments or polygons, or a complete triangulation of the points. The number of segments of G is maximum, 3n − 6, in the last case. We will discuss several types of diagrams for planar straight line graphs in the present and following subsections.

The classical type is the (closest-point) Voronoi diagram, V(G), of G. It consists of all points in the plane which have more than one closest segment in G. V(G) is known under different names in different areas, for example, as the line Voronoi diagram or skeleton of G, or as the medial axis when G is a simple polygon. Applications in such diverse areas as biology, geography, pattern recognition, computer graphics, and motion planning exist; see, e.g. Kirkpatrick [161] and Lee [180] for references.

See Figure 20. V(G) is formed by straight line edges and parabolically curved edges, both shown as dashed lines. Straight edges are part of either the perpendicular bisector of two segment endpoints, or of the angular bisector of two segments. Curved edges consist of points equidistant from a segment endpoint and a segment's interior. There are two types of vertices, namely of type 2 having degree two, and of type 3 having degree three (provided G is in general position). Both are equidistant from a triple of objects (segment or segment endpoint), but for type-2 vertices the triple contains a segment along with one of its endpoints.

Together with G's segments, the edges of V(G) partition the plane into regions. These can be refined by introducing certain normals through segment endpoints (shown dotted in Figure 20), in order to delineate faces each of which is closest to a particular segment or segment endpoint. Two such normals start at each segment endpoint where G forms a reflex angle, and also at each terminal of G which is an endpoint belonging to only one segment in G. A normal ends either at a type-2 vertex of V(G) or extends to infinity.

It is well known that the number of faces, edges and vertices of V(G) is linear in n, the number of segment endpoints for G. The number of vertices is shown to be at most 4n − 3 in Lee and Drysdale [182]. An exact bound, that also counts the 'infinite' vertices at unbounded edges and segment normals, is given below.



Fig. 20. Line segment Voronoi diagram.

LEMMA 4.4. Let G be a planar straight line graph on n points in the plane, and let G realize t terminals and r reflex angles. The number of (finite and infinite) vertices of V(G) is exactly 2n + t + r − 2.

PROOF. Suppose first that G consists of e disjoint segments (that do not touch at their endpoints). Then there are e regions, and each type-3 vertex belongs to three of them. By the Euler formula for planar graphs, there are exactly 2e − 2 such vertices, if we also count those at infinity. To count the number of type-2 vertices, observe that each segment endpoint is a terminal and gives rise to two segment normals each of which, in turn, yields one (finite or infinite) vertex of type 2. Hence there are 4e such vertices, and 6e − 2 vertices in total. Now let G be a general planar straight line graph with e segments. We simulate G by disjoint segments, by shortening each segment slightly such that the segment endpoints are in general position. Then we subtract from 6e − 2 the number of vertices which have been generated by this simulation.


Consider an endpoint p that is incident to d ≥ 2 segments of G. Obviously, p gives rise to d copies in the simulation. The Voronoi diagram of these copies has d − 2 finite vertices, which are new vertices of type 3. As the sum of the degrees d ≥ 2 in G is 2e − t, we get 2e − t − 2(n − t) new vertices in this way. Each convex angle at p gives rise to two new normals emanating at the respective copies of p, and thus to two (finite) type-2 vertices. A possible reflex angle at p gives rise to one (finite or infinite) type-3 vertex, on the perpendicular bisector of the corresponding copies of p. There are r reflex angles in G, and thus 2e − t − r convex angles. This gives r + 2(2e − t − r) new vertices in addition. The lemma now follows by simple arithmetic: 6e − 2 − (2e − t − 2(n − t)) − (r + 2(2e − t − r)) = 2n + t + r − 2. □

Surprisingly, the number of edges of G does not influence the bound in Lemma 4.4. The maximum number of vertices, 3n − 2, is achieved, for example, if G is a set of disjoint segments (t = n and r = 0), or if G is a simple polygon P (t = 0 and r = n). In the latter case, the majority of applications concerns the part of V(P) interior to P. This part is commonly called the medial axis of P. The medial axis of an n-gon with r reflex interior angles has a tree-like structure and realizes exactly n + r − 2 vertices and at most 2(n + r) − 3 edges. Lee [180] first mentioned this bound, and also listed some applications of the medial axis. An interesting application to NC pocket machining is described in Held [139].

Several algorithms for computing V(G), for general or restricted planar straight line graphs G, have been proposed and tested for practical efficiency. V(G) can be computed in O(n log n) time and O(n) space by divide & conquer (Kirkpatrick [161], Lee [180], and Yap [261]), plane sweep (Fortune [125]), and randomized incremental insertion (Boissonnat et al. [43] and Klein et al. [170]). Burnikel et al. [51] give an overview of existing methods, and discuss implementation details of an algorithm in Sugihara et al. [243] that first inserts all segment endpoints, and then all the segments, of G in random order. An algorithm of comparable simplicity and practical efficiency (though with a worst-case running time of O(n²)) is given in Gold et al. [130]. They first construct a Voronoi diagram for point sites by selecting one endpoint for each segment, and then maintain the diagram while expanding the endpoints, one by one, to their corresponding segments. During an expansion, the resulting topological updates in the diagram can be carried out efficiently. In fact, Voronoi diagrams for moving point sites are well-studied concepts; see, e.g., Guibas et al. [134] and Roos [221]. An efficient O(n log² n) work parallel algorithm for computing V(G) is given in Goodrich et al. [132]. This is improved to O(log n) parallel (randomized) time using O(n) processors in Rajasekaran and Ramaswami [218]. (The latter result also implies an optimal parallel construction method for the classical Voronoi diagram.) If G is a connected graph then V(G) can be computed in randomized time O(n log* n); see Devillers [88]. Recently, O(n) time randomized, and deterministic, algorithms for the medial axis of a simple polygon have been designed by Klein and Lingas [169] and Chin et al. [68], settling open questions of long standing. The case of a convex polygon is considerably easier; see Subsection 4.4.3. Some of the algorithms above also work for curved objects.
The plane-sweep algorithm in Fortune [125] elegantly handles arbitrary sets of circles (i.e., the additively weighted Voronoi diagram, or Johnson-Mehl model) without modification from the point site case. Yap [261] allows sets of disjoint segments of arbitrary degree-two curves. A randomized incremental algorithm for general curved objects is given in Alt and Schwarzkopf [12]. They show that complicated curved objects can be partitioned into 'harmless' ones by introducing new points. All these algorithms achieve an optimal running time, O(n log n).

In dimensions more than two, the known results are sparse. The complexity of the Voronoi diagram for n line segments in d-space may be as large as Ω(n^{d−1}), as was observed by Aronov [17]. By the relationship of Voronoi diagrams to lower envelopes of hypersurfaces (see Subsection 4.6), the results in Sharir [234] imply an upper bound of roughly O(n^d). No better upper bounds are known even for line segments in 3-space. The Voronoi diagram for n spheres in d-space has a size of only O(n^{⌈d/2⌉+1}), by its relationship to power diagrams proved in Aurenhammer and Imai [30]. A case of particular interest in several applications is the medial axis M(P) of a (generally non-convex) polyhedron P in 3-space. M(P) contains pieces of parabolic and hyperbolic surfaces and thus has a fairly complicated structure. A practical and numerically stable algorithm for computing M(P) is proposed in Milenkovic [198].

4.4.2. Straight skeletons. In comparison to the Voronoi diagram for point sites, which is composed of straight edges, the occurrence of curved edges in the line segment Voronoi diagram V(G) is a disadvantage in the computer representation and construction, and sometimes also in the application, of V(G). There have been several attempts to linearize and simplify V(G), mainly for the sake of efficient point location and motion planning; see Canny and Donald [54], Kao and Mount [155], de Berg et al. [80], and McAllister et al. [191]. The compact Voronoi diagram in [191] is particularly suited to these applications. It is defined for the case where G is a set of k disjoint convex polygons. Its size is only O(k), rather than O(n), and it can be computed in time O(k log n); see Subsection 5.1.1 for more details.

As a different alternative to V(G), we now describe the straight skeleton, S(G), of a planar straight line graph G. This structure is introduced, and discussed in much more detail, in Aichholzer and Aurenhammer [9]. S(G) is composed of angular bisectors and thus does not contain curved edges. In general, its size is even less than that of V(G). Beside its use as a type of skeleton for G, S(G) applies, for example, to the reconstruction of terrains from a given geographical map, as will be sketched later.

S(G) is defined as the interference pattern of certain wavefronts propagated from the segments and segment endpoints of G. Let F be a connected component (called a figure) of G. Imagine F as being surrounded by a belt of (infinitesimally small) width ε. For example, a single segment s gives rise to a rectangle of length |s| + 2ε and width 2ε, and a simple polygon P gives rise to two homothetic copies of P with minimum distance 2ε. In general, if F partitions the plane into c connected faces then F gives rise to c simple polygons called wavefronts for F. The wavefronts arising from all the figures of G are now propagated simultaneously, at the same speed, and in a self-parallel manner. Wavefront vertices move on angular bisectors of wavefront edges which, in turn, may increase or decrease in length during the propagation.
This situation continues as long as wavefronts do not change combinatorially. Basically, there are two types of changes.


Fig. 21. Straight skeleton.

(1) Edge event: A wavefront edge collapses to length zero. (The wavefront may vanish due to three simultaneous edge events.)
(2) Split event: A wavefront edge splits due to interference or self-interference. In the former case, two wavefronts merge into one, whereas a wavefront splits into two in the latter case.

After either type of event, we are left with a new set of wavefronts which are propagated recursively. The edges of S(G) are just the pieces of angular bisectors traced out by wavefront vertices. Each vertex of S(G) corresponds to an edge event or to a split event. S(G) is a unique structure defining a polygonal partition of the plane; see Figure 21. During the propagation, each wavefront edge e sweeps across a certain area which we call the face of e. Each segment of G gives rise to two wavefront edges and thus to two faces, one on each side of the segment. Each terminal of G (endpoint of degree one) gives rise to one face. Faces can be shown to be monotone polygons and thus are simply connected. This gives a total of 2m + t = O(n) faces, if G realizes m edges and t terminals. There is also an exact bound on the number of vertices of S(G).


LEMMA 4.5. Let G be a planar straight line graph on n points, t of which are terminals. The number of (finite and infinite) vertices of S(G) is exactly 2n + t − 2.

From Lemma 4.4 in the previous subsection it is apparent that S(G) has r vertices less than V(G) if G has r reflex angles. In particular, if G is a simple polygon with r reflex interior angles, then the part of S(G) interior to G is a tree with only n − 2 vertices, whereas the medial axis of G has n + r − 2 vertices.

A wavefront model similar to that yielding S(G) is sometimes used to define the Voronoi diagram V(G) of G (cf. the expanding waves view in Section 2). There, the propagation speed of all points on the wavefront is the same, whereas, in the model for S(G), the speed of each wavefront vertex is controlled by the angle between its incident wavefront edges. The sharper the angle, the faster is the movement of the vertex. This behaviour may make S(G) completely different from the Voronoi diagram of G. It can be shown that, without prior knowledge of its structure, S(G) cannot be defined by means of distances from G. Moreover, S(G) does not fit into the framework of abstract Voronoi diagrams described in Subsection 4.6: the bisecting curve for two segments of G would be the interference pattern of the rectangular wavefronts they send out, but these curves do not fulfill condition (ii) in Definition 4.2. As a consequence, the well-developed machinery for constructing Voronoi diagrams (see Section 3) does not apply to S(G). An algorithm that simulates the wavefront propagation by maintaining a triangulation of the wavefront vertices is given in [9]. The method is simple and practically efficient but has a worst-case running time of O(n² log n).

S(G) has a three-dimensional interpretation, obtained by defining the height of a point x in the plane as the unique time when x is reached by a wavefront. In this way, S(G) lifts up to a polygonal surface Σ_G, where points on G have height zero. In a geographical application, G may delineate rivers, lakes, and coasts, and Σ_G represents a corresponding terrain with fixed slope. Σ_G has the nice property that every raindrop that hits a terrain facet f runs off to the segment or terminal of G defining f; see Aichholzer et al. [8]. This may have applications in the study of rain water fall and its impact on the floodings caused by rivers in a given geographic area. The concept of S(G) can be generalized by tuning the propagation speed or angle of the individual wavefront edges, in order to achieve prescribed facet slopes for Σ_G, or individual elevations for the terrain points on G. The size of S(G), and its construction algorithm, remain unaffected. When restricted to the interior of a simple polygon P, Σ_P is used in [8] as a canonical construction of a roof of given slope above P. For rectilinear (and axis-aligned) polygons P, the medial axis of P in the L∞-metric will do the job. S(P) coincides with this structure for such polygons, and thus generalizes this roof construction technique to general shapes of P.

Straight skeletons can be generalized to higher dimensions without much difficulty. They retain their piecewise-linear shape and thus, for example, offer a simpler alternative to the medial axis of a non-convex polyhedron in 3-space.

4.4.3. Convex polygons. Voronoi diagrams for a single convex polygon have a particularly simple structure. Tailor-made algorithms for their construction have been designed.


Let C be a convex n-gon in the plane. The medial axis M(C) of C is a tree whose edges are pieces of angular bisectors of C's sides. In fact, M(C) coincides with the part of the straight skeleton S(C) interior to C. M(C) realizes exactly n faces, n − 2 vertices, and 2n − 3 edges. There is a simple randomized incremental algorithm by Chew [62] that computes M(C) in O(n) time.

The algorithm first removes, in random order, the halfplanes whose intersection is C. Removing a halfplane means removing the corresponding side e, and extending the two sides adjacent to e so that they become adjacent in the new convex polygon. This can be done in constant time per side. The adjacency history of C is stored. That is, for each removed side e, one of its formerly adjacent sides is recorded. In a second stage, the sides are put back in reversed (still randomized) order, and the medial axis is maintained during these insertions. Let us focus on the insertion of the i-th side e_i. We have to integrate the face, f(e_i), of e_i into the medial axis of the i − 1 sides that have been inserted before e_i. From the adjacency history, we already know a side e′ of the current polygon that will be adjacent to e_i after its insertion. Hence we know that the angular bisector of e_i and e′ will contribute an edge to f(e_i). Having a starting edge available in O(1) time, the face f(e_i) now can be constructed in time proportional to its size. We construct f(e_i) edge by edge, by simply tracing and deleting parts of the old medial axis interior to f(e_i). As the medial axis of an i-gon has 2i − 3 edges, and each edge belongs to two faces, the expected number of edges of a randomly chosen face is less than 4. Thus f(e_i) can be constructed in constant expected time, which gives an O(n) randomized algorithm for computing M(C).

The same technique also applies to the Voronoi diagram for the vertices of a convex n-gon C, that is, to the Voronoi diagram of a set S of n point sites in convex position. By Lemma 2.2, all regions in V(S) are unbounded, and the edges of V(S) form a tree. Hence V(S) has the same numbers of edges and (finite) vertices as the medial axis of C. For each p ∈ S, its region VR(p, S) shares an unbounded edge with the regions VR(p′, S) and VR(p″, S), where p′ and p″ are adjacent to p on the convex hull of S (which is the polygon C). An adjacency history can be computed in O(n) time, by removing the sites in random order and maintaining their convex hull. For each site that is re-inserted, the expected number of edges is less than 4. So an O(n) randomized construction algorithm is obtained. The diagrams V(S) and M(C) can also be computed in deterministic linear time; see Aggarwal et al. [4]. The details of this algorithm are much more involved, however.

4.4.4. Constrained Voronoi diagrams and Delaunay triangulations. In certain situations, unconstrained proximity among a set of sites is not enough information to meet practical needs. There might be reasons for not considering two sites as neighbors although they are geometrically close to each other. For example, two cities that are geographically close but separated by high mountains might be far from each other from the point of view of a truck driver. The concepts described below have been designed to model constrained proximity among a set of sites.


Fig. 22. Bounded Voronoi diagram extended, and its dual.

Let S be a set of n point sites in the plane, and let L be a set of non-crossing line segments spanned by S. Note that |L| ≤ 3n − 6. The segments in L are viewed as obstacles: we define the bounded distance between two points x and y in the plane as

    b(x, y) = d(x, y)   if xy ∩ L = ∅,
    b(x, y) = ∞         otherwise,

where d stands for the Euclidean distance. In the resulting bounded Voronoi diagram V(S, L), regions of sites that are close but not visible from each other are clipped by segments in L. Regions of sites being segment endpoints are nonconvex near the corresponding segment; see Figure 22. The dual of V(S, L) is not a full triangulation of S, even if the segments in L are included. However, V(S, L) can be modified to dualize into a triangulation which includes L and, under this restriction, is as much 'Delaunay' as possible. The modification is simple but nice. For each segment ℓ ∈ L, the regions clipped by ℓ from the right are extended to the left of ℓ, as if only the sites of these regions were present. The regions clipped by ℓ from the left are extended similarly; see Figure 22. Of course, extended regions may overlap now, so they fail to define a partition of the plane. If we dualize now by connecting sites of regions that share an edge, a full triangulation that includes L is obtained: the constrained Delaunay triangulation DT(S, L). It is clear that the number of edges of DT(S, L) is at most 3n − 6, and that, in general, the number of edges of V(S, L) is even less. Hence both structures have a linear size.
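The bounded distance b defined above can be evaluated directly: test whether the segment xy crosses any obstacle segment, and fall back to the Euclidean distance otherwise. A Python sketch using a standard proper-crossing test (degenerate touching configurations are ignored):

    import math

    def ccw(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def crosses(p, q, a, b):
        # True iff segments pq and ab cross properly (shared endpoints and
        # collinear overlaps are not handled here).
        return (ccw(a, b, p) * ccw(a, b, q) < 0 and
                ccw(p, q, a) * ccw(p, q, b) < 0)

    def bounded_distance(x, y, L):
        # b(x, y): Euclidean if xy misses every segment of L, else infinite.
        if any(crosses(x, y, a, b) for (a, b) in L):
            return math.inf
        return math.dist(x, y)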


The original definition of DT(S, L) in Lee and Lin [183] is based on a modification of the empty circle property: DT(S, L) contains L and, in addition, all edges between sites p, q ∈ S that have b(p, q) < ∞ and that lie on a circle enclosing only sites r ∈ S with at least one of b(r, p), b(r, q) = ∞. Algorithms for computing V(S, L) and DT(S, L) have been proposed in Lee and Lin [183], Chew [63], Wang and Schubert [255], Wang [254], Seidel [228], and Kao and Mount [156]. The last two methods seem best suited to implementation. For an application of DT(S, L) to quality mesh generation see Chew [65]. We sketch the O(n log n) time plane sweep approach in [228]. If only V(S, L) is required then the plane sweep algorithm described in Subsection 3.4 can be applied without much modification. If DT(S, L) is desired then the extensions of V(S, L) as described above are computed in addition. To this end, an additional sweep is carried out for each segment ℓ ∈ L. The sweep starts from the line through ℓ in both directions. It constructs, on the left side of this line, the (usual) Voronoi diagram of the sites whose regions in V(S, L) are clipped by ℓ from the right, and vice versa.

The special case where S and L are the sets of vertices, and sides, of a simple polygon has received special attention, mainly because of its applications to visibility problems in polygons. The bounded Voronoi diagram V(S, L) is constructible in O(n) randomized time in this case; see Klein and Lingas [167]. If the L₁-metric instead of the Euclidean metric is used to measure distances, the same authors [169] give a deterministic linear time algorithm. Both algorithms, as well as the linear time medial axis algorithms in [168] and in [68], first decompose the polygon into smaller parts called histograms. These are polygons whose vertices, when considered in cyclic order, appear in sorted order in some direction.

An alternative concept that forces a set L of line segments spanned by S into DT(S) is the conforming Delaunay triangulation. For each segment ℓ ∈ L that does not appear in DT(S), new sites on ℓ are added such that ℓ becomes expressible as the union of Delaunay edges in DT(S ∪ C), where C is the total set of added sites. For several site adding algorithms, |C| depends on the size as well as on the geometry of L. See, e.g., the survey article by Bern and Eppstein [37] and references therein. Edelsbrunner and Tan [118] show that |C| = O(k²n) is always sufficient, and construct a set of sites with this size in time O(k²n + n²), for k = |L|.

A different, and more complicated, type of constrained Voronoi diagram is the geodesic Voronoi diagram. Here, the distance between a point site p and a point x in the plane is equal to the length of the shortest obstacle-avoiding path between p and x. The obstacles are usually modeled by a set of non-crossing line segments. If all segment endpoints are sites then the bounded Voronoi diagram is obtained. However, this is typically not the case. The fact that computing geodesic distances is not a constant-time operation complicates the construction of geodesic Voronoi diagrams. The only known subquadratic algorithm is by Mitchell [199]. An O((n + k) log(n + k)) time algorithm for the geodesic Voronoi diagram of k sites inside a simple n-gon is given in Papadopoulou and Lee [212], improving over an earlier approach in Aronov [16].


4.5. Generalized spaces and distances

So far we have mainly discussed Voronoi diagrams of sites in d-space that are defined with respect to the Euclidean distance function. Now we want to generalize both the space in which the sites are situated and the distance measure used; but we shall only discuss the case of point sites. The main questions are which of the structural properties the standard Voronoi diagram enjoys will be preserved, and will the remaining properties be strong enough to apply one of the algorithmic approaches for computing the Voronoi diagram introduced in Section 3.

4.5.1. Generalized spaces. Since the surface of the earth is not flat, it seems very natural to ask about Voronoi diagrams of point sites on curved surfaces in 3-space. The distance between two points on the surface is the minimum Euclidean length of a curve that connects the points and runs entirely inside the surface. Such a curve will be called a shortest path.

Brown [50] has addressed the Voronoi diagram of points on the surface of the two-dimensional sphere. Here great circles play the role of lines in the Euclidean plane. In fact, the bisector of two points is a great circle, and the shortest paths are segments of great circles, too. (One can show that the only other metric space in which all bisector segments are shortest paths is the hyperbolic space; see Busemann [52].) For each pair of antipodal points on the sphere there is a continuum of shortest paths connecting them. But this does not affect the Voronoi diagram of n points; it can be computed in optimal O(n log n) time and linear space, by adaption of the algorithms mentioned in Section 3.

Quite different is the situation on the surface of a cone. In order to determine the bisector of two points, p and q, we can cut the cone along a halfline emanating from the apex, and unfold it; in Figure 23 the halfline diametrically opposed to p has been chosen. Since curve length does not change in this process, each shortest path on the cone that does not cross the cut is transformed into a shortest path in the plane, i.e. into a line segment. In order to represent those shortest paths that cross the cut, we add to the unfolded cone two more copies, as shown in Figure 23(ii). Now the shortest path on the cone from some point x to site q corresponds to the shortest one of the line segments qx, q′x, and q″x. This explains why the unfolded bisector B(p, q) consists of segments of the planar bisectors of p, q and p, q′.

In spite of this strong connection to the plane, the Voronoi diagram of points on a cone has structural properties surprisingly different from the planar Voronoi diagram. If the unfolded cone forms a wedge of angle less than 180° then the bisector of two points can be a closed curve. If three points p, q, r are placed, in this order, on a halfline emanating from the apex of such a cone, the bisector B(q, r) fully encircles B(p, q) which in turn encircles the apex. This causes the Voronoi region of q in V({p, q, r}) to be not simply connected. Also, the closures of two Voronoi regions can have more than one Voronoi edge in common. Such a situation is shown in Figure 24, on the unfolded cone. The bisectors of the three points cross twice, at the Voronoi vertices v and w; the latter happens to lie on the cut. (It is interesting to observe that none of these phenomena occurs on the sphere, although there, too, bisectors are closed curves and cross twice.) Despite these fundamental differences to the plane, the Voronoi diagram of n points on the surface of a cone can be constructed in optimal time and space, using a sweep circle that


Fig. 23. A cone sliced and unfolded, showing the bisector of p and q.

Fig. 24. The common border of the Voronoi regions of q and r consists of two Voronoi edges.

expands from the apex; see Dehne and Klein [83]. This approach works without unfolding the cone. Mazon and Recio [190] have independently pointed out the algebraic background of the unfolding procedure illustrated by Figure 23(i), and obtained the following generalization.


Let P denote the Euclidean plane or two-dimensional sphere, and let G be a discrete group of motions on P: a group of bijections that leave the distance between any pair of points of P invariant, such that for each point p ∈ P there exists a constant c satisfying

    p ≠ g(p)  ⟹  |p − g(p)| ≥ c

for all motions g ∈ G. Examples in the plane are the group generated by a rotation of rational angle about some given point, or the group generated by two translations that move each point a fixed distance to the right and a fixed distance upwards, respectively. In the 19th century, mathematicians have completely classified all discrete groups. Two points p, p′ ∈ P are equivalent if there exists a motion in G that takes p to p′; the equivalence class, [p], of p is called the orbit of p. The quotient space, P/G, consists of all orbits. In order to geometrically represent P/G one starts with a connected subset of P that contains a representative out of every orbit; equivalent points must be on the boundary. Such a set is called a fundamental domain, if it is convex. The following lemma provides a nice way of obtaining a fundamental domain; a proof can be found in Ehrlich and Im Hof [120].

LEMMA 4.6. Let p be a point of P that is left fixed only by the unit element of G. Then its Voronoi region VR(p, [p]) is a fundamental domain.

In Figure 23(ii), for example, the point set {p, p′, p″} is the orbit of p under a clockwise rotation by 120°. The Voronoi region VR(p, {p, p′, p″}) equals the master copy of the unfolded cone, as drawn by solid lines. Each interior point x is the only point of [x] contained in this region; only the points on the boundary (i.e. the cut) of the unfolded cone are mapped into each other by rotation. If we identify these two halflines we obtain the cone depicted in Figure 23(i), a model of the quotient space of the Euclidean plane over the cyclic group of order 3. In a similar way we would obtain a rectangle as fundamental domain of the group of two translations mentioned above, and identifying opposite edges would result in a torus.

If we want to compute the Voronoi diagram of a set S of point sites on a surface associated with such a quotient space P/G, we can proceed as follows. Let S₀ denote a set of representatives of S in a fundamental domain D ⊂ P. First, we compute the Voronoi diagram V([S₀]), where [S₀] denotes the union of the orbits of the elements of S₀, an infinite but periodic set. Due to [190], V([S₀]) can be obtained by applying the motions in G to the Voronoi diagram of a finite set of points of S₀ and translated copies of S₀.

THEOREM 4.7. There exists a finite subset U of [S₀] such that V([S₀]) = [V(U) ∩ D] holds.

If one removes from V([S₀]) all Voronoi edges that separate points of the same orbit and intersects the resulting structure with the fundamental domain D, the desired diagram V(S) results, after identifying equivalent points. Although the set U can be constructed effectively, it seems hard to establish an upper bound for the efficiency of this step.
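For the torus case (the group generated by two unit translations) the finite-subset idea can be tried out with off-the-shelf tools: replicate the sites into the 3 × 3 block of translated copies of the fundamental square, compute an ordinary planar Voronoi diagram, and read the desired cells off the central copies. A Python sketch using scipy.spatial follows; the nine-copy rule is a common practical shortcut, not a statement from the text, and very unevenly spread site sets may need more copies.

    import numpy as np
    from scipy.spatial import Voronoi

    def torus_voronoi(sites):
        sites = np.asarray(sites, dtype=float)   # points in the unit square
        shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        copies = np.concatenate([sites + np.array(s) for s in shifts])
        vor = Voronoi(copies)
        n = len(sites)
        base = shifts.index((0, 0)) * n          # indices of the central copies
        return vor, list(range(base, base + n))

The regions vor.regions[vor.point_region[i]] for the returned indices i, clipped to the unit square and with opposite boundary edges identified, give V(S) on the torus.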

Fig. 25. By d_C(p, q) = d(p, q)/d(p, q′), a convex distance function d_C with unit circle C is defined.

To mention a few other spaces considered, Ehrlich and Im Hof [120] have studied, from a differential geometrist's point of view, structural properties of the Voronoi diagram in such Riemannian manifolds where any two points are connected by a unique shortest path. In order to compute the Voronoi diagram of n points on a polyhedral surface in 3-space containing m vertices, one can make use of its discrete structure and apply the continuous Dijkstra technique usually employed for computing shortest paths. It allows the Voronoi diagram to be computed in O(N² log N) time and O(N²) space, where N = max(m, n); see Mitchell et al. [200].

4.5.2. Convex distance functions. In numerous applications the Euclidean metric does not provide an appropriate way of measuring distance. In the following subsections we consider the Voronoi diagram of point sites under distance measures different from the Euclidean metric. We start with convex distance functions, a concept that generalizes the Euclidean distance but slightly. Whereas this generalization does not cause serious difficulties in the plane, surprising changes will occur as we move to 3-space.

Let C denote a compact, convex set in the plane that contains the origin in its interior. Then a convex distance function can be defined in the following way. In order to measure the distance from a point p to some point q, the set C is translated by the vector p. The halfline from p through q intersects the boundary of C at a unique point q′; see Figure 25. Now one puts

    d_C(p, q) = d(p, q) / d(p, q′).

By definition, C equals the unit circle of d_C, that is, the set of all points q satisfying d_C(0, q) ≤ 1. The value of d_C(p, q) does not change if both p and q are translated by the same vector. One can show that the triangle inequality d_C(p, r) ≤ d_C(p, q) + d_C(q, r) is fulfilled, with equality holding for collinear points p, q, r. In general, we have d_C(p, q) = d_{C′}(q, p), where C′ denotes the reflected image of C about the origin. We can define the Voronoi diagram based on an arbitrary convex distance function by associating with each site p all points x of the plane such that d_C(p, x) ≤ d_C(q, x) holds for each other site q.
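For a polygonal unit circle C, the value d_C(p, q) can be computed exactly as in the definition: shoot the ray from the origin in direction v = q − p, find where it leaves C, and take the ratio of lengths. A Python sketch (no handling of degeneracies such as v pointing along an edge):

    def convex_distance(p, q, C):
        # C: vertices of a convex polygon, in counterclockwise order, with
        # the origin in its interior.  The boundary point q' - p equals
        # s*v for the edge hit by the ray, and then d_C(p, q) = 1/s.
        vx, vy = q[0] - p[0], q[1] - p[1]
        if vx == 0.0 and vy == 0.0:
            return 0.0
        n = len(C)
        for i in range(n):
            (ax, ay), (bx, by) = C[i], C[(i + 1) % n]
            ex, ey = bx - ax, by - ay
            det = ex * vy - ey * vx          # solve s*v = a + u*e
            if det == 0.0:
                continue                     # ray parallel to this edge
            s = (ex * ay - ey * ax) / det
            u = (vx * ay - vy * ax) / det
            if s > 0.0 and 0.0 <= u <= 1.0:
                return 1.0 / s
        raise ValueError("C does not contain the origin")

For the square C with corners (±1, ±1), for instance, convex_distance((0, 0), (2, 0), C) returns 2, the L∞ distance between the two points.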

Mesh generation

Interchanging the roles of the dependent and independent variables in the harmonic mapping system turns it into the quasilinear elliptic system

    α x_ξξ − 2β x_ξη + γ x_ηη = 0,    α y_ξξ − 2β y_ξη + γ y_ηη = 0,

where

    α = x_η² + y_η²,    β = x_ξ x_η + y_ξ y_η,    γ = x_ξ² + y_ξ².

Software designed to solve these systems often includes an additional source term on the right-hand sides of the harmonic system in (4) to control the local point spacing in the domain [129]. The elliptic method just discussed, though motivated by conformal mapping, does not compute true conformal mappings. A true conformal mapping induces a structured mesh with certain advantages; for example, the Laplacian is the limit of the second-order difference on such a grid. True conformal mapping, however, does not seem to be widely used in mesh generation, perhaps because algorithms to compute such mappings are relatively slow, or because they do not allow local control of point spacing.

In the case that Ω is a simple polygon, the Schwarz-Christoffel formula offers an explicit form for the conformal mappings from the unit disk D to Ω. Such a mapping can in turn be used to find conformal mappings from Ω to a square or rectangle. Let the points in the complex plane defining the polygon (in counterclockwise order) be z_1, …, z_n, the interior angles at these points be α_1, …, α_n, and define the normalized angles as β_k = α_k/π − 1. Using ω_1, …, ω_n as the preimages of z_1, …, z_n on the edge of the disk, the Schwarz-Christoffel formula gives the form of the conformal mapping as

    f(ω) = A + B ∫_0^ω ∏_{k=1}^{n} (1 − ξ/ω_k)^{β_k} dξ.    (5)

There are several programs available to solve for the unknown ω_k values: SCPACK by Trefethen [133], the SC Toolbox by Driscoll [47], and CRDT by Driscoll and Vavasis [48]. One difficulty in the numerical solution is "crowding", enormous variation in spacing between the ω_k points. CRDT, the latest and apparently best Schwarz-Christoffel algorithm, overcomes this difficulty by repeatedly remapping so that crowding does not occur near the points being evaluated.
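Once the prevertices are known, evaluating the map itself is simple numerical quadrature. The sketch below (our own, in Python) integrates formula (5) along the straight segment from 0 to a point w of the disk with Gauss-Legendre nodes; principal branches are used, so accuracy degrades near the prevertices and under severe crowding, which is exactly the difficulty CRDT is designed to avoid.

    import numpy as np

    def sc_map(w, prevertices, betas, A=0.0, B=1.0, order=64):
        # f(w) = A + B * integral_0^w prod_k (1 - xi/omega_k)^beta_k dxi,
        # with omega_k on the unit circle and beta_k = alpha_k/pi - 1.
        w = complex(w)
        nodes, weights = np.polynomial.legendre.leggauss(order)
        t = 0.5 * (nodes + 1.0)              # quadrature nodes in [0, 1]
        xi = t * w                           # points on the segment 0 -> w
        integrand = np.ones_like(xi, dtype=complex)
        for wk, bk in zip(prevertices, betas):
            integrand *= (1.0 - xi / wk) ** bk
        return A + B * 0.5 * w * np.dot(weights, integrand)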

5. Unstructured two-dimensional meshes

We have already mentioned the advantages of unstructured meshes: flexibility in fitting complicated domains, rapid grading from small to large elements, and relatively easy refinement and derefinement. Unlike structured mesh generation, unstructured mesh generation has been part of mainstream computational geometry for some years, and there is a large literature on the subject. We consider three principled approaches to unstructured mesh generation in some detail; these approaches use Delaunay triangulation, constrained Delaunay triangulation, and quadtrees. Then we discuss mesh refinement and improvement. In the final section, we describe some geometric problems abstracted from unstructured mesh generation.

5.1. Delaunay triangulation

Our first approach to unstructured mesh generation partitions the task into two phases: placement of mesh vertices, followed by triangulation. (Added points are called Steiner
points to distinguish them from the domain's original vertices.) If the placement phase is smart enough, the triangulation phase can be especially simple, considering only the input vertices and Steiner points and ignoring the input edges.

The placement phase typically places vertices along the domain boundary before adding them to the interior. The boundary should be lined with enough Steiner points that the Delaunay triangulation of all vertices will conform to the domain. This requirement inspires a crisp geometric problem, called conforming Delaunay triangulation: given a polygonal domain Ω, add Steiner points so that each edge of Ω is a union of edges in the Delaunay triangulation. An algorithm due to Saalfeld [112] lines the edges of Ω with a large number of Steiner points, uniformly spaced except near the endpoints. A more efficient solution [96] covers the edges of Ω by disks that do not overlap other edges. Edelsbrunner and Tan [52] gave the best theoretical result, an algorithm that uses O(n³) Steiner points for an n-vertex multiple domain. They also gave an Ω(n²) lower bound example.

There are several approaches to placing interior Steiner points. One approach [84] combines the vertices from a number of structured meshes. A second approach [10,95] adds Steiner points in successive layers, working in from the domain boundary as in advancing front mesh generation (Section 7.2). Figure 6 shows an example. A third approach [88,137] chooses interior points at random according to some distribution, which may be interpolated from a coarse quadtree or "background" triangulation. An independent random sample is likely to produce some badly shaped triangles [26], so the generator should oversample and then filter out points too close to previously chosen points [88]. Finally, there are deterministic methods that achieve essentially the same effect as random sampling with filtering; these methods [29,120] define birth and death rules that depend upon the density of neighboring points.

All of these methods can give anisotropy. The first and second approaches, structured submeshes and advancing front, offer local control of element shapes and orientations. These approaches may space points improperly where structured meshes or advancing fronts collide, but this flaw can usually be corrected by filtering points and later smoothing the mesh. The third and fourth approaches trade direct control over element shapes for ease of fitting complicated geometries. Nevertheless, one can achieve anisotropy with these approaches by computing the Delaunay triangulation within a stretched space [29,36,43]. For example, Bossen [29] uses a "background" triangulation to define local affine transformations; Delaunay flips (described below) are then made with respect to transformed circles. Stretched Delaunay triangulations have many more large angles than ordinary Delaunay triangulations, but this should not pose a problem unless the stretching exceeds the desired amount (Section 3).

The triangulation phase uses the well-known Delaunay triangulation. The Delaunay triangulation of a point set S = {s_1, s_2, …, s_n} is defined by the empty circle condition: a triangle s_i s_j s_k appears in the Delaunay triangulation DT(S) if and only if its circumcircle encloses no other points of S. See Figure 7(a). There is an exception for points in special position: if an empty circle passes through four or more points of S, we may triangulate these points — complete the triangulation — arbitrarily. So defined, DT(S) is a triangulation of the convex hull of S. For our purposes, however, we can discard all triangles that fall outside the original domain Ω.


Fig. 6. Delaunay triangulation of points placed by an advancing front (T. Barth).

Fig. 7. (a) Delaunay triangulation. (b) A reversed quadrilateral.

There are a number of practical Delaunay triangulation algorithms [56]. We describe only one, called the edge flipping algorithm, because it is most relevant to our subsequent discussion. Its worst-case running time of O(n²) is suboptimal, but it performs quite well in practice. The edge flipping algorithm starts from any triangulation of S and then locally optimizes each edge. Let e be an internal (non-convex-hull) edge and Q_e be the triangulated quadrilateral formed by the triangles sharing e. Quadrilateral Q_e is reversed if the two angles without the diagonal sum to more than 180°, or equivalently, if each triangle circumcircle contains the opposite vertex as in Figure 7(b). If Q_e is reversed, we "flip" it by exchanging e for the other diagonal.

    compute an initial triangulation of S
    place all internal edges onto a queue
    while the queue is not empty do
        remove the first edge e
        if quadrilateral Q_e is reversed then
            flip it and add the outside edges of Q_e to the queue
        endif
    endwhile

An initial triangulation can be computed by a sweep-line algorithm. This algorithm adds the points of S by x-coordinate order. Upon each addition, the algorithm walks around the


Fig. 8. A sweep-line algorithm for computing an initial triangulation.

convex hull of the already-added points, starting from the rightmost previous point and adding edges until the slope reverses, as shown in Figure 8. The following theorem [45] guarantees the success of edge flipping: a triangulation in which no quadrilateral is reversed must be a completion of the Delaunay triangulation.
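Both the empty circle condition and the reversed-quadrilateral test reduce to the sign of one 3 × 3 determinant, the classical incircle predicate. A Python sketch follows; in floating point the sign can be wrong for near-degenerate inputs, which is why robust implementations evaluate such predicates with exact or adaptive arithmetic.

    def incircle(a, b, c, d):
        # > 0 iff d lies strictly inside the circumcircle of triangle abc,
        # assuming a, b, c are in counterclockwise order.
        rows = [(p[0] - d[0], p[1] - d[1],
                 (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
        (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
        return (ax * (by * cw - bw * cy)
                - ay * (bx * cw - bw * cx)
                + aw * (bx * cy - by * cx))

    def reversed_quad(a, b, c, d):
        # Quadrilateral a, b, c, d (counterclockwise) triangulated by the
        # diagonal ac is reversed iff d invades the circumcircle of abc;
        # this is the test driving the edge flipping loop above.
        return incircle(a, b, c, d) > 0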

5.2. Constrained Delaunay triangulation

There is another way, besides conforming Delaunay triangulation, to extend Delaunay triangulation to polygonal domains. The constrained Delaunay triangulation of a (possibly multiple) domain Ω does not use Steiner points, but instead redefines Delaunay triangulation in order to force the edges of Ω into the triangulation. A point p is visible to a point q in Ω if the open line segment pq lies within Ω and does not intersect any edges or vertices of Ω. The constrained Delaunay triangulation CDT(Ω) contains each triangle not cut by an edge of Ω, that has an empty circumcircle, where empty now means that the circle does not contain any vertices of Ω visible to points inside the triangle. The visibility requirement means that external proximities, where Ω wraps around to nearly touch itself, have no effect. Figure 9 provides an example; here vertex v is not visible to any point in the interior of triangle abc.

The edge flipping algorithm can be generalized to compute the constrained Delaunay triangulation, only this time we do not allow edges of Ω onto the queue. Obtaining an initial triangulation is somewhat more difficult for polygonal domains than for point sets. The textbook by Preparata and Shamos [103] describes an O(n log n)-time algorithm for computing an initial triangulation. This algorithm first adds edges to Ω to subdivide it into easy-to-triangulate "monotone" faces.

Ruppert [111], building on work of Chew [38], gave a mesh-generation algorithm based on constrained Delaunay triangulation. (Subsequently, Mitchell [90] sharpened Ruppert's analysis, and Shewchuk [117,118] further refined the algorithm and made an implementation available on the Web.) Ruppert's algorithm computes the constrained Delaunay triangulation at the outset and then adds Steiner points to improve the mesh, thus uniting the two phases of the approach described in the last section. In choosing this approach, the


Fig. 9. The constrained Delaunay triangulation of a polygon with holes.

Fig. 10. A mesh computed by Ruppert's algorithm (J. Ruppert).

user gives up some control over point placement, but obtains a more efficient mesh with fewer and "rounder" triangles.

The first step of Ruppert's mesh generator cuts off all vertices of the domain Ω at which the interior angle measures less than 45°. The cutting line at such a vertex v should not introduce a new small feature to Ω; it is best to cut off an isosceles triangle whose base is about halfway from v to its closest visible neighbor. If v has degree greater than two, as might be the case in a multiple domain, then the bases of the isosceles triangles around v should match up so that no isosceles triangle receives a Steiner point on one of its legs. Next the algorithm computes the constrained Delaunay triangulation of the modified domain. The algorithm then goes through the loop given below. The last line of the loop repairs a constrained Delaunay triangulation after the addition of a new Steiner point c. To accomplish this step, there is no need to recompute the entire triangulation. The removed
old triangles are exactly those with circumcircles containing c, which can be found by searching outwards from the triangle that contains c, and the new triangles that replace the removed triangles must all be incident to the new vertex c.

    while there exists a triangle t with an angle smaller than 20° do
        let c be the center of t's circumcircle
        if c lies within the diameter semicircle of a boundary edge e then
            add the midpoint m of e
        else
            add c
        endif
        recompute the constrained Delaunay triangulation
    endwhile

The loop is guaranteed to halt with all angles larger than 20°. At this point, the cutoff isosceles triangles are returned to the domain, and the mesh is complete. Ruppert's algorithm comes with a strong theoretical guarantee: all new angles, that is, angles not present in the input, are greater than 20°, and the total number of triangles in the mesh is at most a constant times the minimum number of triangles in any such no-small-angle mesh. To prove this efficiency result, Ruppert shows that each triangle in the final mesh is within a constant factor of the local feature size at its vertices. The local feature size at a point p ∈ Ω is defined to be the radius of the smallest circle centered at p that touches two nonadjacent edges of the boundary; this is a spacing function intrinsic to the domain.
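Two small geometric subroutines drive the loop above: the circumcenter of a triangle and the test of whether a point encroaches on the diametral circle of a boundary edge. A Python sketch (names are ours):

    def circumcenter(a, b, c):
        # Center of the circle through a, b, c (vertices not collinear).
        ax, ay = a; bx, by = b; cx, cy = c
        d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        return (ux, uy)

    def encroaches(p, e):
        # True iff p lies strictly inside the circle whose diameter is the
        # edge e = (a, b); then the edge midpoint is inserted instead of p.
        (ax, ay), (bx, by) = e
        mx, my = 0.5 * (ax + bx), 0.5 * (ay + by)
        r2 = (ax - mx) ** 2 + (ay - my) ** 2
        return (p[0] - mx) ** 2 + (p[1] - my) ** 2 < r2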

5.3. Quadtrees

A quadtree mesh generator [8,25,143] starts by enclosing the entire domain Ω inside an axis-aligned square. It splits this root square into four congruent squares, and continues splitting squares recursively until each minimal — or leaf — square intersects Ω in a simple way. Further splits may be dictated by a user-defined spacing function or balance condition. Quadtree squares are then warped and cut to conform to the boundary. A final triangulation step gives an unstructured triangular mesh.

We now describe a particular quadtree mesh generator due to Bern, Eppstein, and Gilbert [25]. As first presented, the algorithm assumes that Ω is a polygon with holes; however, the algorithm can be extended to multiple and even to curved domains. In fact, the quadtree approach handles curved domains more gracefully than the Delaunay and constrained Delaunay approaches, because the splitting phase can automatically adapt to the curvature of enclosed boundary pieces.

The algorithm of Bern et al. splits squares until each leaf square contains at most one connected component of Ω's boundary, with at most one vertex. Mitchell and Vavasis [91] improved the splitting phase by "cloning" squares that intersect Ω in more than one connected component, so that each copy contains only a single connected component of Ω. The algorithm then splits squares near vertices of Ω two more times, so that each vertex lies within a buffer zone of equal size squares. Next the mesh generator imposes a balance condition: no square should be adjacent to one less than one-half its size. This causes more splits to propagate across the quadtree, increasing the total number of leaf squares by a constant factor (at most 8). Squares are


Fig. 11. A mesh computed by a quadtree-based algorithm (S. Mitchell).

then warped to conform to the domain Ω. Various warping rules work; we give just one possibility. In the following pseudocode, |b| denotes the side length of square b.

    for each vertex v of Ω do
        let y be the closest quadtree vertex to v
        move y to v
    endfor
    for each leaf square b still crossed by an edge e do
        move the vertices of b that are closer than |b|/4 to e to their closest points on e
    endfor
    discard faces of the warped quadtree that lie outside Ω

Finally, the cells of the warped quadtree are triangulated so that all angles are bounded away from 0°. Figure 11 gives a mesh computed by a variant of the quadtree algorithm. This figure shows that cloning ensures appropriate element sizes around holes and "almost holes". Notice that a quadtree-based mesh exhibits preferred directions — horizontal and vertical. If this artifact poses a problem, mesh improvement steps can be used to redistribute element orientations. The quadtree algorithm enjoys the same efficiency guarantee as Ruppert's algorithm. In fact, the quadtree algorithm was the first to be analyzed in this way [25].
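The splitting phase is easy to prototype. The Python sketch below uses a deliberately simplified stopping rule (at most one input vertex per leaf) as a stand-in for the real rule, and omits cloning, the extra splits around vertices, and the balance condition:

    class Quad:
        def __init__(self, x, y, size):
            self.x, self.y, self.size = x, y, size   # lower-left corner
            self.children = None                     # None while a leaf

        def contains(self, p):
            return (self.x <= p[0] < self.x + self.size and
                    self.y <= p[1] < self.y + self.size)

    def split(node, vertices, min_size=1e-6):
        inside = [p for p in vertices if node.contains(p)]
        if len(inside) <= 1 or node.size <= min_size:
            return                                   # leaf is simple enough
        h = node.size / 2.0
        node.children = [Quad(node.x + dx * h, node.y + dy * h, h)
                         for dx in (0, 1) for dy in (0, 1)]
        for child in node.children:
            split(child, inside, min_size)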

5.4. Mesh refinement and derefinement

Adaptive mesh refinement places more grid points in areas where the PDE solution error is large. Local error estimates based on an initial solution are known as a posteriori error estimates [7] and can be used to determine which elements should be refined. For elliptic
problems these estimators asymptotically bound the true error and can be computed locally using only the information on an element [138].

One approach to mesh refinement [71] iteratively inserts extra vertices into the triangulation, typically at edge bisectors or triangle circumcenters as in Section 5.2. New vertices along the boundaries of curved domains should be computed using the curved boundary rather than the current straight edge, thereby giving a truer approximation of the domain as the mesh refines [36]. Iterative vertex insertion may be viewed as a mesh improvement step (Section 5.5), and indeed several generators [29,119,139] have combined insertion/deletion, flipping, and smoothing into a single loop. Iterative vertex insertion gives a finer mesh, but not a nesting or edge conforming refinement of the original mesh, meaning a mesh that includes the boundaries of the original triangles. Nesting refinements simplify the interpolation step in the multigrid method (Section 2.2). To compute such a refinement, we turn to another approach. This approach splits triangles in need of refinement, by adding the midpoints of sides. The pseudocode below gives the overall approach.

    solve the differential equation on the initial mesh T_0
    estimate the error on each triangle
    while the maximum error on a triangle is larger than the given tolerance do
        based on error estimates, mark a set of triangles S_k to refine
        * divide the triangles in S_k, along with adjacent invalid triangles, to get T_{k+1}
        solve the differential equation on T_{k+1}
        estimate the error on each triangle
        k = k + 1
    endwhile

There are a number of popular alternatives for step *, in which the current mesh T_k is adaptively refined. In regular refinement [11,12], the midpoints of the sides of a marked triangle are connected, as in Figure 12(b), to form four similar triangles. Unmarked triangles that received two or three midpoints are split in the same way. Unmarked triangles that received only one midpoint are bisected by connecting the midpoint to the opposite vertex as in Figure 12(a). Before the next iteration of *, bisected triangles are glued back together and then marked for refinement; this precaution guarantees that each triangle in T_{k+1} will either be similar to a triangle in T_0 or be the bisection of a triangle similar to a triangle in T_0. Thus, regular refinement — regardless of the number of times through the refinement loop — produces a mesh with minimum angle at least half the minimum angle in T_0. Hence the angles in T_{k+1} are bounded away from 0 and π.

Rivara [107-109] proposed several alternatives for step * based on triangle bisection. One method refines each marked triangle by cutting from the opposite vertex to the midpoint of the longest edge. Neighboring triangles are now invalid, meaning that one side contains an extra vertex; these triangles are then bisected in the same way. Bisections continue until there are no remaining invalid triangles. Refinement can propagate quite far from marked triangles; however, propagation cannot fall into an infinite loop, because along a propagation path each bisected edge is longer than its predecessor. This approach, like the previous one, produces only a finite number of different triangle shapes — similarity


Fig. 12. A triangle divided by (a) bisection, and (b) regular refinement.

Fig. 13. The bisection algorithm given in the pseudocode splits invalid children of refined triangles to their subdivision points, rather than to their longest edges.

classes — and the minimum angle is again at least half the smallest angle in T_0. Quite often longest-edge refinement actually improves angles. A second Rivara refinement method is given in the pseudocode below and illustrated in Figure 13. This method does not always bisect the longest edge, so bisections tend to propagate less, yet the method retains the same final angle bound as the first Rivara method.

    Q_1 = S_k    {Q denotes "marked" triangles to be refined}
    R_1 = ∅      {R denotes children of refined triangles}
    while (Q_i ∪ R_i) ≠ ∅ do
        bisect each triangle in Q_i across its longest edge
        bisect each triangle in R_i across its subdivided edge
        add all invalid children of Q_i triangles to R_{i+1}
        add all other invalid triangles to Q_{i+1}
        i = i + 1
    endwhile

We now discuss the reverse process: coarsening or derefinement of a mesh. This process helps reduce the total number of elements when tracking solutions to time-varying differential equations. Coarsening can also be used to turn a single highly refined mesh into a sequence of meshes for use in the multigrid method [98]. Figure 14 shows a sequence of meshes computed by a coarsening algorithm due to Ollivier-Gooch. The algorithm marks a set of vertices to delete from the fine mesh, eliminates all marked vertices, and then retriangulates the mesh. The resulting mesh is node


Fig. 14. A sequence of meshes used by the multigrid method for solving the linear systems arising in modeling airflow over an airfoil (C. Ollivier-Gooch).

conforming, meaning that every vertex of the coarse mesh appears in the fine mesh, but not edge conforming. One difficulty is that the shapes of the triangles degrade as the mesh is coarsened, due to increasing disparity between the interior and boundary point densities. Meshes produced by refinement methods are typically easier to coarsen than are less hierarchical meshes such as Delaunay triangulations. Teng, Talmor, and Miller [87] have recently devised an algorithm using Delaunay triangulations of well-spaced point sets, which produces a sequence of bounded-aspect-ratio, node-conforming meshes of approximately minimum depth.
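The elementary operation in Rivara's refinement methods is bisection of a triangle across its longest edge; a Python sketch follows (the queue bookkeeping that propagates bisections to invalid neighbors is omitted):

    import math

    def bisect_longest_edge(tri):
        # Returns the midpoint of the longest edge and the two children.
        a, b, c = tri
        edges = [((a, b), c), ((b, c), a), ((c, a), b)]
        (p, q), opp = max(edges, key=lambda e: math.dist(*e[0]))
        m = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
        return m, [(p, m, opp), (m, q, opp)]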

5.5. Mesh improvement

The most common mesh improvement techniques are flipping and smoothing. These techniques have proved to be very powerful in two dimensions, and together they can transform very poor meshes into very good ones, so long as the mesh starts with enough vertices. Flipping exchanges the diagonals of a triangulated quadrilateral as in the edge flipping algorithm for computing the Delaunay triangulation (Section 5.1), only the criterion for making the exchange need not be the Delaunay empty circle test. Flipping can be used to regularize vertex degrees, minimize the maximum angle, or improve almost any other quality measure of triangles. For quality measures optimized by the Delaunay triangulation (Section 5.6.1), flipping computes a true global optimum, but for other criteria it computes only a local optimum.

Mesh smoothing adjusts the locations of mesh vertices in order to improve element shapes and overall mesh quality [2,3,33,55,100]. In mesh smoothing, the topology of the mesh remains invariant, thus preserving important features such as the nonzero pattern of the linear system. Laplacian smoothing [55,77] is the most commonly used smoothing technique. This method sweeps over the entire mesh several times, repeatedly moving each adjustable vertex to the arithmetic average of the vertices adjacent to it. Variations weight each adjacent vertex by the total area of the elements around it, or use the centroid of the incident elements rather than the centroid of the neighboring vertices [139]. Laplacian smoothing is computationally inexpensive and fairly effective, but it does not guarantee improvement


Fig. 15. (a) A mesh resulting from bisection refinement without smoothing, (b) The same mesh after local optimization-based smoothing.

in element quality. In fact, Laplacian smoothing can even invert an element, unless the algorithm performs an explicit check before moving a vertex.

Another class of smoothing algorithms uses optimization techniques to determine new vertex locations. Both global and local optimization-based smoothing offer guaranteed mesh improvement and validity. Global techniques simultaneously adjust all unconstrained vertices; such an approach involves an optimization problem as large as the number of unconstrained vertices, and consequently, is computationally very expensive [33,100]. Local techniques adjust vertices one by one — or an independent set of vertices in parallel [58] — resulting in a cost more comparable to Laplacian smoothing. Many quality measures, including maximum angle and area divided by sum of squared edge lengths, can be optimized by techniques related to linear programming [2].

Figure 15 shows the results of a local optimization-based smoothing algorithm developed by Freitag et al. [58]. The algorithm was applied to a mesh generated adaptively during the finite element solution of the linear elasticity equations on a two-dimensional rectangular domain with a hole. The mesh on the left was generated using the bisection algorithm for refinement; the edges from the coarse mesh are still evident after many levels of refinement. The mesh on the right was generated by a similar algorithm, only with vertex locations optimized after each refinement step. Overall, the global minimum angle has improved from 11.3° to 21.7° and the average minimum element angle from 35.7° to 41.1°.
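A minimal sketch of plain Laplacian smoothing, together with the angle computation behind quality figures like those just quoted; as noted above, a production smoother must check that a move does not invert an incident element before accepting it.

    import math

    def triangle_angles(a, b, c):
        # Interior angles via the law of cosines; the minimum over a mesh
        # is the quality measure reported for the meshes of Figure 15.
        la, lb, lc = math.dist(b, c), math.dist(c, a), math.dist(a, b)
        A = math.acos((lb**2 + lc**2 - la**2) / (2 * lb * lc))
        B = math.acos((lc**2 + la**2 - lb**2) / (2 * lc * la))
        return A, B, math.pi - A - B

    def laplacian_smooth(coords, neighbors, fixed, sweeps=5):
        # coords: vertex -> (x, y); neighbors: vertex -> adjacent vertices;
        # fixed: boundary vertices that must stay put.
        for _ in range(sweeps):
            for v, adj in neighbors.items():
                if v in fixed or not adj:
                    continue
                coords[v] = (sum(coords[u][0] for u in adj) / len(adj),
                             sum(coords[u][1] for u in adj) / len(adj))
        return coords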

5.6. Theoretical questions

We have mentioned some theoretical results — conforming Delaunay triangulation, no-small-angle triangulation — in context. In this section, we describe some other theoretical work related to mesh generation.


5.6.1. Optimal triangulation. Computational geometers have studied a number of problems of the following form: given a planar point set or polygonal domain, find a best triangulation, where "best" is judged according to some specific quality measure such as maximum angle, minimum angle, maximum edge length, or total edge length. If the input is a simple polygon, most optimal triangulation problems are solvable by dynamic programming in time O(n³), but if the input is a point set, polygon with holes, or multiple domain, these problems become much harder.

The Delaunay triangulation — constrained Delaunay triangulation in the case of polygonal domains — optimizes any quality measure that is improved by flipping a reversed quadrilateral; this statement follows from the theorem that a triangulation without reversed quadrilaterals must be Delaunay. Thus Delaunay triangulation maximizes the minimum angle, along with optimizing a number of more esoteric quality measures, such as maximum circumcircle radius, maximum enclosing circle radius, and "roughness" of a piecewise-linear interpolating surface [105]. As mentioned in Section 5.5, edge flipping can also be used as a general optimization heuristic. For example, edge flipping works reasonably well for minimizing the maximum angle [53], but it does not in general find a global optimum.

A more powerful local improvement method called edge insertion [23,53] exactly solves the minmax angle problem, as well as several other minmax optimization problems. Edge insertion starts from an arbitrary triangulation and repeatedly inserts candidate edges. If minimizing the maximum angle is the goal, the candidate edge e subdivides the maximum angle; in general the candidate edge is always incident to a "worst vertex" of a worst triangle. The algorithm then removes the edges that are crossed by e, forming two polygonal holes alongside e. Holes are retriangulated by repeatedly removing ears (triangles with two sides on the boundary, as shown in Figure 16) with maximum angle smaller than the old worst angle ∠cab. If retriangulation runs to completion, then the overall triangulation improves and edge bc is eliminated as a future candidate. If retriangulation gets stuck, then the overall triangulation is returned to its state before the insertion of e, and e is eliminated as a future candidate. Each candidate insertion takes time O(n), giving a total running time of O(n³).

    compute an initial triangulation with all (n choose 2) edge slots unmarked
    while ∃ an unmarked edge e cutting the worst vertex a of worst triangle abc do
        add e and remove all edges crossed by e
        try to retriangulate by removing ears better than abc
        if retriangulation succeeds then
            mark bc
        else
            mark e and undo e's insertion
        endif
    endwhile

Edge insertion can compute the minmax "eccentricity" triangulation or the minmax slope interpolating surface [23] in time O(n³). By inserting candidate edges in a certain order and saving old partial triangulations, the running time can be improved to O(n² log n) for minmax angle [53] and maxmin triangle height.

We close with some results for two other optimization criteria: maximum edge length and total length. Edelsbrunner and Tan [51] showed that a triangulation of a point set that minimizes the maximum edge length must contain the edges of a minimum spanning tree.


Fig. 16. Edge insertion retriangulates holes by removing sufficiently good ears. Dotted lines indicate the old triangulation.

The tree divides the input into simple polygons, which can be filled in by dynamic programming, giving an O(n³)-time algorithm (improvable to O(n²)). Whether a triangulation minimizing total edge length — the "minimum weight triangulation" — can be found in polynomial time is still open. The most promising approach [46] incrementally computes a set of edges that must appear in the triangulation. If the required edges form a connected spanning graph, then the triangulation can be completed with dynamic programming as in the minmax problem.

5.6.2. Steiner triangulation. The optimal triangulation problems just discussed have limited applicability to mesh generation, since they address only triangulation and not Steiner point placement. Because exact Steiner triangulation problems appear to be intractable, typical theoretical results on Steiner triangulation prove either an approximation bound, such as the guarantees provided by the mesh generators in Sections 5.2 and 5.3, or an order of complexity bound, such as Edelsbrunner and Tan's O(n³) algorithm for conforming Delaunay triangulation.

The mesh generators in Sections 5.2 and 5.3 give constant-factor approximation algorithms for what we may call the no-small-angle problem: triangulate a domain Ω using a minimum number of triangles, such that all new angles are bounded away from 0°. The provable constants tend to be quite large — in the hundreds — although the actual performance seems to be much better. The number of triangles in a no-small-angle triangulation depends on the geometry of the domain, not just on the number of vertices n; an upper bound is given by the sum of the aspect ratios of triangles in the constrained Delaunay triangulation.

We can also consider the no-large-angle problem: triangulate Ω using a minimum number of triangles, such that all new angles are bounded away from 180°. The strictest bound on large angles that does not imply a bound on small angles is nonobtuse triangulation: triangulate a domain Ω such that the maximum angle measures at most 90°. Moreover, a nonobtuse mesh has some desirable numerical and geometric properties [9,135]. Bern, Mitchell, and Ruppert [27] developed a circle-based algorithm for nonobtuse triangulation of polygons with holes.


Fig. 17. Steps in circle-based nonobtuse triangulation.

This algorithm gives a triangulation with O(n) triangles, regardless of the domain geometry. Figure 17 shows the steps of this algorithm: the domain is packed with nonoverlapping disks until each uncovered region has either 3 or 4 sides; radii to tangencies are added in order to split the domain into small polygons; and finally the small polygons are triangulated with right triangles, without adding any new subdivision points. It is currently unknown whether multiple domains admit polynomial-size nonobtuse triangulations. Mitchell [93], however, gave an algorithm for triangulating multiple domains using O(n² log n) triangles with maximum angle 157.5°. Tan [126] improved the maximum angle bound to 132° and the complexity to the optimal O(n²).

6. Hexahedral meshes

Mesh generation in three dimensions is not as well developed as in two, for a number of reasons: lack of standard data representations for three-dimensional domains, greater software complexity, and — most relevant to this article — some theoretical difficulties. This section and the next one survey approaches to three-dimensional mesh generation. We have divided this material according to element shape, hexahedral or tetrahedral. This classification is not completely strict, as many hexahedral mesh generators use triangular prisms and tetrahedra in a pinch. Careful implementations of numerical methods can handle degenerate hexahedra such as prisms [66,67]. In this section, we describe three approaches to hexahedral mesh generation that vary in their degree of structure and strictness.

6.1. Multiblock meshes

We start with the approach that produces meshes with the most structure (and quite often the highest quality elements). A multiblock mesh contains a number of small structured meshes that together form a large unstructured mesh.


Fig. 18. A multiblock hexahedral mesh of a submarine, showing (a) block structure, and (b) a vertical slice through the mesh (ICEM CFD).

Typically a user must supply the topology of the unstructured mesh, but the rest of the process is automated. Figure 18 shows a multiblock mesh created by ICEM Hexa, a system developed by ICEM CFD Engineering. In this system the user controls the placement of the block corners, and then the mesh generator projects the implied block edges onto domain curves and surfaces automatically. Due to the need for human interaction, multiblock meshes are not well suited to adaptive meshing, nor to rapidly iterated design and simulation.

6.2. Cartesian meshes

We move on to a recently developed "quick and dirty" approach to hexahedral mesh generation. The Cartesian approach offers simple data structures, explicit orthogonality of mesh edges, and robust and straightforward mesh generation. The disadvantage of this approach is that it uses non-hexahedral elements around the domain boundary, which then require special handling.

A Cartesian mesh is formed by cutting a rectangular box into eight congruent boxes, each of which is split recursively until each minimal box intersects the domain Ω in a simple way or has reached some small target size. (This construction is essentially the same as an octree, described in Section 7.3.) Requiring neighboring boxes to differ in size by at most a factor of two ensures appropriate mesh grading. Boxes cut by the boundary are classified into a number of patterns by determining which of their vertices lie interior and exterior to Ω. Each pattern corresponds to a different type of non-hexahedral element. Boxes adjacent to ones half their own size can similarly be classified as non-hexahedral elements, or alternatively the solution value at their subdivision vertices can be treated as implicit variables using Lagrange multipliers [1].

Recent fluid dynamics simulations have used Cartesian meshes quite successfully in both finite element and finite volume formulations [41,42,144]. The approach can be adapted even to very difficult meshing problems.
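The recursive splitting is easy to sketch. The following two-dimensional version (quadrants instead of the eight octants used in 3D) is our own illustration, with a user-supplied boundary predicate; it omits the 2:1 balance enforcement and the classification of cut cells described above:

```python
def refine(box, crosses_boundary, min_size, leaves):
    """Recursively split an axis-aligned square box until each leaf
    either misses the domain boundary or reaches the target size.
    box = (x, y, size); crosses_boundary is a user-supplied predicate."""
    x, y, s = box
    if not crosses_boundary(box) or s <= min_size:
        leaves.append(box)   # leaf cell; cut cells still need pattern handling
        return
    h = s / 2.0
    for dx in (0.0, h):      # four congruent quadrants (eight cubes in 3D)
        for dy in (0.0, h):
            refine((x + dx, y + dy, h), crosses_boundary, min_size, leaves)

cells = []
refine((0.0, 0.0, 1.0), lambda b: b[0] < 0.3 < b[0] + b[2], 0.125, cells)
```

Cut leaves would then be classified by which of their corners lie inside Ω, as in the pattern table described above.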


Fig. 19. A two-dimensional Cartesian mesh for a biplane wing (W. Coirier).

For example, Berger and Oliger [21] and Berger and Colella [20] have developed adaptive Cartesian-based methods for rotational flows and flows with strong shocks.

6.3. Unstructured hexahedral meshes

Hexahedral elements retain some advantages over tetrahedral elements even in unstructured meshes. Hexahedra fit man-made objects well, especially objects produced by CAD systems. The edge directions in a box-shaped hexahedron often have physical significance; for example, hexahedra show a clear advantage over tetrahedra for a stress analysis of a beam [19]. The face normals of a box meet at the center of the element; this property can be used to define control volumes for finite volume methods. These advantages are not inherent to hexahedra, but rather are properties of box-shaped elements, which degrade as the element grows less rectangular. Thus it will not suffice to generate an unstructured hexahedral mesh by transforming a tetrahedral mesh.

Armstrong et al. [4] are currently developing an unstructured hexahedral mesh generator based on the medial axis transform. The medial axis of a domain is the locus of centers of spheres that touch the boundary in two or more faces. This construction is closely related to the Voronoi diagram of the faces of the domain; Srinivasan et al. [124] have previously applied this construction to two-dimensional unstructured mesh generation. The medial axis is a natural tool for mesh generation, as advancing fronts meet at the medial axis in the limit of small, equal-sized elements. By precomputing this locus, a mesh generator can more gracefully handle the junctures between sections of the mesh.

Tautges and Mitchell [127] are developing an all-hexahedral mesh generation algorithm called whisker weaving. Whisker weaving is an advancing front approach that fixes the topology of the mesh before the geometry. It starts from a quadrilateral surface mesh,


which can itself be generated by an advancing-front generator within each face [28]. The algorithm forms the planar dual of the surface mesh, and then finds closed loops in the planar dual around the surface of the polyhedron. Each loop will represent the boundary of a layer of hexahedra in the eventual mesh. A layer of hexahedra can be represented by its dual, called a sheet, which has one vertex per hexahedron and edges between adjacent hexahedra. As the algorithm runs, it fills in sheets from the boundary inwards.

This approach to hexahedral meshing raises an interesting theoretical question: which quadrilateral surface meshes can be extended to hexahedral volume meshes? Mitchell [94] and Thurston [132] (see also Eppstein [54]) answered this question in a topological sense by showing that any surface mesh on a simple polyhedron with an even number of quadrilaterals can be extended to a volume mesh formed by (possibly curved) topological cubes. The geometric question remains open.
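A remark we add for completeness (it is not spelled out in the chapter): the evenness condition is necessary by a simple count. Each hexahedron has six quadrilateral faces, and each interior face is shared by exactly two hexahedra, so a mesh with H hexahedra, I interior faces, and B boundary quadrilaterals satisfies

    6H = 2I + B,

hence B = 6H − 2I is even; the theorem above says that, for simple polyhedra, this parity condition is also sufficient, at least topologically.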

7. Tetrahedral meshes

Tetrahedra have several important advantages over hexahedra: unique linear interpolation from vertices to interior, greater flexibility in fitting complicated domains, and ease of refinement and derefinement. In order to realize the last two of these advantages, tetrahedral meshes are almost always unstructured. Most of the approaches to unstructured triangular mesh generation that we surveyed in Section 5 can be generalized to tetrahedral mesh generation, but not without some new difficulties. Before describing Delaunay, advancing front, and octree mesh generators, we discuss three theoretical obstacles to unstructured tetrahedral meshing, ways in which R³ differs from R².

First, not all polyhedral domains can be triangulated without Steiner points. Figure 20(a) gives an example of a non-tetrahedralizable polyhedron, a twisted triangular prism in which each rectangular face has been triangulated so that it bends in towards the interior. None of the top three vertices is visible through the interior to all three of the bottom vertices; hence no tetrahedron formed by the vertices of this polyhedron can include the bottom face. Chazelle [37] gave a quantitative bad example, shown in Figure 20(b). This polyhedron includes Ω(n) grooves that nearly meet at a doubly-ruled curved surface; any triangulation of this polyhedron must include Ω(n²) Steiner points and Ω(n²) tetrahedra. Bad examples such as these appear to rule out the possibility of generalizing constrained Delaunay triangulation to three dimensions.

Second, the very same domain may be tetrahedralized with different numbers of tetrahedra. For example, a cube can be triangulated with either five or six tetrahedra. As we shall see below, the generalization of the edge flip to three dimensions exchanges two tetrahedra for three or vice versa. This variability does not usually pose a problem, except in extreme cases. For example, n points in R³ can have a Delaunay triangulation with Ω(n²) tetrahedra, even though some other triangulation will have only O(n).

Finally, tetrahedra can be poorly shaped in more ways than triangles. In two dimensions, there are only two types of failure, angles close to 0° and angles close to 180°, and no failures of the first kind implies no failures of the second. In three dimensions, we can classify poorly shaped tetrahedra according to both dihedral and solid angles [22]. There are then five types of bad tetrahedra, as shown in Figure 21.


Fig. 20. (a) Schönhardt's twisted prism cannot be tetrahedralized without Steiner points, (b) Chazelle's polyhedron requires Ω(n²) Steiner points.


Fig. 21. The five types of bad tetrahedra.

A needle permits arbitrarily small solid angles, but not large solid angles and neither large nor small dihedral angles. A wedge permits both small solid and dihedral angles, but neither large solid nor large dihedral angles, and so forth. Notice that a sliver or a cap can have all face angles bounded away from both 0° and 180°, although the tetrahedron itself may have arbitrarily small solid angles and interior volume. An example is the sliver with vertex coordinates (0, 0, 0), (0, 1, ε), (1, 0, ε), and (1, 1, 0), where ε → 0.

Many measures of tetrahedron quality have been proposed [75], most of which have a maximum value for an equilateral tetrahedron and a minimum value for a degenerate tetrahedron. One suitable measure, which forbids all five types of bad tetrahedra, is the minimum solid angle. A weaker measure, which forbids all types except slivers, is the ratio of the minimum edge length to the radius of the circumsphere [88].
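To make the sliver example concrete, one can check numerically that its minimum solid angle vanishes as ε → 0 even though its face angles stay bounded. A small sketch using the Van Oosterom-Strackee formula for the solid angle at a vertex (our own illustration, not code from the chapter):

```python
import numpy as np

def solid_angle(apex, p1, p2, p3):
    """Solid angle at `apex` subtended by triangle p1 p2 p3
    (Van Oosterom-Strackee formula)."""
    a, b, c = (np.asarray(p, dtype=float) - np.asarray(apex, dtype=float)
               for p in (p1, p2, p3))
    na, nb, nc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    numer = abs(np.dot(a, np.cross(b, c)))
    denom = (na * nb * nc + np.dot(a, b) * nc
             + np.dot(a, c) * nb + np.dot(b, c) * na)
    return 2.0 * np.arctan2(numer, denom)

for eps in (0.5, 0.1, 0.01):
    v = [(0, 0, 0), (0, 1, eps), (1, 0, eps), (1, 1, 0)]
    angles = [solid_angle(v[i], *[v[j] for j in range(4) if j != i])
              for i in range(4)]
    print(eps, min(angles))   # the minimum solid angle goes to 0 with eps
```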

7.1. Delaunay triangulation

As in two dimensions, point placement followed by Delaunay triangulation is a popular approach to mesh generation, especially in aerodynamics.


Fig. 22. In three dimensions, an edge flip exchanges three tetrahedra sharing an edge for two tetrahedra sharing a triangle, or vice versa.

The same point placement methods work fairly well: combining structured meshes [68], advancing front [10,78,79], and random scattering with filtering [137]. As in two dimensions, the placement phase must put sufficiently many points on the domain boundary to ensure that the Delaunay triangulation will be conforming. Although the three-dimensional conforming Delaunay triangulation problem is not too hard for most domains of practical interest, we do not know of published solutions.

The first two point placement methods suffer from the same liability in three dimensions as in two: points may be improperly spaced at junctures between fronts or patches. All three methods suffer from a new sort of problem: even a well spaced point set may include sliver tetrahedra in its Delaunay triangulation, because a sliver does not have an unusually large circumsphere compared to the lengths of its edges. For this reason, some Delaunay mesh generators [10] include a special postprocessing step that finds and removes slivers. Chew (personal communication) has recently devised an algorithm that removes slivers by adding Steiner points at random locations near their circumcenters.

The triangulation phase of mesh generation also becomes somewhat more difficult in three dimensions. The generalization of edge flipping exchanges the two possible triangulations of five points in convex position, as shown in Figure 22. We call a flip a Delaunay flip if, after the flip, the triangulation of the five points satisfies the empty sphere condition — no circumsphere encloses a point. In three dimensions, it is no longer true that any tetrahedralization can be transformed into the Delaunay triangulation by a sequence of Delaunay flips [69], and it is currently unknown whether any tetrahedralization can be transformed into the Delaunay triangulation by arbitrary flips. Nevertheless, there are provably correct, incremental Delaunay triangulation algorithms based on edge flipping [50,70,104].

There are other practical three-dimensional Delaunay triangulation algorithms as well. Bowyer [30] and Watson [136] gave incremental algorithms with reasonable expected-case performance. Barber [15] implemented a randomized algorithm in arbitrary dimension. This algorithm can be used to compute Delaunay triangulations through a well-known reduction [31] which "lifts" the Delaunay triangulation of points in R^d to the convex hull of points in R^(d+1).
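The lifting reduction is short enough to show in code: map each point p to (p, |p|²) on a paraboloid and keep the downward-facing facets of the convex hull one dimension up. A sketch using numpy and scipy, whose ConvexHull wraps the Qhull code of Barber et al. [15]; this is our illustration of the reduction, not code from the chapter:

```python
import numpy as np
from scipy.spatial import ConvexHull

def delaunay_by_lifting(points):
    """Delaunay simplices of d-dimensional `points` via the lifting map:
    lift onto the paraboloid in R^(d+1), keep lower convex hull facets."""
    pts = np.asarray(points, dtype=float)
    lifted = np.hstack([pts, (pts ** 2).sum(axis=1, keepdims=True)])
    hull = ConvexHull(lifted)
    # A facet is on the lower hull iff its outward normal points downward,
    # i.e. the last coordinate of the normal in hull.equations is negative.
    lower = hull.equations[:, -2] < 0
    return hull.simplices[lower]

tets = delaunay_by_lifting(np.random.rand(20, 3))  # tetrahedra as index 4-tuples
```

This sketch ignores degenerate (cospherical) inputs, which a robust implementation must handle.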


Fig. 23. The surface of a tetrahedral mesh computed by an advancing front generator (ANSYS, Inc.).

7.2. Advancing front

We have already mentioned an advancing front approach to placing Steiner points for Delaunay triangulation. A pure advancing front mesh generator [77,79,97,101] places the elements themselves, rather than just the Steiner points. This approach gives more direct control of element shapes, especially near the boundary, which is often a region of special interest. Advancing front generators seem to be especially popular in aerodynamics simulations [64,65,79,85,101].

We describe an advancing front algorithm of Lohner and Parikh [79,80] as it contains the essential ideas. Desired element size (and perhaps stretching directions) are defined at the vertices of a coarse "background" tetrahedralization and interpolated to the rest of the domain. The background mesh can also be defined by an octree, the three-dimensional generalization of a quadtree. To get started, the boundaries of the domain are triangulated; the initial front consists of the boundary faces. The algorithm then iteratively chooses a face of the front and builds a tetrahedron over that face. The algorithm attempts to fill in clefts left by the last layer of tetrahedra before starting the next layer; within a layer, the algorithm chooses small faces first in order to minimize collisions. The fourth vertex of the tetrahedron will be either an already existing vertex or a vertex specially created for the tetrahedron. In the latter case, the algorithm tries to choose a smart location for the new vertex; for example, the new vertex may be placed along a normal to the base face at a distance determined by aspect ratios and length functions ultimately derived from the background triangulation [59]. In either case, cleft or new vertex, the tetrahedron must be tested for collisions before final acceptance.

Figure 23 shows the surface of a fairly isotropic tetrahedral mesh computed by an advancing front mesh generator developed by ANSYS, Inc. This generator, like the one just described, places elements directly.
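As an illustration of the vertex placement step, a minimal numpy sketch (the names, the constant derived from a regular tetrahedron, and the isotropic simplification are our own; a real generator would follow this with the collision tests mentioned above and would account for stretching directions):

```python
import numpy as np

def candidate_vertex(tri, size_at):
    """Propose the fourth vertex of a tetrahedron built over front face
    `tri` (3x3 array, one vertex per row, oriented so the normal points
    into the unmeshed region). `size_at(p)` is the background size function."""
    centroid = tri.mean(axis=0)
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    n /= np.linalg.norm(n)                      # unit normal of the face
    h = size_at(centroid) * np.sqrt(2.0 / 3.0)  # height of a regular tetrahedron
    return centroid + h * n
```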


Fig. 24. The surface of a tetrahedral mesh derived from an octree (M. Yerry and M. Shephard).

Marcum and Weatherill [81] have devised an algorithm somewhere between pure advancing front and advancing-front point placement followed by Delaunay triangulation. Their algorithm starts with a coarse mesh, and then uses advancing front to place additional Steiner points, simply subdividing the coarse tetrahedra to maintain a triangulation. This mesh is then improved first by Delaunay and then by minmax-solid-angle flips. Other researchers agree that applying flips in this order is more effective than using either type of flip alone.

7.3. Octrees

An octree is the natural generalization of a quadtree. An initial bounding cube is split into eight congruent cubes, each of which is split recursively until each minimal cube intersects the domain Ω in a simple way. As in two dimensions, a balance condition ensures that no cube is next to one very much smaller than itself; balancing an unbalanced quadtree or octree expands the number of boxes by a constant multiplicative factor. The balance condition need not be explicit, but rather it may be a consequence of an intrinsic local spacing function [134].

Shephard and his collaborators [114-116,142] have developed several octree-based mesh generators for polyhedral domains. Their original generator [142] tetrahedralizes leaf cubes using a collection of predefined patterns. To keep the number of patterns manageable, the generator makes the simplifying assumption that each cube is cut by at most three facets of the input polyhedron. Perucchio et al. [102] give a more sophisticated way to conform to boundaries. Buratynski [32] uses rectangular octrees and a hierarchical set of warping rules. The octree is refined so that each domain edge intersects boxes of only one size. Boxes are warped to domain vertices, then edges, and finally faces.

Mitchell and Vavasis [91] generalized the quadtree mesh generator of Bern et al. [25] to three dimensions. The generalization is not straightforward, primarily because vertices of polyhedra may have very complicated local neighborhoods.


Fig. 25. The tetrahedron on the left is bisected to form two new tetrahedra.

This algorithm is guaranteed to avoid all five types of bad tetrahedra, while producing a mesh with only a constant times the minimum number of tetrahedra in any such bounded-aspect-ratio tetrahedralization. So far this is the only three-dimensional mesh generation algorithm with such a strong theoretical guarantee. Vavasis [134] has recently released a modified version of the algorithm (called QMG for "Quality Mesh Generator"), including a simple geometric modeler and equation solver to boot. The modified algorithm includes a more systematic set of warping rules; in particular, the new warping method for an octree cube cut by a single facet generalizes to any fixed dimension [92].

7.4. Refinement and improvement

We discuss improvement before refinement, because less is known on the subject. As we mentioned above, edge flipping generalizes to three dimensions, and flipping first by the Delaunay empty sphere criterion and then by the minmax solid angle criterion seems to be fairly effective. Laplacian smoothing also generalizes, although experimental results [57] indicate that it is no longer as effective as in two dimensions. Optimization-based smoothing [2,57] appears to be more powerful than simple Laplacian smoothing. Freitag and Ollivier-Gooch [57] recommend combining Delaunay flipping with smoothing for maxmin dihedral angle or maxmin dihedral-angle sine.

We now move on to refinement and discuss two different refinement algorithms based upon the natural generalization of bisection to three dimensions. To bisect tetrahedron v0v1v2v3 across edge v0v1, we add the triangle v01v2v3, where v01 is the midpoint of v0v1, as shown in Figure 25. This operation creates two child tetrahedra, v0v01v2v3 and v01v1v2v3, and bisects the faces v0v1v2 and v0v1v3, which, unless they lie on the domain boundary, are each shared with an adjacent tetrahedron. Two tetrahedra that share a face must agree on how it is to be bisected; otherwise an invalid mesh will be constructed.
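The bisection step itself is a small combinatorial operation once the midpoint is known. A minimal sketch with tetrahedra as 4-tuples of vertex coordinates (our own representation; it ignores the face bookkeeping needed to keep neighbors consistent):

```python
def bisect(tet, i=0, j=1):
    """Bisect tetrahedron `tet` (a 4-tuple of coordinate tuples) across the
    edge joining vertices i and j; returns the two child tetrahedra."""
    v = list(tet)
    mid = tuple((a + b) / 2.0 for a, b in zip(v[i], v[j]))
    child1 = tuple(mid if k == j else v[k] for k in range(4))  # keeps v[i]
    child2 = tuple(mid if k == i else v[k] for k in range(4))  # keeps v[j]
    return child1, child2
```

In a real mesh data structure one would store vertex indices and update face adjacencies, so that neighboring tetrahedra agree on the shared bisection, as the text requires.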


Fig. 26. The first three levels of longest-edge bisection of the canonical tetrahedron. Note that the tetrahedra generated at each level are similar. For the final level of refinement we show only the four tetrahedra obtained from v0 v01 v2 v3. Four similar tetrahedra are obtained from v01 v1 v2 v3.

A single bisection of a tetrahedron can approximately square the minimum solid angle, unlike in two dimensions where the minimum angle of a triangle is decreased by no more than a factor of two. Consider the wedge tetrahedron with vertex coordinates (0, ε, 0), (0, −ε, 0), (1, 0, ε), and (1, 0, −ε). Bisection of the longest edge of this tetrahedron creates a new tetrahedron with minimum solid angle about ε².

Rivara and Levin [110] suggested an extension of longest-edge Rivara refinement (Section 5.4) to three dimensions. Notice that splitting the longest edge in a tetrahedron also splits the longest edge on the two subdivided faces, and thus the bisection of shared faces is uniquely defined. (Ties can be broken by vertex labels.) Neighboring invalid tetrahedra, meaning all those sharing the subdivided longest edge, are refined recursively. Rivara and Levin provide experimental evidence suggesting that repeated rounds of longest-edge refinement cannot reduce the minimum solid angle below a fixed threshold, but this guarantee has not been proved. The guarantee would follow if it could be shown that the algorithm generates only a finite number of tetrahedron similarity classes.

A bisection algorithm first introduced by Bänsch [14] does indeed generate only a finite number of similarity classes. Before describing the algorithm, we sketch the argument of Liu and Joe [74] which motivates the algorithm. The key observation is that there exists an affine transformation that maps any tetrahedron to a canonical tetrahedron for which longest-edge bisection generates only a finite number of similarity classes. Consider the canonical tetrahedron tc with coordinates (−1, 0, 0), (1, 0, 0), (0, 1/√2, 0), and (0, 0, 1). In Figure 26 we illustrate the first three levels of longest-edge bisection of tc = v0v1v2v3. It can be shown that all the tetrahedra generated at each level of refinement are similar and that the eight tetrahedra generated after three levels of refinement are similar to tc. Refinement in the canonical space induces a refinement in the original space with only a finite number of different similarity classes. A subtlety: the similarity classes in the original space correspond to homothets (identical up to scaling and translation), not similarity classes, in the canonical space. Hence the number of similarity classes turns out to be 36 rather than 8 [5,83].
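Returning to the wedge example above: assuming the solid_angle and bisect sketches from earlier in this section are in scope, the squaring of the minimum solid angle can be checked numerically (our own verification, not part of the chapter):

```python
for eps in (0.2, 0.1, 0.05):
    wedge = ((0.0, eps, 0.0), (0.0, -eps, 0.0),
             (1.0, 0.0, eps), (1.0, 0.0, -eps))
    for tet in (wedge,) + bisect(wedge, 0, 2):   # edge 0-2 is a longest edge
        min_ang = min(solid_angle(tet[i], *[tet[j] for j in range(4) if j != i])
                      for i in range(4))
        print(eps, round(min_ang, 6))
# The parent's minimum solid angle scales like eps; one child's like eps^2.
```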


Bänsch [14], Liu and Joe [76], and Arnold et al. [5] give essentially equivalent [71] algorithms for generating the bisection order; we follow Bänsch's presentation. Each face in each tetrahedron elects one of its edges to be its refinement edge, so that two conditions hold: the choice for a face is consistent between the two tetrahedra that share it, and exactly one edge in each tetrahedron — the global refinement edge — is chosen by pairs of faces of the tetrahedron. These conditions hold initially if each face picks its longest edge and ties are broken in any consistent manner, for example, by vertex or edge label order. In the pseudocode below, a child face is a triangle like v01 v2 v1 in Figure 25, and a new face is one like v01 v2 v3.

    mark the refinement edge of every face in the current mesh
    let T0 be the set of marked tetrahedra; i = 0
    while (Ti ≠ ∅) do
        bisect each tetrahedron in Ti across its global refinement edge
        pick the old edge in each child face as its refinement edge
        pick the longest edge in each new face as its refinement edge
        i = i + 1
        let Ti be the set of invalid tetrahedra
    enddo

8. Conclusions

We have described the current state of the art in mesh generation for finite element methods. Practical issues in mesh generation are — roughly in order of importance — algorithm robustness, fit with underlying physics, element quality, and mesh efficiency. Unstructured triangular and tetrahedral mesh generation already makes frequent use of data structures and algorithms familiar in computational geometry. We expect this trend to continue. We also expect — and recommend — computational geometers to focus some attention on structured meshes and hexahedral meshes.

We close with a short list of open problems of both practical and theoretical interest. It is no coincidence that these problems focus on three-dimensional mesh generation.
1. Is the flip graph for a point set in R³ connected? In other words, is it possible to convert any triangulation of a point set (even a point set in convex position) into any other using only flips (Figure 22)?
2. Is there a smoothing algorithm guaranteed to remove slivers? A sliver (Figure 21) is the only type of bad tetrahedron with well spaced vertices and small circumspheres.
3. Is there an algorithm for conforming Delaunay triangulation in R³? In other words, place vertices on the boundary of a polyhedron, so that the Delaunay triangulation of all vertices, original and new, contains the polyhedron.
4. Is there an algorithm for unstructured tetrahedral mesh generation that guarantees an M-matrix for the finite element formulation of Poisson's equation?
5. Give an algorithm for computing the blocks in a multiblock mesh. Such an algorithm should give a small number of nicely shaped blocks, quadrilaterals in R² and hexahedra in R³.
6. Can any quadrilateral surface mesh with an even number of quadrilaterals be extended to a hexahedral volume mesh?


Acknowledgements

We would like to thank Lori Freitag, Paul Heckbert, Scott Mitchell, Carl Ollivier-Gooch, Jonathan Shewchuk, and Steve Vavasis for help in preparing this survey.

References

[1] M. Aftosmis, J. Melton and M. Berger, Adaptation and surface modeling for Cartesian mesh methods, AIAA Paper 95-1725, 12th AIAA CFD Conf., San Diego, CA (June 1995).
[2] N. Amenta, M.W. Bern and D. Eppstein, Optimal point placement for mesh smoothing, Proc. 8th ACM-SIAM Symp. Disc. Algorithms (1997), 528-537.
[3] E. Amezua, M.V. Hormaza, A. Hernandez and M.B.G. Ajuria, A method of the improvement of 3d solid finite-element meshes, Adv. Eng. Software 22 (1995), 45-53.
[4] C.G. Armstrong, D.J. Robinson, R.M. McKeag, T.S. Li and S.J. Bridgett, Medials for meshing and more, Proc. 4th International Meshing Roundtable, Sandia National Laboratories (1995).
[5] D.N. Arnold, A. Mukherjee and L. Pouly, Locally adapted tetrahedral meshes using bisection, Manuscript (1997).
[6] I. Babuska and A. Aziz, On the angle condition in the finite element method, SIAM J. Numer. Anal. 13 (1976), 214-227.
[7] I. Babuska and W.C. Rheinboldt, Error estimates for adaptive finite element computations, SIAM J. Numer. Anal. 15 (1978), 736-754.
[8] P.L. Baehmann, S.L. Wittchen, M.S. Shephard, K.R. Grice and M.A. Yerry, Robust geometrically-based automatic two-dimensional mesh generation, Internat. J. Numer. Methods Eng. 24 (1987), 1043-1078.
[9] B.S. Baker, E. Grosse and C.S. Rafferty, Nonobtuse triangulation of polygons, Discrete Comput. Geom. 3 (1988), 147-168.
[10] T.J. Baker, Automatic mesh generation for complex three-dimensional regions using a constrained Delaunay triangulation, Eng. Comput. 5 (1989), 161-175.
[11] R.E. Bank, PLTMG: A Software Package for Solving Elliptic Partial Differential Equations, Users' Guide 6.0, SIAM Publications, Philadelphia, PA (1990).
[12] R.E. Bank, A.H. Sherman and A. Weiser, Refinement algorithms and data structures for regular local mesh refinement, Scientific Computing, R. Stepleman et al., eds, IMACS/North-Holland Publishing Company, Amsterdam (1983), 3-17.
[13] R.E. Bank and R.K. Smith, Mesh smoothing using a posteriori error estimates, SIAM J. Num. Anal., to appear. ftp://math.ucsd.edu/pub/scicomp/reb/ftpfiles/a67.ps.Z.
[14] E. Bänsch, Local mesh refinement in 2 and 3 dimensions, Impact Comput. Sci. Eng. 3 (1991), 181-191.
[15] C.B. Barber, D.P. Dobkin and H.T. Huhdanpaa, The Quickhull algorithm for convex hulls, Submitted to ACM Trans. Math. Software. See http://www.geom.umn.edu/software/qhull/ (1995).
[16] W.D. Barfield, An optimal mesh generator for Lagrangian hydrodynamic calculations in two space dimensions, J. Comput. Phys. 6 (1970), 417-429.
[17] R. Barrett, M. Berry, T.F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine and H. Van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia (1994).
[18] T.J. Barth, Aspects of unstructured grids and finite-volume solvers for the Euler and Navier-Stokes equations, Technical Report, von Karman Institute for Fluid Dynamics, Lecture Series 1994-05 (1994).
[19] S.E. Benzley, E. Perry, K. Merkley, B. Clark and G. Sjaardema, A comparison of all-hexahedral and all-tetrahedral finite element meshes for elastic and elasto-plastic analysis, Proc. 4th International Meshing Roundtable, Sandia National Laboratories (1995), 179-191.
[20] M.J. Berger and P. Colella, Local adaptive mesh refinement for shock hydrodynamics, J. Comput. Phys. 82 (1989), 64-84.
[21] M.J. Berger and J. Oliger, Adaptive mesh refinement for hyperbolic partial differential equations, J. Comput. Phys. 53 (1984), 484-512.


[22] M. Bern, L.P. Chew, D. Eppstein and J. Ruppert, Dihedral bounds for mesh generation in high dimensions, Proc. 6th ACM-SIAM Symp. Disc. Algorithms (1995), 189-196.
[23] M. Bern, H. Edelsbrunner, D. Eppstein, S. Mitchell and T.-S. Tan, Edge-insertion for optimal triangulations, Discrete Comput. Geom. 10 (1993), 47-65.
[24] M. Bern and D. Eppstein, Mesh generation and optimal triangulation, Computing in Euclidean Geometry, 2nd ed., D.-Z. Du and F.K. Hwang, eds, World Scientific, Singapore (1995), 47-123.
[25] M. Bern, D. Eppstein and J.R. Gilbert, Provably good mesh generation, J. Comput. System Sci. 48 (1994), 384-409.
[26] M. Bern, D. Eppstein and F. Yao, The expected extremes in a Delaunay triangulation, Internat. J. Comput. Geom. Appl. 1 (1991), 79-92.
[27] M. Bern, S. Mitchell and J. Ruppert, Linear-size nonobtuse triangulation of polygons, Discrete Comput. Geom. 14 (1995), 411-428.
[28] T.D. Blacker, Paving: A new approach to automated quadrilateral mesh generation, Internat. J. Numer. Methods Eng. 32 (1991), 811-847.
[29] F. Bossen, Anisotropic mesh generation with particles, Technical Report CMU-CS-96-134, Carnegie Mellon University, School of Computer Science (1996). http://ltswww.epfl.ch/~bossen/.
[30] A. Bowyer, Computing Dirichlet tessellations, Computer J. 24 (1981), 162-166.
[31] K.Q. Brown, Voronoi diagrams from convex hulls, Inform. Process. Lett. 9 (1979), 223-228.
[32] E.K. Buratynski, A fully automatic three-dimensional mesh generator for complex geometries, Internat. J. Numer. Methods Eng. 30 (1990), 931-952.
[33] S. Canann, M. Stephenson and T. Blacker, Optismoothing: An optimization-driven approach to mesh smoothing, Finite Elements in Analysis and Design 13 (1993), 185-190.
[34] G.F. Carey and J.T. Oden, Finite Elements: Computational Aspects, Prentice-Hall (1984).
[35] J.E. Castillo, Mathematical Aspects of Grid Generation, Society for Industrial and Applied Mathematics, Philadelphia (1991).
[36] M.J. Castro-Diaz, F. Hecht and B. Mohammadi, New progress in anisotropic grid adaptation for inviscid and viscid flows simulations, Proc. 4th International Meshing Roundtable, Sandia National Laboratories (1995).
[37] B. Chazelle, Convex partitions of polyhedra: A lower bound and worst-case optimal algorithm, SIAM J. Comput. 13 (1984), 488-507.
[38] L.P. Chew, Guaranteed-quality triangular meshes, Technical Report TR-89-983, Comp. Science Dept., Cornell University (1989).
[39] P.G. Ciarlet, The Finite Element Method for Elliptic Problems, North-Holland (1978).
[40] P.G. Ciarlet and P.A. Raviart, Maximum principle and uniform convergence for the finite element method, Comput. Methods Appl. Mech. Eng. 2 (1973), 17-31.
[41] W.J. Coirier, An adaptively-refined, Cartesian cell-based scheme for the Euler and Navier-Stokes equations, NASA Technical Memorandum 106754, NASA (October 1994).
[42] W.J. Coirier and K.G. Powell, An accuracy assessment of Cartesian-mesh approaches for the Euler equations, J. Comput. Phys. 117 (1995), 121-131.
[43] E.F. D'Azevedo and R.B. Simpson, On optimal interpolation triangle incidences, SIAM J. Sci. Stat. Comput. 10 (1989), 1063-1075.
[44] L. De Floriani and B. Falcidieno, A hierarchical boundary model for solid object representation, ACM Transactions on Graphics 7 (1988), 42-60.
[45] B. Delaunay, Sur la sphère vide, Izv. Akad. Nauk SSSR, VII Seria, Otd. Mat. i Estestv. Nauk 7 (1934), 793-800.
[46] M.T. Dickerson and M.H. Montague, A (usually?) connected subgraph of the minimum weight triangulation, Proc. 12th ACM Symp. Comp. Geometry (1996), 204-213.
[47] T.A. Driscoll, A Matlab toolbox for Schwarz-Christoffel mapping, ACM Trans. Math. Software, to appear.
[48] T.A. Driscoll and S.A. Vavasis, Numerical conformal mapping using cross-ratios and Delaunay triangulation, Available under http://www.cs.cornell.edu/Info/People/vavasis/vavasis.html (1996).
[49] A.S. Dvinsky, Adaptive grid generation from harmonic maps, Numerical Grid Generation in Computational Fluid Dynamics '88, S. Sengupta, J. Hauser, P.R. Eiseman and J.F. Thompson, eds, Pineridge Press Limited, Swansea, U.K. (1988).


[50] H. Edelsbrunner and N.R. Shah, Incremental topological flipping works for regular triangulations, Proc. 8th ACM Symp. Comp. Geometry (1992), 43-52.
[51] H. Edelsbrunner and T.-S. Tan, A quadratic time algorithm for the minmax length triangulation, Proc. 32nd IEEE Symp. Foundations of Comp. Science (1991), 414-423.
[52] H. Edelsbrunner and T.-S. Tan, An upper bound for conforming Delaunay triangulations, Discrete Comput. Geom. 10 (1993), 197-213.
[53] H. Edelsbrunner, T.S. Tan and R. Waupotitsch, A polynomial time algorithm for the minmax angle triangulation, SIAM J. Sci. Stat. Comp. 13 (1992), 994-1008.
[54] D. Eppstein, Linear complexity hexahedral mesh generation, Proc. 12th ACM Symp. Comp. Geom. (1996), 58-67.
[55] D.A. Field, Laplacian smoothing and Delaunay triangulations, Comm. Appl. Numer. Methods 4 (1988), 709-712.
[56] S. Fortune, Voronoi diagrams and Delaunay triangulations, Computing in Euclidean Geometry, 2nd ed., F.K. Hwang and D.-Z. Du, eds, World Scientific, Singapore (1995), 225-265.
[57] L. Freitag and C. Ollivier-Gooch, A comparison of tetrahedral mesh improvement techniques, Proc. 5th International Meshing Roundtable, Sandia National Laboratories (1996), 87-100. http://sass577.endo.sandia.gov:80/9225/Personnel/samitch/roundtable96/.
[58] L.A. Freitag, M.T. Jones and P.E. Plassmann, An efficient parallel algorithm for mesh smoothing, Proc. 4th International Meshing Roundtable, Sandia National Laboratories (1995), 47-58.
[59] P.J. Frey, H. Borouchaki and P.-L. George, Delaunay tetrahedralization using an advancing-front approach, Proc. 5th International Meshing Roundtable, Sandia National Laboratories (1996), 31-46. http://www.cs.cmu.edu/~ph.
[60] P.L. George, Automatic Mesh Generation, Wiley, New York (1991).
[61] P.L. George, F. Hecht and E. Saltel, Fully automatic mesh generator for 3D domains of any shape, Impact Comput. Sci. Eng. 2 (1990), 187-218.
[62] A.S. Glassner, Maintaining winged-edge models, Graphics Gems II, E.J. Arvo, ed., Academic Press Professional, Boston, MA (1991), 191-201.
[63] L.J. Guibas and J. Stolfi, Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams, ACM Trans. Graphics 4 (1985), 74-123.
[64] O. Hassan, K. Morgan, E.J. Probert and J. Peraire, Mesh generation and adaptivity for the solution of compressible viscous high-speed flows, Internat. J. Numer. Methods Eng. 38 (1995), 1123-1148.
[65] O. Hassan, K. Morgan, E.J. Probert and J. Peraire, Unstructured tetrahedral mesh generation for three-dimensional viscous flows, Internat. J. Numer. Methods Eng. 39 (1996), 549-567.
[66] T.J.R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Prentice-Hall, Inc., Englewood Cliffs, NJ (1987).
[67] T.J.R. Hughes and J.E. Akin, Techniques for developing 'special' finite element shape functions with particular reference to singularities, Internat. J. Numer. Methods Eng. 15 (1980), 733-751.
[68] A. Jameson, T.J. Baker and N.P. Weatherill, Calculation of inviscid transonic flow over a complete aircraft, Proc. AIAA 24th Aerospace Sciences Meeting, Reno (1986).
[69] B. Joe, Three-dimensional triangulations from local transformations, SIAM J. Sci. Stat. Comput. 10 (1989), 718-741.
[70] B. Joe, Construction of three-dimensional Delaunay triangulations using local transformations, Comput. Aided Geom. Design 8 (1991), 123-142.
[71] M.T. Jones and P.E. Plassmann, Adaptive refinement of unstructured finite-element meshes, Finite Elements in Analysis and Design 25 (1-2) (March 1997), 41-60.
[72] M.S. Khaira, G.L. Miller and T.J. Sheffler, Nested dissection: A survey and comparison of various nested dissection algorithms, Technical Report CMU-CS-92-106R, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania (1992).
[73] P. Knupp and S. Steinberg, Fundamentals of Grid Generation, CRC Press (1994).
[74] A. Liu and B. Joe, On the shape of tetrahedra from bisection, Math. Comput. 63 (207) (1994), 141-154.
[75] A. Liu and B. Joe, Relationship between tetrahedron shape measures, BIT 34 (1994), 268-287.
[76] A. Liu and B. Joe, Quality local refinement of tetrahedral meshes based on bisection, SIAM J. Sci. Comput. 16 (6) (1995), 1269-1291.


[77] S.H. Lo, A new mesh generation scheme for arbitrary planar domains, Internat. J. Numer. Methods Eng. 21 (1985), 1403-1426.
[78] S.H. Lo, Volume discretization into tetrahedra, Computers and Structures 39 (1991), 493-511.
[79] R. Lohner and P. Parikh, Three-dimensional grid generation via the advancing-front method, Internat. J. Numer. Methods Fluids 8 (1988), 1135-1149.
[80] R. Lohner, Progress in grid generation via the advancing front technique, Eng. Comput. 12 (1996), 186-210.
[81] D.L. Marcum and N.P. Weatherill, Unstructured grid generation using iterative point insertion and local reconnection, AIAA J. 33 (9) (1995), 1619-1625.
[82] C.W. Mastin, Elliptic grid generation and conformal mapping, Mathematical Aspects of Grid Generation, Jose E. Castillo, ed., Society for Industrial and Applied Mathematics, Philadelphia (1991), 9-18.
[83] J.M. Maubach, The number of similarity classes created by local n-simplicial bisection refinement, Manuscript (1996).
[84] D.J. Mavriplis, Unstructured and adaptive mesh generation for high Reynolds number viscous flows, Technical Report 91-25, ICASE, NASA Langley Research Center (1991).
[85] D.J. Mavriplis, Unstructured mesh generation and adaptivity, Technical Report ICASE 95-26, NASA Langley, Hampton VA (1995). Abstract at http://techreports.larc.nasa.gov/cgi-bin/NTRS.
[86] D.J. Mavriplis, Mesh generation and adaptivity for complex geometries and flows, Handbook of Computational Fluid Mechanics, R. Peyret, ed., Academic Press, London (1996).
[87] G.L. Miller, D. Talmor and S.-H. Teng, Optimal coarsening of unstructured meshes, Proc. 8th ACM-SIAM Symp. Disc. Algorithms (1997).
[88] G.L. Miller, D. Talmor, S.-H. Teng and N. Walkington, A Delaunay based numerical method for three dimensions: Generation, formulation and partition, Proc. 36th IEEE Symp. on Foundations of Comp. Science (1995), 683-692.
[89] G.L. Miller, S.-H. Teng and S.A. Vavasis, A unified geometric approach to graph separators, Proc. 32nd IEEE Symp. on Foundations of Comp. Science (1991), 538-547.
[90] S.A. Mitchell, Cardinality bounds for triangulations with bounded minimum angle, Sixth Canadian Conference on Computational Geometry (1994).
[91] S.A. Mitchell and S. Vavasis, Quality mesh generation in three dimensions, Proc. 8th ACM Symp. Comp. Geom. (1992), 212-221.
[92] S.A. Mitchell and S. Vavasis, An aspect ratio bound for triangulating a d-grid cut by a hyperplane, Proc. 12th ACM Symp. Comp. Geom. (1996), 48-57.
[93] S.A. Mitchell, Refining a triangulation of a planar straight-line graph to eliminate large angles, Proc. 34th IEEE Symp. on Foundations of Comp. Science (1993), 583-591.
[94] S.A. Mitchell, A characterization of the quadrilateral meshes of a surface which admit a compatible hexahedral mesh of the enclosed volume, Proc. 13th Symposium on Theoretical Aspects of Computer Science (STACS '96), LNCS, Springer-Verlag (1996).
[95] J.-D. Müller, Proven angular bounds and stretched triangulations with the frontal Delaunay method, Proc. 11th AIAA Comp. Fluid Dynamics, Orlando (1993).
[96] L.R. Nackman and V. Srinivasan, Point placement for Delaunay triangulation of polygonal domains, Proc. 3rd Canadian Conf. Comp. Geometry (1991), 37-40.
[97] Nguyen-Van-Phai, Automatic mesh generation with tetrahedral element, Internat. J. Numer. Methods Eng. 18 (1982), 273-289.
[98] C.F. Ollivier-Gooch, Multigrid acceleration of an upwind Euler solver on unstructured meshes, AIAA J. 33 (10) (1995), 1822-1827.
[99] S. Owen, Meshing research corner, http://www.ce.cmu.edu/NetworkZ/sowen/www/mesh.html (1995).
[100] V.N. Parthasarathy and S. Kodiyalam, A constrained optimization approach to finite element mesh smoothing, Finite Elements in Analysis and Design 9 (1991), 309-320.
[101] J. Peraire, J. Peiro, L. Formaggia, K. Morgan and O.C. Zienkiewicz, Finite element Euler computations in three dimensions, Internat. J. Numer. Methods Eng. 26 (1988), 2135-2159.
[102] R. Perucchio, M. Saxena and A. Kela, Automatic mesh generation from solid models based on recursive spatial decomposition, Internat. J. Numer. Methods Eng. 28 (1989), 2469-2502.
[103] F.P. Preparata and M.I. Shamos, Computational Geometry: An Introduction, Springer-Verlag (1985).


[104] V.T. Rajan, Optimality of the Delaunay triangulation in R^d, Proc. 7th ACM Symp. Comp. Geometry (1991), 357-363.
[105] S. Rippa, Minimal roughness property of the Delaunay triangulation, Comput. Aided Geom. Design 7 (1990), 489-497.
[106] S. Rippa, Long and thin triangles can be good for linear interpolation, SIAM J. Numer. Anal. 29 (1992), 257-270.
[107] M.-C. Rivara, Algorithms for refining triangular grids suitable for adaptive and multigrid techniques, Internat. J. Numer. Methods Eng. 20 (1984), 745-756.
[108] M.-C. Rivara, Design and data structure of fully adaptive, multigrid, finite-element software, ACM Trans. Math. Software 10 (3) (1984), 242-264.
[109] M.-C. Rivara, Mesh refinement processes based on the generalized bisection of simplices, SIAM J. Numer. Anal. 21 (3) (1984), 604-613.
[110] M.-C. Rivara and C. Levin, A 3-d refinement algorithm suitable for adaptive and multi-grid techniques, Comm. Appl. Numer. Methods 8 (1992), 281-290.
[111] J. Ruppert, A Delaunay refinement algorithm for quality 2-dimensional mesh generation, J. Algorithms 18 (3) (1995), 548-585.
[112] A. Saalfeld, Delaunay edge refinements, Proc. 3rd Canadian Conf. Comp. Geometry (1991), 33-36.
[113] R. Schneiders, Finite element mesh generation, http://www-users.informatik.rwth-aachen.de/~roberts/meshgeneration.html (1995).
[114] W.J. Schroeder and M.S. Shephard, A combined octree/Delaunay method for fully automatic 3-D mesh generation, Internat. J. Numer. Methods Eng. 29 (1990), 37-55.
[115] M. Shephard and M. Georges, Automatic three-dimensional mesh generation by the finite octree technique, Internat. J. Numer. Methods Eng. 32 (1991), 709-749.
[116] M.S. Shephard, F. Guerinoni, J.E. Flaherty, R.A. Ludwig and P.L. Baehmann, Finite octree mesh generation for automated adaptive three-dimensional flow analysis, Proc. 2nd Int. Conf. Numer. Grid Generation in Computational Fluid Mechanics (1988), 709-718.
[117] J.R. Shewchuk, Triangle: A two-dimensional quality mesh generator and Delaunay triangulator, see http://www.cs.cmu.edu/%7Equake/triangle.html (1995).
[118] J.R. Shewchuk, Adaptive precision floating-point arithmetic and fast robust geometric predicates in C, Proc. 12th ACM Symp. Comp. Geometry (1996).
[119] K. Shimada, Physically-based mesh generation: Automated triangulation of surfaces and volumes via bubble packing, PhD thesis, ME Dept., MIT (1993).
[120] K. Shimada and D.C. Gossard, Computational methods for physically-based FE mesh generation, Proc. IFIP TC5/WG5.3 8th Int. Conference on PROLAMAT, Tokyo (1992).
[121] R.B. Simpson, Anisotropic mesh transformations and optimal error control, Appl. Numer. Math. 14 (1-3) (1994), 183-198.
[122] B. Smith, P. Bjørstad and W. Gropp, Domain Decomposition: Parallel Multilevel Algorithms for Elliptic Partial Differential Equations, Cambridge University Press, New York (1996).
[123] P.W. Smith and S.S. Sritharan, Theory of harmonic grid generation, Complex Variables 10 (1988), 359-369.
[124] V. Srinivasan, L.R. Nackman, J.-M. Tang and S.N. Meshkat, Automatic mesh generation using the symmetric axis transformation of polygonal domains, Technical Report RC 16132, Comp. Science, IBM Research Division, Yorktown Heights, NY (1990).
[125] G. Strang and G.J. Fix, An Analysis of the Finite Element Method, Prentice-Hall (1973).
[126] T.-S. Tan, An optimal bound for conforming quality triangulations, Proc. 10th ACM Symp. Comp. Geometry (1994), 240-249.
[127] T.J. Tautges and S.A. Mitchell, The whisker weaving algorithm for constructing all-hexahedral finite element meshes, Proc. 4th International Meshing Roundtable, Sandia National Laboratories (1995).
[128] J.W. Thomas, Numerical Partial Differential Equations: Finite Difference Methods, Springer, New York (1995).
[129] J.F. Thompson, Numerical Grid Generation, Elsevier, Amsterdam (1982).
[130] J.F. Thompson, Z.U.A. Warsi and C.W. Mastin, Numerical Grid Generation: Foundations and Applications, North-Holland (1985).


[131] J.F. Thompson and N.P. Weatherill, Aspects of numerical grid generation: Current science and art, Proc. 11th AIAA Applied Aerodynamics Conference (1993), 1029-1070.
[132] W. Thurston, Re: Hexahedral decomposition of polyhedra, a posting to sci.math newsgroup, http://www.ics.uci.edu/~eppstein/junkyard/Thurston-hexahedra (1993).
[133] L.N. Trefethen, Numerical computation of the Schwarz-Christoffel transformation, SIAM J. Sci. Statist. Comput. 1 (1980), 82-102.
[134] S. Vavasis, QMG: Mesh generation and related software, http://www.cs.cornell.edu/Info/People/vavasis/qmg-home.html (1995).
[135] S.A. Vavasis, Stable finite elements for problems with wild coefficients, Technical Report TR93-1364, Dept. of Comp. Science, Cornell University (1993).
[136] D.F. Watson, Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes, Computer J. 24 (1981), 167-171.
[137] N.P. Weatherill and O. Hassan, Efficient three-dimensional Delaunay triangulation with automatic point creation and imposed boundary constraints, Internat. J. Numer. Methods Eng. 37 (1994), 2005-2039.
[138] A. Weiser, Local-mesh, local-order, adaptive finite element methods with a posteriori error estimates for elliptic partial differential equations, Technical Report 213, Yale University, New Haven, Connecticut (1981).
[139] W. Welch, Serious putty: Topological design for variational curves and surfaces, PhD thesis, CS Dept., Carnegie Mellon University (Dec. 1995). CMU-CS-95-217, ftp://reports.adm.cs.cmu.edu/usr/anon/1995/CMU-CS-95-217A.ps, 217B.ps, 217C.ps.
[140] J. Xu, Iterative methods by space decomposition and subspace correction, SIAM Review 34 (4) (1992), 581-613.
[141] J. Xu and L. Zikatanov, A monotone finite element scheme for convection diffusion equations, Math. Comput., to appear.
[142] M.A. Yerry and M.S. Shephard, Automatic three-dimensional mesh generation by the modified-octree technique, Internat. J. Numer. Methods Eng. 20 (1984), 1965-1990.
[143] M.A. Yerry and M.S. Shephard, A modified quadtree approach to finite element mesh generation, IEEE Comput. Graphics Appl. 3 (1983), 39-46.
[144] D.P. Young, R.G. Melvin, M.B. Bieterman and J.E. Bussoletti, A locally refined rectangular grid finite element method: Application to computational fluid dynamics and computational physics, J. Comput. Phys. 92 (1991), 1-66.
[145] R. Young and I. MacPhedran, Internet finite element resources, http://www.engr.usask.ca/~macphed/finite/fe_resources/fe_resources.html (1995).

CHAPTER 7

Applications of Computational Geometry to Geographic Information Systems

Leila de Floriani, Paola Magillo and Enrico Puppo

Dipartimento di Informatica e Scienze dell'Informazione, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy
E-mail: {deflo, magillo, puppo}@disi.unige.it

Contents
1. Introduction
2. Map data modeling
   2.1. Two-dimensional spatial entities and relations
   2.2. Raster and vector models
   2.3. Subdivisions as cell complexes
   2.4. Topological data structures
   2.5. Multiresolution data structures
3. Map data processing
   3.1. Spatial queries
   3.2. Map overlay
   3.3. Geometric problems in map generalization
   3.4. Map labeling
   3.5. Other analysis issues
4. Terrain data modeling and processing
   4.1. Classical terrain models
   4.2. Construction and conversion algorithms
   4.3. Terrain generalization
   4.4. Multiresolution terrain models
5. Terrain analysis
   5.1. Visibility
   5.2. Topographic features
   5.3. Drainage networks
   5.4. Path problems
6. Three-dimensional GIS
7. Concluding remarks
References




1. Introduction

During the last decade, Geographical Information Systems (GISs) have gained a great impact over a variety of important application fields [171]. According to Goodchild, Kemp, and Poiker [119] "a GIS can be seen as a system of hardware, software and procedures designed to support the capture, management, analysis, modeling and display of spatially-referenced data for solving complex planning and management problems." Geographic data are characterized by spatial properties (location, shape, size) and nonspatial properties, called attributes, usually expressed in textual or numerical form.

GIS as a discipline involves many different issues, such as hardware and software equipment for data acquisition, data standards, storage and transmission, database management issues (such as integrity and consistency), etc. In this chapter, we consider only issues related to representation and processing of the geometric aspects of geographic data, with a special emphasis on the application of computational geometry techniques.

In the past, classical information retrieval methods were adopted to handle geographic data, while the importance of geometric aspects was often underestimated: geometric problems in GIS were solved through ad hoc empirical methods. Such solutions often suffered from lack of foundations, both from a geometric point of view (e.g., the result of an operation defined just as the output of a certain algorithm), and from a computer science perspective (e.g., no complexity analysis), and sometimes had poor practical performance. The need for a solid theoretical background and for high performance in geometric reasoning has become urgent in GIS. This makes GIS a field of primary importance for application of computational geometry.

Representation and manipulation of spatial entities in a geographic information system involve modeling and computational issues. The need to store and process huge data sets yields a demand for data structures and algorithms that achieve a good tradeoff between high computational efficiency and low storage space. In this chapter, we review classical as well as new problems in geometric modeling and computational geometry arising in GIS.

We follow a broad classification of classical geometric data in a GIS into map data and terrain data. Such a classification is not standard in the GIS community, but it is convenient here to identify data characterized by different spatial dimensionality, and different classes of problems. The map data are located on the surface of the Earth and are basically two-dimensional, i.e., they are points, lines, and polygonal regions, which are combined together to form either subdivisions or arrangements, sometimes organized into layers. The terrain data are related with the three-dimensional configuration of the surface of the Earth. The geometry of a terrain is modeled as a 2.5-dimensional surface, i.e., a surface in 3D space described by a bivariate function. In addition to these two kinds of classical GIS data, we also cover new research issues in GIS related to the management of fully three-dimensional data.

The first part of this chapter is devoted to the map data. The representation of map data involves designing models and data structures capable of encoding geometric structures more general than those studied in the geometric modeling literature, like regions with holes, non-regular structures, isolated points, and line features.
Spatial indices also play an important role in the design of efficient methods for geometric queries of map data. Geometric aspects of map data processing involve query operations (e.g., point location, range search); spatial joins (map overlay); and more specific operations based on geometric search, such as map generalization, map labeling, map conflation (i.e., reconciliation of two variants of a map of the same area), and computation of optimal paths.


Terrain representation and processing are an even richer source of geometric problems. The construction of terrain models involves conversions among raw point and line data, raster grids, triangulated surfaces, and contour line representations. Algorithms for converting among different representations involve several problems in computational geometry, from point location to computation of Voronoi diagrams, Delaunay and other optimal triangulations of points and lines. Terrain analysis and visualization include several geometric operations, such as contour line extraction, computation of drainage networks and topographic features, extraction of models at resolution variable over the domain, and computation of paths on a surface. The problems related to terrain visibility, which have been extensively studied in computational geometry, are of primary interest in geographic data processing.

The rest of this chapter is organized as follows. In Section 2, we deal with map data modeling in GIS. We first define geographic maps mathematically in terms of spatial entities and relations, and then introduce the two major map models used in GIS (the raster and the vector model). Since the vector map data are a much stronger source of applications for computational geometry techniques, the attention is focused on this kind of map representation. Computational issues related to map data processing and analysis are treated in Section 3. In Section 4, we consider terrain data modeling and processing. We describe classical terrain models used in GIS and review basic issues in converting the terrain data among the various possible formats. The survey also covers approximate and multiresolution terrain models. In Section 5, we examine some of the most classical terrain analysis problems: visibility problems, extraction of topographic features, computation of drainage networks, and determination of optimal paths on a terrain. Issues on three-dimensional GIS are briefly discussed in Section 6. Finally, Section 7 gives a summary of the chapter, by providing a list of computational geometry techniques and tools that can be useful in GIS, as well as a list of open geometric problems in GIS.

This survey is far from being complete. Several problems and geometric structures are just mentioned here and the reference is given to available publications, or to other chapters of this book. In particular, we do not cover parallel algorithms (treated in Chapter 4) and raster representations, since techniques used to manipulate raster data are more typical of digital geometry [165] than of classical computational geometry.

2. Map data modeling

2.1. Two-dimensional spatial entities and relations

Two-dimensional entities considered in the context of geographic data processing are points, lines, and regions embedded in a two-dimensional space. Datasets covering small portions of the Earth are often projected onto a flat surface and modeled using classical Cartesian coordinates. Several kinds of projections can be used [119]: azimuthal projections using a plane tangent to the Earth in the center of the area to be mapped; conical projections; and cylindrical projections using a cylinder tangent to the Earth along the Equator (Mercator projection) or along a line of longitude (Transverse Mercator projection).


Fig. 1. A toy example of a geographic map.

A more challenging problem is defining a global reference system. Various conventions are used for such a purpose. One approach involves subdividing the Earth's surface into zones, each of which is projected separately (the Universal Transverse Mercator uses 60 different zones); other approaches use a spherical coordinate system based on longitude-latitude coordinates, or projection onto an ellipsoid [75,76,182]. However, in the sequel we will always assume the local convention that map data are embedded in a Euclidean plane.

GIS entities are more general than those usually adopted in computational geometry. In some cases, lines may not be simple; regions may not be simply connected (i.e., they may contain holes), and may have internal features, i.e., dangling lines (cuts) and isolated points (punctures). Figure 1 shows a toy example of a geographic map. The presence of holes and features may affect the design of data structures and algorithms for processing geographic entities. Most computational methods in GIS deal only with piecewise-linear geometric entities. Although data structures and techniques for representing and manipulating curved entities also exist, they are beyond the scope of this chapter.

Spatial relations are defined on pairs of spatial entities, and depend on their relative positions in space. There is a flourishing literature on the definition, classification, evaluation, and study of invariants of spatial relations [42,43,58,85,86,130,155,156,291,292]. In spite of all such work, a homogeneous and complete characterization (e.g., one based on an algebraic approach) is still missing. It is not uncommon to find practical contexts and applications for which none of the models proposed in the literature is satisfactory. However, spatial relations are usually classified as follows:
• topological relations are characterized in terms of the geometric intersections between boundaries, interiors, and complements of entities: examples are incidence, adjacency, containment, overlapping, etc. (see [42] for a survey);
• metric relations are defined in terms of the distance between entities: examples are nearest-neighbor and range queries;
• directional relations are characterized in terms of the relative position of entities with respect to some oriented direction: examples are before, after, right-of, left-of, north, south, etc.;
• order relations are defined by properties that induce a partial order on sets of entities. An example is given by containment [157].

Spatial relations play a central role in GISs because many tasks involve the evaluation of spatial relations between a query object and a collection of spatial entities. A broad classification of spatial datasets, which is convenient for both modeling and computational purposes, is as follows:
• collections of spatial entities (sometimes organized into distinct layers), which can take any possible relative position on the plane and intersect arbitrarily;
• subdivisions of a compact planar domain into a set of non-overlapping entities.

Collections may define arrangements which are more general than those studied in computational geometry, since they can contain irregular entities of any shape and dimension. Similarly, subdivisions are also more general than those used in computational geometry, since they may contain multiply-connected regions with features. Subdivisions are simpler than collections. The combinatorial complexity of a subdivision is linear in the total size of its entities, while the arrangement induced by a collection may have quadratic complexity. A pair of entities in a collection may assume any possible topological relationship, while the possible topological relations in the context of a subdivision consist just of adjacency and incidence among its constituent entities, and inclusion of features.
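For illustration, the following minimal sketch evaluates a few topological and metric relations on toy entities using the Shapely library (a modern tool chosen here for convenience, not one discussed in this chapter; the entities and attribute names are assumptions of the sketch, and GIS entities with dangling lines or punctures would need richer models):

    # Topological and metric predicates on simple entities.
    from shapely.geometry import Point, Polygon

    # A region with a hole (a simply-connected feature is not modeled here).
    region = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)],
                     holes=[[(1, 1), (2, 1), (2, 2), (1, 2)]])
    town = Point(3, 3)
    well = Point(1.5, 1.5)   # falls inside the hole

    # Topological relations: containment.
    print(region.contains(town))   # True
    print(region.contains(well))   # False: the hole is not part of the region

    # Metric relations: distances support nearest-neighbor and range queries.
    print(town.distance(well))     # Euclidean distance between the two points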

2.2. Raster and vector models

There is a long-running debate in the GIS community on whether it is better to represent spatial information based on a vector or on a raster model. Neither model appears to be superior in all tasks. As outlined by Frank [106]: "A traditional view is to differentiate between an entity based view — space is constructed from objects that fill space — and a space oriented view, where each point in space has some properties. This view is philosophically well established...". The vector and raster approaches seem to correspond to these two alternative views of spatial concepts.

In a vector model, spatial entities are represented explicitly through their geometry and attributes. Either collections or subdivisions may be represented, and topological relations among entities can be stored explicitly. An explicit representation can support an arbitrarily good level of accuracy. The vector model is obviously entity-based, hence it is well suited to accessing a spatial database by using spatial entities as search keys. On the other hand, efficient space-oriented access may require sophisticated search structures and techniques.

Early representations in vector data modeling, called spaghetti models, are collections of unrelated simple polygonal lines and points. In a spaghetti model, regions are not explicitly encoded; they are implicitly described by their boundaries. The spaghetti model may encode either a collection or a subdivision, since it is just a collection of entities without any stored relation. The drawback is an almost total lack of support for efficient query processing. More effective data structures are discussed separately in the following subsections.


In a raster model, the domain is regularly subdivided into a large number of atomic regions, similar to the pixels of a digital image. Each pixel carries information about the portion of space it covers. Points are identified with the pixels they lie in, while lines and regions are obtained as aggregations of pixels (digital lines and regions) on the basis of membership attributes. Therefore, spatial entities are not explicitly represented in a raster, but they can be obtained as relations that cluster pixels according to a common attribute. The raster model intrinsically defines a subdivision of the domain, while a collection can be obtained by allowing multiple memberships for pixels (i.e., each pixel may belong to more than one entity). The raster representation is obviously space-oriented, and it provides direct access to information about a given location or extent of space. It has the disadvantage of providing an approximate geometry, whose accuracy depends on the resolution of the grid; on the other hand, its regular structure greatly helps in organizing spatial information.

Not all operations are supported in both models (e.g., some systems compute buffer zones around vector features by converting to the raster domain and back). In general, raster models are more suitable to support queries by location, while vector models are more suitable to support queries by content [244]. As a consequence, many commercial systems adopt hybrid representations, which permit using either model depending on the specific task and the data analyzed. Hybrid models involve the use of algorithms to convert from one model to the other. The techniques used to store, process and analyze data in the raster model, as well as conversion algorithms, are more typical of digital geometry [165] and image processing [220,234] than of classical computational geometry. Therefore, in the sequel we focus on models and methods for vector data.

2.3. Subdivisions as cell complexes

Algebraic topology provides a well-established theory on the subdivision of a topological space into cells. In the mathematical literature there are different definitions of cell complexes, none of which is general enough to model regions and features as those manipulated in GISs. However, many models have been developed on the basis of classical cell complexes, and some efforts have been undertaken to extend the classical theory to the entities manipulated in a GIS.

Simplicial complexes are a special class of cell complexes with good combinatorial properties that make them easy to construct and manipulate. Simplicial complexes have been used in GISs for encoding generic subdivisions, by triangulating each region and including all point and line features as vertices and edges [102,291] (a small sketch is given below). Although the approach is theoretically sound and elegant, it may not be practical: geographic data mainly contain large regions with complicated boundaries and features, and fragmentation into triangles causes a significant increase in storage requirements. A similar approach, with similar drawbacks, consists of using complexes with convex cells, such as trapezoidal decompositions [94]. Regular cell complexes, in which regions are generic, simply-connected polygons without features, correspond to the planar subdivisions commonly used in computational geometry. Regular cell complexes are suitable to model maps that do not contain point or line features.
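As a small illustration of the simplicial approach mentioned above, the following sketch (the data and conventions are assumptions of this sketch, not the encoding of [102,291]) stores one region attribute per triangle and recovers a region's boundary combinatorially:

    # A subdivision encoded as a simplicial complex: each region is
    # triangulated, and every triangle carries its region's attribute.
    from collections import Counter

    triangles = {
        # triangle (a vertex-index triple) -> region attribute
        (0, 1, 4): "lake", (1, 2, 4): "lake",
        (2, 3, 4): "forest", (3, 0, 4): "forest",
    }

    def region_boundary(region):
        """Edges of a region: triangle edges not shared within the region."""
        count = Counter()
        for tri, attr in triangles.items():
            if attr != region:
                continue
            a, b, c = tri
            for e in ((a, b), (b, c), (c, a)):
                count[tuple(sorted(e))] += 1
        return [e for e, n in count.items() if n == 1]

    print(region_boundary("lake"))  # [(0, 1), (1, 4) is interior, ...]: here
                                    # [(0, 1), (0, 4), (1, 2), (2, 4)]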


The most general cell structure provided by algebraic topology is the CW-complex [183]: its Euclidean realization in two dimensions is a subdivision in which the interior of each region is homeomorphic to an open disk. Such structures admit some features, such as cuts incident on the generalized boundary of a region, and holes incident on such cuts. However, regions must be simply connected: punctures, generic cuts, and "islands" are not admitted. Each region can be fragmented into a minimal number of simply-connected components: such a fragmentation usually yields only a modest increase in storage. Although some efforts have been undertaken to generalize CW-complexes to support GIS data [46,218,240,292], no substantial progress in the direction of data structures and computational techniques has been made so far.

From a different perspective, some authors have proposed approaches based on integer geometry, motivated by robustness issues: points in a map, including the endpoints and intersections of lines, are constrained to lie at the nodes of an integer grid. In particular, an algebra based on discrete generalized maps, called realms, has been specified and implemented in [128,129].
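The integer-grid idea can be illustrated in a few lines. The following sketch (the resolution and rounding convention are assumptions of this sketch, not the realm algebra of [128,129]) snaps coordinates to grid nodes, so that subsequent geometric tests can be carried out exactly on integers:

    # Snap a point onto the nodes of an integer grid.
    def snap(x, y, resolution=1000):
        """resolution = grid units per coordinate unit; after snapping,
        geometric predicates reduce to exact integer computations."""
        return round(x * resolution), round(y * resolution)

    print(snap(12.3456789, -0.0004))  # (12346, 0)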

2.4. Topological data structures

Data structures for encoding both the entities and the topological relations in a subdivision are usually referred to as topological data structures. The literature offers a number of data structures for encoding regular subdivisions of the plane or of a surface, which support an efficient evaluation of the spatial relations therein. According to Lienhardt [177] and Brisson [25], the models underlying data structures for cell complexes can be classified into:
• explicit-cell representations, where the entities encoded directly correspond to the cells of the complex;
• implicit-cell representations, where basic elements of a single type are used, on which element functions act, while cells are represented implicitly.

The incidence graph [79] is the prototypical data structure of the former class: nodes correspond to the cells (in two dimensions: vertices, edges, and faces), while arcs linking cells of different dimensions correspond to incidence relations. The symmetric structure described by Woo [290] is an implementation of such a graph for the boundary representation of solids.

Implicit-cell representations are composed of a set of basic elements (e.g., darts, cell-tuples, etc.) plus a set of functions acting on such elements (e.g., involutions, switch operators, etc.). Examples of data structures within such a class are the Doubly-Connected Edge List (DCEL) by Preparata and Shamos [221], the winged-edge by Baumgart [16], the half-edge structure by Mantyla [187], the vertex-edge and face-edge structures by Weiler [285], and, in a generic dimension d, the cell-tuple structure [25] and the generalized maps [177]. The quad-edge structure [123] can be viewed as a specialization of the cell-tuple structure in two dimensions (see Chapter 10 for a treatment of data structures for planar subdivisions).

Some special structures have been developed in the literature for triangulations and trapezoidal decompositions, which exploit the nice characteristics of such subdivisions. A widely used data structure for triangulations consists of storing, for each triangle, its three vertices and its three edge-adjacent triangles [172] (a sketch is given at the end of this subsection). A similar data structure, called the quad-view data structure, has recently been proposed in the context of GIS to encode a trapezoidal map [94].

The data structures proposed to encode general subdivisions with features are basically extensions of incidence graphs to non-regular situations. For instance, the data structures adopted in [58,136,198] are variations on the DCEL, extended to handle holes and features.
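A minimal sketch of the triangle-based structure just mentioned (the index conventions are assumptions of this sketch, not prescriptions of [172]):

    # Each triangle stores its three vertices and its three edge-adjacent
    # triangles; neighbors[i] is the triangle across the edge opposite
    # vertex i, or None on the boundary.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Triangle:
        vertices: Tuple[int, int, int]    # indices into a vertex table
        neighbors: List[Optional[int]]    # indices into the triangle table

    vertex_table = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    triangles = [
        Triangle(vertices=(0, 1, 2), neighbors=[1, None, None]),
        Triangle(vertices=(3, 2, 1), neighbors=[0, None, None]),
    ]

    # Adjacency queries (e.g., walking across a triangulation) take O(1)
    # per step:
    t = triangles[0]
    print(t.neighbors[0])  # 1: the triangle across the edge (1, 2)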

2.5. Multiresolution data structures

Multiresolution modeling offers interesting capabilities for spatial representation and reasoning in a GIS: from support for map generalization and automated cartography, to efficient browsing over large GISs, to structured solutions in wayfinding and planning. Early approaches were based on a common practice in cartography, i.e., maintaining a collection of different maps of the same area at different scales, hence encoding information at different levels of detail and accuracy. Such approaches are often referred to in the GIS literature as multiple representations. Some research has also been undertaken concerning the assessment of the consistency of different representations [87].

In [274], some hierarchical data structures, called reactive data structures, are proposed, and some others are reviewed, for the multiresolution representation of lines and maps. The specific purpose of such structures is to allow one to retrieve a more or less detailed/accurate description of the objects. A description of the whole domain at a low level of detail is retrieved from a reactive model through a breadth-first traversal of the first levels of a tree, while a description of a portion of the domain at a high level of detail is retrieved through a depth-first traversal. Examples of such structures are the reactive tree, based on the R-tree; the BLG-tree, for line generalization; and the GAP-tree, which supports the generalization of area partitionings [275].

More recent approaches have tried to achieve multiresolution on the basis of an integrated data model, which can include information on maps at different levels of resolution, together with explicit relations among them. The possibility of developing models that can support a multiresolution representation of maps through hierarchical structures based on trees of cells has been outlined in [107]. Independently, a first hierarchical model formally defined on a mathematical basis has been proposed in [19]: such a model is described by a tree of maps at different resolutions, where each map is a refined description of a simple region of its parent node in the tree. In [223], a more complete multiresolution model for maps represented by generalized subdivisions has been proposed, which deals separately with the combinatorial and metric aspects of geographic maps, and formally relates all the different representations of an entity through the various levels of resolution. The topological structure of a geographic map is completely captured by a purely combinatorial structure called an abstract cell complex. Map generalization, i.e., an operation that relates two consistent maps at different levels of detail, is expressed by a continuous mapping between the abstract cell complexes representing the two maps (see also Section 3.3). Appropriate rules permit us to control such functions, in order to guarantee that generalization occurs through gradual changes. The metric aspects concerning changes in accuracy can be controlled separately through the concept of line homotopy. The iterative application of simplification mappings satisfying both topological and metric constraints permits us to define a sequence of generalized maps of the same area. This sequence provides a means to organize the various maps, together with the mappings relating them, either in a multi-layered or in a tree structure.

3. Map data processing

3.1. Spatial queries

Spatial queries are queries in a spatial database that can be answered on the basis of geometric information only, i.e., the spatial position and extent of the objects involved. A spatial query is defined by a query space S, i.e., either the whole spatial database, or a portion of it obtained through suitable filters; by a query object q, which may or may not belong to the database; and by a spatial relation R. A generic query is thus defined as follows: return all objects s ∈ S that are in relation R with q. A classification of spatial queries follows directly from the classification of spatial relations (see Section 2.1).

The computational techniques that can be adopted to answer spatial queries depend on the nature of the query space (and on the model which encodes it), on the nature of the query object, and on the nature of the relation. However, a simple exhaustive study of all possibilities shows that most spatial queries can eventually be reduced to a few basic problems in computational geometry. In [58], a study is presented on the formalization and solution of topological and metric queries on geographic maps encoded through a cell complex. It is shown that topological queries are solved by map traversal when the query object belongs to the map, while they always reduce to point location, line intersection, or polygon intersection when the query object does not belong to the map (see Chapter 10 for a survey). An efficient algorithm specific to map traversal in GISs has been proposed in [53]; it provides the basis for the solution of several topological queries on a map. This algorithm traverses a planar subdivision and reports all its regions without making use of mark bits in the data structure or of a stack. Thus, it is especially suitable for a map stored in a read-only GIS database.

On the contrary, metric queries are not directly supported by the topological structure of the map, and need auxiliary structures (such as spatial indices or Voronoi diagrams) to be answered efficiently; a small sketch is given below. The use of Voronoi diagrams [11] for answering spatial queries in GIS has been discussed by Gold and Roos [114-117]. The Voronoi diagram is seen as an intermediate model between the raster and the vector models. As observed by Gold in [116], "the Voronoi model has the complete tiling property of the raster model, but the tiles are based on the proximal region around each object". Both static and dynamic algorithms for constructing Voronoi diagrams of 0-, 1- and 2-dimensional entities in the plane have been developed in the computational geometry literature (see Chapter 5).
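As an illustration of how such auxiliary structures are used, the following minimal sketch answers nearest-neighbor and range queries with a k-d tree (scipy's KDTree is a stand-in chosen here for convenience; the sites are made-up data):

    # Metric queries supported by an auxiliary search structure.
    from scipy.spatial import KDTree

    sites = [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0), (1.0, 4.0)]  # point entities
    tree = KDTree(sites)

    # Nearest-neighbor query: which entity is closest to q?
    dist, idx = tree.query((1.5, 1.5))
    print(sites[idx], dist)                      # (2.0, 1.0) at distance ~0.71

    # Range query: all entities within distance 3 of q (indices, any order).
    print(tree.query_ball_point((1.5, 1.5), r=3.0))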


Gold and Roos show that the Voronoi diagram is a persistent, locally modifiable data structure suitable for operations like navigation queries, buffer zone generation, polygon labeling, intersection of a map with a line segment, interpolation of a surface at a new point [115,117], etc.

Techniques based on spatial indexes are treated in Chapter 17. The interested reader can also refer to the two books by Samet [242,243]. Queries on collections of entities receive less support from relations stored explicitly in data structures. In principle, computational geometry algorithms working on arrangements may give solutions to some such queries, but such algorithms are not applied in practice, perhaps because they are difficult to code and have an intrinsically high computational complexity. Popular techniques for processing collections of entities are based on spatial indexes, which are simple to code and perform well in the average case. In [97], a thorough analysis of the average-case complexity of accessing quadtrees is presented. In [8,140,141], several algorithms for spatial queries based on spatial indexes are presented. Refer to Chapter 17 and [243,274] for more details on searching techniques based on spatial indexes.

Spatial queries can also be combined with queries related to non-geometric attributes of objects. In this case, a rigorous analysis should take into account both geometric and non-geometric issues. The efficient treatment of such hybrid queries, and the development of suitable integrated models and techniques, as well as of methods for analyzing their performance, are open issues that require interdisciplinary research.

3.2. Map overlay

It is common practice in GISs to organize information into layers (e.g., land use, hydrography, road network, etc.), and to produce maps by overlaying the layers of interest. The overlay process is the basic tool for solving complex spatial queries (the so-called spatial join). In such queries, the input maps are not only overlapped, but also combined on the basis of their attributes (e.g., by using Boolean expressions). Here, we deal only with the geometric aspects of map overlay, where techniques from computational geometry can be applied. The overlay of maps in raster format reduces to combining the attribute values of corresponding pixels, and does not involve computational geometry techniques. Conventional transformations are used to align raster grids having different orientations or cell sizes [98]. Here, we focus on the overlay of vector maps.

In GIS, there are several cases of overlay, such as the overlay between two subdivisions (also called area-to-area overlay), line-to-area overlays, and point-to-area overlays. Moreover, both in area-to-area and in line-to-area overlays, the input datasets may be collections of possibly intersecting regions and lines. Two maps may be combined to intersect, form a union, or one map may be used to update another; multiple maps may be combined using Boolean expressions. On the other hand, in computational geometry, overlay problems have been studied in a much more standard form:
• segment intersection problem: find all the intersection points within an arbitrary set of segments;
• red-blue intersection problem: find the intersection points of two sets of segments, where no intersections exist within the same set;
• superimposing plane subdivisions: given two subdivisions Σ1 and Σ2, compute a subdivision Σ whose edges and regions are obtained as the intersections of the edges and regions of Σ1 and Σ2.

Segment intersection algorithms can be used to overlay line maps (e.g., road networks). Red-blue algorithms apply to map layers consisting of general subdivisions with features (see Section 2.4). Some algorithms for plane subdivisions have further restrictions, for instance, to trapezoidal maps [95], or to convex regions (e.g., a triangulated map) [207].

In all overlay problems, the number of intersections is k = Θ(n²) in the worst case, where n is the input size. The problem complexity is thus bounded by Θ(n + k). A brute-force solution consists of computing the intersections between every pair of entities from the two given layers, and runs in Θ(n²) time. Methods developed in the GIS community mainly consist of heuristic "filters", which try to reduce the number of tested pairs by superimposing a hierarchical spatial index. The filter is based on a decomposition of the plane into blocks, and on distributing the objects to be intersected (the edges and regions of each layer) among the various blocks. Pairs of objects which lie in disjoint blocks of space can be discarded from consideration, since they cannot intersect. Then, an exhaustive search for intersections is applied within each block. The filter, however, is not guaranteed to be effective, i.e., some redundant tests are still possible. Nevertheless, these techniques show good practical behavior. Even if they degenerate into an exhaustive intersection search in the worst case, their efficiency has been widely confirmed by experiments in practice. For some methods, a theoretical analysis of the average randomized case is available (e.g., see [93]).

Filtering methods can be subdivided into (i) methods decomposing the space into disjoint blocks, possibly splitting objects that cross several blocks; and (ii) methods decomposing the space into possibly overlapping blocks, storing each object in exactly one block.

The simplest decomposition scheme based on disjoint blocks is the uniform grid [104,105] (a minimal sketch is given below). The resolution of the grid determines the efficiency of the filter. Uniform grids show poor performance when data are irregularly distributed. Quadtrees [242,243,210] are adaptive grids, based on a recursive partition of an initial square into four sectors. In this way, a different grid resolution can be achieved in each part of the domain, according to the local density of the data. Many variants of quadtree-like structures have been proposed, depending on the type of data to be stored: either the regions or the edges of a map. Boolean operations are easily performed on maps with a superimposed quadtree-like structure. The two input quadtrees are traversed synchronously, by visiting nodes corresponding to the same portion of space at the same time; a quadtree for the result of the Boolean operation is incrementally built (see [243]).

Binary space partitioning (BSP) is a technique born in the image processing field and later applied to GIS [273]. Still based on a recursive subdivision of space, it uses a more flexible scheme. At each step, the current block is subdivided into two parts by an arbitrary straight line. The resulting structure can be described as a binary tree whose nodes correspond to convex polygons. BSP-trees support algorithms for performing Boolean operations on the objects stored in them [205].
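A minimal sketch of the uniform-grid filter referenced above (the cell size, the bucketing by bounding box, and the sample data are all assumptions of this sketch):

    # Distribute segments into the cells of a uniform grid, then test only
    # pairs that share a cell; pairs in disjoint cells cannot intersect.
    from collections import defaultdict
    from itertools import combinations

    def cells_of(seg, cell):
        """Grid cells overlapped by a segment's bounding box (a coarse,
        conservative assignment; finer walks along the segment also work)."""
        (x1, y1), (x2, y2) = seg
        return {(i, j)
                for i in range(int(min(x1, x2) // cell),
                               int(max(x1, x2) // cell) + 1)
                for j in range(int(min(y1, y2) // cell),
                               int(max(y1, y2) // cell) + 1)}

    def candidate_pairs(segments, cell=1.0):
        buckets = defaultdict(list)
        for k, s in enumerate(segments):
            for c in cells_of(s, cell):
                buckets[c].append(k)
        pairs = set()
        for bucket in buckets.values():
            pairs.update(combinations(sorted(bucket), 2))
        return pairs  # exact intersection tests then run on these pairs only

    segs = [((0, 0), (0.9, 0.9)), ((5, 5), (5.9, 5.9)), ((0.5, 0), (0.5, 0.9))]
    print(candidate_pairs(segs))  # {(0, 2)}: the far-away segment is pruned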
Methods based on overlapping blocks filter candidate pairs by comparing bounding boxes. The bounding box of a two-dimensional entity is the smallest axis-parallel rectangle that completely contains it.


Structures of the R-tree family [127,235,250,17,92] consist of trees with a degree bounded by a predefined constant value, where the data objects are stored at the leaves and every internal node represents the bounding box of all the objects stored in its children. The bounding boxes associated with different nodes (even within the same layer) may overlap. A separate R-tree is built for each map layer. The intersections between two layers are computed while descending the trees: at each step, tests on the bounding boxes are performed (a sketch of this elementary test is given before Table 1), and the results of such tests are used to cut off branches of the trees which are guaranteed to produce no intersections. Intuitively, an "ideal" R-tree should be balanced, its bounding boxes should be as disjoint as possible, and the amount of empty space enclosed in them should be low; these properties enhance the efficiency of the filter. The numerous existing variants of R-trees are characterized by the different criteria used to evaluate the quality of an R-tree, and by different construction strategies. Common criteria try to minimize the area covered by the rectangles [127], to avoid overlapping rectangles [250], or to aggregate objects based on their spatial proximity [235,92]. Since many R-trees support dynamic insertions and deletions of objects, there is a trade-off between update time and the quality of the tree produced. R*-trees [17] seem to be the most efficient ones in practice.

In computational geometry, the goal has been to find algorithms with good theoretical complexity [18,32,30,31,41,124,207]. Some of these techniques, especially the plane-sweep paradigm (see below), have been integrated in GIS environments [166,167]. The major existing approaches can be classified as follows:
• an approach based on the segment tree, i.e., a special data structure which hierarchically partitions the plane according to the configuration of the input segments [213,31];
• a randomized incremental approach [41,22], which computes all the intersections generated by a generic set of segments by adding the segments one at a time;
• a sweep-line approach, which finds the intersections while translating a line through the plane and monitoring the segments intersected by such a line [18,30,32,207];
• an algorithm based on topological sweep, i.e., a sweeping process driven by the topological structure of the input subdivisions [95,124].
The problem of overlaying subdivisions encoded through hierarchical models is addressed in [184].

A segment tree is a binary tree whose leaves represent, in left-to-right order, the vertical strips defined by every pair of consecutive vertical lines drawn through the segment endpoints. Data segments are associated with the nodes of the tree in an appropriate way. It is then sufficient to test intersections only between segments associated with the same node. Space requirements are reduced by dynamically constructing only a portion of the tree at a time: a single root-to-leaf path [31], or a single level [213].

In incremental algorithms, segment intersections are reported while constructing a trapezoidal decomposition of the plane. The trapezoidal decomposition induced by a set S of segments is obtained by drawing a vertical line through each segment endpoint or intersection point, until another segment of S is encountered in both directions.
The efficiency in locating the trapezoids of the current decomposition affected by the insertion of a new segment is obtained by means of special auxiliary data structures which encode the history of the construction [41,22].
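Both the overlapping-block methods and the R-tree descent described above rely on one elementary primitive, the bounding-box overlap test; a minimal sketch (sample data assumed):

    # Two entities can intersect only if their bounding boxes overlap.
    def bbox(points):
        xs, ys = zip(*points)
        return min(xs), min(ys), max(xs), max(ys)

    def boxes_overlap(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        # Disjoint on some axis => the boxes (and the entities) cannot meet.
        return not (ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1)

    road = bbox([(0, 0), (3, 1), (6, 2)])
    river = bbox([(7, 0), (9, 4)])
    print(boxes_overlap(road, river))  # False: this candidate pair is pruned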

Table 1. Major map overlay algorithms. For each algorithm, the worst-case time complexity and the input requirements are indicated. Parameters n and k denote the input and the output size, respectively; the time complexity for incremental algorithms has been computed through a randomized analysis.

    Algorithm   Approach            Input                                    Time
    [22,41]     Incremental         Segments                                 O(n log n + k)
    [31,213]    Segment tree        Red-blue segments                        O(n log n + k)
    [18]        Sweep-line          Segments                                 O((n + k) log n)
    [207]       Sweep-line          Plane subdivisions with convex regions   O(n log n + k)
    [32]        Sweep-line          Red-blue segments                        O(n log n + k)
    [30]        Sweep-line          Segments                                 O(n log n + k)
    [95,124]    Topological sweep   Plane subdivisions                       O(n + k)

Sweep-line algorithms are evolutions of the classical sweep-line method of Bentley and Ottmann [18] for reporting the intersections in a set of segments. The basic idea is to move a vertical line left-to-right across the plane, while monitoring the segments intersected by such a line. The key observation is that, if two segments intersect, they must appear in consecutive positions along the line at some instant. Specialized sweep-line algorithms have been defined by relaxing the straightness condition of the line and/or the left-to-right order in intersection reporting; also, some algorithms require additional properties of the input segments [31,32,207]. Topological sweeping [95,124] exploits the topological relations among the edges of a plane subdivision in order to advance the sweep-line. In practice, the sweep-line is replaced by a cut on the graph induced by the edges of the two input maps and their intersection points, which is progressively moved left-to-right.

The algorithms developed in computational geometry are summarized in Table 1. All the above approaches have a computational complexity of O(n log n + k), where n is the size of the two maps and k is the output size (for randomized algorithms this is an expected time complexity). The only exception is the topological sweep method, which leads to O(n + k) worst-case optimal algorithms.

Andrews et al. [7] provide an experimental comparison of computational geometry overlay algorithms and GIS overlay algorithms. The compared approaches are the uniform grid [104,105], the quadtree [242,243], and the BSP tree [273] from the GIS literature, and Bentley and Ottmann's sweep [18], the trapezoidal sweep [32], and the hereditary segment tree [213] from the computational geometry literature. The results show that filtering heuristics typically perform better, because they take advantage of the typical characteristics of GIS data (short segments with a low number of intersections per segment). On large datasets, however, the grid tends to increase the running time, and quadtrees and BSP trees may introduce a serious space overhead. Asymptotically efficient computational geometry methods do not perform well on GIS data. An exception is the algorithm by Chan [32], which is competitive with GIS-based approaches, even on large data sets. The segment tree algorithm is the best one on artificial datasets, but the worst one on real GIS data.
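All of these methods, from brute force through topological sweep, ultimately reduce to exact pairwise segment tests. A minimal sketch of the classical orientation-based crossing test (proper crossings only; the collinear and shared-endpoint cases are omitted here):

    # Two segments properly cross iff each straddles the line through
    # the other.
    def orient(p, q, r):
        """Sign of the signed area of pqr: >0 left turn, <0 right turn."""
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def segments_cross(a, b, c, d):
        return (orient(a, b, c) * orient(a, b, d) < 0 and
                orient(c, d, a) * orient(c, d, b) < 0)

    print(segments_cross((0, 0), (4, 4), (0, 4), (4, 0)))  # True
    print(segments_cross((0, 0), (1, 1), (2, 2), (3, 3)))  # False (collinear)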


3.3. Geometric problems in map generalization

Map generalization is known in cartography as the process of selecting and representing the details to be shown on a map at a given scale. Map generalization involves different tasks, such as removing objects, simplifying geometry, aggregating several objects into one, etc. In GIS, generalization is more generally intended as a reduction of the "information density" in a geographic database, while preserving its overall structure and semantics. Generalization is a complex task, both conceptually and computationally. It involves geometric aspects of geographic entities (shape, structure, and detail), as well as non-geometric ones (role and relevance in the map context). In traditional cartography, generalization processes are classified into the following basic operations [232]:
• selection decides whether an entity should appear (i.e., be explicitly represented) or not in the simplified map;
• aggregation merges different entities into a single one;
• collapse reduces the dimensionality of an entity (e.g., a small region collapses to a point, a thin region collapses to a line);
• symbolization transforms a geometric entity into a symbol;
• simplification reduces the accuracy in representing the shape of an entity;
• exaggeration modifies the shape of an entity to highlight features that would become hardly visible at a small scale;
• displacement also modifies the shape, to set apart entities that would become too close to each other.

Selection and aggregation are usually controlled through non-geometric rules. Symbolization is also non-geometric, except for the problem of avoiding overlapping symbols, which is discussed separately in Section 3.4. The remaining operations are primarily geometric, since they affect the shape of entities explicitly represented in the map at reduced resolution.

Simplification is by far the most studied problem in generalization. The best known and most used algorithm is probably an early refinement heuristic proposed by Douglas and Peucker for a simple open polygonal chain [72]. Starting from an initial approximating sequence, at each cycle the algorithm inserts the point that maximizes the distance from the current approximation, until the maximum error falls below a given threshold (a sketch is given below). A straightforward implementation runs in O(n²) time in the worst case; a sophisticated implementation running in O(n log* n) time has been proposed recently [139]. The GIS literature is full of other heuristics proposed for the same purpose, which are usually compared empirically (see [192] for a survey and experimental comparisons, and [176,214] for more recent examples).
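A minimal sketch of the Douglas-Peucker refinement just described, in its usual recursive formulation (the tolerance, the sample data, and the perpendicular-distance metric are assumptions of this sketch):

    # Recursive Douglas-Peucker simplification of an open polygonal chain.
    import math

    def _dist_to_chord(p, a, b):
        """Distance from point p to the chord through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        if (ax, ay) == (bx, by):
            return math.hypot(px - ax, py - ay)
        area2 = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
        return area2 / math.hypot(bx - ax, by - ay)

    def douglas_peucker(chain, eps):
        """Simplify chain to within tolerance eps."""
        if len(chain) < 3:
            return list(chain)
        # Find the vertex farthest from the chord joining the endpoints.
        i_max, d_max = 0, -1.0
        for i in range(1, len(chain) - 1):
            d = _dist_to_chord(chain[i], chain[0], chain[-1])
            if d > d_max:
                i_max, d_max = i, d
        if d_max <= eps:                 # chord already within tolerance
            return [chain[0], chain[-1]]
        # Otherwise keep the farthest vertex and recurse on the two halves.
        left = douglas_peucker(chain[:i_max + 1], eps)
        right = douglas_peucker(chain[i_max:], eps)
        return left[:-1] + right

    line = [(0, 0), (1, 0.1), (2, -0.1), (3, 3), (4, 3.1), (5, 3)]
    print(douglas_peucker(line, eps=0.5))
    # [(0, 0), (2, -0.1), (3, 3), (5, 3)]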

A rigorous approach to the problem is due to Imai and Iri [149], who first formalized line simplification as an optimization problem. Namely, given an input curve and some metric to measure the error in approximating it, they outline two basic problems: (i) minimizing the number of vertices of the output chain for a given threshold error; (ii) minimizing the error of the output chain for a given number of vertices. Imai and Iri give algorithms, and list results by other authors, that cover both problems under four different metrics. Other results are also referred to by Guibas et al. [125]. All these algorithms have a superquadratic time complexity for generic polygonal chains, but for a special metric a subquadratic time complexity can be achieved [147]. Problem (i) can be solved in linear time for the special case in which the input is the image of a piecewise-linear function [148].

An elegant definition of problem (i) is also suggested by Guibas et al. [125]: a homotopy class is obtained by convolving the input line with a disk having a radius equal to the given threshold. The solution is a polygonal chain with a minimum number of links, which is homotopic to the input line within such a region. Linear algorithms for computing homotopy paths inside simple polygons were proposed by Suri [258], Ghosh [113], and Hershberger and Snoeyink [138]. All such algorithms follow a greedy approach. Guibas et al. also study problem (i) as a problem of ordered stabbing [125]. Only disks centered at the vertices of the input chain are considered: the solution must be a minimum-link chain stabbing such disks in order. For such a problem, they give an O(n log n) suboptimal greedy algorithm, as well as an optimal quadratic algorithm based on dynamic programming; finally, they give a linear-time optimal solution for the special case in which the disks in the input are all disjoint.

All the algorithms cited above cannot be considered ultimate solutions to the problem, because they cannot guarantee that the output will be free of self-crossings. A negative result by Guibas et al. [125] shows that the problem of finding a minimum-link simple polygon for a given homotopy type is NP-hard. In the same paper, the problem of line simplification is also studied in the context of a map: given a plane straight-line graph induced by a set of non-crossing polygonal chains, find a minimum-link approximation (in the homotopy class of the map, as outlined above for lines). This problem is also NP-hard: the difficult part is in positioning vertices that have more than two incident edges.

A more general problem of subdivision simplification has recently been faced by Weibel [287] and de Berg et al. [51]. Weibel [287] defines constraints on the simplification process based on cartographic principles, classified into metric, topological, semantic (like the preservation of class membership) and gestalt principles (constraints used by cartographers). De Berg et al. [51] face the problem of simplification with the main purpose of reducing the complexity of a subdivision. They propose a nearly quadratic algorithm for the following problem: given a polygonal line, a set of feature points, and a real number ε > 0, compute a simplification that guarantees (i) a maximum error ε, (ii) that feature points remain on the same side of the simplified chain as of the original chain, and (iii) that the simplified chain has no self-intersections.

Relatively little research has been conducted on merging areal features. The earlier techniques were usually raster-based, involving mathematical morphology operators such as dilation and erosion. Collapse problems, like the reduction either of a ribbon-shaped area to a center line, or of an areal feature to a point, are faced by using the medial axis transform (see [281] for a survey, and [74,160] for algorithms on the medial axis). Jones et al. [154,281] describe the implementation of a few generalization operators based on a triangulated spatial model of a map, built on the constrained Delaunay triangulation of the map vertices and edges (see [173,248]). Such operators allow the detection of conflicts and their resolution, dimensionality reduction through skeleton generation, boundary simplification, and the merging of nearby objects.
On a more general perspective, Saalfeld [241] has recently suggested that the basic operations of map generalization can be expressed in terms of graph drawing. Although such an approach seems promising, so far it is just a challenge launched to experts in graph drawing, and no algorithms based on it have been proposed yet. Another approach to map generalization has been proposed recently by Puppo and Dettori [68,223]; it is based on functions that are defined between the cell complexes describing maps and that are continuous under the finite topology. This approach is mainly intended to provide consistency control for operations that affect the topology of the map, and to support the definition of multiresolution models. Following the approach of Puppo and Dettori, algorithms for geometric consistency have recently been proposed by Delis and Hadzilacos [67].

To summarize, only empirical or manually-assisted solutions to map generalization are used in practice, and only partial solutions to a few theoretical subproblems have been proposed. A unifying framework that can formally incorporate all the operations involved in map generalization is still an open issue.

3.4. Map labeling

Map labeling is a problem of critical importance in cartography. It consists of positioning labels on the map under the following constraints: labels must legibly refer to their spatial entities; they cannot overlap; and they cannot cover relevant features of the map. In manual cartography, this task is estimated to take about 50% of the time needed to draw a map; hence the importance of finding automatic procedures to place labels. Map labeling is often subdivided into three subproblems, namely, labeling areas (e.g., countries, lakes), labeling lineal features (e.g., rivers, roads), and labeling points (e.g., cities). According to some authors [70], there exists a hierarchical relationship that permits solving such problems separately: area features are placed before point features, which in turn come before line features. However, most of the literature has been devoted to studying point labeling.

Informally, point feature labeling is defined as follows: given a set of spatially located points, and a set of labels referring to each point, find the choice of label positions that minimizes the total number of label-label and label-point overlaps. Labels are modeled as rectangles of variable size. Most cartographers agree on constraining a label to have the referenced point either at one of its four corners, or at one of the midpoints of its four edges.

Many solutions to point labeling have been based on greedy techniques with backtracking to escape invalid configurations (see, e.g., [70,152] and references therein). Different heuristics have been adopted to speed up the algorithms, which, however, have an exponential time complexity in the worst case. Cromley [47] and Zoraster [296] independently proposed an approach based on integer programming, with iterative solutions based on relaxation. Christensen et al. [38] give an extensive survey and an empirical analysis of several computational techniques for map labeling. Moreover, they propose an algorithm based on gradient descent, and another one based on simulated annealing. Both algorithms are based on iteration: starting from an arbitrary configuration, they make local improvements. With gradient descent, the local change that gives the best improvement is performed at each iteration. However, this method can get stuck at local minima without achieving the optimal solution. In the algorithm based on simulated annealing, local minima are avoided through a stochastic approach: at each iteration, local random changes are performed, and changes that worsen the cost of the configuration are discarded with increasing probability (a sketch is given below).
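A minimal sketch of the simulated-annealing scheme just described, for the four-position model of point labeling (the label size, the cost function, and the cooling schedule are assumptions of this sketch, not the settings of [38]):

    # Simulated annealing for point labeling with four corner positions.
    import math
    import random

    def label_box(point, pos, w=1.0, h=0.5):
        # Label rectangle with its reference point at one of the four
        # corners; pos in {0, 1, 2, 3} selects the corner.
        x, y = point
        dx = 0.0 if pos in (0, 2) else -w
        dy = 0.0 if pos in (0, 1) else -h
        return (x + dx, y + dy, x + dx + w, y + dy + h)

    def overlaps(a, b):
        return not (a[2] <= b[0] or b[2] <= a[0] or
                    a[3] <= b[1] or b[3] <= a[1])

    def cost(points, choice):
        # Number of overlapping label pairs (label-point conflicts omitted).
        boxes = [label_box(p, c) for p, c in zip(points, choice)]
        return sum(overlaps(boxes[i], boxes[j])
                   for i in range(len(boxes))
                   for j in range(i + 1, len(boxes)))

    def anneal(points, steps=2000, t=1.0, cooling=0.995, seed=0):
        rng = random.Random(seed)
        choice = [rng.randrange(4) for _ in points]
        for _ in range(steps):
            i = rng.randrange(len(points))      # a local random change
            old = choice[i]
            before = cost(points, choice)
            choice[i] = rng.randrange(4)
            delta = cost(points, choice) - before
            # Worsening changes survive only with probability exp(-delta/t),
            # which shrinks as the temperature t cools down.
            if delta > 0 and rng.random() >= math.exp(-delta / t):
                choice[i] = old                 # reject the change
            t *= cooling
        return choice

    pts = [(0.0, 0.0), (0.8, 0.2), (0.3, 0.6)]
    best = anneal(pts)
    print(best, cost(pts, best))  # a placement and its residual conflicts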

Christensen et al. show how to extend the cost function to include not only the number of conflicts, but also placement preferences, and possibly the deletion of labels in excess to escape from illegal configurations.

Formann and Wagner [99] have studied a version of point labeling where only four positions per label are allowed; they have shown that the problem is NP-complete, through a reduction from 3-satisfiability. They also propose an optimal algorithm running in O(n log² n) time for arbitrary rectangles, and in O(n log n) time for equally-sized squares, when only two possible positions per label are allowed. On this basis, they propose an approximate solution for the case of four possible positions that can be found in O(n log n) time and ensures the correct placement of labels whose size is at least 50% of the optimal. They also show that this is the best approximation ratio that can be achieved in polynomial time for this specific problem. Kucera et al. [168] studied the same problem, but developed an exact superpolynomial algorithm that can be applied to sets of up to approximately 100 points. Wagner and Wolff [279,280] have noted that in practice the approach of Formann and Wagner hardly ever results in label sizes significantly larger than half the optimum. They study variations of the problem and find a practical way to improve the size of the squares. Recently, Agarwal et al. [4] have investigated the problem of computing a large non-intersecting subset in a set of rectangles in the plane.

An exhaustive bibliography on the map labeling problem can be found at http://www.inf.fu-berlin.de/map-labeling/papers.html.

3.5. Other analysis issues

In this subsection, we deal with other issues that have received some attention in both computational geometry and GIS: namely, map conflation, path planning, and the construction of cartograms. The path problem in the context of terrain models will be discussed in Subsection 5.4. Refer also to Chapter 15 for further details.

Maps provide approximations of the real locations and extents of geographic objects. The representation of the same object may differ in two maps by small changes in its shape and displacement. Map conflation is an operation aimed at reconciling two variants of the map of the same area, in order to compile a map that integrates information from both [238]. The dissertation of Saalfeld [241] provides both a mathematical modeling framework and a collection of algorithms for building a conflation system. Basic issues in this context are plane graph matching, and geometric transformations on planar regions. Plane graph matching does not involve graph isomorphism: heuristics for feature matching and neighborhood relations are used. Geometric transformations and the final merging of the two maps are performed by using a triangulation of the maps [237].

The computation of optimal paths has been studied in GIS mainly for road networks. In these cases, problems can be modeled in a graph-theoretical setting, and generally do not involve geometric computation. Such methods cannot be applied, for instance, in planning a new road across a region: the road will not, in general, follow the existing edges of the map, but rather it will traverse the interior of a region along a path that did not exist before. Path computation then involves geometric calculations and cannot be modeled as a discrete graph-theoretic problem (see the work of Mitchell and Suri [197] for a survey).


The Weighted Region Problem consists of finding an optimal path in a planar subdivision, where a non-negative cost is associated with each edge and region. Such a cost is the cost of travelling one unit of length across the region or along the edge, and may be based on soil, vegetation, etc. Costs can also be infinite, thus modeling obstacles that cannot be traversed. The problem is relevant both to motion planning, and to planning situations such as building highways, railways, pipelines, etc. The weighted region problem has been intensively studied by Mitchell. In [196], an algorithm is proposed that finds an approximate shortest weighted-cost path from a fixed point in O(n⁸ log n) time, where n is the size of the problem. In a recent paper [189], Mata and Mitchell propose an algorithm that can deal with an arbitrary pair of points and has an improved time complexity (an n³ factor instead of n⁸). Such an algorithm has been successfully implemented. An O(n²) algorithm for a special instance of the above problem, in which only regions with weights in the set {0, 1, +∞} are allowed, is described in [112]. In [266], algorithms for the weighted region path problem working on a raster model (a sketch of this approach closes this subsection), as well as an implementation of the algorithm of Mitchell and Papadimitriou based on the constrained Delaunay triangulation (see [173]), are described and compared.

A cartogram is a geographic map that is purposely distorted so that its spatial properties represent quantities not directly associated with positions on the globe [71]. Distortions should act as homeomorphisms, i.e., the continuity of the map should be preserved. Some automatic methods for generating cartograms have been developed, which use iterative techniques for computing differential equations involving force fields [71,263]. Recently, Edelsbrunner and Waupotitsch [82] have proposed a combinatorial algorithm based on a sequence of local piecewise-linear homeomorphic changes, acting on a tiling of the map made of regular triangles.
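The raster-based approach mentioned above can be approximated by a standard graph search on the grid. The following is a minimal sketch (the data, the per-cell cost convention, and the 4-neighbor connectivity are assumptions of the sketch, not details from [266]):

    # Dijkstra's algorithm on a grid whose cell weights are per-unit travel
    # costs; float('inf') models impassable obstacles.
    import heapq

    def cheapest_path_cost(weights, start, goal):
        rows, cols = len(weights), len(weights[0])
        dist = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                return d
            if d > dist.get((r, c), float('inf')):
                continue                    # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # Cost of a unit step: average of the two cell weights.
                    nd = d + (weights[r][c] + weights[nr][nc]) / 2.0
                    if nd < dist.get((nr, nc), float('inf')):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
        return float('inf')

    INF = float('inf')
    terrain = [[1, 1, 1],
               [1, INF, 1],    # an obstacle cell that cannot be traversed
               [1, 1, 1]]
    print(cheapest_path_cost(terrain, (0, 0), (2, 2)))  # 4.0, skirting it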

4. Terrain data modeling and processing

Terrain processing requires input, data models, and algorithms different from those needed for 2D map data. Terrain processing is also a major source of geometric problems, which stimulate the use of computational geometry, both in applying existing techniques and in developing new ones.

The term Digital Terrain Model (DTM) is used generically to denote a model representing terrain relief on the basis of a finite number of samples [193]. Elevation data are acquired either through sampling technologies (on-site measurements, or remote sensing: tacheometers, photogrammetry, SAR, etc.), or through the digitization of existing contour maps [215]. Raw data come in the form of elevations at a set of points, either regularly distributed or scattered over a two-dimensional domain; chains of points may form polygonal lines, approximating either lineal features or contour lines. On the basis of the sampled data, different models can be built: contour maps, rasters, and mathematical surface representations. Mathematical models are either global (i.e., defined through a single function interpolating all data) or local (i.e., piecewise-defined on a partition of the domain into patches). Regularly distributed data generally lead to regular square grids, while scattered data generally lead to triangulated irregular networks. Since we are concentrating on models and methods rooted in computational geometry, we will not cover raster models and the digital geometry techniques to process them, nor global mathematical models and algebraic techniques. In the rest of this section, we deal essentially with general problems related to the construction and storage of DTMs, while more specific problems regarding their analysis will be described in the next section. The interested reader can also refer to a survey by van Kreveld [270] for more detailed issues about DTMs, and in particular algorithms on TINs, and to a survey by Magillo and Puppo [185] for issues on parallel algorithms for terrain processing.

4.1. Classical terrain models

A topographic surface (or terrain) σ is the image of a real bivariate function f defined on a compact and connected domain Ω in the Euclidean plane, i.e., σ = {(x, y, f(x, y)) : (x, y) ∈ Ω}. Given a real value q, the set C_σ(q) = {(x, y) ∈ Ω : f(x, y) = q} is the set of contours of σ at height q, and it is formed by a set of simple lines (provided that f has no relative maxima, minima, saddles, or horizontal plateaus at height q). A Digital Terrain Model (DTM) is a model providing information on such a surface on the basis of a finite set of data. Terrain data are measures of elevation at a set of points V = {v_0, ..., v_N} ⊂ Ω, plus possibly a set of non-crossing straight-line segments E = {e_1, ..., e_M} having their endpoints in V. Data points in V can either be scattered or form a regular grid. Lines in E can possibly form a collection of polygonal chains.

Three classes of DTMs are usually considered in the context of GIS [216]:
• Polyhedral terrains: a polyhedral terrain is the image of a piecewise-linear function f. A polyhedral terrain model can be described on the basis of a partition of the domain Ω into polygonal regions having their vertices in V (and such that the segments of E appear as borders of regions). The image of f over each region is a planar patch. The most commonly used polyhedral terrain models are Triangulated Irregular Networks (TINs), in which all regions are triangles.
• Gridded elevation models: a gridded elevation model is defined by a domain partition into regular polygons induced by a regular grid over Ω. The most commonly used gridded model is the Regular Square Grid (RSG), in which all regions are squares. The function f is usually a bilinear function interpolating the vertices of the grid (see the sketch after this list). In some cases, an RSG is treated as a raster, and a constant function is used over each region. This is said to be a stepped model, and it is obviously non-continuous across the edges of the grid.
• Contour maps: given a sequence Q = {q_0, ..., q_h} of real values, a digital contour map of σ is an approximation of the collection of contours C_{σ,Q} = {C_σ(q_i), i = 0, ..., h}. Contours in digital contour maps are often represented as point sequences; a line interpolating the points of a contour can be obtained in different ways, from the simplest case of a polygonal chain to spline curves of various orders.

Contour maps are easily transposed onto paper and best understood by humans, but are not suitable for performing complex computer-aided terrain analysis. This is due to the complete lack of information about terrain morphology between two contour lines. Automated terrain analysis is performed on RSGs or TINs. As for maps, some operations are better supported by RSGs, and others by TINs. Advantages and disadvantages of the two models are illustrated in [27,9].
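As referenced above, the bilinear patch over a single RSG cell can be written down directly; a minimal sketch (the local cell coordinates are an assumption of this sketch):

    # Bilinear interpolation over one square cell of an RSG.
    def bilinear(z00, z10, z01, z11, u, v):
        """Elevation at local coordinates (u, v) in [0, 1] x [0, 1].

        z00, z10, z01, z11 are the elevations at the corners (0,0), (1,0),
        (0,1) and (1,1); along each edge the surface is linear, so adjacent
        cells join continuously.
        """
        return (z00 * (1 - u) * (1 - v) + z10 * u * (1 - v) +
                z01 * (1 - u) * v + z11 * u * v)

    print(bilinear(10.0, 14.0, 12.0, 16.0, 0.5, 0.5))  # 13.0, the cell center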


Triangulated Irregular Networks are the most interesting models for the application of computational geometry techniques. A TIN is based on a triangulation T of the domain, with vertices at the projections onto the x-y plane of the data points (or of a subset of them), sometimes including a set of straight-line segments as a subset of the edges of T. The resulting piecewise-linear surface interpolates the data elevations at all points and lines whose projections are vertices and edges of the triangulation, respectively. Data structures describing a TIN are those for encoding triangulations in the plane (see Section 2.4). There also exists a flourishing literature on TINs using smooth patches at each triangle, in order to achieve C¹ continuity of the resulting surface [206]. Building smooth patches is easy, though expensive because of the numerical computation involved, if surface normals are provided (or estimated) at each datum.

There are different strategies to define a TIN having its vertices at a given set of data points. In general, the connection topology of the vertices is required to satisfy some optimality criterion. Optimality criteria can be defined either on the underlying triangulation, or on the surface itself. A measure can be assigned to each triangle of the triangulation (e.g., its area, the minimum/maximum/mean length of its edges, or its angles); the key for comparing triangulations can then be the sum, the maximum, the minimum, or a lexicographically ordered vector of the measures of all triangles. The most commonly used triangulation is the Delaunay triangulation, which is optimal according to the following criteria: it maximizes the minimum angle of its triangles [172,221], it minimizes the maximum circumcircle [48], and it minimizes the maximum containing circle [227].

Optimality criteria defined on the approximating surface have also been considered, in the context of scattered data interpolation. Triangulations defined by such criteria are referred to as data-dependent triangulations, since the optimization also depends on the z-values of the data points, rather than simply on their x-y coordinates. Dyn, Levin, and Rippa [78] consider optimality criteria that can be defined through some cost function of the resulting surface. Examples of such criteria are the following:
• three-dimensional criteria: criteria like the max-min angle, the min-max angle, or the minimum edge length are considered on the triangles and edges of the surface, rather than on their projections onto the plane;
• nearly C¹ criteria: the piecewise-linear surface that is as close as possible to a C¹ approximation on the edges of the triangulation is selected;
• variational criteria: the unknown surface is assumed to be the one minimizing some functional defined over a suitable space of functions. One example of such a functional is the roughness, defined by the Sobolev semi-norm of the function. Surprisingly, the surface with minimum roughness is always given by the Delaunay triangulation of the projected points, independently of the z-values of its vertices [230].

When the data set also includes a set of line segments, it is important that such segments appear as either edges, or chains of edges, in the TIN. In this case, either constrained [173,248] or conforming [83,239] triangulations are used. A constrained triangulation T of a set V of points and a set E of straight-line segments is a triangulation of V that contains E as a subset of its edges.
A constrained Delaunay triangulation is a constrained triangulation satisfying a weaker version of the circumcircle property. A conforming triangulation T of a set V of points and a set E of straight-line segments is a triangulation of V such that every segment of E can be decomposed into a chain of edges of T.


In other words, the constraint segments are broken into pieces in the conforming triangulation, in order to avoid long and thin triangles. See Chapter 5 for details. For conforming and constrained triangulations, only optimality criteria defined on the underlying triangulation have been considered so far. A survey of Delaunay triangulations (both non-constrained and constrained) for TIN creation is provided in [264].
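As a concrete illustration, the following minimal sketch builds a (non-constrained) Delaunay TIN from scattered samples and evaluates the piecewise-linear surface at a query point. It uses SciPy's Delaunay triangulation; the sample data and all names are illustrative assumptions, not code from the literature cited above.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
xy = rng.random((200, 2)) * 1000.0                      # x-y projections of the data points
z = 50.0 * np.sin(xy[:, 0] / 200.0) + 0.05 * xy[:, 1]   # synthetic sample elevations

tin = Delaunay(xy)   # Delaunay triangulation of the projected points

def elevation(p):
    """Piecewise-linear elevation at point p, interpolated over the TIN."""
    t = int(tin.find_simplex(p))
    if t == -1:
        return None                                     # p falls outside the domain
    # barycentric coordinates of p within triangle t
    b = tin.transform[t, :2] @ (p - tin.transform[t, 2])
    bary = np.append(b, 1.0 - b.sum())
    return float(bary @ z[tin.simplices[t]])

print(elevation(np.array([500.0, 500.0])))
```

Constrained and conforming triangulations require dedicated algorithms (see Chapter 5) and are not handled by this routine.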

4.2. Construction and conversion algorithms

Raw data can come in the form of a set of points V, and possibly of a set of lines E. In case the points of V are distributed regularly, an RSG is implicitly provided. Since DTMs are sometimes derived by digitizing existing contour maps, contours may also play the role of raw data. Hence, we have the following possible construction and conversion problems. Note that a clear distinction between conversion and construction techniques cannot be drawn, since sometimes models and raw data coincide (e.g., regularly distributed data points and RSGs).
1. RSG from sparse points. There are three possible approaches for constructing an RSG from a set V of scattered points [216]:
(a) pointwise methods: the elevation at each grid point p is estimated on the basis of a subset of data that are neighbors of p (a minimal sketch of one such method is given at the end of this subsection). There are different criteria to define the neighbors that must be considered, e.g.: the closest k points, for some fixed k; all the points inside a given circle centered at p, of some given radius; the neighbors of p in the Voronoi diagram of V ∪ {p}; etc. The basic geometric structure for all such tasks is the Voronoi diagram of the given data points. If raw data include line segments, then a Voronoi diagram of segments, or a bounded Voronoi diagram [248], can be used.
(b) global interpolation methods: a unique interpolating function (usually, a polynomial) is computed, which interpolates elevations at all points of V. The RSG is obtained by sampling such a function at grid nodes.
(c) patchwise interpolation methods: the domain is subdivided into a number of patches, which can be either disjoint or partially overlapping, and either of regular or irregular shape. The terrain is approximated first within each patch through a function that depends only on data inside the patch. The elevation at grid nodes inside each patch is estimated by sampling the corresponding function.
2. RSG from TIN. Some systems first compute a TIN from sparse data points (and, possibly, lines), then convert such a representation into an RSG [284]. This conversion is indeed a special case of the patchwise interpolation methods described above. After computing the TIN, elevations at RSG nodes can be efficiently evaluated by scan-line conversion algorithms that run in linear time [98].
3. RSG from contours. Early methods performed this conversion as follows: a number of straight lines at given directions (at least a horizontal and a vertical line) are drawn through each node of the grid, and their intersections with the contour lines are computed; terrain profiles along each line are approximated by some function interpolating contours at intersection points; the elevation at the grid node is computed as an average of its approximate elevations along the various profiles. The underlying geometric problem is to find intersections between the contour map and the lines through each grid node: red-blue intersection algorithms can be used for this purpose (see Section 3.2). More recent approaches perform this conversion in two steps: contours are first converted into a TIN, then such a TIN is scan-converted to produce the final RSG [216]. Carrara et al. [29] provide an experimental analysis of four different methods for deriving an RSG from contour lines; results on the accuracy of RSGs in preserving a coastline are reported in [28]. Eklund and Martensson [88] compare the accuracy of RSGs generated from contour lines and from scattered points.
4. TIN from points. A TIN is obtained from sparse data points by computing the triangulation having vertices at data points. In case raw data also include line segments, the constrained or the conforming triangulation is computed. Thus, the basis for TIN construction are algorithms for building the Delaunay triangulation, the data-dependent triangulation [78,230], the constrained Delaunay triangulation [173], and the conforming Delaunay triangulation [83,239]. The interested reader is referred to Chapter 5, and to [11] for further details.
5. TIN from RSG. This conversion is usually aimed at data compression: the adaptivity of the TIN to surface characteristics is exploited to produce a model of the terrain that can be described on the basis of a reduced subset of elevation data from the input RSG. Hence, RSG to TIN conversion involves approximation. The construction of approximated TINs is treated in detail in Subsection 4.3.
6. TIN from contours. A TIN conforming to a given contour map should be based on a triangulation that conforms to the set of contours. This problem has been studied in the literature both in the context of GIS and, more generally, in the reconstruction of three-dimensional object models. The constrained Delaunay triangulation, using projections of the contours on the x-y plane as input constraints, gives a conforming TIN, but it may produce artifacts, due to triangles having all vertices on the same contour line. Many algorithms adopt heuristics to avoid artifacts. In [37], flat triangles are avoided by adding the medial axis of each pair of adjacent contour lines to the dataset before computing the triangulation. Specific algorithms developed for TINs are described in [38,110]. More general algorithms for 3D objects produce a triangulated surface from a sequence of cross-sections, resulting from the intersection of the object with a collection of parallel planes [159,109,111,255].
7. Contours from points. As for RSG from contours, this conversion is usually done in two steps: an RSG or a TIN is built first from raw data, then contours are extracted from such a model.
8. Contours from RSG. If the RSG is a bilinear model, then all square regions can be processed independently: possible intersections of the four edges of each region with contour lines are found in constant time. Contour segments inside each region are obtained by connecting intersection points at corresponding elevations. In the second step, corresponding contour segments from adjacent regions must be sewn together to form contour lines by a contour following technique.
9. Contours from TIN. Building contours from a TIN is completely similar to constructing contours from a bilinear model: contour segments for triangular regions can be found independently, while contour extraction needs contour following.

In [13,180,269], efficient structures based on interval trees [269], kd-trees [180], and segment trees [13] are proposed, which permit us to extract contours from a TIN in O(log n + k) or O(√n + k) time, where n is the size of the TIN, and k is the size of the resulting contour map. The search structure, however, can be a serious storage overhead. Van Kreveld et al. [272] reduce the storage requirements by observing that a contour can be traced directly if one mesh element through which the contour passes is known. They give an algorithm which constructs the so-called contour tree, a tree that captures the contour topology of a TIN, or of an RSG, in O(n log n) time, with less additional storage than a previous algorithm proposed by de Berg and van Kreveld [54]. The problem of efficiently transmitting and compressing a TIN has been recently addressed by Snoeyink and van Kreveld [256] for the case of TINs based on Delaunay triangulations. Given a 2D Delaunay triangulation, they show that one can determine a permutation of the data in O(n) time such that the Delaunay triangulation can be reconstructed from the permuted data in O(n) time. More general methods for compressing triangular meshes (not just TINs), which exploit the compactness of triangle strips, have also been proposed by other authors [36,55,89,262].
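To make the pointwise approach of item 1(a) concrete, here is a minimal sketch of a gridding method: each node of the output RSG receives an inverse-distance-weighted average of its k nearest data points, located through a k-d tree (a simpler alternative to the Voronoi-based neighbor definitions mentioned above). The function name, the choice k = 8, and the weighting scheme are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_grid(xy, z, nx, ny, k=8, eps=1e-9):
    """Estimate an nx-by-ny RSG from scattered samples (xy, z)."""
    tree = cKDTree(xy)
    gx = np.linspace(xy[:, 0].min(), xy[:, 0].max(), nx)
    gy = np.linspace(xy[:, 1].min(), xy[:, 1].max(), ny)
    nodes = np.array([(x, y) for y in gy for x in gx])
    dist, idx = tree.query(nodes, k=k)     # k nearest data points per grid node
    w = 1.0 / (dist + eps)                 # inverse-distance weights
    grid = (w * z[idx]).sum(axis=1) / w.sum(axis=1)
    return grid.reshape(ny, nx)
```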

4.3. Terrain generalization

The more data are available, the better a terrain can be represented. Modern acquisition techniques provide huge datasets that permit representing a terrain accurately. However, a high representation accuracy is paid for in terms of high costs for storage and processing. Generalization techniques provide the means to bring such costs into manageable bounds, by trading off representation costs and accuracy. An approximate terrain model is a model built by using a reduced set of data. The approximation error e is measured with respect to a reference model built on the whole data set: a common choice is the maximum difference between the elevation at a data point and its interpolated elevation in the model. The accuracy of an approximated representation with respect to the reference model (which has no relation to the accuracy of the samples with respect to reality) can be defined as the inverse of the error, namely 1/(1 + e). An application can optimize its performance by adopting an approximate terrain model whose accuracy is within a required threshold. In some cases, the threshold may vary over the surface domain. For instance, a higher accuracy may be needed in the proximity of important features, for the purpose of terrain analysis; the accuracy may decrease with distance from a given viewpoint, for the purpose of landscape visualization. However, it is easy to transform a generalization problem with a variable threshold into a problem with a constant threshold, by performing a simple rescaling of the input data through the inverse of the threshold function [194]. The ideal aim of terrain generalization is to achieve an optimal ratio between the accuracy and the size of the representation. As for line simplification, there are two different optimization criteria:
• minimizing the number of vertices of the model for a given accuracy;
• maximizing the accuracy for a given number of vertices.


For the first problem a negative result has been proven by Agarwal and Suri [1]: they consider a polyhedral terrain (for simplicity, a TIN), and show that the problem of approximating it at a given accuracy with another TIN having a minimum number of triangles, and vertices at arbitrary positions, is NP-hard. It follows that the second problem is NP-hard as well. It has been conjectured that such problems remain NP-hard even if the vertices of the approximate terrain are constrained to lie at original data points. Only for the first problem do there exist algorithms that can achieve a suboptimal solution in polynomial time, while guaranteeing bounds on its size. A first algorithm proposed by Agarwal and Suri [1] can build an approximate solution in O(n⁸) time, having O(k log k) triangles, where k is the number of triangles in the optimal solution. More recently, a randomized algorithm was proposed by Agarwal and Desikan [6] that can achieve size O(k² log² k) in O(n^(2+ε) + k³ log³ k log(n/k)) expected time. Both algorithms build a simplicial partition, defined as a set of disjoint triangles whose union covers all data points, and such that each triangle is compatible with the required accuracy. Such a set of triangles is completed to form a TIN. Finding a simplicial partition involves sophisticated techniques whose implementation is hardly feasible in practice; however, a simplified implementation of the latter algorithm has given empirical results that are comparable to those obtained with other methods. An algorithm that solves the first problem with a greedy technique has been proposed by Silva et al. [253]. The approximate TIN is built incrementally by iteratively cutting triangles (ears or bites) from hollow polygons that span the domain: at each step the algorithm considers a polygon, and bites from it a triangle of maximal area that is compatible with the required accuracy. It has been conjectured that this algorithm, too, might guarantee a bound on the size of the solution, but proving this fact is still an open issue. This algorithm works with simple data structures, and requires a small amount of memory. Most other algorithms use iterative methods based on heuristics, which try to select a "good" subset of the input dataset as the set of vertices of the approximate model. Such methods can be used either for the first or for the second problem, depending on the test adopted to stop the iteration. Refinement heuristics start from a grid whose vertices are a very small subset of the input data; new data points are iteratively inserted as vertices of the model, until the required constraints are satisfied (a minimal sketch is given below). Simplification heuristics work in the opposite direction, by iteratively discarding points from an initial model built over the whole data set; in this case, points are selected at each iteration so as to cause the least possible increase in the error. Some authors have also adopted techniques that extract morphological features from the initial dataset in order to obtain constraints for the approximate TIN [33,217,245,257,283] (see also Section 5.2). Approximate RSGs can only be obtained by subsampling the grid, while TINs are much more flexible because of their irregular structure: many approximate models at different accuracies, with different sizes, can be built from a given TIN. In the case of contour maps, there are limited possibilities to vary the resolution by removing lines, while line simplification yields a wide range of different resolutions and data sizes.
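The following is a rough sketch of the refinement heuristic just described: starting from a handful of extreme data points, the data point with the largest vertical error is repeatedly promoted to a vertex, until the error falls below a threshold. For brevity the triangulated surface is rebuilt from scratch at every iteration with SciPy's linear interpolator, whereas practical implementations update the Delaunay triangulation incrementally; all names are illustrative.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def refine(xy, z, threshold):
    """Select a subset of (xy, z) as vertices of an approximate TIN."""
    # start from the four extreme points (assumed not collinear)
    start = [xy[:, 0].argmin(), xy[:, 0].argmax(), xy[:, 1].argmin(), xy[:, 1].argmax()]
    selected = list(dict.fromkeys(int(i) for i in start))
    while True:
        # piecewise-linear surface over the currently selected vertices;
        # points outside its convex hull get infinite error, so they are
        # pulled into the model first
        f = LinearNDInterpolator(xy[selected], z[selected], fill_value=np.inf)
        err = np.abs(f(xy) - z)
        err[selected] = 0.0
        worst = int(err.argmax())
        if err[worst] <= threshold:
            return selected        # vertex indices of the approximate TIN
        selected.append(worst)     # insert the point of maximum error
```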
Refinement methods. In [101], Fowler and Little propose a method that starts from an RSG and builds an approximate TIN in two steps. First, point features are selected using the method described in [217], and an initial TIN is built based on the Delaunay triangulation of such points. Then, the model is refined by iteratively adding the point corresponding to the maximum approximation error, and updating the Delaunay triangulation.


A similar algorithm (not involving point features) was also developed by Franklin, and an efficient implementation is available in the public domain [108]. A similar strategy was applied in [257] by using a different method for feature selection, and a data-dependent triangulation: the local optimization procedure (edge swapping) applied after each point insertion tries to minimize the error of the current model. However, convergence to a global minimum is not guaranteed. Variants of the same strategy, modified for a number of data-dependent triangulations, were also applied in [231].

Simplification methods. Lee proposed a simplification technique based on the Delaunay triangulation [174]. A Delaunay TIN is built first over the whole dataset. Then, vertices of the TIN are iteratively dropped, based on a criterion symmetric to the one of Fowler and Little: at each iteration, the point whose deletion causes the least increase in the error is dropped, and the triangulation is updated accordingly. Another simplification algorithm, based on the hierarchical triangulation scheme of Kirkpatrick [161], has been proposed by de Berg and Dobrindt [50] with the purpose of building a multiresolution model (see Section 4.4). In this case, a maximal set of independent vertices is dropped at each iteration, and holes are filled by a Delaunay triangulation. Variants of the above algorithms have been analyzed and compared by De Floriani et al. [63]. A number of other simplification methods have been proposed for the more general case of free-form surfaces. For surveys on such methods, see [135,226].

4.4. Multiresolution terrain models

The approximation algorithms reviewed in the previous section are usually time intensive: the construction of an approximate TIN from a dataset of 100K points can take minutes, or even hours, on state-of-the-art workstations, depending on the algorithm used. On the other hand, many applications need to access and process terrain models at different accuracies in a real-time context. For example, in a flight simulator, a terrain must be rendered at high detail only close to the observer. Since the viewpoint is continuously moving, the required accuracy changes from frame to frame, and the system cannot wait for a suitable terrain approximation to be recomputed from scratch each time. The ability to provide a representation whose accuracy is variable over the domain is often called selective refinement. A so-called LOD (level-of-detail) model is a simple sequence of approximate representations at increasing levels of detail. LOD models are standard technology in graphics languages and packages, such as OpenInventor™ [209,288] and VRML [277], and are used to improve the efficiency of rendering on the basis of distance from the observer. However, such models cannot support selective refinement. In order to overcome such limitations, more sophisticated multiresolution terrain models have been developed in the literature. A multiresolution model must:
• perform selective refinement for any given accuracy in short (real) time;
• always provide conforming meshes, i.e., avoid cracks due to abrupt transitions between different levels of detail within a mesh;
• have a size not much higher than the size of the model at full resolution.

In the following, we describe a general framework, called a Multiresolution Mesh (MM) [62,224]; multiresolution models proposed in the literature will then be reviewed as instances of such a framework. Existing multiresolution models based on domain decomposition are all obtained from an initial model, which is progressively modified by a generalization algorithm (either by refinement or by simplification). In a straightforward approach, a modification simply replaces a whole terrain representation with another one. More general modifications are local, i.e., they affect only a small part of the surface. Figure 2 shows a sequence of triangulated grids generated through incremental refinement/simplification, and the sequence of modifications corresponding to the refinement process (including the initial grid).

Fig. 2. The sequence of local modifications generated during the incremental refinement/simplification of a triangulation.

Two successive modifications can be either independent, if they affect disjoint parts of the mesh, or the second modification may depend on the first one, if some of the grid elements introduced by the first one are removed by the second one. Dependency means that the second modification cannot occur if the first one has not occurred before. Each local modification can be represented by the set of mesh elements it introduces, which is called a component. Dependency relations define a partial order on the set of components, which can be represented by a DAG (see Figure 3). The collection of such components, plus the partial order, forms a Multiresolution Mesh (MM), which is at the heart of any multiresolution model. An MM has several interesting properties that have been investigated in detail in [224]. In particular, any consistent (i.e., dependency-closed) subset of components defines a mesh representing the terrain at a given level of detail: such a mesh is obtained by performing the corresponding local modifications in any temporal sequence consistent with the dependency relation. Moreover, any possible mesh made of elements belonging to components of the MM can be obtained from some consistent subset. Therefore, selective refinement on an MM consists of finding the consistent subset that is most suitable for a given LOD.
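A minimal sketch of selective refinement on an MM follows: components carry an error value and their dependency arcs, and a dependency-closed ("consistent") subset is gathered by a traversal that refines every component whose error exceeds the threshold. The Component class and the single scalar threshold are illustrative assumptions, not the structure of any specific published model; the mesh itself would then be obtained by performing the collected modifications in any order consistent with the dependencies.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)   # identity-based hashing, so components can go in sets
class Component:
    error: float                                   # error of the triangles it introduces
    parents: list = field(default_factory=list)    # components it depends on
    children: list = field(default_factory=list)   # components that depend on it

def selective_refinement(root, threshold):
    """Gather a dependency-closed set of components meeting the threshold."""
    chosen, stack = set(), [root]
    while stack:
        c = stack.pop()
        if c in chosen:
            continue
        chosen.add(c)
        stack.extend(c.parents)    # closure: every component needs its ancestors
        if c.error > threshold:    # still too coarse for the target LOD: refine
            stack.extend(c.children)
    return chosen
```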


Fig. 3. The MM corresponding to the sequence of local updates in Figure 2.

We generically refer to the capability of an MM to provide a larger or smaller number of consistent subsets (hence, different meshes) as its expressive power. The internal structure of a specific MM determines its expressive power: an MM with few large components with many mutual dependencies will have a much lower expressive power than an MM with many small components with few mutual dependencies. If an MM is obtained through refinement, any local modification increases the resolution (and size) of the mesh locally; in this case the MM is said to be increasing. Even if an MM is built through simplification, it is possible to reverse the modifications composing it, in order to obtain an increasing MM. All multiresolution models proposed in the literature can be interpreted as increasing MMs: in practice, every model is characterized by the special rules used for generating the local updates of the MM, by the data structures adopted, by the operations supported, and by the efficiency of the accessing algorithms. However, viewing a model as an MM helps in understanding its expressive power independently of the specific data structures and algorithms. Multiresolution models can be subdivided into two major classes, according to their structure:
• tree-like models are based on nested subdivisions; in order to give an interpretation of such models as MMs, the corresponding tree structure must be suitably translated into a DAG;
• evolutionary models directly store the evolution of a mesh through a refinement/simplification process; since such models are based on the concept of local modification, they are all straightforward specializations of the MM.

Tree-like models. Early hierarchical models are based on the recursive subdivision of a domain either into four quadrants (quadtree) [242], or into four equilateral triangles (quaternary triangulation) [77,118,120]. The resulting hierarchy is represented by a quaternary tree.


The main drawback of such models is their poor expressive power, due to the impossibility of combining elements from different levels of the tree into a conforming mesh. A surface corresponding to a non-conforming mesh is affected by cracks between elements from different levels. If a quadtree (or a quaternary triangulation) is regarded as a special instance of an MM, it is easy to see that all nodes at each level of the tree must be clustered to form a single component, hence giving a pyramid of full regular grids. The basic problem with these models is that all edges of a region are split when the region is refined: in order to avoid cracks in the surface, the refinement of one region enforces the refinement of its neighbors, and so on, propagating through the whole domain. Von Herzen and Barr [278] proposed a method to build conforming meshes from a quadtree that works in two steps: the extracted mesh is locally balanced first, by allowing adjacent elements to differ by no more than one level in the tree; next, each element is triangulated through a fixed pattern that depends on the levels of its neighbors. Lately, other authors have worked on similar concepts: hierarchies of right triangles are based on the recursive split of a right triangle into two, obtained by joining the midpoint of its longest edge to the opposite vertex [73,90,134,178] (a minimal sketch of this split is given below). The hierarchy is described by a binary tree, and it can be interpreted as an MM by clustering triangles at the same level that are adjacent along their longest edges: components of the MM are either squares or diamonds, each formed by four adjacent triangles, except for border components. Because the resulting MM is made of a high number of small components, it has a high expressive power. This model has been used successfully for rendering terrains in flight simulation through selective refinement [73,90,178]. All models based on regular subdivisions can be encoded by extremely compact and efficient data structures, by exploiting their algebraic properties (see [90,134,242] for details). On the other hand, such models can be used only if data points are distributed on a square grid. Other tree-like models have been developed, based on irregular triangulations, which can work on arbitrary datasets, and can achieve a better ratio between the size and the accuracy of an extracted mesh, because no constraint is imposed on the vertex distribution [61,246]. A triangle is refined by inserting a non-fixed number of points, either inside it, or on its edges, on the basis of an error-driven refinement criterion. Edges that survive across different levels of the hierarchy make it possible to combine surface patches from different levels of the tree. Components of an MM corresponding to one such model are obtained by clustering nodes that refine adjacent triangles by inserting vertices on their common edges. Each level of the tree is partitioned into a set of components, each bounded by edges that are not refined at that level. In general, a model with many refined edges will lead to an MM with few large components, hence reducing its expressive power. On the other hand, a model with few refined edges may contain many sliver triangles: this gives unpleasant visual effects, and stability problems in numerical computation. In [61], a comprehensive analysis of tree-like models is presented, covering several issues about data structures, neighbor finding, and selective refinement.
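The longest-edge bisection underlying hierarchies of right triangles can be sketched as follows; the error test is a user-supplied stand-in for a real error-driven criterion, and the neighbor splits needed to keep the extracted mesh conforming are only noted in a comment.

```python
def bisect(tri):
    """Split a right triangle tri = (v0, v1, v2) into its two children.

    v0 is the apex (right angle) and v1-v2 is the longest edge."""
    v0, v1, v2 = tri
    m = ((v1[0] + v2[0]) / 2.0, (v1[1] + v2[1]) / 2.0)   # hypotenuse midpoint
    return (m, v0, v1), (m, v2, v0)   # children have their right angle at m

def refine(tri, error, threshold, out):
    """Recursively bisect until the user-supplied error is small enough."""
    if error(tri) <= threshold:
        out.append(tri)               # accept this triangle as a leaf
        return
    left, right = bisect(tri)
    # NOTE: a conforming mesh also requires splitting the neighbor sharing
    # the hypotenuse (the square/diamond component of the corresponding MM).
    refine(left, error, threshold, out)
    refine(right, error, threshold, out)
```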
Evolutionary models. Multiresolution is obtained by recording the evolution of a mesh through a sequence of local modifications (either refinements or simplifications). Interpreting an evolutionary model as an MM is straightforward: each modification step performed by the algorithm generates a new component.


The forerunner of evolutionary models is the Delaunay pyramid proposed in [56]. Such a model is based on a sequence of Delaunay triangulations at increasing resolutions, obtained according to a predefined sequence of decreasing error thresholds. The transition from one triangulation to the next is obtained by iterative refinement [101]. The corresponding MM is obtained by decomposing each such global modification into maximal independent local modifications. The expressive power of the model may be low because it does not trace the insertion of single vertices: the MM may have a relatively small number of large components. The model proposed in [50] is also based on a sequence of Delaunay triangulations, but it is built bottom-up: each level is obtained from the previous one by eliminating an independent set of vertices of bounded degree. The corresponding MM is formed by a large number of components, namely one for each decimated vertex; moreover, the height of the DAG is logarithmic in the number of nodes. This property guarantees a good worst-case complexity for geometric search algorithms such as point location. The data structure is a direct implementation of the DAG. The algorithm for selective refinement proposed in [50] achieves output-sensitive optimal time, but it does not guarantee that the desired threshold will be fulfilled everywhere. Evolutionary models based on a totally ordered sequence of local updates, rather than on a DAG, can be encoded by very compact data structures. In order to perform selective refinement, the basic idea is to scan the list of updates, performing only those relevant to achieving the required accuracy. Unfortunately, a straight scan is not sufficient, since skipping an update may prevent performing some later update that is necessary to fulfill the accuracy; therefore, dependencies between updates must be computed on-the-fly, which involves complex and computationally expensive algorithms. An extremely compact model based on the Delaunay triangulation is proposed in [162]. In this case, a greedy refinement/decimation through on-line insertion/deletion of single vertices into/from a Delaunay triangulation is performed. The data structure stores only the simplest mesh, plus the sorted sequence of vertices inserted to refine it, each tagged with the approximation error of the triangles incident at that vertex at the time of its insertion. A mesh at uniform accuracy can be extracted through an on-line algorithm for Delaunay triangulation that inserts vertices from the sequence until the desired accuracy is obtained. An algorithm for selective refinement is also outlined, which performs complex tests based on the circumcircle criterion to find out vertex dependencies. A similar model, called Progressive Meshes (PM), is proposed in [145], based on a different update strategy. In this case, the basic local modification is the vertex split, which expands a vertex into an edge. As in the previous model, the data structure stores only the simplest mesh, and the sequence of vertex splits. Extraction at uniform accuracy is faster than in [162] because no numerical computation is required. In the attempt to obtain efficient algorithms for selective refinement from a PM, more sophisticated data structures encoding also vertex dependencies have been proposed in [146,186,293].
In [293], a progressive mesh is built by collapsing independent sets of edges, following a technique similar to that used in [50].
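In the spirit of the compact evolutionary models just described ([162] and progressive meshes), the following sketch stores only the coarsest vertex set plus a sequence of insertions, each tagged with the error recorded when it was created, and extracts a mesh at uniform accuracy by replaying a prefix of the sequence. For brevity it retriangulates with a Delaunay triangulation, as in the model of [162], instead of performing local vertex splits; all names are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def extract(base_xy, tagged_sequence, accuracy):
    """base_xy: vertices of the simplest mesh;
    tagged_sequence: list of (point_xy, error), sorted by decreasing error."""
    pts = list(base_xy)
    for p, err in tagged_sequence:
        if err <= accuracy:
            break                        # the remaining updates are not needed
        pts.append(p)                    # replay the recorded insertion
    return Delaunay(np.array(pts))       # mesh at the requested uniform accuracy
```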


The Multi-Triangulation (MT) is a straightforward implementation of the MM framework, in which every component is a triangulation [62,224]. An explicit data structure encodes all components of an MT as nodes, and maintains all arcs of the DAG describing the MT: every arc (Ti, Tj) is labeled by the set of triangles of Ti covered by Tj. Each leaf of the DAG is connected to an additional drain node, and each new arc is labeled with the triangles of its source node that survive in the drain node. Based on this arc labeling, the explicit data structure represents triangles on arcs rather than on nodes of an MT. This data structure supports the extraction of a mesh by collecting all triangles labeling arcs in a given cut of the DAG. An algorithm for selective refinement that finds the minimal triangulation for an arbitrary given threshold was described first in [224], and improved in [63]; its time complexity is linear in the size of the visited consistent subset. Independently, a similar algorithm was proposed in [26], achieving similar results. A dynamic algorithm proposed recently [65] makes it possible to update a mesh extracted from an MT after a small change in the threshold function; such an algorithm only visits the portion of the DAG separating the old and the new solution. This algorithm is especially efficient for generating gradually changing levels of detail in an interactive environment (e.g., flight simulation). A cheaper, but less efficient, implicit data structure for MTs built through refinement/simplification of a Delaunay triangulation has been proposed in [64]. In this case, each component consists of a fan of triangles incident at a vertex, and is represented in the structure by storing just that vertex. Implicit data structures can be defined for MTs obtained as progressive meshes, and for the hierarchy of right triangles as well. Although the storage cost is reduced by these structures, empirical results show that the algorithm for selective refinement may run about ten times slower, and the size of the output mesh may be about twice as large [64]. The HyperTriangulation (HyT) proposed in [40] can also be considered an implementation of an MT, but the data structure is completely different. Based on the observation that the boundary edges of a component T coincide with those of the triangulation covered by T, components in a hypertriangulation are interpreted in a three-dimensional space, where the third dimension corresponds to a resolution axis. Each component forms a "dome" in such a space, resting on the components it covers through its boundary edges. The data structure stores the triangles of the MT, and the triangle adjacencies in the complex formed by such domes. Adjacency information supports algorithms based on domain traversal. Such an algorithm is used to perform selective refinement for threshold functions that are monotonically increasing with the distance from a given viewpoint. The time complexity in the worst case is O(n log n), where n is the size of the HyT. Empirical results show that the extraction algorithm processes about 100K triangles per second on state-of-the-art workstations.

Methods based on wavelets. In the literature, other kinds of multiresolution models have also been proposed, which follow a functional approach rather than a geometric one.
The basic idea is that a function can be decomposed into a simpler part at low resolution, together with a collection of perturbations, called wavelet coefficients, which define its details at progressively finer levels of resolution.
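A one-dimensional Haar transform of a terrain profile illustrates the idea (terrain wavelets actually operate on 2D regular subdivisions): each step splits the signal into a coarser part and a set of detail (wavelet) coefficients. The code is an illustrative sketch, not the construction used in the references above.

```python
import numpy as np

def haar_step(signal):
    """One analysis step: pairwise averages and differences."""
    a = (signal[0::2] + signal[1::2]) / 2.0   # low-resolution part
    d = (signal[0::2] - signal[1::2]) / 2.0   # wavelet (detail) coefficients
    return a, d

def haar_decompose(signal):
    """Full decomposition of a profile whose length is a power of two."""
    details = []
    while len(signal) > 1:
        signal, d = haar_step(signal)
        details.append(d)
    return signal, details   # coarsest value plus details, fine to coarse

coarse, details = haar_decompose(np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]))
```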


Wavelets have been widely used for multiresolution representation and compression of signals and images (see, e.g., [259,260] for a survey), while their application to surfaces is quite recent (see [181] for a survey, and [122] for an application to terrain surfaces). The discrete computation of wavelets requires a recursive subdivision of the domain into regular shapes, like equilateral triangles or squares. Therefore, the resulting hierarchies correspond to either quaternary triangulations or quadtrees. The use of tools from numerical analysis is predominant, while computational geometry techniques do not play a relevant role in this context.

5. Terrain analysis

Complex analyses of terrain models include [286]:
• computation of visibility maps, and solution of visibility-related problems;
• terrain generalization, and feature extraction;
• computation of watersheds, and drainage networks;
• realistic visualization;
• path planning.
Terrain generalization issues have been treated in Section 4.3 for building approximated and multiresolution models; related issues will also be considered in Section 5.2 on topographic features. Rendering of DTMs and related information includes procedures for orthographic display (contours, hillshading, etc.), perspective display, as well as advanced visualization techniques (interactive scene navigation, animation, etc.). Terrain visualization issues are related to viewshed computation problems (see Subsection 5.1). Also, some modeling issues related to effective real-time rendering of a DTM have been discussed in Section 4.4: approximate and variable-resolution terrain models directly support compression for efficient rendering, and animation.

5.1. Visibility

Visibility problems are concerned with the computation of visibility information from a viewpoint, which can lie outside or inside the domain, or with the use of visibility to solve optimization problems (see also Chapter 19). Examples of visibility computations are finding the horizon, or computing the visible portions of the surface from a given viewpoint; in the latter case, the problem is a special case of Hidden Surface Removal (HSR) for terrains. Visibility computation algorithms are the basis for solving optimization problems. Examples of optimization problems related to visibility are finding the minimum number of towers of a given height necessary to view an area of the terrain, or finding a minimum path with specified visibility characteristics (e.g., hidden paths, scenic paths). Applications include the location of fire towers, radar sites, radio, TV or telephone transmitters, path planning, navigation and orientation (see [203] for a survey). Visibility computation problems can be classified into:
• visibility queries, which consist of determining whether a given entity located on the terrain is visible from a viewpoint, and possibly which portions of it are visible;
• computation of visibility structures, which provide information about the visibility of the terrain itself, i.e., which portions of the terrain are visible, or invisible, from a given viewpoint; knowing suitable visibility structures for a terrain also helps in answering visibility queries.
A visibility query related to a point Q simply requires determining whether Q is visible or not. For a non-point query object (a line, a region) one may ask either for a Boolean answer (e.g., "visible" meaning "at least partially visible"), or for a partition of the object into visible and invisible portions. The basic visibility structure for a terrain is the viewshed, which is the collection of the surface portions visible from a viewpoint V. Another visibility structure is the horizon of a viewpoint V, which informally corresponds to the 'distal boundary' of the viewshed. Such reduced information can replace the viewshed in some applications, with the advantage of lower storage costs. More precisely, the horizon determines, for every radial direction around V in the x-y plane, the farthest point on the terrain that is visible from V. The result of visibility computations is affected by the interpolation conventions adopted in the underlying terrain model. On TINs, piecewise-linear interpolation over each triangle is the standard choice. On RSGs, different GIS packages may use bilinear functions, step functions, or other interpolation conventions; a common approach is to consider a linear approximation of the edges, while disregarding the interior of the cells. Therefore, results can differ even between different implementations of the same algorithm [96]. The remainder of this subsection contains a survey of visibility algorithms. Algorithms working on TINs receive special attention, since they provide more interesting examples of applications of techniques from computational geometry.

Visibility query algorithms. The simplest visibility query problem consists in determining the mutual visibility of two points P and Q on a terrain. In a "brute-force" approach, this reduces to finding either the terrain edges (for a TIN), or the grid cells (for an RSG), intersected by the vertical plane passing through the segment PQ. For each intersected element (edge or cell) e, a test is performed to decide whether e lies above PQ, and the two points are reported as not visible in case of a positive answer for at least one such test (a minimal sketch of this test on an RSG is given below). A point visibility query can also be regarded as a special instance of the ray shooting problem on a polyhedral terrain. Given a polyhedral terrain T, a viewpoint V and a view direction (θ, α), the ray shooting problem consists of determining the first face of T hit by a ray emanating from V with direction (θ, α). A method for answering repeated ray shooting queries from a fixed viewpoint on a terrain has been proposed by Cole and Sharir [45], with a logarithmic query time. They build a balanced binary tree, in which every node stores a partial horizon, computed for a subset of the edges of the terrain. A ray shooting query, represented by a view direction (θ, α), reduces to traversing a path on such a tree, driven by a height comparison between the given ray and the horizon stored in the currently visited node. The data structure has size O(nα(n) log n), where α(n) is the inverse of Ackermann's function (which is almost constant).
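A minimal sketch of the brute-force point-to-point test on an RSG follows: the line of sight is sampled and compared against the terrain elevation below it. Sampling at a fixed number of steps is an illustrative shortcut; a faithful implementation visits exactly the cells crossed by the ray.

```python
import numpy as np

def visible(dem, p, q, samples=200):
    """dem: 2D array of elevations; p, q: (row, col, height) endpoints."""
    (r0, c0, h0), (r1, c1, h1) = p, q
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        terrain = dem[int(round(r)), int(round(c))]   # nearest-cell elevation
        sight = h0 + t * (h1 - h0)                    # height of the sight line
        if terrain > sight:
            return False                              # line of sight is blocked
    return True
```

A discrete viewshed is then obtained by running such a test from a fixed viewpoint to every cell of the grid.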
Viewshed algorithms. On RSGs, the viewshed is usually represented in a discrete way, by marking each grid cell as visible or invisible [96]. Extended viewsheds are often considered, i.e., viewsheds enriched with additional information: for example, tagging each grid element with the number of viewpoints from which it is visible.


Although a discrete viewshed is sometimes considered also for a TIN [175], viewshed computation on a TIN is usually performed by computing the visible portions of the individual triangles. The continuous approach is not used on RSGs for two reasons: the size of the grid, and the complexity of geometric computations involving quadratic interpolating functions. Visibility for an RSG cell can be computed easily by walking from the viewpoint to the given element, along the elements intersected by the line of sight, until either visibility is blocked, or the target is reached. This method performs redundant computations, since rays to different cells of the grid may partially overlap. Van Kreveld [271] proposes a sweep-line approach that computes discrete viewsheds on a √n × √n RSG in O(n log n) time. Visibility computation on RSGs is conceptually simple, but it becomes computationally intensive due to the size of the grid. Viewshed computation on a TIN is related to the Hidden Surface Removal (HSR) problem in a three-dimensional scene. The general quadratic upper bound to HSR for a polyhedral scene [247] applies also to the special case of a polyhedral terrain, thus giving a worst-case space complexity of Θ(n²) for the viewshed of a TIN with n vertices. The HSR problem for a three-dimensional scene has been extensively studied in the literature [49,158,191,211,247]. Some specific algorithms for polyhedral terrains have been developed [222,229]. Another approach reduces HSR to computing the upper envelope of a set of disjoint polygons in space [23,69,80]. The upper envelope of a set of polygons in 3D defines a partition of the x-y plane into maximal connected regions, each of which is labeled with a polygon: the polygon labeling a region R is the polygon with maximum height over R. The visible image of a polyhedral terrain with respect to a viewpoint V is equal to the upper envelope of the set of its faces, whose height is taken with respect to their distance from the viewplane. Computing the viewshed on a TIN thus reduces to computing the upper envelope of a set of semi-disjoint triangles. The approaches used in existing algorithms for viewshed, HSR, or upper envelope computation can be classified as follows:
• an approach in which the faces are processed in front-to-back order from the viewpoint;
• a divide-and-conquer approach;
• a sweep-line approach;
• an on-line approach, which processes faces incrementally in any order;
• an approach based on special data structures for answering ray shooting queries.
Some algorithms also combine different approaches. In the following, we provide a brief survey of existing algorithms, according to the above classification; the reviewed algorithms are summarized in Table 2. The front-to-back approach is the most popular method for performing HSR in a scene. A scene is front-to-back sortable with respect to a viewpoint if there exists an ordered sequence of its faces such that no face coming later in the sequence can obscure a face preceding it. Not all scenes are sortable: terrains are always sortable for viewpoints lying outside the domain, while Delaunay-based TINs are guaranteed to be sortable also for internal viewpoints [57]. There are standard techniques for making a generic PTM sortable by splitting some of its faces.
Most front-to-back algorithms process faces in increasing distance order from the viewpoint, and maintain at each step the "contour" of the current visible image (i.e., the boundary of the union of the projections of the faces processed so far).


Table 2
Major existing algorithms for region visibility computation

Algorithm    Approach                             Input      Sortable
[80]         Divide-and-conquer                   Triangles  No
[158]        Divide-and-conquer, front-to-back    Polygons   Yes
[222,229]    Front-to-back                        PTM        Yes
[208,247]    Sweep-line                           Polygons   No
[121]        Sweep-line                           Polygons   Yes
[49]         Sweep-line, ray shooting             Polygons   No
[211]        Back-to-front, incremental           Triangles  Yes
[69]         On-line                              Triangles  No
[49]         On-line, ray shooting                Polygons   No

When a new face is projected onto the viewplane, its edges are tested for intersections only against the edges of the contour. When the scene is a terrain, the contour is the same as the horizon, restricted to the subset of terrain faces that have already been examined. The algorithms by Reif and Sen [229], and by Preparata and Vitter [222], are based on this approach, and run in O((n + k) log² n) time, where n is the size of the terrain model, and k is the size of the computed visible image. The algorithm proposed by Overmars and Sharir in [211] represents a very effective method for computing the visible image of a sortable set of triangles. This algorithm combines the incremental and the divide-and-conquer approaches: triangles are added in back-to-front order, in groups, and at each step the visible image of the new set of triangles is merged with the old visible image. The algorithm works in O(n√c log n + k) time, where c is the maximum contour size; for a polyhedral terrain, c = O(nα(n)), and thus we have an O(n√(nα(n)) log n + k) complexity. The divide-and-conquer approach includes the worst-case optimal algorithm by Edelsbrunner, Guibas and Sharir [80], which computes the upper envelope of a set of n disjoint triangles in optimal O(n²) time, and can be used for determining the viewshed on a TIN. The method proposed by Katz, Overmars and Sharir [158] applies a divide-and-conquer strategy to a sortable three-dimensional scene to achieve an output-sensitive complexity. A balanced binary tree is built, whose leaves correspond to the faces (the left-to-right order on the leaves reflects the front-to-back order on the corresponding faces), and every internal node represents the union of the faces stored in its subtree. Then, the visible portions of the object associated with each node are computed during a traversal of the tree; at the end of the traversal, the visible portions of each face are found in the leaves of the tree. The time complexity is output-sensitive and, for a sortable PTM, it is equal to O((nα(n) + d) log n), where n and d are, respectively, the input and the output size. Sweep-line algorithms first project the whole scene onto the viewplane, and then traverse it by moving a vertical line from left to right. The time complexity is O((n + k) log n), where k is the number of intersection points between terrain edges projected onto the viewplane [121,208,247]. The algorithm by de Berg [49] computes the viewshed on a TIN in an output-sensitive way, by combining a sweeping technique with the use of an efficient data structure for answering ray shooting queries.


On-line incremental algorithms operate on a generic set of polygons or triangles, without any assumption about properties of the scene. An on-line randomized algorithm for computing the upper envelope of a set of triangles in space (and thus the viewshed of a TIN) has been proposed by Boissonnat and Dobrindt [23]. The expected time complexity is O(n² log n) for disjoint triangles.

Horizon algorithms. The horizon of a viewpoint on a PTM is equal to the upper envelope of the set of segments obtained by projecting the terrain edges onto a viewplane. The upper envelope of a set of segments in the plane (in our case, the viewplane) is a piecewise-linear, possibly non-continuous line formed by the collection of points belonging to some of the segments and lying below no other segment in the given set. The size of the upper envelope of p segments in the plane is Θ(pα(p)) [45]. Thus, the complexity of the horizon of a polyhedral terrain with n vertices is O(nα(n)). Computing the horizon of a viewpoint on a polyhedral terrain reduces to the computation of the upper envelope of a set of possibly intersecting segments in the plane. Upper envelope algorithms have been proposed based either on a divide-and-conquer or on an incremental paradigm. The classical divide-and-conquer scheme [10] runs in O(pα(p) log p) time. A more sophisticated technique, which applies special care in splitting the set of segments into subsets with certain characteristics, achieves an optimal O(p log p) computational complexity [137]. A simple incremental approach has a complexity of O(p²α(p)), since adding the i-th segment can cause the modification of each of the iα(i) intervals of the current envelope, while a randomized incremental algorithm proposed in [60] has an expected time complexity of O(pα(p) log p) for inserting all the given segments in random order.
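To make the definition concrete, here is a naive sketch of the upper envelope of segments in the viewplane: the x-axis is cut at all endpoints and pairwise crossings, and the topmost segment is recorded over each resulting interval. This runs in roughly cubic time, in contrast with the O(pα(p) log p) and O(p log p) algorithms cited above; it is meant only as an illustration.

```python
import itertools

def y_at(seg, x):
    (x1, y1), (x2, y2) = seg
    if x < x1 or x > x2:
        return None                       # segment does not span abscissa x
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def upper_envelope(segments):
    """segments: list of ((x1, y1), (x2, y2)) with x1 < x2 (no vertical segments)."""
    xs = {x for (x1, _), (x2, _) in segments for x in (x1, x2)}
    for a, b in itertools.combinations(segments, 2):
        (ax1, ay1), (ax2, ay2) = a
        (bx1, by1), (bx2, by2) = b
        ma = (ay2 - ay1) / (ax2 - ax1)
        mb = (by2 - by1) / (bx2 - bx1)
        if ma != mb:                      # non-parallel: add crossing abscissa
            xs.add((by1 - ay1 + ma * ax1 - mb * bx1) / (ma - mb))
    xs = sorted(xs)
    envelope = []
    for left, right in zip(xs, xs[1:]):
        mid = (left + right) / 2.0
        live = [s for s in segments if y_at(s, mid) is not None]
        if live:
            top = max(live, key=lambda s: y_at(s, mid))
            envelope.append((left, right, top))   # topmost segment over [left, right]
    return envelope
```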

Visibility computation on hierarchical models. Visibility computation problems have been addressed in the context of hierarchical terrain models by De Floriani and Magillo [66]. In this framework, two issues have been considered: the computation of visibility information related to a terrain representation at a certain resolution level, and the update of such information as the required resolution changes. Distance-based and sweep-line viewshed algorithms can work on a hierarchical terrain model by performing a traversal of its tree-like structure. Dynamic algorithms can be used to update viewsheds and horizons after local changes in the level of detail of a terrain representation (i.e., the replacement of a subset of terrain faces and edges with a finer/coarser description) [69,66].

Visibility-related problems. Examples of interesting application problems on a terrain, which can be solved based on visibility information, are:
• problems requiring the placement of observation points on a topographic surface according to certain requirements;
• line-of-sight communication problems;
• problems regarding the computation of visibility paths on a terrain, with certain properties.

Puppo and Marzano [225] investigate the connection between visibility-related problems in a discrete setting and classical problems in graph theory: they show that graph algorithms can provide efficient and practical solutions to many visibility-related problems belonging to all three classes listed above. Viewpoint placement problems require placing several observation points on a terrain in such a way that each point of the terrain (or of a focus area on it) is visible from at least one observation point. Applications include the location of fire towers, artillery observers, and radar sites. For a single observation point, algorithms running in polynomial time are known. If the height of the viewpoint is fixed, an existing solution can be determined in O(n log n) time, while the point of lowest elevation from which the entire terrain is visible can be determined in O(n log² n) time [252], on a polyhedral terrain model with n vertices. Recently, Zhu [295] improved this complexity to O(n log n). The more general problem of determining the smallest number of observation points that collectively see the entire terrain is known as the guard allocation problem, and it is usually addressed by constraining viewpoints to be placed at the vertices of a polyhedral terrain. The problem is equivalent to set-covering, and only algorithms of exponential complexity are known for solving it exactly. Several heuristic algorithms to minimize the number of viewpoints, with fixed or variable height, are discussed in [175]; in the same paper, the inverse problems are also considered, that is, finding the optimal locations of viewpoints that maximize the visible area, assuming that the number and, possibly, the height of the viewpoints are fixed. Bose et al. [24] reduce the guard allocation problem to a graph coloring problem, and provide efficient heuristics to place viewpoints located at the vertices or on the edges (a minimal sketch of a greedy placement heuristic is given at the end of this subsection). Line-of-sight communication problems consist of finding a visibility network connecting two or more sites, such that every two consecutive nodes of the network are mutually visible. Applications are in the location of microwave transmitters for telephone, FM radio, television, and digital data networks. Usually, sites are restricted to be at terrain vertices. Finding the minimum number of relay towers necessary for line-of-sight communication between two sites can be formulated as a shortest path search in a graph whose nodes are terrain vertices and whose arcs represent pairs of mutually visible vertices, called a visibility graph. The problem of constructing a line-of-sight network between several sites is addressed in [59] by reducing it to the computation of a minimum Steiner tree on the visibility graph; memory requirements are kept low by computing the arcs of the graph on-line. Visibility paths can be defined on a terrain, with application-dependent visibility characteristics. A smuggler's path is a shortest path, connecting two given points, such that no point on the path is visible from a predefined set of viewpoints. Conversely, a path where every point can be seen from all viewpoints is known as a scenic path [203]. Simple solutions can be obtained on a TIN by restricting the viewpoints to be vertices, and the path to pass along edges. The solution (if one exists) can be determined by first computing the vertices that are visible/invisible from all the viewpoints, and then applying a standard shortest path algorithm to the edges connecting them.
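A minimal sketch of a greedy heuristic for the guard allocation problem follows: given a precomputed Boolean visibility relation (vertex v sees triangle t, as produced by a viewshed algorithm), the vertex covering the largest number of still-unseen triangles is picked repeatedly. This is the standard greedy set-cover heuristic, shown here as an illustration rather than any of the specific heuristics of [175] or [24].

```python
def place_viewpoints(sees):
    """sees: list of sets; sees[v] = indices of triangles visible from vertex v."""
    uncovered = set().union(*sees)
    chosen = []
    while uncovered:
        # pick the vertex seeing the most still-uncovered triangles
        v = max(range(len(sees)), key=lambda i: len(sees[i] & uncovered))
        gain = sees[v] & uncovered
        if not gain:
            break                 # remaining triangles are seen by no vertex
        chosen.append(v)
        uncovered -= gain
    return chosen
```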


5.2. Topographic features

Topographic features are special points, lines and regions that have a specific meaning for describing the shape of the terrain: they correspond to local differential properties of the terrain surface. A point of the surface belongs to some characteristic class depending on the structure of the surface in its neighborhood. Point features are peaks, pits, and passes. A peak is a point of relative maximum, a pit is a point of relative minimum, and a pass is a saddle point of the surface, i.e., a point where there is a local maximum in one direction and a local minimum in a different direction. Lineal features are ridges and valleys. A ridge is a curve consisting of ridge points: a point lies on a ridge if its neighborhood can be subdivided by a line passing through it, such that the surface in each half-neighborhood is monotonically decreasing when moving away from the line. A ridge occurs where there is a local maximum of the surface in one direction; a valley occurs where there is a local minimum in one direction. Based on point and lineal features, Wolf [289] proposes the use of weighted surface networks for terrain description. A weighted surface network is a graph whose vertices and edges are point and lineal features, respectively; each edge is weighted with the height difference between its two endpoints. Wolf also suggests a method for contracting such networks, thus providing a tool for terrain generalization as well. Areal features are maximal regions of the terrain where the surface is flat, convex or concave. Such features are related to the curvature of the surface along two independent directions; for smooth surfaces they depend on the sign of the second derivatives. According to this classification, any point on a terrain can be labeled as belonging to one of the above features by applying tools from differential geometry. An interesting classification of topographic features can be found in [283]. In the discrete case, algorithms are essentially based on the analysis of the neighborhood of the points of the domain. In the literature, most methods have been developed for RSGs with the stepped model, because the neighborhood of each pixel is directly implied by the grid structure: the interpolation is commonly assumed to be constant on each grid cell, which is actually treated as a pixel in an image. The classical algorithm by Peucker and Douglas [217] classifies each pixel based on the height differences from its eight neighbors in the grid (a minimal sketch is given below). In [254] the method is improved to find also ridges and valleys wider than one pixel. Methods developed in the image processing literature can also be used [132,163,169]. Falcidieno and Spagnuolo [91] consider a virtual triangulation of the grid, where each cell is subdivided into four right triangles, and apply a method suitable for TINs (see below). McCormack et al. [190] identify depressions and plateaus by propagating "seeds" consisting of connected regions of pixels with the same elevation, bounded by pixels with higher and lower elevations, respectively. Watson et al. [283], and Haralick [132], use RSGs with surface interpolation techniques. In the case of TINs, point and lineal features are vertices and chains of edges of the underlying triangulation, respectively. Areal features are connected collections of triangles defining convex, concave or flat regions. A characterization of features on a TIN can be found in [103].
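A minimal sketch of eight-neighbor classification in the style of Peucker and Douglas follows: a grid point is a peak if it is higher than all its neighbors, a pit if it is lower than all of them, and a candidate pass if the sign of the height difference changes at least four times while walking around it. The pass test is a simplified illustration of the saddle criterion.

```python
def classify(dem, r, c):
    """Classify interior grid point (r, c) of the 2D elevation array dem."""
    around = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]     # 8 neighbors, in circular order
    diffs = [dem[r + dr, c + dc] - dem[r, c] for dr, dc in around]
    if all(d < 0 for d in diffs):
        return "peak"
    if all(d > 0 for d in diffs):
        return "pit"
    signs = [d >= 0 for d in diffs]
    changes = sum(signs[i] != signs[i - 1] for i in range(8))
    return "pass" if changes >= 4 else "regular"    # saddle: 4+ sign changes
```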
An algorithm for surface characterization on a TIN, based on the local analysis of the dihedral angles formed by adjacent triangular patches, is proposed in [91]. Each TIN edge is labeled as convex, concave or flat depending on the slopes of its two incident triangles. Then, a triangle can be labeled based on the types of its edges (e.g., a triangle is convex if its three edges are convex), and convex, concave and flat surface regions are computed by aggregating triangles labeled in the same way. A compact representation of a TIN, obtained by merging triangles into regions of uniform curvature, is proposed as well.
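A minimal sketch of the edge labeling step follows; the epsilon tolerance is a simplifying assumption of this sketch, standing in for an exact predicate.

import numpy as np

def label_edge(p, q, a, b):
    """Label TIN edge (p, q) as 'convex', 'concave' or 'flat'.

    a and b are the vertices opposite the edge in its two incident
    triangles. The apex b is tested against the plane of triangle
    (p, q, a), oriented upwards: b below the plane means the surface
    bends convexly across the edge (a ridge-like fold), b above means
    concavely (a valley-like fold).
    """
    p, q, a, b = map(np.asarray, (p, q, a, b))
    n = np.cross(q - p, a - p)          # normal of triangle (p, q, a)
    if n[2] < 0:                        # orient the normal upwards
        n = -n
    s = np.dot(n, b - p)                # signed side of b w.r.t. the plane
    eps = 1e-9 * np.linalg.norm(n)
    if s < -eps:
        return 'convex'
    if s > eps:
        return 'concave'
    return 'flat'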

5.3. Drainage networks

Terrain drainage characteristics provide information on water resources, flood areas, erosion, and natural resource management. Manual quantification of terrain drainage characteristics is a tedious and time-consuming task; thus, efforts have been undertaken to compute such characteristics from a terrain model. On a terrain, water flows downstream according to gravity and collects into streams, which join and form rivers. The collection of all streams and rivers forms the drainage network of the terrain. A drainage network can be seen as a forest in which the arcs are directed towards the pits. Hence, the computation of a drainage network is strongly related to the computation of the point and lineal features of the underlying terrain (see [294] for a survey). If we consider a terrain as a continuous surface, a drainage network can be characterized in terms of differential geometry (see [163,164]).

There is no generally accepted definition of the drainage network on an RSG. As noted by Yu et al. [294], the network is defined as the result of an algorithmic process. As for feature extraction, local methods based on image processing techniques are applied to RSGs [72,276]. Mark [188] proposes a global approach: all cells are sorted in order of decreasing elevation and, initially, each cell is assigned one unit of fluid. Each cell, in order, adds its fluid to the lowest of its eight neighbors; at the end, cells receiving more than a certain quantity of fluid are declared to be on the drainage network (a sketch is given below). In this way, the accumulation of water flowing on the terrain is also modeled. Quantifying the accumulation of water is also useful for identifying drainage basins, i.e., surface regions formed by pixels whose water flow reaches the same pit. The general assumption is that water flows from higher to lower heights. Flat regions and plateaus cause difficulties in water flow computation, because the direction of flow is not well defined there. Similar problems occur at depressions: in a real terrain, water cannot accumulate indefinitely, so it is necessary to determine the overflow points and directions for a depression. Many algorithms use simple heuristics based on local modifications of the flow [150,200,143]. McCormack et al. [190] note that the local modification of a drainage network without knowledge of its overall structure may have unpredictable effects. Therefore, they propose a feature-based approach to the computation of drainage networks and basins on an RSG, which identifies inflow and outflow points for each depression and plateau, and defines the flow through the flat region as a line connecting them.
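A compact sketch of Mark's global heuristic on an elevation grid follows; flats and depressions receive no special treatment here, which is a simplification of the discussion above.

import numpy as np

def flow_accumulation(z, threshold):
    """Drainage cells of an elevation grid, following Mark [188] (a sketch).

    Every cell starts with one unit of fluid; cells are processed in
    order of decreasing elevation, each passing its accumulated fluid
    to the lowest of its eight neighbors. Cells whose final accumulation
    exceeds `threshold` are reported as drainage cells.
    """
    rows, cols = z.shape
    acc = np.ones_like(z, dtype=float)
    order = np.argsort(z, axis=None)[::-1]        # decreasing elevation
    for flat in order:
        i, j = divmod(flat, cols)
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di or dj) and 0 <= i + di < rows and 0 <= j + dj < cols]
        lo = min(nbrs, key=lambda c: z[c])
        if z[lo] < z[i, j]:                       # pass fluid strictly downhill
            acc[lo] += acc[i, j]
    return acc > threshold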

On a TIN, a definition of the drainage network as the collection of all valley edges has been proposed by Frank et al. [103]. This definition has the drawback of possibly producing interrupted streams, because the actual location of a stream may not coincide with a triangle edge of the TIN. In addition, there is no concept of flow, so basins cannot be modeled. Yu et al. [294] introduce a broader definition of the drainage network on a TIN, which also allows streams to pass through the interior of a triangle. Their definition is based on the following four assumptions: (i) at any point, water follows the direction of steepest descent; (ii) watercourses can merge (thus the drainage network is a forest); (iii) watercourses end only at local minima; (iv) at any point there is a unique direction of steepest descent. The drainage network is defined as the collection of those points of the terrain that receive water from a region whose area exceeds a certain threshold value. They propose an algorithm for computing the network which runs in O(n + k) time, where n is the size of the TIN and k the size of the drainage network. In [52] it has been proved that the complexity of the drainage network as defined in [294] may be as high as Θ(n³) in a TIN with n vertices. By contrast, the drainage network defined according to Frank et al. [103] has linear space complexity.
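Assumption (i) is easy to realize within a single triangle, where the surface is planar and the descent direction is constant. The following is a minimal sketch of such a helper (a hypothetical illustration, not part of the algorithm of [294]):

import numpy as np

def steepest_descent_direction(p0, p1, p2):
    """Unit xy-direction of steepest descent on triangle (p0, p1, p2).

    On the plane z = a*x + b*y + c the horizontal gradient is (a, b),
    so water flows in the xy-direction -(a, b). Returns None for a
    horizontal triangle, where no descent direction exists.
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    n = np.cross(p1 - p0, p2 - p0)        # plane normal
    if n[2] == 0.0:
        raise ValueError("vertical triangle: slope undefined")
    grad = -n[:2] / n[2]                  # (a, b) = -(nx, ny) / nz
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return None                        # flat triangle
    return -grad / norm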

5.4. Path problems

In this subsection, we deal with path problems on a terrain model (for path computation on a map, see Subsection 3.5). Refer also to Chapter 15. Different optimality criteria for paths on a terrain can be defined [268]: measures of the quality of a path include its Euclidean length, the maximal height along the path, the total height difference, and the maximal slope. Several results have appeared in the computational geometry literature on the shortest geodesic path problem. In particular, algorithms have been proposed for computing such a path on convex [201,251] and non-convex [34,195] polyhedra. The best result is due to Chen and Han [34], who propose an O(n²) algorithm. Since polyhedral terrains are approximations of real surfaces, simpler and more practical algorithms which find an approximation of the optimal solution are sufficient, and indeed more desirable. Approximation algorithms for convex polyhedra are described in [3,35,133,170], and for general polyhedra in [5].

Lanthier et al. [170] consider the problem of computing an approximate shortest path between two points on a TIN when each face has an associated weight. This is an instance of the weighted region problem discussed in Subsection 3.5. They propose practical algorithms based on the placement of Steiner points on the edges of a polyhedral terrain, and on shortest paths in graphs (a sketch of this construction is given below). A good feature of such algorithms is that execution time and space requirements can be traded for accuracy, so they can be used to build a sequence of shortest paths of increasing accuracy. Experimentally, the complexity of this approach is O(n log n), while a worst case analysis gives O(n²) and O(n² log n) bounds. A heuristic technique based on discarding the parts of the terrain through which the shortest path cannot pass has been proposed in [15]; this approach, however, applies only to regular triangulations, like quaternary ones. In [268], algorithms are presented for the other quality measures discussed at the beginning of this subsection. In [54], algorithms are presented for answering path queries on a TIN in O(log n) time, after O(n log n) preprocessing. Such algorithms support queries like deciding whether there exists a path between two points whose height decreases monotonically, or whether there is a path joining two points that stays below a given height.
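For intuition, here is a minimal sketch of an edge-subdivision construction in the spirit of [170]; the uniform placement of Steiner points and the complete per-face interconnection are simplifying assumptions of this sketch, not the exact scheme of that paper.

import itertools
from math import dist

def build_steiner_graph(faces, weights, m):
    """Graph for edge-subdivision weighted shortest path schemes (a sketch).

    faces:   list of triangles, each a tuple of three (x, y, z) points
    weights: per-face multiplicative cost
    m:       number of Steiner points placed evenly on each edge

    Nodes are triangle vertices plus Steiner points; inside each face,
    every pair of boundary nodes is joined by a segment whose cost is
    its Euclidean length times the face weight. An approximate weighted
    shortest path is then a Dijkstra run on this graph.
    """
    graph = {}
    def link(u, v, cost):
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))
    for tri, w in zip(faces, weights):
        nodes = list(tri)
        for a, b in ((0, 1), (1, 2), (2, 0)):
            p, q = sorted((tri[a], tri[b]))    # canonical endpoint order, so
            for k in range(1, m + 1):          # the two faces sharing an edge
                t = k / (m + 1)                # generate identical node keys
                nodes.append(tuple(pc + t * (qc - pc) for pc, qc in zip(p, q)))
        for u, v in itertools.combinations(nodes, 2):
            link(u, v, w * dist(u, v))
    return graph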

6. Three-dimensional GIS

Classical GISs deal with just two dimensions: terrains are not true three-dimensional objects, since most of the modeling and reasoning is done in their two-dimensional domain. Adding true three-dimensionality to GIS is a more recent research area, in which computational geometry is directly involved. Three-dimensional capabilities in a GIS are required in cadastral and architectural applications for the representation of buildings, and in geosciences and environmental modeling for representing the structure of either the earth or the atmosphere (e.g., levels of rocks in geology; air flow, temperature and pressure in meteorology).

Three-dimensional geographic data can be handled with techniques inherited from 3D geometric modeling and CAD systems (see, e.g., the books by Mantyla [187] and Mortenson [199]). The shape of man-made objects (buildings, etc.) is usually regular and completely known, and is directly modeled with classical CAD techniques. When representing physical phenomena (e.g., in geology), complex structures are known only at a finite set of sampled points, and interpolation must be used in order to recover a complete model of a three-dimensional area. The situation is similar to the case of terrain modeling, one dimension higher. Jones [153] provides an overview of data models for three-dimensional objects in geology.

While a uniform and dense sampling of two-dimensional domains is easily achieved, three-dimensional sampling is usually sparse because of high costs. Thus, a regular three-dimensional grid would result in a useless waste of memory. Adaptive regular grids, such as octrees and their variants, are more appropriate. The octree [242] is a three-dimensional generalization of the quadtree. It is based on a recursive partition of a cubic universe into nested cubes of different sizes, which can be encoded as a tree of degree eight. Each cubic cell is assigned an attribute value (e.g., a type of rock, a pressure value, etc.). Drawbacks of this representation are the need for a regular distribution of data points, and the rough approximation of the boundaries between zones characterized by different attribute values: this is not admissible when information about the surface separating two different attributes (e.g., the surface between two different types of soil) must be maintained in the 3D model.

Irregular tetrahedralizations (i.e., the 3D extensions of triangulations) are also used. They provide an object-centered decomposition and adapt to the distribution of the sampled data. In particular, the Delaunay tetrahedralization extends the empty circle property of the Delaunay triangulation to an empty sphere property. A tetrahedral complex has the advantage of being able to include surface constraints (e.g., the contact surface between two layers of rock) as specific faces, edges, or vertices of the complex. Efficient algorithms to build the Delaunay tetrahedralization of a point set in 3D are available in the computational geometry literature. Such algorithms can be classified into indirect algorithms (which compute the Delaunay tetrahedralization of a given set of points in 3D space by projecting the convex hull of a transformed set of points in 4D), incremental algorithms, based on the progressive insertion of points into an initial tetrahedralization [282,151,81], and algorithms which start from a single tetrahedron and progressively add new tetrahedra adjacent to the existing ones [12].
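As a small illustration of the indirect approach, the widely available Qhull code (wrapped here through SciPy, an external tool assumed for this sketch and not discussed in the text) computes 3D Delaunay tetrahedralizations precisely by lifting the points onto a paraboloid in 4D and taking a convex hull:

import numpy as np
from scipy.spatial import Delaunay  # Qhull under the hood

# 100 random sample points in the unit cube (placeholder data)
pts = np.random.default_rng(0).random((100, 3))

dt = Delaunay(pts)          # lifts to 4D, projects the lower hull back to 3D
print(dt.simplices.shape)   # (#tetrahedra, 4): vertex indices of each cell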

The problem of computing a constrained Delaunay tetrahedralization has been shown to be NP-complete [236]. Heuristic algorithms have been proposed which compute a conforming Delaunay tetrahedralization: the idea is to replace each data segment with a set of points, in such a way that the original segment will be a chain of edges in the output tetrahedralization [21]. Although there is no bound on the number of points which must be added to guarantee that all given segments are included in the resulting tetrahedralization, these methods seem good in practice. For details refer also to Chapter 5.

The Delaunay tetrahedralization has also been applied to reconstructing the boundary surface of solid objects. Input data for this problem are scattered points and/or segments lying on the object boundary. A common approach starts with the Delaunay tetrahedralization of the whole data set, whose domain is the convex hull of the data, and progressively removes tetrahedra in order to move internal points, or internal segments, onto the boundary. The sculpturing method [20,21] deletes tetrahedra iteratively based on heuristic considerations and topological constraints. The alpha-shape approach [84] deletes all tetrahedra which can be "erased" by a sphere of a given radius, under the constraint that the sphere cannot pass through the data points; a crude sketch of this criterion is given below. The radius is chosen through a binary search as the smallest value able to guarantee the consistency of the resulting shape. The alpha-shape method cannot resolve deep and narrow holes in the surface: this problem can be solved by applying the sculpturing method in a final step [14]. The method proposed by Veltkamp [267] for scattered segments is based on a different geometric structure, called the γ-neighborhood graph: a parametric graph that provides a continuous spectrum of graphs ranging from the empty to the complete graph (and including the Delaunay tetrahedralization as well). A suitable value of the parameter is chosen in order to obtain a feasible approximation of the object shape. A completely different approach is used in [144]: based on the available points, a function in 3D space is defined which pointwise estimates the distance from the unknown surface; then, a conventional algorithm (e.g., marching cubes) is used to extract the isosurface corresponding to the zero value of the function, which provides the final surface approximation.
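The α-criterion can be crudely approximated by keeping only the Delaunay tetrahedra whose circumsphere radius does not exceed the chosen radius. The following sketch does exactly that (the true alpha complex of [84] also classifies triangles and edges, which is omitted here; degenerate cells would need special handling):

import numpy as np
from scipy.spatial import Delaunay

def alpha_filter(points, alpha):
    """Delaunay tetrahedra of `points` with circumradius at most `alpha`.

    points: (n, 3) array of sample coordinates. For a Delaunay cell the
    circumsphere is empty of other points, so the radius test alone is a
    reasonable first approximation of the alpha-shape criterion.
    """
    points = np.asarray(points, dtype=float)
    kept = []
    for tet in Delaunay(points).simplices:
        p = points[tet]                          # 4x3 vertex array
        A = 2.0 * (p[1:] - p[0])                 # circumcenter c solves
        b = (p[1:]**2).sum(1) - (p[0]**2).sum()  # |c - pi|^2 = |c - p0|^2
        c = np.linalg.solve(A, b)
        if np.linalg.norm(c - p[0]) <= alpha:
            kept.append(tet)
    return np.array(kept)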

7. Concluding remarks

Computational geometry offers rigorous and powerful tools for solving a variety of geometric problems in GIS. In this survey, we have reviewed issues concerning efficient data structures to store geometric information on geographic data, and algorithms for basic spatial queries, overlay, map generalization, labeling, construction and conversion of terrain models, terrain approximation and multiresolution, visibility, extraction of topographic features, drainage networks, and paths, as well as a handful of other specific problems.

At the current state of the art, computational geometry tools are effective for many such problems, yet GIS practitioners often adopt heuristics that are less rigorous and less efficient. This gap between theory and practice used to be justified by the difficulty of coding computational geometry algorithms, and especially by the fact that practical performance often did not follow from theoretical efficiency. Indeed, the search for asymptotic efficiency in the worst case (which hardly ever occurs) sometimes leads to very involved algorithms that perform poorly on real data configurations.

Although in some cases this is still true, recent advances and research trends in computational geometry show that many theoretically rigorous tools either are, or will soon become, competitive in practical applications as well [261]. The following issues have great relevance in making computational geometry effective, and easily available to practitioners.
• Randomized algorithms use more realistic models for evaluating computational complexity, and may provide solutions that are as easy to program as brute-force approaches, with provably good performance on real-world data. This class of algorithms is consolidated in computational geometry, and textbooks are already available [202]. Refer also to Chapter 16.
• Robustness in numerical computation is of primary importance, since errors due to floating-point arithmetic can compromise not only the accuracy, but even the correctness of results [142]. The problem of robustness has been faced by developing either methods for exact arithmetic [100], or algorithms that guarantee a correct result with finite precision [126] or adaptive precision [179] in geometric computations. (A tiny illustration of the exactness issue is sketched at the end of this section.)
• Computational geometry libraries provide standard implementations of geometric algorithms, together with application programming interfaces that make such tools easily usable by practitioners. Developing and engineering libraries is perhaps the most successful way to export computational geometry technologies to applied fields. For instance, the LEDA library [204] provides a carefully engineered platform for developing discrete and geometric algorithms that is already used in industrial environments; the CGAL library [212], which can be integrated with LEDA, is under development and offers tools for geometric robustness as well as specific algorithms for computational geometry; another geometric library under development is GeomLib [2].
From the point of view of researchers in computational geometry, the field of GIS is a highly stimulating arena, and a plentiful source of open problems. Strategic directions reported by the Computational Geometry Working Group [261] clearly indicate a list of hot issues (robustness, fine-grain complexity analysis, implementation, experimental evaluation, external-memory algorithms, dynamic and real-time computing, approximation algorithms, and parallel and distributed computing), all of which are relevant, and in some cases vital, to GIS applications. The following open issues in GIS offer excellent cues for research.
• Realistic map models should be based on decompositions more general than planar subdivisions or CW complexes, in order to capture situations like regions with holes, and point and lineal features. Efficient data structures to encode such maps should also be developed.
• Realistic queries in spatial databases always involve both geometric and non-geometric aspects. A solution for such queries necessarily involves both geometric algorithms and information retrieval techniques; a separate treatment of geometric and non-geometric issues often leads to inefficient solutions. An integrated approach to hybrid queries is currently lacking, and not even models for evaluating the performance of hybrid query resolution have been proposed yet.
• The management of large spatial datasets in external memory is also very important, and related to the previous issue. Most algorithms and data structures in computational geometry have been studied and developed for primary storage, an assumption that is hardly realistic for any problem in GIS.
• Map generalization is an important task that involves many challenging geometric problems, and also requires the integration of geometric and non-geometric information. A comprehensive approach to generalization is still lacking. Serious achievements towards the automation of this task will probably require managing knowledge at a level far beyond the scope of geometry; however, a high benefit would come from an integrated treatment of at least all the geometric problems.
• Compression and progressive transmission of data are relevant, especially in the context of web-aware applications. While different techniques already exist for terrains, based either on geometric compression or on wavelets, the compression and progressive transmission of maps is a more complex issue, since it involves both geometric and topological approximation.
• Multiresolution is currently a topic of great interest. Most of the work in multiresolution modeling of terrains (and, more generally, of surfaces) has dealt with the problem of selective refinement, while most of the problems related to query processing and terrain analysis at variable resolution are still open. Facing such problems with a multiresolution approach may be vital to cope with the huge (practically worldwide) terrain databases that are becoming available. In the field of map processing, multiresolution is at an even earlier stage. As we discussed in Section 2.5, there are only a few empirical approaches based on heuristics, and few theoretically oriented works, based either on graph manipulation or on transformations of cell complexes. The gap between theory and practice is still quite large. Important open problems are related to the design and development of efficient data structures, the construction of multiple representations, and algorithms for answering spatial queries in a multiresolution framework, especially concerning spatial join (overlay), visibility, path computation, etc.
• Many optimization and approximation problems in GIS (e.g., terrain approximation, map labeling, some visibility-related problems) are intractable. Approximation algorithms that run in reasonable time and guarantee bounds on the size of the result are very important to provide practical solutions to such problems.
• The field of 3D GIS, which is quite recent, offers many other cues for research. Much preliminary work is still needed in order to achieve a level of standardization sufficient at least to establish a common basis for researchers. In 3D, a classification of the different types of data, and of the relevant queries on them, has still to be provided. Although both the kind of information and the queries will often be strictly dependent on the specific application (e.g., modeling geological phenomena, or modeling buildings in an architectural environment), a serious analysis from practice to theory may lead to identifying general modeling and computational issues that offer a key to a more integrated approach to spatial information handling.
Besides the issues listed above, there are plenty of specific problems that still need to be investigated, such as intersection-free line and map simplification, conflation and rubber sheeting of maps, edge matching and change detection, progressive transmission of maps and terrains, etc.
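As the tiny illustration promised above, the classical planar orientation predicate can be made exact with rational arithmetic; Python's Fraction type is used here purely as a stand-in for the exact-arithmetic techniques of, e.g., [100], not as the method of that paper.

from fractions import Fraction

def orient2d(a, b, c):
    """Sign of the signed area of triangle (a, b, c): +1 for a left turn,
    -1 for a right turn, 0 for collinear points. Converting the input
    coordinates to exact rationals makes the sign always correct, whereas
    the same determinant in floating point can get the wrong sign for
    nearly collinear points."""
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (d > 0) - (d < 0)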

References

[1] P.K. Agarwal and S. Suri, Surface approximation and geometric partitions, Proceedings 5th ACM-SIAM Symposium on Discrete Algorithms (1994), 24-33.
[2] P.K. Agarwal, M.T. Goodrich, S.R. Kosaraju, F.P. Preparata, R. Tamassia and J.S. Vitter, Applicable and robust geometric computing (1995), available from http://www.cs.brown.edu/cgc/.
[3] P.K. Agarwal, S. Har-Peled, M. Sharir and K.R. Varadarajan, Approximating shortest paths on a convex polytope in three dimensions, J. ACM (1997).
[4] P.K. Agarwal, M. van Kreveld and S. Suri, Label placement by maximum independent sets in rectangles, Proceedings 9th Canadian Conference on Computational Geometry (August 1997).
[5] P.K. Agarwal and K.R. Varadarajan, Approximating shortest paths on a polyhedron, Proceedings 38th IEEE Symposium on Foundations of Computer Science (1997).
[6] P.K. Agarwal and P.K. Desikan, An efficient algorithm for terrain simplification, Proceedings 8th ACM-SIAM Symposium on Discrete Algorithms (1997).
[7] D.S. Andrews, J. Snoeyink, J. Boritz, T. Chan, G. Denham, J. Harrison and C. Zhu, Further comparison of algorithms for geometric intersection problems, Proceedings 6th International Symposium on Spatial Data Handling (1994), 709-724.
[8] W.G. Aref and H. Samet, Uniquely reporting spatial objects: Yet another operation for comparing spatial data structures, Proceedings 5th International Symposium on Spatial Data Handling, Charleston, SC (1992), 178-189.
[9] S. Aronoff, Geographic Information Systems: A Management Perspective, WDL Publications, Ottawa (1989).
[10] M. Atallah, Dynamic computational geometry, Proceedings 24th IEEE Symposium on Foundations of Computer Science, IEEE Computer Society, Baltimore (1983), 92-99.
[11] F. Aurenhammer, Voronoi diagrams — A survey of fundamental geometric data structures, ACM Comput. Surveys 23 (3) (1991), 345-405.
[12] D. Avis and B.K. Bhattacharya, Algorithms for computing d-dimensional Voronoi diagrams and their duals, Advances in Computing Research 1, F.P. Preparata, ed., JAI Press (1983), 159-180.
[13] C.L. Bajaj, V. Pascucci and D.R. Schikore, Fast isocontouring for improved interactivity, Proceedings 1996 SPIE Symposium on Volume Visualization, San Francisco (1996), 39-46.
[14] C.L. Bajaj, F. Bernardini, J. Chen and D.R. Schikore, Automatic reconstruction of 3D CAD models, Theory and Practice of Geometric Modeling, W. Straßer, R. Klein and R. Rau, eds, Springer-Verlag (1996).
[15] R. Barrera and J. Vazques-Gomez, A shortest path algorithm for hierarchical terrain models, Proceedings AUTO-CARTO 9 (1989), 156-163.
[16] B. Baumgart, A polyhedral representation for computer vision, Proceedings of the AFIPS National Computer Conference (1975), 589-596.
[17] N. Beckmann, H.P. Kriegel, R. Schneider and B. Seeger, The R*-tree: An efficient and robust access method for points and rectangles, Proceedings ACM SIGMOD (1990), 81-90.
[18] J.L. Bentley and T.A. Ottmann, Algorithms for reporting and counting geometric intersections, IEEE Transactions on Computers 28 (9) (1979), 643-647.
[19] M. Bertolotto, L. De Floriani and E. Puppo, Multiresolution topological maps, Advanced Geographic Data Modelling — Spatial Data Modelling and Query Languages for 2D and 3D Applications, M. Molenaar and S. De Hoop, eds, Publication on Geodesy — New Series 40 (1994), 179-190.
[20] J.D. Boissonnat, Geometric structures for three-dimensional shape representation, ACM Trans. on Graphics 3 (4) (1984), 266-286.
[21] J.D. Boissonnat, O.D. Faugeras and E. Le Bras-Mehlman, Representing stereo data with the Delaunay triangulation, Artif. Intell. 44 (1990), 41-89.
[22] J.D. Boissonnat, O. Devillers, R. Schott, M. Teillaud and M. Yvinec, Applications of random sampling to on-line algorithms in computational geometry, Discrete Comput. Geom. 8 (1992), 51-71.
[23] J.D. Boissonnat and K. Dobrindt, On-line construction of the upper envelope of triangles in R^3, Proceedings 4th Canadian Conference on Computational Geometry, C.A. Wang, ed., Memorial, Newfoundland (1992), 311-315.
[24] P. Bose, T. Shermer, G. Toussaint and B. Zhu, Guarding polyhedral terrains, Comput. Geom. 7 (3) (1997), 173-185.

[25] E. Brisson, Representing geometric structures in d dimensions: Topology and order, Discrete Comput. Geom. 9 (4) (1993), 387-426.
[26] P.J.C. Brown, A fast algorithm for selective refinement of terrain meshes, Proceedings COMPUGRAPHICS 96, GRASP (December 1996), 70-82.
[27] P.A. Burrough, Principles of Geographic Information Systems for Land Resources Assessment, Clarendon Press, Oxford (1986).
[28] S.J. Carver and C.F. Brunson, Vector to raster conversion error and feature complexity: An empirical study using simulated data, Internat. J. Geogr. Inform. Systems 8 (3) (1994), 261-270.
[29] A. Carrara, G. Bitelli and R. Carla, Comparison of techniques for generating digital terrain models from contour lines, Internat. J. Geogr. Inform. Sci. 11 (5) (1997), 451-474.
[30] B. Chazelle and H. Edelsbrunner, An optimal algorithm for intersecting line segments in the plane, J. ACM 39 (1) (1992), 1-54.
[31] B. Chazelle, H. Edelsbrunner, L.J. Guibas and M. Sharir, Algorithms for bichromatic line-segment problems and polyhedral terrains, Algorithmica 11, Springer-Verlag, New York (1994), 116-132.
[32] T. Chan, A simple trapezoidal sweep algorithm for reporting red/blue segment intersections, Proceedings 6th Canadian Conference on Computational Geometry (1994), 263-268.
[33] Z.-T. Chen and J.A. Guevara, Systematic selection of very important points (VIP) from digital terrain model for constructing triangular irregular networks, Proceedings AUTO-CARTO 8, Baltimore, MD, USA (1987), 50-56.
[34] J. Chen and Y. Han, Shortest paths on a polyhedron, Proceedings 6th ACM Symposium on Computational Geometry (1990), 360-369.
[35] J. Choi, J. Sellen and C.K. Yap, Approximate Euclidean shortest path in 3-space, Proceedings 10th ACM Symposium on Computational Geometry (1994), 41-48.
[36] M.M. Chow, Optimized geometry compression for real-time rendering, Proceedings IEEE Visualization '97 (1997), 347-354.
[37] J. Christensen, Fitting a triangulation to contour lines, Proceedings AUTO-CARTO 8, Baltimore, Maryland (1987), 57-67.
[38] J. Christensen, J. Marks and S. Shieber, An empirical study of algorithms for point-feature label placement, ACM Trans. on Graphics 14 (3) (1995), 203-232.
[39] A. Ciampalini, P. Cignoni, C. Montani and R. Scopigno, Multiresolution decimation based on global error, The Visual Computer 13 (5) (1997), 228-246.
[40] P. Cignoni, E. Puppo and R. Scopigno, Representation and visualization of terrain surfaces at variable resolution, The Visual Computer 13 (1997).
[41] K.L. Clarkson and P.W. Shor, Applications of random sampling in computational geometry, Discrete Comput. Geom. 4 (5) (1989), 387-421.
[42] E. Clementini and P. Di Felice, A comparison of methods for representing topological relationships, Inform. Sci. 3 (1995), 149-178.
[43] E. Clementini and P. Di Felice, A model for representing topological relationships between complex geometric features in spatial databases, Inform. Sci. 90 (1-4) (1996), 121-136.
[44] J. Cohen, A. Varshney, D. Manocha, G. Turk, H. Weber, P. Agarwal, F. Brooks and W. Wright, Simplification envelopes, ACM Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '96) (1996), 119-128.
[45] R. Cole and M. Sharir, Visibility problems for polyhedral terrains, J. Symbolic Comput. 17 (1989), 11-30.
[46] J.P. Corbett, Topological principles in cartography, Technical Report 48, U.S. Bureau of Census, USA (1979).
[47] R.G. Cromley, A spatial allocation analysis of the point annotation problem, Proceedings 2nd International Symposium on Spatial Data Handling, Seattle, WA (July 1986), 38-49.
[48] E.F. D'Azevedo and R.B. Simpson, On optimal interpolation triangle incidences, SIAM J. Sci. Statist. Comput. 10 (6) (1989), 1063-1075.
[49] M. de Berg, Ray Shooting, Depth Orders and Hidden Surface Removal, Lecture Notes in Comput. Sci. 703, Springer-Verlag, New York (1993).
[50] M. de Berg and K.T.G. Dobrindt, On the levels of detail in terrains, 11th ACM Symposium on Computational Geometry, Vancouver, BC (Canada), June 5-7 (1995).

[51] M. de Berg, M. van Kreveld and S. Schirra, A new approach to subdivision simplification, Technical Papers, ACSM/ASPRS Annual Convention 4 (AUTO-CARTO 12) (1995), 79-88.
[52] M. de Berg, P. Bose, K. Dobrindt, M. van Kreveld, M. Overmars, M. de Groot, T. Roos, J. Snoeyink and S. Yu, The complexity of rivers in triangulated terrains, Proceedings 8th Canadian Conference on Computational Geometry, Ottawa (1996), 325-330.
[53] M. de Berg, M. van Kreveld, R. van Oostrum and M. Overmars, Simple traversal of a subdivision without extra storage, Internat. J. Geogr. Inform. Sci. 11 (4) (June 1997), 359-373.
[54] M. de Berg and M. van Kreveld, Trekking in the Alps without freezing or getting tired, Algorithmica 18 (1997), 306-323.
[55] M. Deering, Geometry compression, Computer Graphics (SIGGRAPH '95) (1995), 13-20.
[56] L. De Floriani, A pyramidal data structure for triangle-based surface description, IEEE Comput. Graphics Appl. 8 (March 1989), 67-78.
[57] L. De Floriani, B. Falcidieno, G. Nagy and C. Pienovi, On sorting triangles in a Delaunay tessellation, Algorithmica 6 (1991), 522-535.
[58] L. De Floriani, P. Marzano and E. Puppo, Spatial queries and data models, Spatial Information Theory — A Theoretical Basis for GIS, A.U. Frank and I. Campari, eds, Lecture Notes in Comput. Sci. 716, Springer-Verlag (1993), 113-138.
[59] L. De Floriani, P. Marzano and E. Puppo, Line-of-sight communication on terrain models, Internat. J. Geogr. Inform. Systems 8 (4) (1994), Taylor & Francis, London, 329-342.
[60] L. De Floriani and P. Magillo, Horizon computation on a hierarchical terrain model, The Visual Computer: An International Journal of Computer Graphics 11 (1995), 134-149.
[61] L. De Floriani and E. Puppo, Hierarchical triangulation for multiresolution surface description, ACM Trans. on Graphics 14 (4) (1995), 363-411.
[62] L. De Floriani, P. Magillo and E. Puppo, A formal approach to multiresolution modeling, Geometric Modeling: Theory and Practice, W. Straßer, R. Klein and R. Rau, eds, Springer-Verlag (1996).
[63] L. De Floriani, P. Magillo and E. Puppo, Building and traversing a surface at variable resolution, Proceedings IEEE Visualization 97, Phoenix, AZ (USA) (October 1997), 103-110.
[64] L. De Floriani, P. Magillo and E. Puppo, Efficient encoding and retrieval of triangle meshes at variable resolution, Technical Report DISI-TR-97-01, Department of Computer and Information Sciences, University of Genova, Genova, Italy (1997).
[65] L. De Floriani, P. Magillo and E. Puppo, Efficient implementation of multi-triangulations, Proceedings IEEE Visualization 98, Triangle Park, North Carolina (October 1998), 43-50.
[66] L. De Floriani and P. Magillo, Visibility computations on hierarchical triangulated terrain models, Geoinformatica 1 (3) (1997), 219-250.
[67] V. Delis and T. Hadzilacos, On the assessment of generalization consistency, Proceedings 5th Symposium on Spatial Databases (SSD97), Berlin (July 1997).
[68] G. Dettori and E. Puppo, How generalization interacts with the topological and geometric structure of maps, Proceedings 7th International Symposium on Spatial Data Handling, Delft (The Netherlands) (August 1996).
[69] K. Dobrindt and M. Yvinec, Remembering conflicts in history yields dynamic algorithms, Proceedings 4th Annual International Symposium on Algorithms and Computation (ISAAC '93), Lecture Notes in Comput. Sci. 762 (1993), 21-30.
[70] J.S. Doerschler and H. Freeman, A rule-based system for dense-map name placement, Communications of the ACM 35 (1) (1992), 68-79.
[71] J.A. Dougenik, N.R. Chrisman and D.R. Niemeyer, An algorithm to construct continuous area cartograms, Professional Geographer 37 (1985), 75-81.
[72] D.H. Douglas and T.K. Peucker, Algorithms for the reduction of the number of points required to represent a digitized line or its caricature, The Canadian Cartographer 10 (2) (1973), 112-122.
[73] M. Duchaineau, M. Wolinsky, D.E. Sigeti, M.C. Miller, C. Aldrich and M.B. Mineev-Weinstein, ROAMing terrain: Real-time optimally adapting meshes, Proceedings IEEE Visualization '97 (1997), 81-88.
[74] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis, Wiley, New York (1973).
[75] G. Dutton, Geodesic modelling of planetary relief, Cartographica 21 (1988), 188-207.
[76] G. Dutton, Polyhedral hierarchical tessellation: The shape of GIS to come, Geo Info Systems 2 (1) (1991), 49-55.

[77] G. Dutton, Improving locational specificity of map data — a multiresolution, metadata-driven approach and notation, Internat. J. Geogr. Inform. Systems 10 (3) (1996), 253-268.
[78] N. Dyn, D. Levin and S. Rippa, Data dependent triangulations for piecewise linear interpolation, IMA J. Numer. Anal. 10 (1990), 137-154.
[79] H. Edelsbrunner, Algorithms in Combinatorial Geometry, Springer-Verlag (1987).
[80] H. Edelsbrunner, L.J. Guibas and M. Sharir, The upper envelope of piecewise linear functions: Algorithms and applications, Discrete Comput. Geom. 4 (1989), 311-336.
[81] H. Edelsbrunner and N.R. Shah, Incremental topological flipping works for regular triangulations, Proceedings 8th Annual ACM Symposium on Computational Geometry (1992), 43-52.
[82] H. Edelsbrunner and R. Waupotitsch, A combinatorial approach to cartograms, Comput. Geom. 7 (1997), 343-360.
[83] H. Edelsbrunner and T.S. Tan, An upper bound for conforming Delaunay triangulations, Discrete Comput. Geom. 10 (1993), 197-213.
[84] H. Edelsbrunner and E.P. Mücke, Three-dimensional alpha shapes, ACM Trans. on Graphics 13 (1) (1994), 43-72.
[85] M. Egenhofer and R. Franzosa, Point-set topological spatial relations, Internat. J. Geogr. Inform. Systems 5 (2) (1991), 161-174.
[86] M. Egenhofer, E. Clementini and P. Di Felice, Topological relations between regions with holes, Internat. J. Geogr. Inform. Systems 8 (2) (1994), 129-142.
[87] M. Egenhofer, E. Clementini and P. Di Felice, Evaluating inconsistencies among multiple representations, Proceedings 6th International Symposium on Spatial Data Handling, Edinburgh, Scotland (1994), 901-920.
[88] L. Eklundh and U. Martensson, Rapid generation of digital elevation models from topographic maps, Internat. J. Geogr. Inform. Systems 9 (3) (1995), 329-340.
[89] F. Evans, S. Skiena and A. Varshney, Optimizing triangle strips for fast rendering, Proceedings IEEE Visualization '96 (1996), 319-326.
[90] W. Evans, D. Kirkpatrick and G. Townsend, Right triangular irregular networks, Technical Report 97-09, University of Arizona (1997).
[91] B. Falcidieno and M. Spagnuolo, A new method for the characterization of topographic surfaces, Internat. J. Geogr. Inform. Systems 5 (4) (1991), 397-412.
[92] C. Faloutsos and I. Kamel, Hilbert R-tree: An improved R-tree using fractals, Proceedings International Conference on Very Large Data Bases (VLDB) (1994).
[93] C. Faloutsos and I. Kamel, Beyond uniformity and independence: Analysis of R-trees using the concept of fractal dimension, Proceedings ACM SIGACT-SIGMOD-SIGART PODS (1994), 4-13.
[94] U. Finke and K.H. Hinrichs, The quad view data structure — A representation for planar subdivisions, Advances in Spatial Databases, M.J. Egenhofer and J.R. Herring, eds, Lecture Notes in Comput. Sci. 951, Springer (1995), 29-46.
[95] U. Finke and K.H. Hinrichs, Overlaying simply connected planar subdivisions in linear time, Proceedings 11th ACM Symposium on Computational Geometry (1995), 119-126.
[96] P.F. Fisher, Algorithm and implementation uncertainty in viewshed analysis, Internat. J. Geogr. Inform. Systems 7 (4) (1993), 331-347.
[97] P. Flajolet, G.H. Gonnet, C. Puech and J.M. Robson, Analytic variations on quadtrees, Algorithmica 10 (6) (1993), 473-500.
[98] J.D. Foley, A. van Dam, S.K. Feiner and J.F. Hughes, Computer Graphics: Principles and Practice, Addison-Wesley, Reading, MA (1990).
[99] M. Formann and F. Wagner, A packing problem with applications to lettering of maps, Proceedings 7th ACM Symposium on Computational Geometry (1991), 281-288.
[100] S. Fortune and C. Van Wyk, Efficient exact arithmetic for computational geometry, Proceedings 9th ACM Symposium on Computational Geometry (1993), 163-172.
[101] R.J. Fowler and J.J. Little, Automatic extraction of irregular network digital terrain models, Computer Graphics 13 (1979), 199-207.
[102] A.U. Frank and W. Kuhn, Cell graph: A provable correct method for the storage of geometry, Proceedings 2nd Symposium on Spatial Data Handling (SDH'86), Seattle, WA (1986), 411-436.

[103] A.U. Frank, B. Palmer and V.B. Robinson, Formal methods for the accurate definition of some fundamental terms in physical geography, Proceedings 2nd International Symposium on Spatial Data Handling (1986), 585-599.
[104] W.R. Franklin, Efficient intersection calculations in large databases, Proceedings International Cartographic Association 14th World Conference, Budapest (1989), A62-A63.
[105] W.R. Franklin, M. Kankanhalli and C. Narayanaswami, Geometric computing and the uniform grid data technique, Computer Aided Design 21 (7) (1989), 410-420.
[106] A.U. Frank, Spatial concepts, geometric data models, and data structures, Computers and Geosciences 18 (4) (1992), 409-417.
[107] A. Frank and S. Timpf, Multiple representations for cartographic objects in a multiscale tree — An intelligent graphical zoom, Computers & Graphics 18 (6) (1994), 823-829.
[108] W.R. Franklin, Triangulated irregular network to approximate digital terrain, Technical Report, ECSE Dept., RPI, Troy (NY) (1994). Manuscript and code available on ftp://ftp.cs.rpi.edu/pub/franklin/.
[109] H. Fuchs, S.P. Uselton and Z. Kedem, Optimal surface reconstruction from planar contours, Comm. ACM 20 (10) (1977), 693-702.
[110] A.B. Garcia, C.G. Niciera, J.B.O. Mere and A.M. Diaz, A contour-line based triangulation algorithm, Proceedings 5th International Symposium on Spatial Data Handling (1992), 411-421.
[111] S. Ganapathy and T.G. Dennehy, A new general triangulation method for planar contours, Computer Graphics 16 (3) (1982), 69-75.
[112] L. Gewali, A. Meng, J.S.B. Mitchell and S. Ntafos, Path planning in 0/1/∞ weighted regions with applications, Proceedings 4th Symposium on Computational Geometry (1988), 266-278.
[113] S.K. Ghosh, Computing the visibility polygon from a convex set and related problems, J. Algorithms 12 (1991), 75-95.
[114] C. Gold, Problems with handling spatial data — the Voronoi approach, CISM J. 45 (1991), 65-80.
[115] C.M. Gold, An object-based dynamic spatial model, and its application in the development of a user-friendly digitizing system, Proceedings 5th International Symposium on Spatial Data Handling, Charleston, IGU Commission on GIS (1992), 495-504.
[116] C. Gold, Dynamic data structures, Advanced Geographic Data Modelling — Spatial Data Modelling and Query Languages for 2D and 3D Applications, M. Molenaar and S. De Hoop, eds, Netherlands Geodetic Commission, Publication on Geodesy — New Series 40 (1994), 121-128.
[117] C. Gold and T. Roos, Surface modeling with guaranteed consistency — an object-based approach, Proceedings IGIS'94: Geographic Information Systems, J. Nievergelt, T. Roos, H.-J. Schek and P. Widmayer, eds, Lecture Notes in Comput. Sci. 884, Springer-Verlag (1994), 70-85.
[118] D. Gomez and A. Guzman, A digital model for three-dimensional surface representation, Geoprocessing 1 (1979), 53-70.
[119] M.F. Goodchild, K.K. Kemp and T.K. Poiker, NCGIA Core Curriculum, National Center for Geographic Information Analysis, University of California, Santa Barbara, USA (1989); also available at http://www.env.gov.bc.ca/tclover/giscourse/ToC.html.
[120] M.F. Goodchild and Y. Shiren, A hierarchical data structure for global geographic information systems, Computer Vision, Graphics and Image Process. 54 (1992), 31-44.
[121] M.T. Goodrich, A polygonal approach to hidden line and hidden surface elimination, Graphical Models and Image Process. 54 (1) (1992), 1-12.
[122] M.H. Gross, O.G. Staadt and R. Gatti, Efficient triangular surface approximation using wavelets and quadtree data structures, IEEE Trans. Visualization and Computer Graphics 2 (2) (1996), 130-143.
[123] L.J. Guibas and J. Stolfi, Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams, ACM Trans. on Graphics 4 (1985), 74-123.
[124] L.J. Guibas and R. Seidel, Computing convolutions by reciprocal search, Discrete Comput. Geom. 2 (1987), 175-193.
[125] L.J. Guibas, J.E. Hershberger, J.S.B. Mitchell and J.S. Snoeyink, Approximating polygons and subdivisions with minimum-link paths, Internat. J. Comput. Geom. Appl. 3 (4) (1993), 383-415.
[126] L.J. Guibas and D.H. Marimont, Rounding arrangements dynamically, Proceedings 11th ACM Symposium on Computational Geometry (1995), 190-199.
[127] A. Guttman, R-trees: A dynamic index structure for spatial searching, Proceedings ACM SIGMOD (1984), 47-57.

[128] R.H. Güting and M. Schneider, Realms: A foundation for spatial data types in database systems, Proceedings 3rd International Symposium on Large Spatial Databases, Singapore (1993), 14-35.
[129] R.H. Güting, T. Ridder and M. Schneider, Implementation of the ROSE algebra: Efficient algorithms for realm-based spatial data types, Proceedings 4th International Symposium on Large Spatial Databases, Portland, ME, Lecture Notes in Comput. Sci. 951, Springer-Verlag (1995), 216-239.
[130] T. Hadzilacos and N. Tryfona, A model for expressing topological integrity constraints in geographic databases, Lecture Notes in Comput. Sci. 639, Springer-Verlag (1992), 252-268.
[131] B. Hamann, A data reduction scheme for triangulated surfaces, Comput. Aided Geom. Design 11 (1994), 197-214.
[132] R.M. Haralick, Ridges and valleys on digital images, Computer Vision, Graphics and Image Process. 22 (1983), 28-38.
[133] S. Har-Peled, M. Sharir and K.R. Varadarajan, Approximate shortest paths on a convex polytope in three dimensions, Proceedings 12th Annual ACM Symposium on Computational Geometry (1996), 329-338.
[134] D.J. Hebert and H.-J. Kim, Image encoding with triangulation wavelets, Proceedings SPIE 2569 (1) (1995), 381-392.
[135] P. Heckbert and M. Garland, Survey of polygonal surface simplification algorithms, SIGGRAPH '97 Course Notes (1997).
[136] J. Herring, TIGRIS: Topologically integrated GIS, Proceedings AUTO-CARTO 8, ASPRS/ACSM, Baltimore, MD (March 1987), 282-291.
[137] J. Hershberger, Finding the upper envelope of n line segments in O(n log n) time, Inform. Process. Lett. 33 (1989), 169-174.
[138] J. Hershberger and J. Snoeyink, Computing minimum length paths of a given homotopy class, Comput. Geom. 4 (1994), 63-97.
[139] J. Hershberger and J. Snoeyink, Cartographic line simplification and polygon CSG formulae in O(n log* n) time, Proceedings 5th Annual Workshop on Algorithms and Data Structures, Halifax, Nova Scotia, Canada (August 1997).
[140] G.R. Hjaltason and H. Samet, Ranking in spatial databases, Advances in Spatial Databases, Lecture Notes in Comput. Sci. 951, Springer-Verlag (1995), 83-95.
[141] E.G. Hoel and H. Samet, Efficient processing of spatial queries in line segment databases, Advances in Spatial Databases, Lecture Notes in Comput. Sci. 525, Springer-Verlag (1991).
[142] C.M. Hoffmann, The problems of accuracy and robustness in computational geometry, IEEE Computer 22 (3) (1989), 31-41.
[143] C. Hoffmann, Geometric and Solid Modeling: An Introduction, The Morgan Kaufmann Series in Computer Graphics and Geometric Modeling, Morgan Kaufmann, San Mateo, California (1989).
[144] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald and W. Stuetzle, Surface reconstruction from unorganized points, Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '92), ACM SIGGRAPH (1992), 71-78.
[145] H. Hoppe, Progressive meshes, ACM Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '96) (1996), 99-108.
[146] H. Hoppe, View-dependent refinement of progressive meshes, ACM Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '97) (1997).
[147] H. Imai and M. Iri, Computational-geometric methods for polygonal approximations of a curve, Comput. Vision, Graphics and Image Process. 36 (1986), 31-41.
[148] H. Imai and M. Iri, An optimal algorithm for approximating a piecewise linear function, J. Inform. Process. 9 (3) (1986), 159-162.
[149] H. Imai and M. Iri, Polygonal approximation of a curve — formulations and algorithms, Computational Morphology, G.T. Toussaint, ed., Machine Intelligence and Pattern Recognition 6, North-Holland (1988), 71-86.
[150] S.K. Jenson and J.O. Domingue, Extracting topographic structure from digital elevation data for geographic information system analysis, Photogramm. Eng. Remote Sensing 54 (1988), 1593-1600.
[151] B. Joe, Construction of three-dimensional Delaunay triangulations using local transformations, Comput. Aided Geom. Design 8 (2) (1991), 123-142.
[152] C. Jones, Cartographic name placement with Prolog, IEEE Comput. Graphics Appl. 9 (5) (1989), 36-47.

[153] C.B. Jones, Data structures for three-dimensional spatial information systems in geology, Internat. J. Geogr. Inform. Systems 3 (1) (1989), 15-32.
[154] C.B. Jones, G.L. Bundy and J.M. Ware, Map generalization with a triangulated data structure, Cartography and Geographic Information Systems 22 (4) (1995), 317-331.
[155] J.A. Jorge and D. Vaida, A fuzzy relational path algebra for distances and directions, Proceedings ECAI-96 Workshop on Representation and Processing of Spatial Expressions, Budapest, Hungary (1996).
[156] W. Kainz, Spatial relationships — Topology versus order, Proceedings 4th International Symposium on Spatial Data Handling, Zurich, Switzerland (July 1990), 814-819.
[157] W. Kainz, M.J. Egenhofer and J. Greasley, Modeling spatial relations and operations with partially ordered sets, Internat. J. Geogr. Inform. Systems 7 (3) (1993), 215-225.
[158] M.J. Katz, M.H. Overmars and M. Sharir, Efficient hidden surface removal for objects with small union size, Proceedings 7th ACM Symposium on Computational Geometry, ACM Press, New York (1991), 31-40.
[159] E. Keppel, Approximating complex surfaces by triangulation of contour lines, IBM J. Research and Development 19 (1) (1975), 2-11.
[160] D.G. Kirkpatrick, Efficient computation of continuous skeletons, Proceedings 20th IEEE Symposium on Foundations of Computer Science (1979), 18-27.
[161] D.G. Kirkpatrick, Optimal search in planar subdivisions, SIAM J. Comput. 12 (1) (1983), 28-35.
[162] R. Klein and W. Straßer, Generation of multiresolution models from CAD data for real time rendering, Theory and Practice of Geometric Modeling, W. Straßer, R. Klein and R. Rau, eds, Springer-Verlag (1996).
[163] J.J. Koenderink and A.J. van Doorn, Local features of smooth shapes: Ridges and courses, Geometric Methods in Computer Vision II, Vol. 2031, SPIE (1993), 2-13.
[164] J.J. Koenderink and A.J. van Doorn, Two-plus-one-dimensional differential geometry, Pattern Recogn. Lett. 15 (1994), 439-443.
[165] T.Y. Kong and A. Rosenfeld, Digital topology: Introduction and survey, Comput. Vision, Graphics and Image Process. 48 (1989), 357-393.
[166] H.P. Kriegel, T. Brinkhoff and R. Schneider, The combination of spatial access methods and computational geometry in geographic database systems, Proceedings 2nd Symposium on Spatial Databases (1991), 5-21.
[167] H.P. Kriegel, T. Brinkhoff and R. Schneider, An efficient map overlay algorithm based on spatial access methods and computational geometry, Proceedings International Workshop on DBMSs for Geographical Applications, Capri, Italy (May 1991), 16-17.
[168] L. Kucera, K. Mehlhorn, B. Preis and E. Schwarzenecker, Exact algorithms for a geometric packing problem, Proceedings 10th Symposium on Theoretical Aspects of Computer Science, Lecture Notes in Comput. Sci. 665, Springer-Verlag (1993), 317-322.
[169] I.S. Kweon and T. Kanade, Extracting topological terrain features from elevation maps, Comput. Vision, Graphics and Image Process. 59 (2) (1994), 171-182.
[170] M. Lanthier, A. Maheshwari and J.R. Sack, Approximating weighted shortest paths on polyhedral surfaces, Proceedings ACM Symposium on Computational Geometry, Nice, France (1997), 274-283.
[171] R. Laurini and D. Thompson, Fundamentals of Spatial Information Systems, Academic Press, London (1992).
[172] C.L. Lawson, Software for C^1 surface interpolation, Mathematical Software III, J.R. Rice, ed., Academic Press Inc. (1977), 161-164.
[173] D.T. Lee and A.K. Lin, Generalized Delaunay triangulation for planar graphs, Discrete Comput. Geom. 1 (1986), 201-217.
[174] J. Lee, A drop heuristic conversion method for extracting irregular networks for digital elevation models, Proceedings GIS/LIS '89, Orlando, FL, USA (1989), 30-39.
[175] J. Lee, Analyses of visibility sites on topographic surfaces, Internat. J. Geogr. Inform. Systems 5 (1991), 413-429.
[176] Z. Li and S. Openshaw, Algorithms for automated line generalization based on a natural principle of objective generalization, Internat. J. Geogr. Inform. Systems 6 (5) (1992), 373-389.
[177] P. Lienhardt, Topological models for boundary representations: a comparison with n-dimensional generalized maps, Comput. Aided Design 23 (1) (1991), 59-82.

[178] P. Lindstrom, D. Koller, W. Ribarsky, L.F. Hodges, N. Faust and G.A. Turner, Real-time, continuous level of detail rendering of height fields, ACM Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '96), ACM Press, New Orleans, LA, USA (1996), 109-118.
[179] G. Liotta, F.P. Preparata and R. Tamassia, Robust proximity queries: An illustration of degree-driven algorithm design, Proceedings 13th ACM Symposium on Computational Geometry (1997), 156-165.
[180] Y. Livnat, H.W. Shen and C.R. Johnson, A near optimal isosurface extraction algorithm using the span space, IEEE Trans. Visualization and Comput. Graphics 2 (1996), 73-84.
[181] M. Lounsbery, T.D. DeRose and J. Warren, Multiresolution analysis of surfaces of arbitrary topological type, ACM Trans. on Graphics 16 (1) (1997), 34-73.
[182] H. Lukatela, Hipparchus data structures: Points, lines and regions in spherical Voronoi grid, Proceedings Auto-Carto 9 (1989), 164-170.
[183] A.T. Lundell and S. Weingram, The Topology of CW Complexes, Van Nostrand Reinhold Comp. (1969).
[184] P. Magillo and L. De Floriani, Maintaining multiple levels of detail in the overlay of hierarchical subdivisions, Proceedings 8th Canadian Conference on Computational Geometry (1996), 190-195.
[185] P. Magillo and E. Puppo, Algorithms for parallel terrain modelling and visualisation, Parallel Processing Algorithms for GIS, Ch. 16, R.G. Healey, S. Dowers, B.M. Gittings and M.J. Mineter, eds, Taylor & Francis (1998).
[186] A. Maheshwari, P. Morin and J.-R. Sack, Progressive TINs: Algorithms and applications, Proceedings 5th ACM Workshop on Advances in Geographic Information Systems, November 15-16, Las Vegas, Nevada (1997).
[187] M. Mantyla, An Introduction to Solid Modeling, Computer Science Press (1988).
[188] D.M. Mark, Topological properties of geographic surfaces: Applications in computer cartography, Proceedings 1st International Advanced Study Symposium on Topological Data Structures for GIS 5, G. Dutton, ed., Harvard (1994).
[189] C.S. Mata and J.S.B. Mitchell, A new algorithm for computing shortest paths in a weighted planar subdivision, Proceedings ACM Symposium on Computational Geometry, Nice, France (June 1997), 264-273.
[190] J.E. McCormack, M.N. Gahegan, S.A. Roberts, J. Hogg and B.S. Hoyle, Feature-based derivation of drainage networks, Internat. J. GIS 7 (3) (1993), 263-279.
[191] M. McKenna, Worst case optimal hidden surface removal, ACM Trans. Graphics 6, ACM Press, New York (1987), 19-28.
[192] R. McMaster, Automated line generalization, Cartographica 24 (1984), 74-111.
[193] C. Miller and R.A. LaFlamme, The digital terrain model — theory and applications, Photogramm. Eng. 24 (3) (1958), 433-442.
[194] G. Misund, Varioscale TIN based surfaces, Proceedings 7th International Symposium on Spatial Data Handling, August 12-16, Delft (NL) (1996), 6.35-6.46.
[195] J.S.B. Mitchell, D.M. Mount and C.H. Papadimitriou, The discrete geodesic problem, SIAM J. Comput. 16 (1987), 647-668.
[196] J.S.B. Mitchell and C.H. Papadimitriou, The weighted region problem: Finding shortest paths through a weighted planar subdivision, J. ACM 38 (1) (January 1991), 18-73.
[197] J.S.B. Mitchell and S. Suri, A survey of computational geometry, Handbooks in Operations Research and Management Science, Vol. 7, Ch. 7, M.O. Ball et al., eds, Elsevier Science B.V. (1995), 439-458.
[198] M. Molenaar, Single valued vector maps — a concept in GIS, Geo-Informationssysteme 2 (1) (1989).
[199] M.E. Mortenson, Geometric Modeling, Wiley, New York (1985).
[200] D.G. Morris and R.G. Heerdegen, Automatically derived catchment boundaries and channel networks and their hydrological applications, Geomorphology 1 (1988), 131-141.
[201] D.M. Mount, On finding shortest paths on convex polyhedra, Technical Report 1495, Department of Computer Science, University of Maryland, Baltimore, MD (1985).
[202] K. Mulmuley, Computational Geometry: An Introduction through Randomized Algorithms, Prentice Hall (1994).
[203] G. Nagy, Terrain visibility, Comput. Graphics 18 (6) (1994).
[204] S. Näher, The LEDA User Manual, Version 3.1 (January 16, 1995); available by anonymous ftp from ftp.mpi-sb.mpg.de in directory /pub/LEDA.
[205] B. Naylor, J. Amanatides and W. Thibault, Merging BSP trees yields polyhedral set operations, Comput. Graphics (Proceedings SIGGRAPH 1990) 24, F. Baskett, ed. (1990), 115-124.

[206] G. Nielson, Scattered data modeling, IEEE Comput. Graphics Appl. 13 (1) (1993), 60-70.
[207] J. Nievergelt and F.P. Preparata, Plane-sweep algorithms for intersecting geometric figures, Comm. ACM 25 (10) (1982), 739-747.
[208] O. Nurmi, A fast line-sweep algorithm for hidden line elimination, BIT 25 (1985), 466-472.
[209] OpenInventor Architecture Group, Inventor Mentor: OpenInventor Reference Manual, Addison-Wesley (1994).
[210] J. Orenstein, An algorithm for computing the overlay of k-dimensional spaces, Proceedings 2nd Symposium on Advances in Spatial Databases, Lecture Notes in Comput. Sci. 525, O. Günther and H.J. Schek, eds (1991), 381-400.
[211] M. Overmars and M. Sharir, A simple output-sensitive algorithm for hidden surface removal, ACM Trans. on Graphics 11 (1992), 1-11.
[212] M.H. Overmars, Designing the computational geometry algorithms library CGAL, Applied Computational Geometry (Proceedings WACG'96), Lecture Notes in Comput. Sci., Springer-Verlag (1997).
[213] L. Palazzi and J. Snoeyink, Counting and reporting red/blue segment intersections, Lecture Notes in Comput. Sci. 709, F. Dehne, J.R. Sack, N. Santoro and S. Whitesides, eds, Springer-Verlag, New York (1993), 530-540.
[214] J. Persson and E. Jungert, Generation of multi-resolution maps from run length-encoded data, Internat. J. Geogr. Inform. Systems 6 (6) (1992), 497-510.
[215] G. Petrie and T.J.M. Kennie, eds, Terrain Modelling in Survey and Civil Engineering, Whittles Publishing — Thomas Telford, London (1990).
[216] G. Petrie, Modelling, interpolation and contouring procedures, Terrain Modelling in Survey and Civil Engineering, G. Petrie and T.J.M. Kennie, eds, Whittles Publishing — Thomas Telford, London (1990), 112-127.
[217] T.K. Peucker and D.H. Douglas, Detection of surface-specific points by local parallel processing of discrete terrain elevation data, Comput. Graphics and Image Process. 4 (1975), 375-387.
[218] S. Pigot, Generalized singular 3-cell complexes, Proceedings 6th International Symposium on Spatial Data Handling, Edinburgh, Scotland (1994), 89-111.
[219] M. Pilouk, K. Tempfli and M. Molenaar, A tetrahedron-based 3D vector data model for geoinformation, Advanced Geographic Data Modelling — Spatial Data Modelling and Query Languages for 2D and 3D Applications, M. Molenaar and S. De Hoop, eds, Publication on Geodesy — New Series 40 (1994), 129-140.
[220] W.K. Pratt, Digital Image Processing, Wiley (1978).
[221] F.P. Preparata and M.I. Shamos, Computational Geometry: An Introduction, Springer-Verlag, Berlin (1985).
[222] F.P. Preparata and J.S. Vitter, A simplified technique for hidden-line elimination in terrains, Proceedings STACS'92, Paris, A. Finkel and M. Jantzen, eds, Lecture Notes in Comput. Sci. 577, Springer-Verlag, Berlin (1992), 135-144.
[223] E. Puppo and G. Dettori, Towards a formal model for multiresolution spatial maps, Advances in Spatial Databases, Lecture Notes in Comput. Sci. 951, M.J. Egenhofer and J.R. Herring, eds, Springer-Verlag (1995), 152-169.
[224] E. Puppo, Variable resolution terrain surfaces, Proceedings Canadian Conference on Computational Geometry, Ottawa (Canada), 12-15 August 1996; appeared in longer version as Variable resolution triangulations, Comput. Geom. 11 (3-4) (1998), 219-238.
[225] E. Puppo and P. Marzano, Discrete visibility problems and graph algorithms, Internat. J. Geogr. Inform. Sci. 11 (2) (1997), 139-162.
[226] E. Puppo and R. Scopigno, Simplification, LOD, and multiresolution — Principles and applications, Eurographics '97 Tutorial (1997).
[227] V.T. Rajan, Optimality of the Delaunay triangulation in R^d, Discrete Comput. Geom. 12 (1994), 189-202.
[228] J. Raper, ed., Three Dimensional Applications in Geographic Information Systems, Taylor & Francis (1989).
[229] J.H. Reif and S. Sen, An efficient output-sensitive hidden-surface removal algorithm and its parallelization, Proceedings 4th ACM Symposium on Computational Geometry, Urbana, ACM Press, New York (1988), 193-200.

386

L. de Floriani et al.

[230] S. Rippa, Minimal roughness property ofDelaunay triangulation, Comput. Aided Geom. Design 7 (1990), 489^97. [231] S. Rippa, Adaptive approximations by piecewise linear polynomials on triangulations of subsets of scattered data, SIAM J. Sci. Statist. Comput. 13 (1) (1992), 1123-1141. [232] A.H. Robinson, R.D. Sale, J.L. Morrison and RC. Muehrcke, Elements of Cartography, 5th ed., Wiley, New Yorlc (1984). [233] J.G. Rolme, B. Wyvill and X. Wu, Fast line scan-conversion, ACM Trans, on Graphics 9 (1990), 376-388. [234] A. Rosenfeld and A. Kak, Digital picture processing. Computer Science and Mathematics, Vol. 1 and 2, 2nd ed.. Academic Press (1982). [235] N. Roussopoulos and D. Leifker, Direct spatial search on pictorial databases using packed R-trees, Proceedings SIGMOD Conference (1985), 17-31. [236] J. Ruppert and R. Seidel, On the difficulty of tetrahedralizing ?>-dimensional non-convex polyhedra. Proceedings 5th ACM Symposium on Computational Geometry (1989), 380-393. [237] A. Saalfeld, Joint triangulations and triangulation maps. Proceedings Third Symposium on Computational Geometry, Waterioo, Canada (June 1987), 195-204. [238] A. Saalfeld, Conflation: Automatic map compilation, Internat. J. Geogr. Inform. Systems 2 (3) (1988), 217-228. [239] A. Saalfeld, Delaunay edge refinements. Proceedings 3rd Canadian Conference on Computational Geometry (1991), 33-36. [240] A. Saalfeld, Comflation: Automated map compilation. Technical Report CAR-TR-670, Center for Automation Research (May 1993). [241] A. Saalfeld, Map generalization as a graph drawing problem. Graph Drawing (Proceedings GD '94), R. Tamassia and I.G. Tollis, eds. Lecture Notes in Comput. Sci. 894, Springer-Verlag (1995), 444-451. [242] H. Samet, The Design and Analysis of Spatial Data Structures, Addison-Wesley, Reading, MA (1990). [243] H. Samet, Applications of Spatial Data Structures, Addison-Wesley, Reading, MA (1990). [244] H. Samet, Object-based and image-based representations of objects by their interiors. Advances in Image Understanding — A Festschrift for Azriel Rosenfeld, K. Bowyer and N. Ahuja, eds, IEEE Computer Society Press, Los Alamitos, CA (1996), 316-332. 1245] L.L. Scarlatos, An automatic critical line detector for digital elevation matrices. Proceedings 1990 ACSMASPRS Annual Convention 2, Denver, CO (1990), 43-52. [246] L.L. Scarlatos and T. Pavlidis, Hierarchical triangulation using cartographic coherence, CVGIP: Graphical Models and Image Processing 54 (2) (1992), 147-161. [247] A. Schmitt, Time and space bounds for hidden line and hidden surface algorithms. Proceedings Eurographics'81, North-Holland, Amsterdam (1981), 43-56. [248] L. Schubert and CA. Wang, An optimal algorithm for constructing the Delaunay triangulation of a set of line segments. Proceedings 3rd ACM Symposium on Computational Geometry, Waterloo (June 1987), 223-232. [249] W.J. Schroeder, J. A. Zarge and W. Lorensen, Decimation of triangle mesh, ACM Comput. Graphics 26 (2) (July 1992), 65-70. [250] T. Sellis, N. Roussopoulos and C. Faloutsos, The R^-tree: A dynamic index for multidimensional objects. Proceedings 13th International Conference on Very Large Data Bases (VLDB) (1987), 507-518. [251] M. Sharir and A. Shorr, On shortest paths on polyhedral spaces, SIAM J. Comput. 15 (1986), 193-215. [252] M. Sharir, The shortest watchtower and related problems for polyhedral terrains. Inform. Process. Lett. 29 (1988), 265-270. [253] C.T. Silva, J.S.B. Mitchell and A.E. 
Kaufman, Automatic generation of triangular irregular networks using greedy cuts. Proceedings IEEE Visualization '95 (1995), 201-208. [254] A.K. Skidmore, Terrain position as mapped from gridded digital elevation model. Internal. J. Geogr. Inform. Systems 4(1) (1990), 33-49. [255] K.R. Sloan and L.D. Hrechanyk, Surface reconstruction from sparse data. Proceedings Conference on Pattern Recognition and Image Processing (1981), 4 5 ^ 8 . [256] J. Snoeyink and M. van Kreveld, Linear-time reconstruction ofDelaunay triangulations with applications. Proceedings 5th European Symposium on Algorithms (1997). [257] D.A. Southard, Piecewise linear surface models from sampled data. Proceedings Computer Graphics International 91, Boston, MA, 22-28 June (1991).

Applications

of computational

geometry to geographic information systems

387

[258] S. Suri, A linear time algorithm for minimum link paths inside a simple polygon, Comput. Vision, Graphics and Image Process. 35 (1986), 99-110. [259] E.J. StoUnitz, T.D. DeRose and D.H. Salesin, Wavelets for computer graphics: A primer Part /, IEEE Computer Graphics and Applications (May 1995), 76-84. [260] E.J. StoUnitz, T.D. DeRose and D.H. Salesin, Wavelets for computer graphics: A primer Part II, IEEE Computer Graphics and AppUcations (July 1995), 75-85. [261] R. Tamassia, ed.. Strategic directions in computational geometry working group report, ACM Comput. Surveys 28 (4) (1996). [262] G. Taubin and J. Rossignac, Geometric compression through topological surgery, IBM Res. Report RC20340 (Januar>' 1996). [263] W.R. Tobler, A continuous transformation useful for districting, Ann. New York Acad. Sci. 219 (1973), 215-220. [264] V.J.D. Tsai, Delaunay triangulations in TIN creation: An overview and a linear-time algorithm, Intemat. J. GIS 7 (6) (1993), 501-524. [265] G. Turk, Re-tiling polygonal surfaces, ACM Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '92) 26 (2) (1992), 55-64. [266] J. van Bemmelen, W. Quak, M. van Heldcen and P. van Oosterom, Vector vs. raster-based algorithms for cross country movement planning. Proceedings AUTO-CARTO 11, Minneapolis (1993), 309-317. [267] R.C. Veltkamp, 3D computational morphology, EUROGRAPHICS '93 (1993), 115-127. [268] M. van Kreveld, On quality paths on polyhedral terrains. Proceedings IGIS'94: Geographic Information Systems, J. Nievergelt, T. Roos, H.J. Schack and P. Widmayer, eds. Lecture Notes in Comput. Sci. 884, Springer-Verlag (1994), 113-122. [269] M. van Kreveld, Efficient methods for isoline extraction from a TIN, Intemat. J. Geogr. Inform. Science 10 (5) (1996), 523-540. [270] M. van Kreveld, Digital elevation models and TIN algorithms. Notes for CISM Advanced School on Algorithmic Foundations of GIS (1996). [271] M. van Kreveld, Variations on sweep algorithms: Efficient computation of extended viewsheds and class intervals. Proceedings Symposium on Spatial Data Handling (1996), 13A.15-13A.27. [272] M. van Kreveld, R. van Oostrum and C. Bajaj, Contour trees and small seed sets for isosurface traversal. Proceedings 13th Annual ACM Symposium on Computational Geometry (1997), 212-219. [273] P. van Oosterom, A modified binary space partition for geographic information systems, Intemat. J. Geogr. Inform. Systems 4 (2) (1990), 133-146. [274] P. van Oosterom, Reactive Data Structures for Geographic Information Systems, Oxford University Press (1993). [275] P. van Oosterom and V. Schaukelaars, The development of an interactive multi-scale GIS, Intemat. J. GIS 9 (5) (1995), 489-507. [276] I. Vincent and P. Soille, Watersheds in digital spaces: An efficient algorithm based on immersion simulations, IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (1991), 583-598. [277] The Virtual Reality Modeling Language Specification — Version 2.0 (August 1996), http://vag.vrml.org/. [278] B. Von Herzen and A.H. Barr, Accurate triangulations of deformed, intersecting surfaces, ACM Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '87) 21 (4) (July 1987), 103-110. [279] F. Wagner and A. Wolff, An efficient and effective approximation algorithm for the map labeling problem. Proceedings 3rd European Symposium on Algorithms, Lecture Notes in Comput. Sci. 979 (1995), 420433. [280] F. Wagner and A. Wolff, Map labeling heuristics: Provably good and practically useffil. 
Proceedings 11th Annual ACM Symposium on Computational Geometry (1995), 109-118. [281] J.M. Ware, C.B. Jones and G.L. Bundy, A triangulated spatial model for cartographic generalization of areal objects. Spatial Information Theory, A. Frank and W. Kuhn, eds, Lecture Notes in Comput. Sci. 988, Springer-Verlag (1995), 173-191. [282] D.F. Watson, Computing the n-dimensional Delaunay tessellation with application to Voronoi poly topes. The Comput. J. 24 (1981), 728-746. [283] L.T. Watson, TJ. Laffey and R.M. Haralick, Topographic classification of digital image intensity surfaces using generalized splines and the discrete cosine transformation, Comput. Vision, Graphics and Image Process. 29 (1985), 143-167.

388

L. de Floriani et al.

[284] H. Webb, Creation of digital terrain models using analytical photogrammetry and their use in civil engineering. Terrain Modelling in Survey and Civil Engineering, G. Petrie and T.J.M. Petrie, eds. Whittles Publishing — Thomas Telford, London (1990), 73-84. [285] K. Weiler, Edge-based data structures for solid modeling in a curved-surface environment, IEEE Computer Graphics and Applications 5(1) (January 1985), 2 1 ^ 0 . [286] R. Weibel and M. Hellen, A framework for digital terrain modeling. Proceedings 4th International Conference on Spatial Data Handhng, Zurich (1990), 219-229. [287] R. Weibel, A typology of constraints for line simplification. Proceedings 7th International Symposium on Spatial Data Handling (SDH'96) (1996), 9A.1-9A.14. [288] J. Wemecke, The Inventor Mentor: Programming Object-Oriented 3D Graphics with Open Inventor, Addison-Wesley (1994). [289] G.W Wolf, Metric surface networks. Proceedings 4th International Symposium on Spatial Data Handling, Zurich (1990), 844-846. [290] T.C. Woo, A combinatorial analysis of boundary data structure schemata, IEEE Comput. Graphics Appl. 5 (3) (1985), 19-27. [291] M.F. Worboys, A generic model for planar geographic objects, Intemat. J. Geogr. Inform. Systems 6 (5) (1992), 353-372. [292] M.F. Worboys and P. Bokafos, A canonical model for a class ofareal spatial objects. Advances in Spatial Database (SSD93), D. Abel and B.C. Ooi, eds. Lecture Notes in Comput. Sci., Springer-Verlag (1993), 36-52. [293] J.C. Xia and A. Varshney, Dynamic view-dependent simplification for polygonal models. Proceedings IEEE Visualization '96, R. Yagel and G. Nielson, eds, S. Francisco, CA (1996), 327-334. [294] S. Yu, M. Van and J. Snoeyink, Drainage queries in TINs: From local to global and back again. Proceedings Symposium on Spatial Data Handling (SDH'96) (1996), 13A.1-13A.14. [295] B. Zhu, Computing the shortest watchtower of a polyhedral terrain in 0{n log w) time, Comput. Geom. 8 (1997), 181-193. [296] S. Zoraster, Integer programming applied to the map label placement problem, Cartographica 22 (3) (1986), 16-27.

CHAPTER 8

Making Geometry Visible: An Introduction to the Animation of Geometric Algorithms

Alejo Hausner and David P. Dobkin*
Computer Science Department, Princeton University, Princeton, NJ 08544, USA
E-mail: {ah, dpd}@cs.princeton.edu

Contents
1. Introduction
   Which would you prefer?
   Who should read this?
   It's interactive, not static
   You've already seen it
   The need for animation
   How can algorithm animation help you?
   It will improve further
   Chapter overview
2. General systems
   2.1. Sorting out sorting
   2.2. BALSA and Zeus
   2.3. "Descendants" of BALSA
3. Geometry-oriented systems
   3.1. Two-dimensional systems
   3.2. Beyond two dimensions
4. Visualizations
   4.1. Mathematical visualization
   4.2. Demonstrations
   4.3. Algorithm animations
   4.4. Interactive visualizations
5. Design issues relevant to algorithm animation systems
   5.1. Scope
   5.2. Author and audience
   5.3. Expressiveness
   5.4. User interaction
6. Techniques
   6.1. Abstraction
   6.2. A few animation techniques
   6.3. Problems specific to 3D
7. Conclusion
   The future
References

*This work has been supported in part by NSF Grants CCR-9643913 and CCR-9731535, by the US Army Research Office under Grant DAAH04-96-1-0181, and by DIMACS, an NSF Science and Technology Center.

Abstract

This chapter surveys the field of geometric algorithm animation, which has flourished in the past five years. Algorithm animation uses moving pictures to explain a computer algorithm to students learning programming, to algorithm researchers, and to programmers involved in debugging. We examine systems used to create animations of algorithms, focusing on those that specialize in geometry. We also consider a selection of animations, pointing out some useful techniques, then turn to general design issues and specific animation techniques, closing with a look at the future.


1. Introduction

Which would you prefer?

Imagine yourself learning a complicated new algorithm that solves a geometric problem. You might read a book or a research paper that explains the algorithm. You probably will have to pore over the algorithm's code, which is a difficult way to learn complicated algorithms! Even if the author had provided you with simplified pseudocode, you might still need a long time to understand the details of the algorithm.

Now suppose an animation of the algorithm is available. Instead of having to imagine what is happening to the algorithm's variables, you see colored shapes which represent the program's data. These shapes move and change as the program runs. A triangle is not presented to you as the coordinates of three points, but is drawn on the screen, changing as the program works on it. As you choose different inputs, the animation will be different. You can slow the animation down, to better understand complicated parts, or speed it up to skip what you already understand, or even run it backwards to recall where things used to be.

Which scenario would you prefer, reading about the algorithm or watching it in action? If you prefer the animation, then this chapter is for you.

Who should read this?

Anyone who explores geometry and computation may find algorithm animation useful. Explorers of geometry can be mathematicians, users or designers of geometric algorithms, or students. All of them need explanatory tools, whether they are learning about the subject or searching for new solutions to a problem. Computer-based geometric visualization and algorithm animation can provide such an aid. Visualization is the creation of images to describe an abstract idea. Computer-generated images have advantages over hand-drawn ones. They are accurate, can be modified with ease, and in the case of three-dimensional geometric objects, they can be examined from any angle.

It's interactive, not static

Algorithm animation visualizes the activity of an algorithm through moving images. As in conventional animation, the movement of images is not real, but is simulated by presenting the viewer with a sequence of still images. Unlike conventional animation, however, in algorithm animation the sequence of images is not fixed. Each image represents the state of the algorithm at some point in time. Since the inputs to an algorithm can vary, its behavior will vary and hence so will the animation of that behavior.

You've already seen it

In a way, algorithm animation is as old as Euclid. It is not too far-fetched to say that a constructive geometric proof is a form of animated algorithm. For example, the proof that one can construct an equilateral triangle with a given side (Proposition 1) instructs the reader to draw two circles centered at the side's endpoints and to draw lines to the circles' intersection. The computer in this case is a person armed with compass and straightedge, who is following instructions and reaching a result, so the procedure qualifies as an algorithm.
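The construction is small enough to carry out numerically; the following sketch (ours, not part of the chapter) computes the apex of Euclid's equilateral triangle as an intersection of the two circles.

```python
import math

def equilateral_apex(ax, ay, bx, by):
    """Euclid, Proposition 1: given segment AB, intersect the circles of
    radius |AB| centered at A and B; either intersection is the apex."""
    dx, dy = bx - ax, by - ay
    d = math.hypot(dx, dy)                   # |AB|, the common circle radius
    mx, my = (ax + bx) / 2, (ay + by) / 2    # midpoint of AB
    h = math.sqrt(d * d - (d / 2) ** 2)      # height of the triangle
    ux, uy = -dy / d, dx / d                 # unit vector perpendicular to AB
    return (mx + h * ux, my + h * uy)

print(equilateral_apex(0.0, 0.0, 1.0, 0.0))  # (0.5, 0.866...)
```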

The need for animation

Some explorers of geometry devote their time to designing algorithms that solve geometric problems. Even though their solutions will run on a computer, they often find themselves working "in the dark", needing to visualize not just the objects they are manipulating but also the process by which the algorithm modifies the objects. Geometric algorithms are often more difficult to design, program and debug than other algorithms. This is because the designers often lack the means to see the objects their algorithms are manipulating, and are forced to pore over printouts of point coordinates. Algorithm animation can help.

How can algorithm animation help you?

There are several reasons for animating an algorithm. First, a theoretician who is devising geometric algorithms needs a medium for disseminating his/her ideas. Although research papers satisfy this role, a well-crafted animation on videotape can communicate an idea more quickly than a printed description. If the author makes an interactive animation available on the world-wide web, people can learn the algorithm more easily. Second, a teacher who is presenting an algorithm to students can benefit in the same way. However, the audience in this case will not consist of experts as in the first case, so the animation must be prepared with more care. Third, animating an algorithm can stimulate the designer into finding new variations. In effect, such an animation explains the algorithm to its author! A geometric algorithm may need to handle special or degenerate cases, and animation often helps to identify these cases. Finally, algorithm animation can help with debugging of general programs, although this application turns out to have limited scope.

It will improve further

Algorithm animation is a young discipline, and will probably change in the next few years. The changes will come both from improvements in software and from improvements in hardware. Although animation as a medium can make an algorithm more understandable, it also has some limitations. For example, there are some algorithms that are rather complicated, and are difficult to understand even with a well-crafted video. In an animation where many things are happening simultaneously, the user will want to stop and ask questions, such as "why didn't the scan line advance when that point was processed?". Hypertext documents allow the viewer to ask such questions, but in the past they have portrayed either fixed images or fixed animations, and this limitation has reduced their explanatory powers. There are situations where the user wants to interact with a running algorithm, and also wants to see or hear other information about it. There is a great potential for improvement in algorithm animation through web-based programming languages like Java, which present the user with running programs and not just static images.

Animation places great demands on computer resources. If a prepared animation is made available over the internet, it will need large amounts of storage space, and more importantly, a very high-speed connection to the net, lest the users spend hours downloading a five-minute movie. On the other hand, if the user does not download the data but rather runs a program that creates an animation in real time, he/she may need a very fast computer, lest a complicated five-minute animation take 30 minutes to run. A similar problem exists if the user requests a three-dimensional scene stored as VRML data. If the scene is complicated, the rendering speed will suffer, and hence, so will the user. Increases in storage, communication bandwidth and processing power will make possible new algorithm animations and geometric visualizations.

Chapter overview

Algorithm animation has its roots in educational film and in mathematical visualization. It has also been influenced by recent developments in scientific data visualization, which is at present an extremely diverse and active field. Early efforts in algorithm animation in the late 1970's resulted in an educational film [2] which is still used today. The middle 1980's saw the development of systems [8,20] which allow a user to run algorithms and see animations interactively. Recent efforts have added color and sound [10,13], as well as the use of object-oriented programming [9,1,29,15].

Section 2 provides a survey of systems used to create animations of general algorithms. Excellent overviews of the field [27,36] are available, so here we will concentrate on questions that a designer of similar systems must face. Geometric algorithms present special opportunities and difficulties, and systems to make animations for them have also evolved [29,17,37]. Section 3 gives a survey of these systems. Again, the survey will not be exhaustive but will focus on technique. The past five years have seen the field of algorithm animation flourish. Rather than require the audience to install the animation software on their own computers, researchers have found it convenient to use video tape to record representative sessions. Section 4 surveys these video recordings. The videos selected for discussion use interesting animation techniques. Whether a researcher is creating animations or is planning a system to assist in their creation, he/she must consider some general issues. These are covered in Section 5. Finally, Section 6 presents some techniques which can be used in an algorithm animation to convey ideas more clearly.

2. General systems

2.1. Sorting out sorting

The most well-known example of an early algorithm animation is the film "Sorting out Sorting", created by Ronald Baecker [2] and presented at SIGGRAPH '81. It explains concepts

[Figure: frames from "Sorting out Sorting", showing dot-cloud snapshots of nine sorting algorithms: Linear Insertion, Bubblesort, Straight Selection, Shellsort, Quicksort, Heapsort, Tree Selection, Shakersort and Binary Insertion.]

[Figure: menu screenshot from a geometric algorithm animation system, with entries for Input Generators, Proximity Problems, Extract Components, Gaps and Covers, Fonts, Triangulations, Segments, Voronoi Diagrams, Rays, Circles, Polygons, Movie, External Classes (examples) and Settings; the Proximity Problems submenu lists Diameter (brute force), Diameter (Preparata & Shamos), Closest Pair (brute force), Closest Pair (Divide & Conquer) and Closest Pair (using Voronoi Diagram).]

Callahan [31] gave an improved reduction from minimum spanning trees to bichromatic closest pairs, based on the well-separated pair decomposition described below. With his results, the minimum spanning tree can be computed in time slower than that of the bichromatic closest pair by an O(log n) factor. Like the reduction above, this factor disappears for time bounds of the form Ω(n^{1+ε}). For rectilinear (L_1 and L_∞) metrics, bichromatic closest pair problems can be solved in linear time after a preprocessing stage in which, for each of a constant number of different axes, we sort the points according to their projections along the axis. Therefore, rectilinear minimum spanning trees can be computed in O(n log n) time in any dimension.


Fig. 2. A fair split tree.

Approximate MSTs. The method above for high-dimensional minimum spanning trees takes close to quadratic time when the dimension is large. We describe now a method for approximating the minimum spanning tree, within any desired accuracy, much more quickly. This method is due to Callahan [31,32], and is based on his technique of well-separated pair decomposition. Similar minimum spanning tree approximation algorithms with worse tradeoffs between time and approximation quality were previously found by Vaidya [128] and Salowe [116]. As we noted above, Callahan also uses this technique to compute exact minimum spanning trees from bichromatic pairs. We will see more applications of Callahan's method later in the construction of spanners.

Callahan's method is to recursively partition the given set of sites by axis-parallel hyperplanes. This produces a binary tree T, in which each node corresponds to a subset of points contained in some d-dimensional box. For any node v of T, corresponding to some set S(v) of points, define R(v) to be the smallest axis-aligned box containing the same point set, and let ℓ(v) denote the length of the longest edge of R(v). Then Callahan calls T a fair split tree if it satisfies the following properties:
• For each leaf node v of T, |S(v)| = 1.
• For each internal node v, one can choose the axis-parallel hyperplane separating its children's points to be at distance at least ℓ(v)/3 from the sides of R(v) parallel to it.
Figure 2 depicts a fair split tree. Such a tree can be constructed in time O(n log n) (for any fixed dimension) via any of several algorithms, for instance using methods for quadtree construction [26].

Callahan now defines a well-separated pair decomposition (for a given parameter s) to be a collection of pairs of nodes of a fair split tree, having the properties that
• For any pair of sites (p, q) in the input, there is exactly one pair of nodes (P, Q) in the decomposition for which p is in S(P) and q is in S(Q).
• For each pair (P, Q) in the decomposition, there is a length r such that S(P) and S(Q) can be enclosed by two radius-r balls, separated by a distance of at least sr.
The connection between this definition, minimum spanning tree edges, and bichromatic pairs, is explained by the following result.

LEMMA 6 (Callahan). Let pq be a minimum spanning tree edge, and (P, Q) be a pair of nodes in a well-separated decomposition of the input (with s = 2) for which p is in S(P) and q is in S(Q). Then pq must be the bichromatic closest pair connecting S(P) and S(Q).

PROOF. This follows from the well-known fact that the minimum spanning tree solves the bottleneck shortest path problem, of connecting two points p and q by a path with the shortest possible maximum edge length. If there were a shorter edge p'q' connecting S(P) and S(Q), one could find a path p-p'-q'-q using shorter edges than pq, contradicting this property of minimum spanning trees. □

As a consequence one can find minimum spanning trees by solving a collection of bichromatic closest pair problems for the pairs of a well-separated pair decomposition, then computing a minimum spanning tree on the resulting graph. However, for this approach to be useful, one must bound the complexity of the well-separated pair decomposition.

Callahan constructs his well-separated pair decomposition as follows. The method is recursive, and constructs a collection of pairs of sets covering all edges connecting two point sets, each represented by a node of T. Initially, we call this recursive algorithm once for the two children of each node of T. The recursive algorithm, if presented with well-separated nodes P and Q of T, simply returns one pair (P, Q). Otherwise, it swaps the two nodes if necessary so that P is the one with the larger bounding box, and calls itself recursively on node pairs (P_1, Q) and (P_2, Q), where P_1 and P_2 are the two children of P. Intuitively, each node of T is then involved only in a small number of pairs, corresponding to nodes with similar sized bounding boxes within a similar distance from the node.

LEMMA 7 (Callahan). The procedure described above results in a well-separated pair decomposition consisting of O(n) pairs, and can be performed in time O(n) given a fair split tree of the input.
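To make the construction concrete, here is a small Python sketch (ours, not from the chapter) of the pair-generation procedure. For brevity it builds a simplified split tree by cutting the longest box side at its midpoint; this stand-in does not enforce the ℓ(v)/3 fairness property, so the O(n) pair bound of Lemma 7 need not hold for it, but the coverage and separation properties of the decomposition do. It assumes distinct input points, and the names Node, build_tree, find_pairs and wspd are our own.

```python
import math

class Node:
    """Node of a (simplified) split tree, storing its points and the
    smallest axis-aligned bounding box R(v) containing them."""
    def __init__(self, points):
        self.points = points
        self.left = self.right = None
        d = len(points[0])
        self.lo = [min(p[i] for p in points) for i in range(d)]
        self.hi = [max(p[i] for p in points) for i in range(d)]

    def radius(self):
        # half the box diagonal: radius of a ball enclosing S(v)
        return math.dist(self.lo, self.hi) / 2

    def center(self):
        return [(a + b) / 2 for a, b in zip(self.lo, self.hi)]

def build_tree(points):
    """Cut the longest side of R(v) at its midpoint (median fallback)."""
    node = Node(points)
    if len(points) > 1:
        dim = max(range(len(node.lo)), key=lambda i: node.hi[i] - node.lo[i])
        cut = (node.lo[dim] + node.hi[dim]) / 2
        left = [p for p in points if p[dim] <= cut]
        right = [p for p in points if p[dim] > cut]
        if not left or not right:
            pts = sorted(points, key=lambda p: p[dim])
            left, right = pts[:len(pts) // 2], pts[len(pts) // 2:]
        node.left, node.right = build_tree(left), build_tree(right)
    return node

def well_separated(P, Q, s):
    """S(P) and S(Q) fit in two radius-r balls at distance at least s*r."""
    r = max(P.radius(), Q.radius())
    return math.dist(P.center(), Q.center()) - 2 * r >= s * r

def find_pairs(P, Q, s, out):
    """The recursion from the text: emit (P, Q) if well separated,
    otherwise split the node with the larger bounding box."""
    if well_separated(P, Q, s):
        out.append((P, Q))
    else:
        if P.radius() < Q.radius():
            P, Q = Q, P
        find_pairs(P.left, Q, s, out)
        find_pairs(P.right, Q, s, out)

def wspd(root, s):
    """Call find_pairs once for the two children of each internal node."""
    out, stack = [], [root]
    while stack:
        v = stack.pop()
        if v.left is not None:
            find_pairs(v.left, v.right, s, out)
            stack.extend([v.left, v.right])
    return out

pts = [(0, 0), (1, 0), (4, 3), (5, 3), (9, 9)]
print(len(wspd(build_tree(pts), s=2.0)))  # number of pairs covering all edges
```

With s = 2, Lemma 6 says that every minimum spanning tree edge shows up as the bichromatic closest pair of one of the emitted node pairs.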

This information is not enough to reduce the problem of finding exact minimum spanning trees to that of bichromatic closest pairs; it may be, for instance, that the decomposition involves Θ(n) pairs (P, Q) for which |S(P)| = 1 and |S(Q)| = Θ(n); the total size of all subproblems would then be quadratic. (Callahan does use this decomposition in a more complicated way to reduce minimum spanning trees to bichromatic closest pairs.) However, if we only want an approximate minimum spanning tree, the problem becomes much easier.

THEOREM 3 (Callahan). For any fixed ε, one can compute in O(n log n) time a tree having total length within a factor of (1 + ε) of the Euclidean minimum spanning tree.

PROOF. Compute a well-separated pair decomposition for s = max(2, 4/ε) and form a graph G by choosing an edge pq for each pair (P, Q) in the decomposition. Note that pq can have length at most r(4 + s) and any other edge connecting the same pair of sets must have length at least rs, so pq approximates the bichromatic closest pair of the sets P and Q within a (1 + ε) factor. Replacing each edge of the true minimum spanning tree by the corresponding edge in G produces a subgraph S of G with weight within (1 + ε) of the minimum spanning tree.

We now prove by induction on the length of the longest edge in the corresponding minimum spanning tree path that any two sites are connected by a path in S. Specifically, let e = pq be the longest edge in the minimum spanning tree path connecting the sites; then the corresponding edge in G connects two points p', q' in the same pair (P, Q) of the decomposition. There is a path of edges all shorter than e from one of the two sites to p' (via the minimum spanning tree to p, then within the cluster to p', using the fact that s ≥ 2 to prove that this last step is shorter than e). By the induction hypothesis, one can find such a path using only edges in S. Similarly there is a path of edges in S all shorter than e from q' to the other site. Therefore S is connected, and must be a spanning tree. The minimum spanning tree of G must then be shorter than S, so it approximates the true minimum spanning tree of the sites within a (1 + ε) factor. □
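Theorem 3 then turns directly into code: take one representative edge per pair from the wspd sketch above and run any graph MST algorithm on those edges, here a plain Kruskal with union-find. This is our own illustration, not Callahan's refined version, which chooses representative edges more carefully to achieve the time bound quoted below.

```python
import math

def kruskal(n, edges):
    """Standard Kruskal; edges are (weight, i, j) triples."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, i, j in sorted(edges):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((w, i, j))
    return tree

def approximate_mst(points, eps):
    """(1 + eps)-approximate Euclidean MST, as in the proof of Theorem 3:
    one arbitrary representative edge per well-separated pair, then an
    ordinary MST of the resulting sparse graph (assumes distinct points)."""
    index = {tuple(p): k for k, p in enumerate(points)}
    s = max(2.0, 4.0 / eps)
    edges = []
    for P, Q in wspd(build_tree(points), s):   # from the sketch above
        p, q = P.points[0], Q.points[0]        # any representative works
        edges.append((math.dist(p, q), index[tuple(p)], index[tuple(q)]))
    return kruskal(len(points), edges)

pts = [(0, 0), (1, 0), (4, 3), (5, 3), (9, 9)]
print(approximate_mst(pts, eps=0.1))
```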


By choosing more carefully the representative edges from each pair in the decomposition, Callahan achieves a time bound of O(n(log n + ε^{−d/2} log ε^{−1})) for this approximation algorithm.

Incremental and offline MSTs. In some applications one wishes to compute the minimum spanning trees of a sequence of point sets, each differing from the previous one by a small change, such as the insertion or deletion of a point. This can be done using similar ideas to the static algorithms described above, of constructing a sparse graph that contains the geometric minimum spanning tree. Such a graph will also change dynamically, and we use a dynamic graph algorithm to keep track of its minimum spanning tree. As with the static problem, the specific graph we use may be a Yao graph, Delaunay triangulation, or bichromatic closest pair graph, depending on the details of the problem.

We start with the simplest dynamic minimum spanning tree problem, in which only insertions are allowed to a planar point set. In this case, the graph of choice is the Yao graph described earlier, in which we compute for each point the nearest neighbor in each of six 60° wedges.

LEMMA 8. Given a set S of n points, and a given 60° wedge of angles, in O(n log n) time we can construct a data structure which can determine in O(log n) time the nearest point in S to a query point x among those forming an angle from x within the given wedge.

PROOF. We construct a Voronoi diagram for the convex distance function consisting of the Euclidean distance for points within the wedge, and infinite distance otherwise. This diagram can be found in time O(n log n) using any of several algorithms [41,56,104]. We then perform planar point location in this diagram; again, various algorithms are possible [58,92,99,119]. □

We can then apply the static-to-dynamic transformation for decomposable search problems [21] to this data structure, producing an incremental data structure in which we can insert points into S and answer the same sorts of queries, both in time O(log^2 n).

THEOREM 4. We can maintain the Euclidean minimum spanning tree of a set of points in the plane, subject to point insertions only, in time O(log^2 n) per update.


PROOF. We maintain a sparse graph G containing the minimum spanning tree. We then use the above data structure to determine for each new point six candidate edges by which it might be connected to the minimum spanning tree, and add these edges to G. Insertions to a dynamic graph minimum spanning tree problem can be handled in logarithmic time each, using a data structure of Sleator and Tarjan [124]. □
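The insertion step of this proof is easy to mimic. In the sketch below (ours), the O(log^2 n) wedge query structure of Lemma 8 is replaced by a linear scan, and the graph MST is recomputed by Kruskal rather than maintained with the Sleator-Tarjan structure, so only the candidate-edge logic of the proof is illustrated, not its time bound.

```python
import math

WEDGES = 6  # six 60-degree wedges around each point

def wedge_of(p, q):
    """Index of the 60-degree wedge around p that contains q."""
    ang = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
    return int(ang / (math.pi / 3)) % WEDGES

def candidate_edges(new_pt, existing):
    """Nearest existing neighbor of new_pt in each of the six wedges
    (linear scan; Lemma 8's structure answers this in O(log^2 n))."""
    best = [None] * WEDGES
    for q in existing:
        w = wedge_of(new_pt, q)
        if best[w] is None or math.dist(new_pt, q) < math.dist(new_pt, best[w]):
            best[w] = q
    return [(new_pt, q) for q in best if q is not None]

def incremental_mst(points):
    """Insert points one at a time, collecting six candidate edges per
    insertion; the MST of this sparse graph is the Euclidean MST
    (assumes distinct points)."""
    graph, inserted = [], []
    for p in points:
        for a, b in candidate_edges(p, inserted):
            graph.append((math.dist(a, b), a, b))
        inserted.append(p)
    # a dynamic-tree structure would update the MST per edge; we recompute:
    index = {p: k for k, p in enumerate(inserted)}
    parent = list(range(len(inserted)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, a, b in sorted(graph, key=lambda t: t[0]):
        ra, rb = find(index[a]), find(index[b])
        if ra != rb:
            parent[ra] = rb
            tree.append((a, b))
    return tree

pts = [(0, 0), (3, 1), (1, 4), (5, 5), (2, 2)]
print(incremental_mst(pts))
```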

A more complicated technique based on the same Yao graph idea can be used to handle an offline sequence of both insertions and deletions, in the same time bound [61].

Fully dynamic MSTs. The fully dynamic case, in which we must handle both insertions and deletions online (without knowing the sequence of events in advance), is much harder. A method of Eppstein [62] can be used to maintain the solution to a bichromatic closest pair problem; combining this with the reduction of Lemma 5 and a fully dynamic graph minimum spanning tree algorithm [69] gives a method with amortized running time O(n^{1/2} log^2 n + n^{1−2/(⌈d/2⌉+1)+ε}) per update, where d denotes the dimension of the problem. Recently Henzinger and King [81] further improved this to O(n^{1/3} log^{d+1} n + n^{1−2/(⌈d/2⌉+1)+ε}). Agarwal et al. [3] extended the nearest neighbor searching methods used by these data structures to more general algebraic distance functions in two dimensions, and in particular to planar L_p metrics. In rectilinear metrics, bichromatic closest pairs can be maintained in polylogarithmic time, so the total time per update can be simplified to O(n^{1/2} log^{3/2} n). However these time bounds are too large, and the algorithms too complicated, to be of practical interest.

We instead describe in more detail a technique that works well in a certain average case setting defined by Mulmuley [108] and Schwarzkopf [120]. We define a signature of size n to be a set S of n input points, together with a string s of length at most 2n consisting of the two characters "+" and "−". Each "+" represents an insertion, and each "−" represents a deletion. In each prefix of s, there must be at least as many "+" characters as there are "−" characters, corresponding to the fact that one can only delete as many points as one has already inserted. Each signature determines a space of as many as (n!)^2 update sequences, as follows. One goes through the string s from left to right, one character at a time, determining one update per character. For each "+" character, one chooses a point x from S uniformly at random among those points that have not yet been inserted, and inserts it as an update in the dynamic problem. For each "−" character, one chooses a point uniformly at random among those points still part of the problem, and updates the problem by deleting that point.

For any signature, we define the expected time for a given algorithm on that signature to be the average time taken by that algorithm among all possible update sequences determined by the signature. The algorithm is not given the signature, only the actual sequence of updates. We then define the expected time of the algorithm on inputs of size n to be the maximum expected time on any signature of size n. In other words, we choose the signature to force the worst case behavior of the algorithm, but once the signature is chosen the algorithm can expect the update sequence to be chosen randomly from all sequences consistent with the signature.
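The random process a signature defines is easy to state in code; the sketch below (ours, with hypothetical names) samples one update sequence from a signature, which is handy for testing a dynamic structure against this expected-case model.

```python
import random

def sample_updates(points, signature):
    """Sample one update sequence from a signature (S, s): each '+'
    inserts a uniformly random not-yet-inserted point of S, each '-'
    deletes a uniformly random currently present point."""
    remaining = list(points)      # points never yet inserted
    present = []                  # points currently in the structure
    updates = []
    for c in signature:
        if c == '+':
            x = remaining.pop(random.randrange(len(remaining)))
            present.append(x)
            updates.append(('insert', x))
        else:
            x = present.pop(random.randrange(len(present)))
            updates.append(('delete', x))
    return updates

# Example: insert four points, delete two of them, insert one more.
pts = [(0, 0), (1, 0), (0, 1), (2, 2), (3, 1)]
print(sample_updates(pts, '++++--+'))
```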


Note that this generalizes the concept of a randomized incremental algorithm, since the expected case model for such algorithms is generated by signatures containing only the "+" character. To demonstrate the power of this expected case model, and derive a fact we will need in our minimum spanning tree algorithm, we prove the following result. Compare this with the Θ(n) worst case bound on the amount of change per update in the Delaunay triangulation.

LEMMA 9 (Mulmuley). For any signature of size n, the expected number of edges in a dynamic planar Delaunay triangulation that change per update is O(1).

PROOF. We first consider the change per insertion. Suppose after an insertion at some step i of the algorithm, there are exactly j points remaining. Since the Delaunay triangulation is a planar graph, it will have at most 3j − 6 edges. Each edge will have just been added to the graph if and only if one of its endpoints was the point just inserted, which will be true with probability 2/j. So the expected number of additional edges per insertion is at most (3j − 6) · 2/j = O(1). The number of existing edges removed in the insertion is at most proportional to the number of edges added, and can possibly be even smaller if the convex hull becomes less complex as a result of the insertion. Thus the total change per insertion is O(1). The total change per deletion can be analyzed by a similar argument that examines the graph before the deletion, and computes the probability of each edge being removed in the deletion. □

This fact can then be used to show that the Delaunay triangulation can be maintained efficiently. Deletions can be performed in time proportional to the complexity of the change [5]. Insertions can also be performed in this amount of time as long as the point to be inserted can be located within the existing Delaunay triangulation. This point location can be performed using a data structure based on the history of the structure of the Delaunay triangulation, similar to that used by Guibas et al. [78], and which can be shown to have logarithmic expected search time in this expected case model. With some care, this structure can be updated quickly after each deletion.

THEOREM 5. The minimum spanning tree of a planar point set can be maintained fully dynamically for any signature of size n in expected time O(log n) per update.

PROOF. As shown by Mulmuley, by Schwarzkopf, and by Devillers et al. [51], we can maintain the Delaunay triangulation in this time bound. Since the Delaunay triangulation is a planar graph, we can find its minimum spanning tree using a data structure of Eppstein et al. [70]. □

OPEN PROBLEM 1. Can we maintain the Euclidean minimum spanning tree dynamically in polylogarithmic worst case or amortized time per update?

Kinetic MSTs. Instead of allowing only one point at a time to change arbitrarily, one can also consider an alternative form of dynamism, in which all points move simultaneously but predictably. The simplest form of this problem is one in which a set of n points each have a linear motion (as a function of a parameter which we refer to conventionally as time). One can ask how many times the minimum spanning tree of these moving points changes, ask for an algorithm to compute this sequence of changes, or find the value of the time parameter that optimizes some given function of the MST.


Note that two linearly moving points have a quadratically varying squared Euclidean distance, so one cannot directly apply known combinatorial bounds or algorithms for graphs with linearly varying edge weights [52,64,71,79]. However, several insights from the graph case carry over to the geometric setting; in particular, for point sets in general position (no three simultaneous equal distances) the MST changes must be in the form of swaps between two equal-length edges. If one draws an arrangement of curves, one curve graphing the weight function of each potential edge in the MST, then these changes only occur at times corresponding to arrangement vertices. Hence there can be O(n^4) MST changes total, in any metric with the property that the distance between two moving points is a piecewise algebraic function with bounded degree and a bounded number of breakpoints. It is easy to find point sets for which the MST changes Ω(n^2) times (simply pass n/2 moving points along a line containing n/2 fixed points). Katoh et al. [86] showed that the Yao graph, and hence the Euclidean MST, of a linearly moving point set can only change O(n^3 2^{α(n)}) times, that the rectilinear MST can change O(n^{5/2} α(n)) times, and that the rectilinear maximum spanning tree can change O(n^2) times. For Euclidean maximum spanning trees the best bound on the number of changes is currently only O(n^{7/2}) [126]. The O(n^{5/2} α(n)) bound on rectilinear MST changes follows by partitioning the problem into a sequence of O(n α(n)) sparse graph problems; it therefore follows from the recent breakthrough results of Dey [52] that this bound can be improved to O(n^{7/3} α(n)). All of these results are largely independent of the dimension. Note that only in the case of rectilinear maximum spanning trees are these bounds tight.

OPEN PROBLEM 2. What is the maximum number of changes in the Euclidean minimum spanning tree of n linearly moving points? How quickly can we compute that sequence of changes?
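To see where the swap events come from, consider two candidate edges among four linearly moving points. The difference of their squared lengths is a quadratic polynomial in t, so the times at which the two edges have equal length, the potential arrangement vertices mentioned above, come from a quadratic equation. A small sketch of our own:

```python
import math

def sq_dist_poly(p, q):
    """Coefficients (a, b, c) of |p(t) - q(t)|^2 = a t^2 + b t + c for
    moving points given as (position, velocity) pairs in the plane."""
    (px, py), (vx, vy) = p
    (qx, qy), (wx, wy) = q
    dx, dy = px - qx, py - qy          # relative position at t = 0
    ex, ey = vx - wx, vy - wy          # relative velocity
    return (ex * ex + ey * ey, 2 * (dx * ex + dy * ey), dx * dx + dy * dy)

def equal_length_times(e1, e2):
    """Times t at which edges e1 and e2 (pairs of moving points) have
    equal Euclidean length: roots of a quadratic in t."""
    a1, b1, c1 = sq_dist_poly(*e1)
    a2, b2, c2 = sq_dist_poly(*e2)
    a, b, c = a1 - a2, b1 - b2, c1 - c2
    if abs(a) < 1e-12:                 # degenerate: linear or constant
        return [] if abs(b) < 1e-12 else [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted(((-b - r) / (2 * a), (-b + r) / (2 * a)))

fixed = lambda x, y: ((x, y), (0.0, 0.0))
moving = lambda x, y, vx: ((x, y), (vx, 0.0))
e1 = (fixed(0, 0), fixed(1, 0))              # static unit-length edge
e2 = (fixed(0, 0), moving(-3, 0, 1.0))       # edge shrinking, then growing
print(equal_length_times(e1, e2))            # e2 has length 1 at t = 2 and t = 4
```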

Finally, this problem of moving points can be combined with the dynamic point sets described earlier in kinetic geometry, a framework in which a system of linearly moving points can be updated by inserting, deleting, or changing the motion of points [19]. Basch et al. [20] describe results on proximity problems of moving points in this framework.

2.2. Maximum weight spanning trees

For graphs, the maximum spanning tree problem can be transformed to a minimum spanning tree problem and vice versa simply by negating edge weights. But for geometric input, the maximum spanning tree is very different from the minimum spanning tree, and different algorithms are required to construct it. This problem was first considered by Monma et al. [106], and has applications in certain clustering problems [16].

We first examine the edges that can occur in the maximum spanning tree. One might guess, by analogy to the fact that the minimum spanning tree is a subgraph of the Delaunay triangulation, that the maximum spanning tree is a subgraph of the farthest point Delaunay triangulation. Unfortunately this is far from being the case: the farthest point Delaunay triangulation can only connect convex hull vertices, and it is planar whereas the maximum spanning tree has many crossings. However we will make use of the farthest point Delaunay triangulation in constructing the farthest neighbor forest, formed by connecting each site to the site farthest away from it. The first fact we need is a standard property of graph minimum or maximum spanning trees.

LEMMA 10. The farthest neighbor forest is a subgraph of the maximum spanning tree.

LEMMA 11 (Monma et al. [106]). Let each tree of the farthest neighbor forest be two-colored. Then for each such tree, the convex hull vertices of any one color form a contiguous nonempty interval of the convex hull vertices. The trees of the forest can be given a cyclic ordering such that the intervals adjacent to any such interval come from adjacent trees in the ordering.

LEMMA 12 (Monma et al. [106]). Let e = (x, y) be an edge in the maximum spanning tree but not in the farthest neighbor forest, with x in some farthest neighbor tree T. Then x and y are both convex hull vertices, and y is in a tree adjacent to T in the cyclic ordering of Lemma 11.

Putting these pieces of information together, we have the following result.

THEOREM 6 (Monma et al. [106]). The maximum spanning tree of a planar point set can be constructed in O(n log n) time by computing the farthest neighbor forest, determining the cyclic ordering of Lemma 11, finding the longest edge between each adjacent pair of trees in the cyclic ordering, and removing the shortest such edge.
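The ingredients are easy to sanity-check by brute force. The sketch below (ours, not the chapter's) computes a ground-truth maximum spanning tree by running Kruskal with weights in decreasing order on the complete graph, and verifies Lemma 10, that every farthest-neighbor edge appears in it (assuming distinct inter-point distances, so the maximum spanning tree is unique).

```python
import math
from itertools import combinations

def max_spanning_tree(points):
    """Kruskal with decreasing weights on the complete geometric graph."""
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = sorted(combinations(range(len(points)), 2),
                   key=lambda e: -math.dist(points[e[0]], points[e[1]]))
    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

def farthest_neighbor_forest(points):
    """Edge from each site to its farthest site (quadratic scan)."""
    edges = set()
    for i, p in enumerate(points):
        j = max((k for k in range(len(points)) if k != i),
                key=lambda k: math.dist(p, points[k]))
        edges.add((min(i, j), max(i, j)))
    return edges

pts = [(0, 0), (7, 1), (3, 5), (9, 4), (2, 8)]
tree = set(tuple(sorted(e)) for e in max_spanning_tree(pts))
assert farthest_neighbor_forest(pts) <= tree   # Lemma 10
```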

The farthest neighbor forest and the longest edge between adjacent trees can be computed easily via point location in the farthest point Voronoi diagram. Eppstein [63] considered the same problem from the average case dynamic viewpoint discussed above. His algorithm performs a similar sequence of steps dynamically: maintaining a dynamic farthest neighbor forest, keeping track of the intervals induced on the convex hull and of the cyclic ordering of the intervals, and recomputing longest edges as necessary between adjacent intervals using a dynamic geometric graph based on the rotating caliper algorithm for static diameter computation.

THEOREM 7 (Eppstein). The Euclidean maximum spanning tree can be maintained in expected time O(log^4 n) per update.

Just as higher dimensional minimum spanning trees are closely related to bichromatic nearest neighbors, higher dimensional maximum spanning trees can be related to bichromatic farthest neighbors. Agarwal et al. [4] used this idea to give a randomized algorithm for the three-dimensional maximum spanning tree with expected time complexity O(n^{4/3} log^{4/3} n), almost matching the bound for the corresponding minimum spanning tree problem. The same authors also provide fast approximation algorithms to three- and higher-dimensional maximum spanning tree problems.


Since the maximum spanning tree involves many edge crossings, it is also natural to consider the problem of finding the maximum planar spanning tree, that is, the maximum weight spanning tree not involving any edge crossings. The exact complexity of this problem appears to be unknown, but Alon et al. [7] point out that it can be approximated to within a factor of two simply by choosing a tree with a star topology (in which one hub vertex is connected to all n − 1 others). Choosing the best possible hub can be done in O(n^2) time; Alon et al. show that a constant factor approximation to the maximum non-crossing spanning tree can be found in O(n) time by choosing a suboptimal hub.

OPEN PROBLEM 3. What is the complexity of finding the exact maximum weight non-crossing spanning tree? If it is NP-hard, how well can it be approximated in polynomial time?
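The two-approximate star is a few lines of code (ours): try every vertex as the hub and keep the star of maximum total edge length, an O(n^2) scan. A star is non-crossing since all of its edges share the hub.

```python
import math

def best_hub_star(points):
    """Best star: the hub maximizing the total length of its edges;
    by the observation of Alon et al., this 2-approximates the
    maximum non-crossing spanning tree."""
    def star_weight(h):
        return sum(math.dist(points[h], q) for q in points)
    hub = max(range(len(points)), key=star_weight)
    return [(hub, i) for i in range(len(points)) if i != hub]

pts = [(0, 0), (10, 0), (5, 8), (5, 3)]
print(best_hub_star(pts))
```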

2.3. Low-degree spanning trees

For points in the plane (with the Euclidean metric), any minimum spanning tree has degree at most six, and a perturbation argument shows that there always exists a minimum spanning tree with degree at most five [107]. In general the degree of a minimum spanning tree in any dimension is bounded by the kissing number (the maximum number of disjoint unit spheres that can be simultaneously tangent to a given unit sphere) [114]. However it is interesting to consider the construction of trees with even smaller degree bounds.

As an extreme example, the traveling salesman path problem asks us to find a spanning tree with degree at most two. As is well known, one can approximate this to within a factor of two by performing an Euler tour of the graph formed by doubling the edges of a minimum spanning tree (a code sketch of this heuristic appears after Open Problem 4 below). Christofides' heuristic [42] reduces the approximation ratio to 3/2 by forming a smaller Eulerian graph, the union of a minimum spanning tree and a matching on its odd degree vertices. These techniques do not take much advantage of the geometry of the problem, and work for any metric space. Very recently, Arora [11,12] has discovered a polynomial time approximation scheme for the planar Traveling Salesman Problem. The basic idea, like that of many recent geometric approximation algorithms, is to use dynamic programming on a structure similar to Callahan's fair split tree. Arora shows that, for any point set, there exists a tour approximating the TSP and a recursive decomposition in which each box is crossed only a small number of times by this approximate tour. One can then find the best possible subtour for each small subset of edges crossing each box, by combining similar information from smaller boxes. The time to approximate the TSP within a factor of 1 + ε is O(n log^{O(1/ε)} n). This approximation strategy generalizes to higher dimensional traveling salesman problems as well, with a time bound of the form O(n log^{(O(√d/ε))^{d−1}} n). For any fixed dimension and fixed value of ε this is within a polylogarithmic factor of linear, but with an impractically high exponent in the polylog.

One can also consider degree bounds between two and five. Khuller et al. [90] consider this problem for degree bounds of three and four; they show that one can find constrained spanning trees with length 3/2 and 5/4 times the MST length respectively.


They also show that in any dimension one can find a degree-three tree with total length at most 5/3 that of the minimum spanning tree. The methods of Khuller et al. are based, like the 2-approximation to the TSP, on modifications to the minimum spanning tree to reduce its degree. For instance, their algorithm for the planar degree-three tree problem roots the minimum spanning tree, then finds for each vertex v the shortest path starting at v and visiting all the (at most four) children of v. The union of these paths is the desired degree-three tree. The exact solution to the planar degree-three spanning tree problem is NP-hard [111] but apparently the complexity of the degree-four problem remains open.

OPEN PROBLEM 4. Is it NP-hard to find the minimum weight degree-four spanning tree of a planar point set?
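Here is the doubled-MST heuristic mentioned at the start of this subsection, as a runnable sketch of our own: build the MST with Prim's algorithm, then take a depth-first preorder of the tree, which is exactly the Euler tour of the doubled tree with repeated vertices shortcut. By the triangle inequality the resulting path has length at most twice the MST weight, and hence at most twice that of the optimal traveling salesman path.

```python
import math

def prim_mst(points):
    """Prim's algorithm, O(n^2): returns adjacency lists of the MST."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest connection cost to the tree
    link = [0] * n             # which tree vertex gives that cost
    adj = [[] for _ in range(n)]
    best[0] = 0.0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            adj[u].append(link[u])
            adj[link[u]].append(u)
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v], link[v] = d, u
    return adj

def tsp_path_2approx(points):
    """Depth-first preorder of the MST = Euler tour with shortcuts;
    the classical factor-2 heuristic for the traveling salesman path."""
    adj = prim_mst(points)
    order, stack, seen = [], [0], [False] * len(points)
    while stack:
        u = stack.pop()
        if not seen[u]:
            seen[u] = True
            order.append(u)
            stack.extend(reversed(adj[u]))
    return order

pts = [(0, 0), (2, 1), (5, 0), (3, 4), (1, 3)]
print(tsp_path_2approx(pts))
```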

It is also natural to consider maximization versions of these bounded-degree spanning tree problems. Barvinok [18] shows that the adjacency matrix of a complete geometric graph (using a polyhedral approximation to the Euclidean distance function) has "low combinatorial rank" and uses this to approximate the maximum traveling salesman problem within any factor (1 − ε) in polynomial time. Little seems to be known about similar bounded-degree maximum spanning tree problems with larger degree bounds. Alon et al. [7] consider the maximum non-crossing traveling salesman problem, and its generalization to the construction of a maximum bounded-degree non-crossing spanning tree. They show that this can be approximated by linking the edges of a heavy non-crossing matching formed by projecting the points onto a line, splitting the points into two sets by their median on this line, and matching points of one set with those of the other. The approximation ratio of this method for the non-crossing traveling salesman problem is 1/π − ε; this ratio is not worked out explicitly for the other bounded-degree spanning tree problems but remains a constant. The time used by Alon et al. to find this long non-crossing path is O(n log n). As with the other non-crossing problems considered by Alon et al., the complexity of finding exact solutions apparently remains open.

2.4. k-point spanning trees

We next consider the k-minimum spanning tree problem: given n points in the Euclidean plane, find the shortest tree spanning k of the points (Figure 3(a)). Up to constant factors in the approximation ratio, the k-MST problem is equivalent to the problem of finding a path connecting k points (the k-TSP problem) or a Steiner tree connecting k points. The choice of Euclidean metric is also not critical. However we will use the k-MST formulation for simplicity. The k-MST problem was introduced independently by Zelikovsky and Lozevanu [133], and by Ravi et al. [113]. Many similar k-point selection problems with other optimization criteria can be solved in polynomial time [50,68] but the k-MST problem is NP-complete [113,133] (as are obviously the k-TSP and k-Steiner tree variants), so one must resort to some form of approximation. In a sequence of many papers, the approximation ratio was reduced to O(k^{1/4}) [113], O(log k) [75,102], O(log k / log log n) [65], O(1) [27], 2√2 [105], and finally 1 + ε [11].


Fig. 3. (a) 6-point minimum spanning tree, (b) Guillotine partition.

We describe the 2√2-approximation algorithm, but the other results use similar methods. In particular, the 1 + ε approximation results of Arora can be seen as a more complicated generalization of this method. For related work on non-geometric k-MST problems see [17,28,37,113,133].

Mitchell [105] first restricts his attention to the rectilinear metric in the plane; the weight of a tree in this metric differs from its Euclidean weight by a factor of at most √2, so this simplification entails only that factor loss in the overall approximation ratio of his algorithm. With this restriction, one can look for a rectilinear tree in which vertices are connected by paths of horizontal and vertical line segments. Most of the approximation algorithms for the geometric k-MST problem work by using dynamic programming to construct a suitable recursive partition of the plane, and Mitchell's is no exception. Mitchell applies the concept of a guillotine subdivision, previously used e.g. in VLSI design; this is a recursive partition of the plane into rectangles, in which each rectangle is subdivided by a vertical or horizontal line segment into two smaller rectangles (Figure 3(b)). For a given rectilinear tree, the span of a line segment in the subdivision is the shortest contiguous subsegment containing all the points where the segment intersects the tree. (In other words, this span is a one-dimensional convex hull of the intersection of the line segment with the rectilinear tree.) The key technical result of Mitchell, the proof of which is too complicated to repeat here, is the following:

LEMMA 13 (Mitchell). For any rectilinear graph G, we can find a guillotine subdivision S, such that each edge of G is covered by the spans of segments in S, and such that the weight of these spans is at most twice the weight of G.

Let G be the optimum rectilinear k-point spanning tree; then the lemma shows that there exists a guillotine subdivision, for which the spans of segments form a connected region of the plane covering at least k points, and with weight at most twice that of G. Conversely, if we construct the minimum weight guillotine subdivision with these properties, we can simply take a minimum spanning subgraph of this region of the plane to produce a rectilinear k-point spanning tree with the same weight, which will then be twice that of the optimum rectilinear tree. (Actually, in general this process will form a Steiner tree rather than a spanning tree. One must instead be more careful, and find a guillotine subdivision for which the minimum spanning subtree of the given k points has minimum total weight.)


Thus we have reduced the problem to one of finding an optimum guillotine subdivision. This can be done by dynamic programming: there are O(n^4) combinatorially distinct ways of forming a rectangle containing one or more of the sites. For each of these different rectangles, for each possible number of sites within the rectangle, and for each of the (polynomially many) distinct ways of connecting those sites to the boundary of the rectangle, we find the optimal guillotine partition within that rectangle by combining information from O(n) pairs of smaller rectangles. The result is a polynomial time algorithm for finding the optimum guillotine partition.

THEOREM 8 (Mitchell). In polynomial time, one can find a spanning tree of a subset of k out of n given planar sites, having total weight at most 2√2 times the optimum.

The complexity of this method, while polynomial, is very high, but it can be reduced somewhat at the expense of increasing the approximation ratio to a larger constant [105]. Further work remains to be done on reducing the time complexity of this algorithm to a more practical range. In this direction, Eppstein [65] showed that an O(log k) approximation could be found in time O(n log n + nk log k) by combining a similar dynamic programming approach with techniques based on quadtrees.

Arora [11,12] showed that the approximation factor could be reduced to 1 + ε. He shows that there exists a partition similar to the guillotine partition above, and a tree in which each partition edge is crossed a small number of times, such that the total length of this tree is within this 1 + ε factor of optimal. He then uses very similar dynamic programming methods to find such a tree. Arora also reduces the time complexity of this approach to O(n log^{O(1/ε)} n) which, while still high, is at least within a polylogarithmic factor of linear.

2.5. Minimum diameter spanning trees

The previous spanning tree problems have all been based on the weight of the tree constructed. We now consider other criteria for the quality of a tree. The diameter of a tree is the length of its longest path. Since geometric spanning trees are often used in applications such as VLSI, in which the time to propagate a signal through the tree is proportional to its diameter, it makes sense to look for a spanning tree of minimum diameter. Ho et al. [82] give an algorithm for this problem, based on the following fact:

LEMMA 14 (Ho et al.). Any point set has some minimum diameter spanning tree in which there are at most two interior points.

PROOF. We start with any minimum diameter spanning tree T, and perform a sequence of diameter-preserving transformations until it is in the above form. Let P be the longest path in the given tree, and number its vertices v_1, v_2, ..., v_p. We first form a forest by removing all edges of P from T, and for each vertex v of P, let T_v denote the tree in this forest containing v. For any other vertex u, let P_u denote the


Fig. 4. (a) Minimum diameter spanning tree corresponds to cover by two circles, (b) Point set with high diameter minimum spanning tree.

vertex v such that u is in T_v. Then we construct a new tree T' by adding to P an edge from each vertex u to P_u. T' has the same diameter as the original tree, since the distance between any two vertices cannot increase except when they are in the same tree T_v, and in that case (by the assumption that P is a diameter path) the distance of each point to v is less than the distance from v to the endpoints of P.

Now suppose that P has four or more edges and the length of the path v_1-v_2-v_3 is at most half the length of P. (If not, we can reverse P and consider the three vertices at its other end.) Form a tree T'' by removing every edge u v_2 and reconnecting each such vertex u to v_3. This can only decrease the lengths of paths already going through v_3 in T', so the only pairs of vertices with increased path lengths are those newly connected to v_3. But the length of any such path is at most twice the length of the path v_1-v_2-v_3, so the diameter of T'' is no more than that of T. Each repetition of this transformation decreases the number of edges in P until it is at most three, and preserves the property that each vertex is within one edge of P, so we will eventually reach a tree of the form specified in the lemma. □

THEOREM 9 (Ho et al.). We can find a minimum diameter spanning tree of any point set in time O(n^3).

PROOF. We simply try all single interior vertices and pairs of interior vertices. For the latter case, we still need to determine how to assign each remaining point to one of the two interior vertices. If the diameter path of the tree is v_1-v_2-v_3-v_4, and we know the lengths of v_1-v_2 and v_3-v_4, we can perform this assignment by drawing two circles, centered at the two interior vertices, with those lengths as radii (Figure 4(a)); these circles must together cover the whole point set, and each point can be assigned to the interior vertex corresponding to the circle covering it. Ho et al. show that, if we sort the points by their distances from v_2 and v_3, the space of all minimal pairs of covering circles can be searched in linear time. These sorted orders can be precomputed in a total of O(n log n) time. □
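The search in this proof can be illustrated directly. The following brute-force Python sketch is my own illustration, not Ho et al.'s algorithm: it scans candidate covering-circle pairs by sorting, giving O(n^3) time overall, and it uses max(2·r1, 2·r2, r1 + |uv| + r2) as the diameter of a two-pole tree, which is exact when each pole has at least two attached points and otherwise a safe upper bound.

    from math import hypot, inf

    def min_diameter_spanning_tree_cost(pts):
        """Try every point as the single interior vertex (a star), and every
        pair (u, v) of interior vertices with the remaining points split
        between two covering circles.  Assumes len(pts) >= 2."""
        d = lambda p, q: hypot(p[0] - q[0], p[1] - q[1])
        n, best = len(pts), inf
        for u in pts:                                   # monopolar (star) trees
            ds = sorted((d(u, p) for p in pts if p is not u), reverse=True)
            if len(ds) >= 2:
                best = min(best, ds[0] + ds[1])         # two farthest leaves
        for i in range(n):                              # dipolar trees
            for j in range(i + 1, n):
                u, v = pts[i], pts[j]
                rest = [p for p in pts if p is not u and p is not v]
                rest.sort(key=lambda p: d(p, u))
                # suffix_max[k] = max distance to v among rest[k:]
                suffix_max = [0.0] * (len(rest) + 1)
                for k in range(len(rest) - 1, -1, -1):
                    suffix_max[k] = max(suffix_max[k + 1], d(rest[k], v))
                for k in range(len(rest) + 1):          # rest[:k] joins u
                    r1 = d(rest[k - 1], u) if k else 0.0
                    r2 = suffix_max[k]
                    best = min(best, max(2 * r1, 2 * r2, r1 + d(u, v) + r2))
        return best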

Ho et al. also consider optimization of combinations of diameter and total weight; they show that it is NP-complete to find a tree satisfying given bounds on these two measures. It is also of interest to combine these criteria with a bound on vertex degree. Of course for


degree-two trees, minimum diameter and minimum weight are equivalent to the traveling salesman problem.

The minimum weight spanning tree itself can have very high diameter (an Ω(√n) factor away from optimal); for instance a point set spaced nearly uniformly in a unit square can have a path of length Ω(√n) as its minimum spanning tree (Figure 4(b)). Conversely, an upper bound of O(√n) follows from results on the worst case length of the minimum spanning tree of n points [24,125]. One can achieve a better diameter by relaxing the requirement of minimum weight; as we describe later, for two- and three-dimensional point sets, one can find a subgraph of the complete Euclidean graph with degree three and total weight O(1) times that of the minimum spanning tree, for which shortest paths in the subgraph have length within a factor of O(1) of the Euclidean distance [44]. A single-source shortest path tree in such a graph is a degree-three tree that has both weight and diameter within a constant factor of the minimum. Similar problems of constructing spanning trees combining diameter, weight, and bottleneck shortest path bounds were considered by Salowe et al. [118] and Khuller et al. [91].

2.6. Minimum dilation spanning trees

Of the various common geometric network design quality criteria, only dilation (the largest ratio of network distance to Euclidean distance) is underrepresented in research on tree problems. Dilation has been studied for non-geometric tree design problems [30], and as we will see, it is very prominent in work on other classes of graphs. The lack of work on tree dilation is likely because there is little to do in terms of worst case bounds:

LEMMA 15. Any spanning tree on the vertices of a regular polygon has dilation Ω(n).

PROOF. As with any tree, we can find a vertex v, the removal of which partitions the tree into components, the largest of which has at most 2n/3 vertices. Therefore there is some pair of vertices from different components, adjacent to each other along the boundary of the polygon, and separated from v by at least n/6 polygon edges. The path in this tree connecting this pair passes through v, and so has dilation Ω(n). □

Conversely, the minimum spanning tree has dilation O(n) by its bottleneck shortest path property. However, the minimum spanning tree is not always good in terms of dilation; an example similar to that of Figure 4(b) shows that it can have dilation Ω(n) even when dilation O(√n) is possible; for such point sets it is easy to construct spanning trees with dilation O(√n).

OPEN PROBLEM 5. Is it possible to construct the exact minimum dilation geometric spanning tree, or an approximation to it, in polynomial time? Does the minimum dilation spanning tree have any edge crossings? How well is it approximated by the minimum spanning tree?
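Given any candidate spanning tree, its dilation can at least be computed exactly in O(n^2) time by comparing tree path lengths against Euclidean distances. A minimal Python sketch (my own illustration, not from the survey):

    from math import hypot

    def tree_dilation(pts, tree_edges):
        """Dilation (stretch factor) of a spanning tree on point set pts.
        tree_edges: list of index pairs (i, j).  O(n^2): one DFS per root."""
        n = len(pts)
        d = lambda i, j: hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])
        adj = [[] for _ in range(n)]
        for i, j in tree_edges:
            adj[i].append(j)
            adj[j].append(i)
        worst = 1.0
        for root in range(n):
            dist = [-1.0] * n
            dist[root] = 0.0
            stack = [root]
            while stack:                       # DFS accumulating path lengths
                i = stack.pop()
                for j in adj[i]:
                    if dist[j] < 0:
                        dist[j] = dist[i] + d(i, j)
                        stack.append(j)
            for j in range(n):
                if j != root and d(root, j) > 0:
                    worst = max(worst, dist[j] / d(root, j))
        return worst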

3. Planar graphs

We now discuss problems of constructing planar graphs having a given planar point set as vertices. Note that any such problem (in which the graph is required only to be connected)


requires Ω(n log n) time in the randomized algebraic decision tree model, since element distinctness is reducible to sorting, sorting is reducible to convex hulls, and convex hulls can be found in linear time from a connected planar graph spanning the input points [103].

3.1. Minimum weight triangulation

The minimum weight triangulation problem asks us to find a triangulation (that is, a maximal planar straight-line graph on the given set of vertices) that minimizes the total Euclidean edge length. This problem is not known to be NP-hard, nor is it known to be solvable in polynomial time, and the complexity of minimum weight triangulation is one of the few problems left from Garey and Johnson's original list of open problems [74]. However, a generalized problem with non-Euclidean distances is NP-complete [100]. We describe here two very recent developments in the theory of minimum weight triangulations. First, Levcopoulos and Krznaric [96] have shown that one can find a triangulation with total length approximating the minimum to within a (large) constant factor; no such approximation was previously known. Second, Dickerson and Montague have found a method of finding in polynomial time a subgraph of the exact minimum weight triangulation which, empirically, is usually quite large; enough so that moderate sized instances of the problem can now be solved exactly. It is possible that their method in fact gives a polynomial time algorithm for the problem.

MWT approximation. The two best approximations known to the MWT are those of Plaisted and Hong [112] and of Levcopoulos and Krznaric [96]. Although dissimilar in many ways, an important basic idea is shared by both papers. Rather than finding a triangulation directly, they consider the problem of finding a minimum weight partition into convex polygons with vertices at the input points (MC for short).

We first sketch Plaisted and Hong's work. Let v be an interior vertex of a planar straight-line graph with convex faces (such as the MWT of S). We can find a star of three or four edges forming no reflex angles at v as follows: choose the first star edge arbitrarily, and choose each successive edge to form the maximal non-reflex angle with the previous edge. Conversely, if each interior vertex of a planar straight-line graph has no reflex angles, the graph must have convex faces. Thus motivated, Plaisted and Hong try to build a graph on S with convex faces by piecing together local structures, namely stars. For each point s_i in the interior of the convex hull of S, they find the minimum weight star of three or four edges. This collection of minimum weight stars, together with the convex hull of S, forms a graph with total edge length less than twice that of MC and therefore of the MWT. Unfortunately, the resulting graph may not be planar. Plaisted and Hong use a complicated case analysis to remove crossings from this graph, ending up with a planar straight-line graph with convex faces having total edge length at most 12 times that of the MWT. The ring heuristic (connecting every other vertex of a convex polygon P to form a polygon P' with ⌊n/2⌋ vertices and a smaller perimeter, and triangulating P' recursively; see Figure 5(a)) can be used to convert this convex partition to a triangulation, at the cost of a logarithmic factor in total length (and hence in the approximation ratio). Olariu et al. [110] showed that this O(log n) bound is tight.


Fig. 5. (a) Ring heuristic triangulation of a convex polygon, (b) Greedy triangulation can have Ω(√n) larger weight than minimum.

We now outline the results of Levcopoulos and Krznaric [96]. Their idea is to start with a proof that the greedy triangulation (in which one considers the possible edges in increasing order by weight, adding an edge to the triangulation exactly when it does not cross any previously added edge) achieves approximation ratio O(√n); an Ω(√n) lower bound was previously known [95,101] and is depicted in Figure 5(b). Rather than proving this directly, Levcopoulos and Krznaric again work with convex partitions. They define a greedy convex partition GC by constructing stars using only the edges in the greedy triangulation. Unlike the construction of Plaisted and Hong, the result is planar (since it is a subgraph of a planar graph) so no crossings need be removed.

The first step of their proof is a lemma that the weight of GC approximates MC within a factor of O(√n). Given any graph G such as MC or GC, define max_G(v) to be the length of the longest edge incident to v in G; for any planar graph the sum of these quantities is within a constant factor of the total graph weight. Then if the greedy partition is heavier than the minimum partition by a factor of k, Levcopoulos and Krznaric choose v to maximize max_GC(v) - max_MC(v) among all vertices with max_GC(v) = Ω(k · max_MC(v)). Some simple algebra shows that the weight of GC is O(n · max_GC(v)). They then use the properties of the greedy triangulation (and the details of their construction of GC from that triangulation) to construct a fan of Ω(k) edges resembling that in Figure 5(b), in which the short edge of the fan corresponds roughly with max_MC(v) and the long edge corresponds with max_GC(v). From this fan one can find a convex chain of Ω(k) vertices such that any star at one of these vertices involves an edge of length Ω(max_GC(v)). As a consequence the weight of MC is Ω(k · max_GC(v)). Combining these two bounds we see that the weight ratio k of GC to MC is O(n/k). Therefore k = O(√n), and this gives an O(√n) approximation of MC by the greedy convex partition.

Levcopoulos and Krznaric then show that for any convex partition C formed by edges of the greedy triangulation, C can be triangulated by adding diagonals of total length O(|MWT| + |C|). Since the overall greedy triangulation we started with, restricted to each cell of C, forms a greedy triangulation of that cell, and since the greedy triangulation forms a constant factor approximation to the MWT of a convex polygon [97], the result is that the length of the greedy triangulation is O(|MWT| + |MC|·√n) = O(|MWT|·√n).
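The greedy triangulation rule described above is easy to state in code. The following Python sketch is my own illustration, not from [96]; it ignores collinear degeneracies and runs in roughly O(n^2 m) time for m kept edges:

    from math import hypot
    from itertools import combinations

    def _orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def _properly_cross(p, q, r, s):
        """True if open segments pq and rs cross; shared endpoints don't count."""
        if len({p, q, r, s}) < 4:
            return False
        d1, d2 = _orient(p, q, r), _orient(p, q, s)
        d3, d4 = _orient(r, s, p), _orient(r, s, q)
        return d1 * d2 < 0 and d3 * d4 < 0

    def greedy_triangulation(pts):
        """Consider all pairs in increasing length order; keep an edge exactly
        when it crosses no previously kept edge (points assumed in general
        position, given as tuples)."""
        pairs = sorted(combinations(pts, 2),
                       key=lambda e: hypot(e[0][0]-e[1][0], e[0][1]-e[1][1]))
        kept = []
        for p, q in pairs:
            if not any(_properly_cross(p, q, r, s) for r, s in kept):
                kept.append((p, q))
        return kept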


To improve this O(√n) approximation, Levcopoulos and Krznaric modify the greedy triangulation. Their algorithm MGREEDY adds edges one at a time to a planar straight-line graph G, as follows. Let uv be the smallest edge not crossing anything already in G (so that the greedy algorithm would next add uv to G). Then MGREEDY tests the following six conditions:
• Some pair of edges uw and wv are already in G, with uwv forming an empty triangle.
• Some edge wx crosses uv, with edge vx already in G and vwx forming an empty triangle.
• Angle vwu is at least 135°.
• |wx| < 1.1|wv|.
• If p is the intersection point of lines vx and uw, then |xp| < 0.5|wp|.
• There is an edge uy already in G, such that triangle vuy is empty and angle wuy is reflex.
If all six conditions hold for some two vertices w and x, then the algorithm adds edge wx to G; otherwise it adds uv.

A similar construction of a convex partition and a fan in it now holds for this modified algorithm as it did for the greedy triangulation. However, now the fan can only have O(1) edges before forming a situation in which the six conditions above hold. The result is that the convex partition is a constant factor approximation to the minimum weight convex partition, and the modified greedy triangulation is a constant factor approximation to the minimum weight triangulation. As Levcopoulos and Krznaric show, these approximations can be constructed in time O(n log n), or even O(n) if the Delaunay triangulation is already known.

OPEN PROBLEM 6. What is the best possible approximation ratio for a polynomial time approximation to the minimum weight triangulation?

Exact MWT construction. Instead of approximating the minimum weight triangulation, a number of authors have attacked the problem of constructing the exact minimum weight triangulation, by finding conditions sufficient to guarantee that certain edges belong to the MWT. If enough MWT edges could be found in this way, so that the resulting subgraph of the MWT connected all the vertices, the remaining regions of the plane could be treated as simple polygons and triangulated in polynomial time by dynamic programming [93]. This approach gained in credibility when Edelsbrunner and Tan [59] used it to solve a closely related problem, the min max weight triangulation. In this problem, the quality of a triangulation is measured by the length of its longest edge; the min max weight triangulation is the one minimizing this quantity.

LEMMA 16 (Edelsbrunner and Tan). There exists some min max weight triangulation that contains the edges of the relative neighborhood graph of the sites.

COROLLARY 1. The min max weight triangulation can be found in polynomial time.

The dynamic programming idea described above would lead to an O(n^3) time bound, but Edelsbrunner and Tan reduced this to O(n^2). Recall that the relative neighborhood graph is defined in terms of a forbidden region characterization: an edge is in the graph if and only if the lune (formed by intersecting the two circles having that edge as radius) contains no other sites. Several authors have proven


similar forbidden region characterizations for the minimum weight triangulation. Yang et al. [131] show that if the region formed as the union of the same two radius circles is empty, the edge belongs to the MWT. (In particular, the two closest sites are connected by an edge in the MWT [76].) Keil [88], Yang [130], and Cheng and Xu [38] prove similar results for an alternate union of two circles, for which the given edge is a chord.

Aichholzer et al. [6] provided a different type of characterization of minimum weight triangulation edges, which leads to an algorithm capable of finding the MWT for certain very large point sets. They define a light edge to be one that is not crossed by any other edge of smaller weight. It is not the case that all light edges need be part of the MWT, but they show that if the set of all light edges forms a triangulation, that triangulation must be the one minimizing the total edge weight (or any other monotonic functional of the edge weights). Aichholzer et al. note that the characterizations of Keil, Yang et al. [88,130,131] all produce edges that must be light.

The best computational results so far for exact minimum weight triangulation construction have been found by Dickerson and Montague [53,54]. They define a locally minimal triangulation to be one in which, for every two adjacent triangles forming a convex quadrilateral, the common side of the triangles is the shortest diagonal of the quadrilateral. Clearly, the minimum weight triangulation is locally minimal; their idea is to identify edges belonging to all locally minimal triangulations. The technique is simply to maintain a set S of edges that might possibly be part of a locally minimal triangulation. Initially S consists of all pairs of vertices; then in a sequence of passes Dickerson and Montague remove from S any edge that is not the short diagonal of a quadrilateral (not containing other sites) all sides of which still belong to S. Eventually no more edges can be removed, and the process terminates. They then define another set S' of all those edges remaining in S that are not crossed by any other edge in S.

LEMMA 17 (Dickerson and Montague). All edges in S' belong to all locally minimal triangulations.

Since only O(n^2) edges can be removed from S, this heuristic runs in polynomial time. In computational experiments, Dickerson and Montague have shown that for moderate sized inputs (between 100 and 200 points), the set of edges in S' is very likely to form a connected subgraph of the minimum weight triangulation. The minimum weight triangulation itself can then be found by dynamic programming.

OPEN PROBLEM 7. Is the graph found by Dickerson and Montague always connected? Can one test in polynomial time whether a given edge belongs to some locally minimal triangulation? Can one find the exact minimum weight triangulation in polynomial time?
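As a concrete illustration of the simplest of the forbidden-region tests above (the empty union of two circles of Yang et al.; the code and names below are mine, not from [131]):

    from math import hypot

    def mwt_edges_by_circle_test(pts):
        """Edges passing the two-circle test: edge ab belongs to the MWT if no
        third site lies strictly inside either circle of radius |ab| centered
        at a or at b, i.e. min(|ca|, |cb|) >= |ab| for every other site c."""
        d = lambda p, q: hypot(p[0]-q[0], p[1]-q[1])
        out = []
        for i, a in enumerate(pts):
            for b in pts[i+1:]:
                r = d(a, b)
                if all(c in (a, b) or min(d(c, a), d(c, b)) >= r for c in pts):
                    out.append((a, b))
        return out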

3.2. Low-dilation planar graphs

We next consider the problem of constructing planar networks with low dilation (the maximal ratio between graph distance and geometric distance). Clearly it will not always be possible to find networks with dilation very close to 1; for instance, any planar graph connecting the vertices of a square has dilation at least √2.


Fig. 6. (a) Diamond property: one of two isosceles triangles on edge is empty, (b) Graph violating good polygon property: ratio of diagonal to boundary path is high.

The initial work on this problem has been to show that various previously studied graphs have constant dilation. Chew [39] showed that the rectilinear Delaunay triangulation has dilation at most √10. (Note that the dilation here is measured in the Euclidean metric even though the triangulation itself is defined with the rectilinear metric. There is a factor of √2 lost in the translation; the rectilinear Delaunay triangulation has rectilinear dilation √5.) Chew also pointed out that by placing points around the unit circle, one could find examples for which the Euclidean Delaunay triangulation has dilation as much as π/2. In the journal version of his paper [40], Chew added a further result, that the graph obtained by Delaunay triangulation for a convex distance function based on an equilateral triangle has dilation at most 2.

Chew's conjecture that the Euclidean Delaunay dilation is constant was proved by Dobkin et al. [55], who showed that the Delaunay triangulation has dilation at most φπ, where φ is the golden ratio (1 + √5)/2. Keil and Gutwin [89] further improved this bound to 2π/(3 cos(π/6)) ≈ 2.42.

Das and Joseph [46] then showed that these constant dilation bounds are not unusual; in fact such bounds hold for a wide variety of planar graph construction algorithms, satisfying the following two simple conditions:
• Diamond property. There is some angle α < π, such that for any edge e in a graph constructed by the algorithm, one of the two isosceles triangles with e as a base and with apex angle α contains no other site. This property gets its name because the two triangles together form a diamond shape, depicted in Figure 6(a). For instance, because of its empty-circle property, the Delaunay triangulation satisfies the diamond property with α = π/2.
• Good polygon property. There is some constant d such that for each face f of a graph constructed by the algorithm, and any two sites u, v that are visible to each other across the face, one of the two paths around f from u to v has dilation at most d. Figure 6(b) depicts a graph violating the good polygon property, because two nearby sites have no short boundary path connecting them. The good polygon property is satisfied by any triangulation (with d = 1).
Intuitively, if one tries to connect two vertices by a path in a graph that stays near the straight line segment between the two, there are two natural types of obstacle one encounters. The line segment one is following may cross an edge of the graph, or a face of


Fig. 7. Fractal β-skeleton with unbounded dilation.

the graph; in either case the path must go around these obstacles. The two properties above imply that neither type of detour can force the dilation of the pair of vertices to be high.

THEOREM 10 (Das and Joseph). Any planar graph construction algorithm satisfying the diamond and good polygon properties produces graphs with bounded dilation.
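For intuition, the diamond property for a single edge can be tested directly from its definition. The sketch below is my own illustration (names are assumptions; exact arithmetic and degeneracies are ignored); it constructs the two isosceles triangles with base pq and apex angle alpha and checks that at least one is empty:

    from math import hypot, tan

    def _inside(t, p):
        """Strictly inside triangle t = (a, b, c), via consistent orientations."""
        def orient(a, b, c):
            return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        a, b, c = t
        d1, d2, d3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
        return (d1 > 0 and d2 > 0 and d3 > 0) or (d1 < 0 and d2 < 0 and d3 < 0)

    def diamond_ok(p, q, sites, alpha):
        """Does edge pq satisfy the diamond property with apex angle alpha?"""
        mx, my = (p[0]+q[0]) / 2, (p[1]+q[1]) / 2
        ex, ey = q[0]-p[0], q[1]-p[1]
        L = hypot(ex, ey)
        h = (L / 2) / tan(alpha / 2)          # apex height for apex angle alpha
        nx, ny = -ey / L * h, ex / L * h      # unit normal scaled by h
        tri1 = (p, q, (mx + nx, my + ny))
        tri2 = (p, q, (mx - nx, my - ny))
        return any(all(not _inside(t, s) for s in sites if s not in (p, q))
                   for t in (tri1, tri2))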

The proof is too complicated to summarize here, but it is in some ways similar to the proof of Dobkin et al. [55] that the Delaunay triangulation has bounded dilation. Das and Joseph go on to show that not only Delaunay triangulations but also the greedy and minimum weight triangulations possess these two properties, and hence have bounded dilation. (The bound, while constant, is quite high and could presumably be strengthened.) It seems clear that similar results should also hold for some other triangulation methods developed since their paper, such as the min max edge length triangulation and the min max angle triangulation [60]. Constant dilation of greedy and related triangulations plays a key role in Levcopoulos and Krznaric's recent constant-factor approximation algorithm for the minimum weight triangulation [96], and Drysdale [29] has pointed out that the diamond property of the minimum weight triangulation can be very helpful in pruning the edges that could potentially take part in it, and so speed up exact solution methods for this triangulation.

Another set of natural candidates for bounded dilation are the β-skeletons, formed by including an edge ab when no other site c forms an angle acb larger than some particular bound (depending on the parameter β). When the angle bound is 90° this is the Gabriel graph, a subgraph of the Delaunay triangulation and the relative neighborhood graph, and a supergraph of the minimum spanning tree. Skeletons with larger angle bounds have been used to find sets of edges guaranteed to be part of the minimum weight triangulation [38,88,130]. As β approaches zero, these skeletons contain more and more edges, until eventually one forms the complete geometric graph; this limiting behavior, along with the fact that the definition of β-skeletons is closely related to Das and Joseph's diamond property, hints that these graphs might have bounded dilation. But Eppstein [67] has recently shown that a fractal construction leads to β-skeletons in the form of paths with unbounded dilation, for any β > 0 (Figure 7).
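The angle-based definition above translates directly into code. A minimal sketch (mine, not from the cited papers; it assumes distinct points and ignores degeneracies), with theta = π/2 giving the Gabriel graph:

    from math import acos, hypot

    def angle_skeleton_edges(pts, theta):
        """Keep edge (a, b) iff no third site c sees a and b at an angle
        larger than theta (radians)."""
        def angle(c, a, b):
            ux, uy = a[0]-c[0], a[1]-c[1]
            vx, vy = b[0]-c[0], b[1]-c[1]
            nu, nv = hypot(ux, uy), hypot(vx, vy)
            return acos(max(-1.0, min(1.0, (ux*vx + uy*vy) / (nu * nv))))
        out = []
        for i, a in enumerate(pts):
            for b in pts[i+1:]:
                if all(c in (a, b) or angle(c, a, b) <= theta for c in pts):
                    out.append((a, b))
        return out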


Curiously, there has been little or no published work on using dilation as the direct basis for constructing planar graphs (rather than constructing graphs some other way and measuring the dilation of the result). Obviously, the minimum dilation planar graph should be a triangulation, since it is never harmful to add diagonals. It seems that it should have useful properties similar to those of the Delaunay triangulation, min max angle triangulation, min max edge length triangulation, minimum weight triangulation, and other optimal triangulations, but this has apparently not been studied. For many optimal triangulation problems, the version of the problem in which one optimally completes the triangulation of a convex or simple polygon can be solved by dynamic programming [93]; however, even this is not obvious for the minimum dilation triangulation. A solution to this subproblem might have implications in allowing the powerful edge insertion method [22,60] to be applied to the point set version of the problem.

OPEN PROBLEM 8. Is it possible to construct in polynomial time the minimum dilation triangulation of a point set, or of a simple polygon?

Clearly there also remains a wide gap between the best upper and lower bounds (2 and √2 respectively) on the dilation of planar graphs.

OPEN PROBLEM 9. What is the worst case dilation of the minimum dilation triangulation?

The rectilinear case of Open Problem 9 is solved: Arikati et al. [10] showed that for the L_1 (or equivalently L_∞) planar metrics there is always a planar 2-spanner, and that some point sets have no better spanner.

3.3. Planar dilation and weight

Levcopoulos and Lingas [98] and Das and Joseph [46] brought the total weight of the graph, as well as its dilation, into the equation. Clearly the weight should be measured in terms of the minimum spanning tree, for as Das and Joseph [46] observe, any graph with bounded dilation should at least be connected. Das and Joseph [46] form a planar graph by applying a greedy triangulation algorithm to the polygons remaining between the minimum spanning tree and the convex hull of the sites. They then consider the greedy edges in decreasing order of weight, removing each edge if it is unnecessary to achieve the desired dilation. They then apply their diamond property and good polygon property criteria to bound the dilation of the resulting pruned graph. Because they only prune the greedy edges, the total weight of the spanner they construct is close to three times that of the minimum spanning tree.

Althofer et al. [8,9] later generalized this pruning strategy to arbitrary graphs. Given any graph, and any parameter t > 0, their method again considers the edges of the graph in decreasing order by weight, removing each edge if it is not necessary to achieve dilation 1 + t. Obviously the result is a subgraph with dilation 1 + t, but they prove further that the total weight of this subgraph is 1 + O(n/t) times that of the graph's minimum spanning tree, and that the number of edges is n^{1+O(1/t)}. For planar graphs these bounds are much better: the weight is 1 + O(1/t) times that of the minimum spanning tree, and the number of edges is n(1 + O(1/t)).
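This pruning idea is usually implemented in the equivalent forward form, scanning edges in increasing order and adding an edge only when the current graph distance between its endpoints is too large. The following Python sketch is my own illustration of that greedy rule on the complete Euclidean graph (names and the rough O(n^3 log n) running time are assumptions, not from the survey):

    from math import hypot, inf
    import heapq

    def greedy_spanner(pts, t):
        """Build a t-spanner (t > 1, e.g. t = 1 + eps) of the complete
        Euclidean graph: keep edge (u, v) only if the current graph distance
        from u to v exceeds t * |uv|."""
        n = len(pts)
        d = lambda i, j: hypot(pts[i][0]-pts[j][0], pts[i][1]-pts[j][1])
        adj = [[] for _ in range(n)]

        def dijkstra_dist(src, dst):
            dist = [inf] * n
            dist[src] = 0.0
            pq = [(0.0, src)]
            while pq:
                du, u = heapq.heappop(pq)
                if u == dst:
                    return du
                if du > dist[u]:
                    continue
                for v, w in adj[u]:
                    if du + w < dist[v]:
                        dist[v] = du + w
                        heapq.heappush(pq, (dist[v], v))
            return inf

        edges = []
        for u, v in sorted(((i, j) for i in range(n) for j in range(i+1, n)),
                           key=lambda e: d(*e)):
            if dijkstra_dist(u, v) > t * d(u, v):
                adj[u].append((v, d(u, v)))
                adj[v].append((u, d(u, v)))
                edges.append((u, v))
        return edges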


Althofer et al. then apply this method to the Delaunay triangulation, which as we have seen is a planar graph with constant dilation that also contains the Euclidean minimum spanning tree. The result is a planar spanner with constant dilation, few edges, and weight arbitrarily close to that of the minimum spanning tree. Levcopoulos and Lingas [98] proved the same result by a similar but more complicated method of pruning the Delaunay triangulation; their pruning method works only for planar graphs, but has the further advantage that it runs in linear time. The same pruning method can be used to trade weight and dilation in the other direction, and find graphs with dilation arbitrarily close to that of the Delaunay triangulation, and weight a large constant times that of the minimum spanning tree.

4. General graphs

Most of the work on general geometric network design problems, in which the network to be constructed is not of some restricted class, has been on the dilation of sparse graphs, since it is trivial to find a graph with low weight or diameter, or to find a non-sparse graph with low dilation.

4.1. Dilation only

As we saw, for planar graphs the dilation cannot approach one. By considering nonplanar graphs, it is possible to find sparse graphs approximating the complete Euclidean graph arbitrarily closely. Specifically, Keil [87] showed that variations of the Yao graph construction (in which one partitions the space around each point into wedges with a given fixed opening angle, and connects the point to the nearest neighbor in each wedge) produce graphs with dilation arbitrarily close to 1, with O(n) edges, that can be constructed in time O(n log n). No better time bound is possible in the randomized algebraic decision tree model of computation, since spanners could be used to solve the element distinctness problem [35].

THEOREM 11 (Keil). The Yao graph, formed by wedges of opening angle θ < 60°, produces graphs with dilation 1 + O(θ).

PROOF. To find a path in this graph from u to v, one at each step determines the wedge containing v and moves along a graph edge to the nearest vertex w in that wedge. The worst case for the algorithm occurs when uv and uw are similar in length but widely separated in angle, in which case the distance to v is reduced by (1 − O(θ))|uw|. For θ < 60° this reduction must be positive, and repeating the process brings us eventually to v. The total distance traveled is proportional to 1/(1 − O(θ)) = 1 + O(θ) times the total reduction in distance to v, which is exactly the original distance from u to v. □
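A direct implementation of the Yao graph construction used in this theorem is straightforward. The following Python sketch is my own illustration, not Keil's code; it uses k equal wedges of opening angle 2π/k (k ≥ 7 keeps the angle below 60°) and runs in O(n^2) time rather than O(n log n):

    from math import atan2, pi, hypot

    def yao_graph(pts, k):
        """Connect each point to its nearest neighbour in each of k wedges."""
        edges = set()
        for i, p in enumerate(pts):
            nearest = {}                      # wedge index -> (dist, j)
            for j, q in enumerate(pts):
                if i == j:
                    continue
                # wedge index w partitions directions into k equal sectors
                w = int((atan2(q[1]-p[1], q[0]-p[0]) + pi) / (2*pi) * k) % k
                dist = hypot(q[0]-p[0], q[1]-p[1])
                if w not in nearest or dist < nearest[w][0]:
                    nearest[w] = (dist, j)
            for dist, j in nearest.values():
                edges.add((min(i, j), max(i, j)))
        return sorted(edges)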

Althofer et al. [8,9] observed that this result holds in any dimension, and Ruppert and Seidel [115] modified the technique so that orthogonal range searching methods can be used in its construction, improving its running time in high dimensions to O(n log^{d−1} n).


Callahan [31] improved the dependence between the number of edges and the dilation, as well as the construction time, in this result:

THEOREM 12. If one forms a well-separated pair decomposition, and chooses a representative edge from each pair, the resulting graph has O(n) edges and can be made to have dilation arbitrarily close to one (as a function of the separation parameter of the decomposition).

PROOF. We find a short path between two sites u and v by the following recursive process: by the definition of a well-separated pair decomposition, the decomposition includes some pair (U, V) of sets of sites for which u is in U and v is in V. Find the edge u'v' representing the pair (U, V), and form a path by recursively connecting u to u', following edge u'v', and connecting v' to v. If the parameter of separation in the decomposition is s, and the length of uv is r, then the length of u'v' can be at most r(s + 4)/s, and the lengths of uu' and vv' can be at most 2r/s. Therefore this recursive algorithm produces a path with length satisfying the recurrence

L(r) ≤ r(1 + 4/s) + 2L(2r/s).

If s > 4, this solves to O(r) (see the unrolling below), and for any ε > 0 one can choose a sufficiently large s for which the solution to this recurrence is r(1 + ε). □

As in his approximation to the Euclidean minimum spanning tree, Callahan further shows that the dependence on ε can be reduced by choosing more carefully the representative edge for each pair in the well-separated pair decomposition. Similar constructions of sparse spanners, with a larger dependence on the dilation, were also given by Salowe [116] and Vaidya [129].
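For concreteness (this unrolling is my own check, not part of the original proof), the recurrence expands into a geometric series: level i of the recursion contributes 2^i terms of size (2/s)^i r, i.e. r(1 + 4/s)(4/s)^i in total, so

    L(r) \le r\Bigl(1 + \frac{4}{s}\Bigr)\sum_{i \ge 0}\Bigl(\frac{4}{s}\Bigr)^{i}
         = r \cdot \frac{1 + 4/s}{1 - 4/s} \qquad (s > 4),

and (1 + 4/s)/(1 − 4/s) ≤ 1 + ε whenever s ≥ 4(2 + ε)/ε.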

4.2. Dilation and weight

Note that the small-dilation graphs described above may have very high weight. As we have already seen, for planar sites it is possible to construct small-dilation graphs with weight arbitrarily close to that of the minimum spanning tree. The greedy algorithm of Althofer et al. [8,9] (described in the section on planar graphs) was defined for any graph, and so produces spanners with nontrivial weight bounds in any dimension; but in general the results proved by those authors are far from the constant factor over minimum spanning tree weight that one would hope for.

Das et al. [45] showed that applying this greedy algorithm to the complete Euclidean graph in three dimensions again produces a spanner with any given dilation t > 1, and total weight O(1) times that of the minimum spanning tree (where the constant depends on t). Chandra [33] showed that for random point sets in higher dimensions, the weight is again O(1) times that of the minimum spanning tree. Das and Narasimhan [48] apply this greedy approach to a sparse spanner produced by clustering techniques; the dilation of the result is the product of the dilations from these two steps, which is still typically some constant. This idea speeds


Fig. 8. Isolation property: a cylinder around each edge is not crossed by any other edge.

up the greedy method to run in time O(n log^2 n), and produces results similar to those of Althofer et al.

Chandra et al. [34] showed that, for any graph, one can construct a spanner with dilation O(log^2 n) and weight O(1) times that of the minimum spanning tree. For geometric graphs in any dimension, they construct spanners with constant dilation and weight O(log n) times that of the minimum spanning tree. They actually give two methods for this second result. One method is simply to apply the greedy algorithm; they show an O(log n) weight factor based on the gap property: the endpoints of any two edges are separated by a distance at least proportional to the smaller of the two edge lengths. The other method combines known sparse (but heavy) spanners with approximations to the traveling salesman path; the algorithm then partitions the path recursively into smaller pieces and combines spanners from representative points on each piece. Arya and Smid [14] modify and speed up the first, greedy, method: instead of adding edges incrementally when necessary to preserve the dilation, and proving that the result satisfies the gap property, they add edges unless the gap property would be violated, and show that the resulting graph has bounded dilation. With this modified greedy method they show that a graph with bounded dilation and weight O(log n) times that of the minimum spanning tree can be constructed in time O(n log^2 n).

Finally, Das, Narasimhan, and Salowe [49] extended the results of Das et al. [45], and showed that in any dimension the greedy algorithm produces spanners with both constant dilation and weight a constant times that of the minimum spanning tree. The dilation bound is immediate; the weight bound is reminiscent of the methods of Das and Joseph [46] in that the authors describe a collection of general conditions under which such a bound applies; further, their isolation property is very similar to the diamond property of Das and Joseph.
• Isolation property. There exists a constant c, such that any edge of length ℓ produced by the given algorithm can be placed within a cylinder of radius and height c·ℓ (Figure 8), such that the axis of the cylinder is a subset of the edge, and the cylinder does not intersect any other edge of the graph.
Note that, unlike the diamond property, the cylinders are required to avoid the edges of the graph, and not just the other sites. Also note that in this definition the cylinders around each edge may intersect each other, but one can shrink them by a factor of two to avoid intersections. The exact shape of the cylinders is also unimportant.
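The gap property quoted above can be checked directly. The sketch below is my own illustration using one common formalization (after Chandra et al.), in which edges are treated as directed and only their sources are compared; the names and the O(m^2) brute force are assumptions:

    from math import hypot

    def satisfies_gap_property(directed_edges, c):
        """For any two directed edges, their sources must lie at distance at
        least c times the length of the shorter edge.
        directed_edges: list of point pairs ((x1, y1), (x2, y2))."""
        d = lambda p, q: hypot(p[0]-q[0], p[1]-q[1])
        m = len(directed_edges)
        for a in range(m):
            for b in range(a + 1, m):
                (p1, q1), (p2, q2) = directed_edges[a], directed_edges[b]
                if d(p1, p2) < c * min(d(p1, q1), d(p2, q2)):
                    return False
        return True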


LEMMA 18 (Das et al.). Any geometric graph satisfying the isolation property has weight at most a constant times that of the minimum spanning tree.

The proof is based on separating the edges of the graph into groups of nearly parallel edges, and charging the length in each group to edges in the Steiner minimal tree of the sites. Das et al. also prove similar results for the leapfrog property, a generalization of the isolation property for which they give a more complicated definition. They then show that graphs produced by a version of the greedy algorithm have this property. This then proves that the greedy algorithm produces spanners with weight O(1) times that of the minimum spanning tree.

4.3. Dilation, weight, and degree

Perhaps the ultimate results in high-dimensional spanners combine dilation, weight, and vertex degree. Clearly the degree bound must be at least three, since degree-two graphs may be forced to have very large dilation. Further, for any fixed bound d on the degree, one must have dilation bounded away from one by some function of d, even in the plane, as can be seen by considering the vertices of a regular (d + 2)-gon. Chandra et al. [34] showed that their version of the greedy method, when applied to polyhedral approximations of the Euclidean graph, produces a spanner with degree bounded by some function of the dilation, dimension, and the particular approximation. Their bound is not stated explicitly but is roughly exponential in the dimension. As we have seen, these spanners also have low weight.

Salowe [117] produces another bounded-degree spanner algorithm by modifying the Yao graph construction described earlier (which may have unbounded degree). The idea is simply to add edges to the spanner in order by weight, adding an edge (u, v) if there is no other edge (u, w) or (v, w) already added having an angle close to that of (u, v). Salowe also reports that a similar degree bound applies to the spanner of Ruppert and Seidel [115], but this appears to be erroneous. However, it should be possible to keep track of the next edge to be added to the graph at each step of this construction (that is, the shortest edge within a certain range of angles, connecting two points not already incident to edges in that range) by combining Ruppert and Seidel's modified Yao graph orthogonal range searching technique [115] with a method of Eppstein [62] for maintaining bichromatic closest pairs dynamically. The result would be an O(n log^{O(1)} n) time algorithm to construct this bounded degree spanner. As we have already seen, Arya and Smid [14] achieved a similar time bound for an alternate bounded degree spanner. Yet another bounded-degree spanner construction is credited by Vaidya [129] to Feder and Nisan.

Finally, Arya et al. [13] improved the time bounds of all these methods. The results of Arya et al. include an O(n log n) time algorithm for constructing a spanner with bounded dilation, bounded degree, and weight a constant factor times the minimum spanning tree. They also give an O(n log n) algorithm for constructing a spanner with bounded dilation on paths of O(log n) edges, bounded degree, and weight O(log^2 n) times that of the minimum spanning tree. (The idea of finding short paths with few edges was also previously considered by Arya, Mount, and Smid [15].) Therefore for any fixed dilation, one can quickly


find spanners with bounded degree. Further work on the problem has turned this tradeoff around, and asked how small a degree suffices to achieve bounded dilation. Salowe [117] was the first to find degree bounds that did not grow with the dimension of the problem; he showed that in any dimension, there is a degree-four graph with constant dilation. Salowe starts with any one of the bounded degree methods discussed above. His method for reducing the degree to four is then to cluster points using the nearest neighbor forest, and use these clusters to roughly halve the degree of any spanner construction method. Starting with the initial bounded degree spanner method described above and iterating this degree reduction process produces his result. This paper also contains a convenient classification of the spanner literature in terms of five parameters (dilation, time, number of edges, weight, and degree) and the class of metric space for which the spanners are defined. Salowe's paper does not, however, include a bound on the total weight of his spanners.

Das and Heffernan [44] improved this degree-four bound, by showing that in any dimension there is a graph with maximum degree three, at most dn edges (for any d > 1), constant dilation (depending on d), and total weight O(log n) times the minimum spanning tree weight. They state that in dimensions two and three the weight can be further reduced to O(1) times the minimum spanning tree weight. Their technique involves nearest-neighbor-forest based clustering methods similar to those of Salowe, applied somewhat more carefully, and combined with the greedy spanners constructed by Chandra et al. [34]. The weight bound comes from summing the weight of the greedy spanner with that of the nearest neighbor forest (which, as a subgraph of the minimum spanning tree, has small weight). Combining this with the recent result of Das et al. [49] on constant weight bounds for the greedy spanner construction yields the following result.

THEOREM 13 (Das, Heffernan, Narasimhan, and Salowe). For sites in any fixed dimension, and for any constant d ≥ 1, one can construct in time O(n log n) a graph with constant dilation, degree three, at most dn edges, and weight a constant factor times that of the minimum spanning tree.

References [1] Active Geometry Group, Johns Hopkins U., Extracting the geometry of the vascular tree. Manuscript, available online at http://blaze.cs.jhu.edu/grad/lundberg/agg/projects/vasc.html. [2] P.K. Agarwal, H. Edelsbrunner, O. Schwarzkopf and E. Welzl, Euclidean minimum spanning trees and bichromatic closest pairs. Discrete Comput. Geom. 6 (1991), 407^22. [3] P.K. Agarwal, A. Efrat and M. Sharir, Vertical decomposition of Shallow levels in 3-dimensional arrangements and its applications, Proc. 11th ACM Symp. Comp. Geom. (1995), 39-50. [4] P.K. Agarwal, J. Matousek and S. Suri, Farthest neighbors, maximum spanning trees and related problems in higher dimensions, Comput. Geom. 1 (1992), 189-201. [5] A. Aggarwal, L.J. Guibas, J. Saxe and P.W. Shor, A linear time algorithm for computing the Voronoi diagram of a convex polygon. Discrete Comput. Geom. 4 (1989), 591-604. [6] O. Aichholzer, F. Aurenhammer, M. Taschwer and G. Rote, Triangulations intersect nicely. Discrete Comput. Geom. 16 (1996), 339-359. [7] N. Alon, S. Rajagopalan and S. Suri, Long non-crossing configurations in the plane. Fund. Inform. 22 (1995), 385-394.



[8] I. Althofer, G. Das, D. Dobkin and D. Joseph, Generating sparse spanners for weighted graphs, Proc. 2nd Scand. Worksh. Algorithm Theory, Springer LNCS 447 (1990), 26-37.
[9] I. Althofer, G. Das, D. Dobkin, D. Joseph and J. Soares, On sparse spanners of weighted graphs, Discrete Comput. Geom. 9 (1993), 81-100.
[10] S. Arikati, D.Z. Chen, L.P. Chew, G. Das, M. Smid and C.D. Zaroliagis, Planar spanners and approximate shortest path queries among obstacles in the plane, Proc. 4th Eur. Symp. Algorithms, Springer LNCS 1136 (1996), 514-528.
[11] S. Arora, Polynomial time approximation schemes for Euclidean TSP and other geometric problems, Proc. 37th IEEE Symp. Foundations of Comp. Sci. (1996), 2-11.
[12] S. Arora, Nearly linear time approximation schemes for Euclidean TSP and other geometric problems, Manuscript (1997).
[13] S. Arya, G. Das, D.M. Mount, J.S. Salowe and M. Smid, Euclidean spanners: Short, thin, and lanky, Proc. 27th ACM Symp. Theory of Computing (1995), 489-498. Available online at http://www.cs.umd.edu/~mount/Papers/stoc95.ps.
[14] S. Arya and M. Smid, Efficient construction of a bounded degree spanner with low weight, Proc. 2nd Eur. Symp. Algorithms, Springer LNCS 855 (1994), 48-59.
[15] S. Arya, D. Mount and M. Smid, Randomized and deterministic algorithms for geometric spanners of small diameter, Proc. 35th IEEE Symp. Foundations of Comp. Sci. (1994), 703-712. Available online at http://www.cs.umd.edu/~mount/Papers/focs94.ps.
[16] T. Asano, B. Bhattacharya, M. Keil and F. Yao, Clustering algorithms based on minimum and maximum spanning trees, Proc. 4th ACM Symp. Comp. Geom. (1988), 252-257.
[17] B. Awerbuch, Y. Azar, A. Blum and S. Vempala, Improved approximation guarantees for minimum-weight k-trees and prize-collecting salesmen, Proc. 27th ACM Symp. Theory of Computing (1995). Available online at http://www-cgi.cs.cmu.edu/afs/cs.cmu.edu/Web/People/avrim/Papers/prizetsp.ps.Z.
[18] A.I. Barvinok, Two algorithmic results for the traveling salesman problem, Manuscript (1994).
[19] J. Basch, L.J. Guibas and J. Hershberger, Data structures for mobile data, Proc. 8th ACM-SIAM Symp. Discrete Algorithms (1997), 747-756.
[20] J. Basch, L.J. Guibas and L. Zhang, Proximity problems on moving points, Proc. 13th ACM Symp. Comput. Geom. (1997), 344-351.
[21] J.L. Bentley and J. Saxe, Decomposable searching problems I: Static-to-dynamic transformation, J. Algorithms 1 (1980), 301-358.
[22] M. Bern, H. Edelsbrunner, D. Eppstein, S. Mitchell and T.S. Tan, Edge-insertion for optimal triangulations, Discrete Comput. Geom. 10 (1993), 47-65.
[23] M. Bern and D. Eppstein, Mesh generation and optimal triangulation, Computing in Euclidean Geometry, D.-Z. Du and F.K. Hwang, eds, World Scientific (1992); 2nd ed. (1995), 47-123.
[24] M. Bern and D. Eppstein, Worst-case bounds for subadditive geometric graphs, Proc. 9th ACM Symp. Comp. Geom. (1993), 183-188.
[25] M. Bern and D. Eppstein, Approximation algorithms for geometric problems, Approximation Algorithms for NP-Hard Problems, D. Hochbaum, ed., PWS Publishing (1996), 296-345.
[26] M. Bern, D. Eppstein and S.-H. Teng, Parallel construction of quadtrees and quality triangulations, Proc. 3rd Worksh. Algorithms and Data Structures, Springer LNCS 709 (1993), 188-199.
[27] A. Blum, P. Chalasani and S. Vempala, A constant-factor approximation for the k-MST problem in the plane, Proc. 27th ACM Symp. Theory of Computing (1995), 294-302. Available online at http://www-cgi.cs.cmu.edu/afs/cs.cmu.edu/Web/People/avrim/Papers/planarktrees.ps.Z.
[28] A. Blum, R. Ravi and S. Vempala, A constant-factor approximation algorithm for the k-MST problem, Proc. 28th ACM Symp. Theory of Computing (1996).
[29] H. Bronnimann, Computational Geometry Tribune, No. 3, available online at http://www.inria.fr/prisme/personnel/bronnimann/cgt/cgt3.ps.Z.
[30] L. Cai and D. Corneil, Tree spanners, SIAM J. Discrete Math. 8 (1995), 359-388.
[31] P.B. Callahan, The Well-Separated Pair Decomposition and its Applications, PhD thesis, Johns Hopkins U. (1995). Available online at ftp://ftp.cs.jhu.edu/pub/callahan/dissertation.ps.Z.
[32] P.B. Callahan and S.R. Kosaraju, A decomposition of multidimensional point sets with applications to k-nearest neighbors and n-body potential fields, J. ACM 42 (1995), 67-90.



[33] B. Chandra, Constructing sparse spanners for most graphs in higher dimensions, Inform. Process. Lett. 51 (1994), 289-294.
[34] B. Chandra, G. Das, G. Narasimhan and J. Soares, New sparseness results on graph spanners, Internat. J. Comput. Geom. Appl. 5 (1995), 125-144.
[35] D.Z. Chen, G. Das and M. Smid, Lower bounds for computing geometric spanners and approximate shortest paths, Proc. 8th Canad. Conf. Comput. Geom., Carleton Univ. Press (1996), 155-160.
[36] D. Cheriton and R.E. Tarjan, Finding minimum spanning trees, SIAM J. Comput. 5 (1976), 310-313.
[37] S.Y. Cheung and A. Kumar, Efficient quorumcast routing algorithms, Proc. IEEE Conf. Computer Communications, Vol. 2 (1994), 840-847.
[38] S. Cheng and Y. Xu, Approaching the largest β-skeleton within the minimum weight triangulation, Proc. 12th ACM Symp. Comp. Geom. (1996), 196-203.
[39] L.P. Chew, There is a planar graph almost as good as the complete graph, Proc. 2nd ACM Symp. Comp. Geom. (1986), 169-177.
[40] L.P. Chew, There are planar graphs almost as good as the complete graph, J. Comput. System Sci. 39 (1989), 205-219.
[41] L.P. Chew and R.L. Drysdale, Voronoi diagrams based on convex distance functions, Proc. 1st ACM Symp. Comp. Geom. (1985), 235-244.
[42] N. Christofides, Worst-case analysis of a new heuristic for the traveling salesman problem, Report 388, Grad. Sch. Industrial Admin., Carnegie Mellon U. (1975).
[43] C. Chiang, M. Sarrafzadeh and C.K. Wong, A powerful global router based on Steiner min-max trees, Proc. IEEE Int. Conf. CAD (1989), 2-5.
[44] G. Das and P.J. Heffernan, Constructing degree-3 spanners with other sparseness properties, Proc. 4th Int. Symp. Algorithms and Computation, Springer LNCS 762 (1993), 11-20.
[45] G. Das, P.J. Heffernan and G. Narasimhan, Optimally sparse spanners in 3-dimensional Euclidean space, Proc. 9th ACM Symp. Comp. Geom. (1993), 53-62.
[46] G. Das and D. Joseph, Which triangulations approximate the complete graph?, Proc. Int. Symp. Optimal Algorithms, Springer LNCS 401 (1989), 168-192.
[47] G. Das, S. Kapoor and M. Smid, On the complexity of approximating Euclidean traveling salesman tours and minimum spanning trees, Proc. 16th Conf. Found. Software Tech. and Theor. Comput. Sci., Springer LNCS 1180 (1996), 64-75.
[48] G. Das and G. Narasimhan, A fast algorithm for constructing sparse Euclidean spanners, Proc. 10th ACM Symp. Comp. Geom. (1994), 132-139.
[49] G. Das, G. Narasimhan and J. Salowe, A new way to weigh malnourished Euclidean graphs, Proc. 6th ACM-SIAM Symp. Discrete Algorithms (1995), 215-222.
[50] A. Datta, H.-P. Lenhof, C. Schwarz and M. Smid, Static and dynamic algorithms for k-point clustering problems, J. Algorithms 19 (1995), 474-503.
[51] O. Devillers, S. Meiser and M. Teillaud, Fully dynamic Delaunay triangulation in logarithmic expected time per operation, Comput. Geom. 2 (1992), 55-80.
[52] T. Dey, Improved bounds for k-sets and kth levels, Manuscript (1997).
[53] M. Dickerson and M. Montague, A (usually?) connected subgraph of the minimum weight triangulation, Proc. 12th ACM Symp. Comp. Geom. (1996), 204-213. Available online at ftp://ams.sunysb.edu/pub/geometry/msi-workshop/95/dickerso.ps.gz. See also http://www.middlebury.edu/~dickerso/mwtskel.html.
[54] M. Dickerson and M. Montague, The exact minimum weight triangulation, Proc. 12th ACM Symp. Comp. Geom. (1996).
[55] D.P. Dobkin, S.J. Friedman and K.J. Supowit, Delaunay graphs are almost as good as complete graphs, Discrete Comput. Geom. 5 (1990), 399-407.
[56] R.L. Drysdale, A practical algorithm for computing the Delaunay triangulation for convex distance functions, Proc. 1st ACM-SIAM Symp. Discrete Algorithms (1990), 159-168.
[57] D.-Z. Du and F.K. Hwang, The state of the art in Steiner ratio problems, Computing in Euclidean Geometry, D.-Z. Du and F.K. Hwang, eds, World Scientific (1992); 2nd ed. (1995), 195-224.
[58] H. Edelsbrunner, L.J. Guibas and J. Stolfi, Optimal point location in a monotone subdivision, SIAM J. Comput. 15 (1986), 317-340.



[59] H. Edelsbrunner and T.S. Tan, A quadratic time algorithm for the minmax length triangulation, Proc. 32nd IEEE Symp. Foundations of Comp. Sci. (1991), 414-423.
[60] H. Edelsbrunner, T.S. Tan and R. Waupotitsch, A polynomial time algorithm for the minmax angle triangulation, SIAM J. Sci. Statist. Comput. 13 (1992), 994-1008.
[61] D. Eppstein, Offline algorithms for dynamic minimum spanning tree problems, J. Algorithms 17 (1994), 237-250.
[62] D. Eppstein, Dynamic Euclidean minimum spanning trees and extrema of binary functions, Discrete Comput. Geom. 13 (1995), 237-250.
[63] D. Eppstein, Average case analysis of dynamic geometric optimization, Comput. Geom. 6 (1996), 45-68.
[64] D. Eppstein, Geometric lower bounds for parametric matroid optimization, Discrete Comput. Geom., to appear.
[65] D. Eppstein, Faster geometric k-point MST approximation, Comput. Geom. 8 (1997), 231-240.
[66] D. Eppstein, Faster circle packing with application to nonobtuse triangulation, Internat. J. Comput. Geom. Appl. 7 (5) (1997), 485-491.
[67] D. Eppstein, Beta-skeletons have unbounded dilation, Tech. Rep. 96-15, UC Irvine, Dept. Inf. & Comp. Sci. (1996).
[68] D. Eppstein and J. Erickson, Iterated nearest neighbors and finding minimal polytopes, Discrete Comput. Geom. 11 (1994), 321-350.
[69] D. Eppstein, Z. Galil, G.F. Italiano and A. Nissenzweig, Sparsification - A technique for speeding up dynamic graph algorithms, Proc. 33rd IEEE Symp. Foundations of Comp. Sci. (1992), 60-69.
[70] D. Eppstein, G.F. Italiano, R. Tamassia, R.E. Tarjan, J. Westbrook and M. Yung, Maintenance of a minimum spanning forest in a dynamic plane graph, J. Algorithms 13 (1992), 33-54.
[71] D. Fernandez-Baca, G. Slutski and D. Eppstein, Using sparsification for parametric minimum spanning tree problems, Nordic J. Comput. 3 (1996), 352-366.
[72] M. Fredman and D.E. Willard, Trans-dichotomous algorithms for minimum spanning trees and shortest paths, Proc. 31st IEEE Symp. Foundations of Comp. Sci. (1990), 719-725.
[73] H.N. Gabow, Z. Galil, T. Spencer and R.E. Tarjan, Efficient algorithms for finding minimum spanning trees in undirected and directed graphs, Combinatorica 6 (1986), 109-122.
[74] M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman (1979).
[75] N. Garg and D.S. Hochbaum, An O(log k) approximation for the k minimum spanning tree problem in the plane, Proc. 26th ACM Symp. Theory of Computing (1994), 432-438.
[76] P.D. Gilbert, New results in planar triangulations, Report R-850, U. Illinois Coordinated Science Lab. (1979).
[77] D. Grigoriev, M. Karpinski, F. Meyer auf der Heide and R. Smolensky, A lower bound for randomized algebraic decision trees, Proc. 28th ACM Symp. Theory of Computing (1996), 612-619.
[78] L.J. Guibas, D.E. Knuth and M. Sharir, Randomized incremental construction of Delaunay and Voronoi diagrams, Algorithmica 7 (1992), 381-413.
[79] D. Gusfield, Bounds for the parametric spanning tree problem, Proc. Humboldt Conf. Graph Th., Combinatorics and Computing, Utilitas Mathematica (1979), 173-183.
[80] S. Haruyama and D. Fussell, A new area-efficient power routing algorithm for VLSI layout, Proc. IEEE Int. Conf. CAD (1987), 38-41.
[81] M.R. Henzinger and V. King, Maintaining minimum spanning trees in dynamic graphs, Tech. Rep. DCS-251-IR, Univ. of Victoria, Dept. of Computer Science (1997).
[82] J.-M. Ho, D.T. Lee, C.-H. Chang and C.K. Wong, Minimum diameter spanning trees and related problems, SIAM J. Comput. 20 (1991), 987-997.
[83] M.A.B. Jackson, A. Srinivasan and E.S. Kuh, Clock routing for high-performance IC's, Proc. 27th ACM/IEEE Design Automation Conf. (1990), 573-579.
[84] A. Kahng, J. Cong and G. Robins, High-performance clock routing based on recursive geometric matching, Proc. 28th ACM/IEEE Design Automation Conf. (1991), 322-327.
[85] D. Karger, P.N. Klein and R.E. Tarjan, A randomized linear-time algorithm to find minimum spanning trees, J. ACM 42 (1995), 321-328.
[86] N. Katoh, T. Tokuyama and K. Iwano, On minimum and maximum spanning trees of linearly moving points, Discrete Comput. Geom. 13 (1995), 161-176.



[87] J.M. Keil, Approximating the complete Euclidean graph, Proc. 1st Scand. Worksh. Algorithm Theory, Springer LNCS 318 (1988), 208-213.
[88] J.M. Keil, Computing a subgraph of the minimum weight triangulation, Comput. Geom. 4 (1994), 13-26.
[89] J.M. Keil and C.A. Gutwin, The Delaunay triangulation closely approximates the complete Euclidean graph, Discrete Comput. Geom. 7 (1992), 13-28.
[90] S. Khuller, B. Raghavachari and N. Young, Low degree spanning trees of small weight, SIAM J. Comput. 25 (1996), 355-368.
[91] S. Khuller, B. Raghavachari and N. Young, Balancing minimum spanning trees and shortest-path trees, Algorithmica 14 (1995), 305-321.
[92] D. Kirkpatrick, Optimal search in planar subdivisions, SIAM J. Comput. 12 (1983), 28-35.
[93] G.T. Klincsek, Minimal triangulations of polygonal domains, Ann. Discrete Math. 9 (1980), 121-123.
[94] E. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan and D.B. Shmoys, The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley (1985).
[95] C. Levcopoulos, An Ω(√n) lower bound for non-optimality of the greedy triangulation, Inform. Process. Lett. 25 (1987), 247-251.
[96] C. Levcopoulos and D. Krznaric, Quasi-greedy triangulations approximating the minimum weight triangulation, Proc. 7th ACM-SIAM Symp. Discrete Algorithms (1996), 392-401.
[97] C. Levcopoulos and A. Lingas, On approximation behavior of the greedy triangulation for convex polygons, Algorithmica 2 (1987), 175-193.
[98] C. Levcopoulos and A. Lingas, There are planar graphs almost as good as the complete graphs and as short as minimum spanning trees, Proc. Int. Symp. Optimal Algorithms, Springer LNCS 401 (1989), 9-13.
[99] R.J. Lipton and R.E. Tarjan, Applications of a planar separator theorem, SIAM J. Comput. 9 (1980), 615-627.
[100] E.L. Lloyd, On triangulations of a set of points in the plane, Proc. 18th IEEE Symp. Foundations of Comp. Sci. (1977), 228-240.
[101] G.K. Manacher and A.L. Zobrist, Neither the greedy nor the Delaunay triangulation approximates the optimum, Inform. Process. Lett. 9 (1979), 31-34.
[102] C.S. Mata and J.S.B. Mitchell, Approximation algorithms for geometric tour and network design problems, Proc. 11th ACM Symp. Comp. Geom. (1995), 360-369.
[103] D. McCallum and D. Avis, A linear algorithm for finding the convex hull of a simple polygon, Inform. Process. Lett. 9 (1979), 201-206.
[104] K. Mehlhorn, S. Meiser and C. O'Dunlaing, On the construction of abstract Voronoi diagrams, Discrete Comput. Geom. 6 (1991), 211-224.
[105] J.S.B. Mitchell, Guillotine subdivisions approximate polygonal subdivisions: A simple new method for the geometric k-MST problem, Proc. 7th ACM-SIAM Symp. Discrete Algorithms (1996), 402-408.
[106] C. Monma, M. Paterson, S. Suri and F. Yao, Computing Euclidean maximum spanning trees, Algorithmica 5 (1990), 407-419.
[107] C. Monma and S. Suri, Transitions in geometric spanning trees, Discrete Comput. Geom. 8 (1992), 265-293.
[108] K. Mulmuley, Randomized multidimensional search trees: Dynamic sampling, Proc. 7th ACM Symp. Comp. Geom. (1991), 121-131.
[109] C. O'Dunlaing and C.K. Yap, A "retraction" method for planning the motion of a disc, J. Algorithms 6 (1985), 104-111.
[110] S. Olariu, S. Toida and M. Zubair, On a conjecture by Plaisted and Hong, J. Algorithms 9 (1988), 597-598.
[111] C.H. Papadimitriou and U.V. Vazirani, On two geometric problems related to the traveling salesman problem, J. Algorithms 5 (1984), 231-246.
[112] D.A. Plaisted and J. Hong, A heuristic triangulation algorithm, J. Algorithms 8 (1987), 405-437.
[113] R. Ravi, R. Sundaram, M.V. Marathe, D.J. Rosenkrantz and S.S. Ravi, Spanning trees short and small, Proc. 5th ACM-SIAM Symp. Discrete Algorithms (1994), 546-555.
[114] G. Robins and J.S. Salowe, Low-degree minimum spanning trees, Discrete Comput. Geom. 14 (1995), 151-165.
[115] J. Ruppert and R. Seidel, Approximating the d-dimensional complete Euclidean graph, Proc. 3rd Canad. Conf. Comp. Geom. (1991), 207-210.

Spanning trees and spanners

461

[116] J.S. Salowe, Constructing multidimensional spanner graphs, Intemat. J. Comput. Geom. Appl. 1 (1991), 99-107. [117] J.S. Salowe, Euclidean spanner graphs with degree four. Discrete Appl. Math. 54 (1994), 55-66. [118] J.S. Salowe, D.S. Richards and D.E. Wrege, Mixed spanning trees: A technique for performance-driven routing, Proc. 3rd Great Lakes Symp. VLSI Design Automation of High Performance VLSI Systems, IEEE (1993), 62-66. [119] N. Samak and R.E. Tarjan, Planar point location using persistent search trees, Comm. ACM 29 (1986), 669-679. [120] O. Schwarzkopf, Dynamic maintenance of geometric structures made easy, Proc. 32nd IEEE Symp. Foundations of Comp. Sci. (1991), 197-206. [121] Sedgewick and J. Vitter, Shortest paths in Euclidean graphs, Algorithmica 1 (1986), 31-48. [122] M.L Shamos and D. Hoey, Closest-point problems, Proc. 16th IEEE Symp. Foundations of Comp. Sci. (1975), 151-162. [123] N. Sherwani, Algorithms for VLSI Physical Design Automation, Kluwer (1993). [124] D.D. Sleator and R.E. Tarjan, A data structure for dynamic trees, J. Comput. System. Sci. 24 (1983), 362-381. [125] J.M. Steel and T.L. Snyder, Worst-case growth rates of some classical problems of combinatorial optimization, SIAM J. Comput. 18 (1989), 278-287. [126] H. Tamaki and T. Tokuyama, How to cut pseudo-parabolas into segments, Proc. 11th ACM Symp. Comput. Geom. (1995), 230-237. [127] R. Tsay, Exact zero skew, Proc. IEEE Int. Conf. CAD (1991), 336-339. [128] P.M. Vaidya, Minimum spanning trees in k-dimensional space, SIAM J. Comput. 17 (1988), 572-582. [129] P.M. Vaidya, A sparse graph almost as good as the complete graph on points in K dimensions. Discrete Comput. Geom. 6 (1991), 369-381. [130] B.-T. Yang, A better subgraph of the minimum weight triangulation. Inform. Process. Lett. 56 (1995), 255-258. [131] B.-T. Yang, Y.-F. Xu and Z.-Y You, A chain decomposition algorithm for the proof of a property on minimum weight triangulations, Proc. Int. Symp. Algorithms and Computation, Springer LNCS 834 (1994), 423-427. [132] A.C. Yao, On constructing minimum spanning trees in k-dimensional space and related problems, SIAM J. Comput. 11 (1982), 721-736. [133] A.A. Zelikovsky and D.D. Lozevanu, Minimal and bounded trees, Proc. Tezele Cong. XVIII Acad. Romano-Americane, Kishinev (1993) 25-26.

This Page Intentionally Left Blank

CHAPTER 10

Geometric Data Structures

Michael T. Goodrich*
Center for Geometric Computing, Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218
E-mail: [email protected]

Kumar Ramaiyer^
Informix Software, Inc., 1111 Broadway, Suite 2000, Oakland, CA 94607
E-mail: rk@informix.com

Contents
1. Introduction 465
  1.1. Problem classification and goals 465
  1.2. Chapter outline 466
2. Embedded planar graphs 466
  2.1. The Doubly Connected Edge List (DCEL) 467
  2.2. The winged-edge representation 468
  2.3. The quad-edge representation 469
  2.4. Well known PSLGs 469
3. Planar point location and ray shooting 471
  3.1. The slab method 471
  3.2. The trapezoid method 473
  3.3. The chain method 474
  3.4. Improving the chain method via fractional cascading 475
4. Dynamic point location 480
  4.1. The inter-laced trees technique 481
5. Convex hulls and convex polytopes 482
  5.1. d-dimensional representations 482
  5.2. 2-dimensional dynamic maintenance 483
  5.3. 3-dimensional subdivision hierarchies 483

*This research supported by the NSF under Grant CCR-9625289, and by ARO under grant DAAH04-96-1-0013. Author's homepage: http://www.cs.jhu.edu/goodrich/.
^This research supported by the NSF under Grant CCR-9300079, and by ARO under grant DAAH04-96-1-0013. Author's homepage: http://www.cs.jhu.edu/grad/kumar/.
HANDBOOK OF COMPUTATIONAL GEOMETRY
Edited by J.-R. Sack and J. Urrutia
© 1999 Elsevier Science B.V. All rights reserved


6. Rectilinear data structures 484
  6.1. k-D trees and quadtrees 484
  6.2. Segment trees 485
  6.3. Range trees 485
7. General techniques 485
  7.1. Fractional cascading 486
  7.2. Persistence 486
  7.3. Static to dynamic conversions 486
  7.4. Internal-memory to external-memory conversions 487
References 487


1. Introduction

Computational geometry problems often require preprocessing geometric objects into a simple and space-efficient structure so that operations on the geometric objects can be performed repeatedly in an efficient manner. We refer to this as the "geometric data structuring" approach. This approach has been widely used by several researchers to design very elegant data structures that solve a number of geometry problems [15,22,26,32,36,43-45,56,63]. Classic data structures like lists, trees, and graphs are by themselves not sufficient to represent geometric objects, as they are generally one-dimensional in nature or do not capture the rich structural properties of the geometric objects in the domain. For example, in a planar subdivision the clockwise and counter-clockwise orderings of edges around a vertex are often useful for solving many problems involving subdivisions (e.g., see Guibas and Stolfi [41]). Similarly, facial ordering and connectivity information of subdivisions is often needed and requires special representation. If one is given a collection of horizontal segments in the plane, for example, one may wish to represent the endpoints, and also some representation of the "aboveness" partial order (e.g., see Edelsbrunner [34]). Higher-dimensional geometric objects define even richer relationships and likewise cannot be easily represented by the classical data structures, and so require careful study. Even with this short list of examples one can see that geometric data requires the representation of relationships that cannot be captured using strictly numeric or combinatoric data structures. Indeed, it is the interplay of numeric and combinatoric data that makes the design of efficient geometric data structures an interesting and challenging research domain.

1.1. Problem classification and goals

Data structuring problems involving geometric objects vary and are often classified as follows:
Static: In this case all the geometric objects in the problem domain are given as part of the input.
Online: In this case new geometric objects are allowed to be added to the problem domain, but cannot be deleted.
Dynamic: In this most general case new geometric objects are allowed to be added and some existing objects are allowed to be deleted from the problem domain.
In addition, data structures used for storing geometric objects should ideally achieve all of the following goals:
• capture structural information,
• allow for efficient query processing,
• allow for efficient updates,
• optimize the space required, and
• store objects efficiently so as to minimize the number of I/O accesses when the input size is very large.


1.2. Chapter outline

In this chapter we review and highlight research on geometric data structures, describing important examples for each of the above problem classifications. For each example we review, we sketch how well it achieves the basic goals of geometric data structure design. In the next section, we describe methods for representing embedded straight-line graphs, which arise in a number of computational geometry contexts, including the construction and maintenance of fundamental geometric structures such as convex hulls, Voronoi diagrams, Delaunay triangulations, and arrangements. In Section 3, we review several methods for performing an important search operation in such subdivisions, the point location search, and we give a short review of dynamic methods for solving this problem in Section 4. In Section 5, we discuss some methods for representing convexity, and, in Section 6, we describe some data structures for representing data that is rectilinear (i.e., aligned with the coordinate axes). Finally, in Section 7, we discuss some general techniques for designing geometric data structures. Since geometric data structures are fundamental in the design and implementation of geometric computations, there are necessarily a number of interesting geometric data structures that we will not be discussing in this chapter. Fortunately, many of these are covered in other chapters of this Handbook [60]. In particular, a number of spatial and rectilinear structures for higher-dimensional spaces are discussed in Chapter 17. In addition, general techniques for designing randomized geometric data structures are covered in Chapters 13 and 16. Data structures for shortest paths and ray shooting are discussed in Chapters 15 and 12. In addition, the important and related visibility graph structure is discussed in Chapter 22. Other visibility-related questions are covered in Chapter 19. An interesting variation of the data structuring problem is covered in Chapters 15 and 20, where one is allowed to answer queries approximately. There are also a host of interesting data structures that are based upon ε-nets and spanning trees with low stabbing numbers, which are topics covered in Chapters 1, 2, and 13. Finally, fundamental to the issue of geometric representations is the issue of numeric stability, which is covered in Chapter 14. We highlight in the following sections various geometric data structures and we also discern some of the general principles behind data structure design for geometric structures.

2. Embedded planar graphs

A graph G = (V, E) is said to be embedded in a surface S when it is drawn on S so that no two edges intersect. A graph is planar if it can be embedded in the plane; a plane graph has already been embedded in the plane [42], in which case it makes sense to define the set F of faces of G. A planar graph can always be embedded in the plane so that all its edges are straight-line segments [37], and such an embedded graph is called a planar straight line graph (PSLG). Planar graphs play an important role in many two-dimensional computational geometry problems, for an embedded planar graph represents a planar subdivision, which is a structure that arises in several useful applications, including arrangements of lines, Voronoi diagrams, Delaunay triangulations, and general triangulations.


A measure of the usefulness of an embedded graph representation is that such a representation should allow for efficient traversal of the edges around a vertex (in clockwise and counter-clockwise directions), and it should allow for efficient access to all edges bounding a face and all faces incident on a vertex. In addition, it is very important for such a representation to preserve the topology of the embedding of the planar graph, as a given planar graph may have several embeddings. Once the embedding of a planar graph is given in the form of a planar straight line graph, one of the simplest representations is to represent the graph as a collection of simple polygons. This representation is not flexible enough for traversal, however. Representations for embedded planar graphs that do allow for efficient traversals include the doubly connected edge list or DCEL [57], the winged-edge structure [3], and the quad-edge structure [41]. Let us therefore review each of these representations.

2.1. The Doubly Connected Edge List (DCEL)

Muller and Preparata [50,57] designed a PSLG representation, which they called the doubly-connected edge list (or DCEL). The DCEL for a PSLG G = (V, E, F) has a collection of edge nodes. This representation treats each edge as a directed edge; hence, it imposes an orientation on each edge. Each edge node e = (va, vb) is a structure consisting of six fields:
• Vo, representing the origin vertex (va),
• Vd, representing the destination vertex (vb),
• Fl, representing the left face as we traverse e from Vo to Vd,
• Fr, representing the right face as we traverse e from Vo to Vd,
• CCWo, representing the counter-clockwise successor of e around Vo, and
• CCWd, representing the counter-clockwise successor of e around Vd.
Figure 1 shows the DCEL representation of the edge e1 in the subdivision given. From the DCEL representation one can easily extract, in linear time, the edges around faces (in clockwise order) and the edges around a vertex (in counter-clockwise order).


Fig. 1. A DCEL representation of an embedded planar graph.


To get the other ordering information efficiently, however, one needs to duplicate the edges of the PSLG with opposite orientations from their original orientations and store a DCEL for this orientation as well. Once edges are duplicated and oriented in the opposite direction, one can access the edges bounding a face in counter-clockwise direction and the edges around a vertex in clockwise direction in linear time. The total size of the DCEL representation is O(|V| + |F| + |E|).
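To make the six-field layout concrete, here is a minimal sketch in Python; the chapter itself gives no code, and the class and function names are illustrative only. Vertices and faces are assumed to be plain identifiers.

```python
class DCELEdge:
    """One edge node of a DCEL, with the six fields listed above.
    Vertices and faces are assumed to be plain identifiers (illustrative)."""
    def __init__(self, v_origin, v_dest, f_left, f_right,
                 ccw_origin=None, ccw_dest=None):
        self.v_origin = v_origin      # Vo: origin vertex
        self.v_dest = v_dest          # Vd: destination vertex
        self.f_left = f_left          # Fl: face left of the directed edge
        self.f_right = f_right        # Fr: face right of the directed edge
        self.ccw_origin = ccw_origin  # CCWo: next edge ccw around Vo
        self.ccw_dest = ccw_dest      # CCWd: next edge ccw around Vd

def edges_around_vertex(start, v):
    """Traverse the edges incident on vertex v in counter-clockwise order
    by repeatedly following the appropriate CCW successor pointer."""
    e = start
    while True:
        yield e
        e = e.ccw_origin if e.v_origin == v else e.ccw_dest
        if e is start:
            return
```

Iterating the CCW pointers this way yields the counter-clockwise edge ring around a vertex in time proportional to its degree, which is exactly the traversal property discussed above.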

2.2. The winged-edge representation

The winged-edge representation was proposed by Baumgart [3] and is similar to the DCEL. Given a PSLG G = (V, E, F), the winged-edge representation stores an array for the vertices. This array stores for each vertex an arbitrary edge incident on that vertex. The winged-edge representation also stores an array for faces, which stores for each face an arbitrary edge bounding that face. For each edge e = (va, vb) it stores the following information:
• Vo, representing the origin vertex (va),
• Vd, representing the destination vertex (vb),
• Fl, representing the left face as seen from Vo to Vd,
• Fr, representing the right face as seen from Vo to Vd,
• CWo, representing the clockwise successor of e around Vo,
• CCWo, representing the counter-clockwise successor of e around Vo,
• CWd, representing the clockwise successor of e around Vd, and
• CCWd, representing the counter-clockwise successor of e around Vd.
Thus four successor edges are stored for each edge. This allows one to do all the accesses we outlined earlier as efficiently as in a (double-orientation) DCEL. Figure 2 shows the winged-edge representation for a PSLG. The total storage needed for the winged-edge representation is O(|V| + |F| + |E|), but the constant factor is slightly better than for a double-orientation DCEL.


Fig. 2. A winged-edge representation of an embedded planar graph.


Fig. 3. A quad-edge representation of an embedded planar graph. The graph and the representation are shown. The thick edges are the edges of the graph and the gray-dotted edges are the edges of the dual graph.

2.3. The quad-edge representation

Guibas and Stolfi [41] proposed the quad-edge representation for embedded planar graphs. Their structure is isomorphic to the winged-edge structure, but it is given semantics general enough to represent an undirected graph embedded in an arbitrary two-dimensional manifold. Their structure also simultaneously represents the graph-theoretic primal and the dual of a planar graph. Figure 3 shows an example representation. The total space required for the quad-edge structure is O(|V| + |F| + |E|), with the constants being essentially the same as for the winged-edge representation.

2.4. Well known PSLGs

As motivation for the use of these subdivision representations, let us briefly review some of the well-known geometric structures that are special cases of PSLGs. These structures are treated in greater detail in other chapters of this Handbook.

Line arrangements. Given a set of lines in the plane, the intersections of the lines form a structure that is referred to as the arrangement. This structure is a planar subdivision. It is a very useful structure and has a number of applications. Since any two non-parallel lines intersect, if the given set of n lines does not contain any pair of parallel lines, then the number of intersections is O(n²) and hence the size of the arrangement is O(n²). There are algorithms for computing the arrangement of lines in O(n²) time, which is of course optimal. Figure 4 shows an example arrangement of lines. The arrangement is a PSLG and can be represented using one of the data structures discussed in the previous section.


Fig. 4. Arrangements of lines in a plane.

Fig. 5. The Voronoi diagram and Delaunay triangulation of a point set. The thick edges represent the triangulation and the thin edges represent the Voronoi diagram.

Voronoi diagrams and Delaunay triangulations. Given a collection of points and a metric, say L2, one can define a geometric structure called the Voronoi diagram. This is a very useful geometric structure for answering a number of questions one can ask about a collection of points, including closest pairs and nearest neighbors. The Voronoi diagram for a set of points is a PSLG. Each face in the graph contains a unique point from the given set. Each face is the locus of points in the plane which are closer to the point inside it than to any other point in the given set. A related structure, the Delaunay triangulation, is the graph-theoretic planar dual of the Voronoi diagram, in which the faces and vertices are interchanged while preserving the incidence relationships (and which forms a triangulation if the original points are in general position). Figure 5 shows a Voronoi diagram. Both the Delaunay triangulation and the Voronoi diagram have a wide variety of uses, and they are PSLGs. We can therefore use


one of the above representations (DCEL, quad-edge, or winged-edge) to store Voronoi diagrams and Delaunay triangulations. Interestingly, the quad-edge representation has the added advantage of being able to simultaneously represent both the Voronoi diagram and the Delaunay triangulation of a point set using a single representation. In the next section we study how to perform searches like point location or ray shooting in PSLG structures such as arrangements and Voronoi diagrams.

3. Planar point location and ray shooting

The planar point location problem is one of the fundamental computational geometry problems and has several applications. This problem has been studied by several researchers [22,31,32,36,44,45,56,63], and there are a number of efficient solutions. The problem in its widely studied form is stated as follows: Given a planar subdivision in the form of a PSLG (using one of the representations discussed in the previous section), preprocess the subdivision and store it in a data structure so as to answer queries of the form "given a query point p, find the face of the subdivision containing p". This query is typically answered by performing a vertical ray shooting query from p, where one determines the first segment(s) in the PSLG hit by vertical rays emanating out of p. The important criteria for judging solutions to the point location problem are the space occupied by the data structure and the query time it allows. The preprocessing time is also an important criterion, but is generally not considered as critical as the others, since it amounts to a one-time cost. Variations of the problem as stated above include methods for special types of subdivisions, i.e., introducing constraints on the shapes of the faces of the subdivision and the connectivity of the underlying planar graph. Different subdivisions that have been studied over the years include general subdivisions (which may not even be connected), connected subdivisions, monotone and convex subdivisions (where each face is respectively a monotone polygon or a convex polygon), and rectilinear subdivisions (where only horizontal and vertical edges are used). We outline below the various data structures used for solving the point location problem.

3.1. The slab method

The "slab" method proposed by Dobkin and Lipton [31] is historically the first non-trivial method to solve the point location problem, and it is suitable for the most general types of subdivisions. The idea is very simple. Since general subdivisions can be of an arbitrary nature, their basic idea is to partition the subdivision into a collection of vertical slabs so that each slab contains only triangles and trapezoids. The partition is done as follows: one draws vertical lines through each of the vertices of the subdivision. This partitions the subdivision into O(n) slabs. The slabs have the property


Fig. 6. The slab method: Partitioning the subdivision into vertical slabs consisting of trapezoids.

that none of the edges inside the slab cross each other; the edges either cross the vertical boundaries of the slab or two or more edges meet at a vertex through which the vertical boundary line of the slab passes. Thus, each slab contains a collection of triangles and trapezoids with vertical boundaries. Dobkin and Lipton store this collection of slabs in a data structure to perform point location as follows: the slabs are totally ordered left-to-right by the x-coordinates of the vertices and hence can be stored using any balanced binary search tree structure. Call this search tree A. The edges within each slab are totally ordered by the "above" relationship, i.e., given a point and the supporting line of an edge inside the slab, one can find out whether the point is above or below the edge.¹ This total order relationship (in each slab) can also be stored using a balanced binary search tree. Let us call the collection of search trees for all the slabs B. Point location is done as follows: given a query point p = (x0, y0), we first use the x-coordinate of p, i.e., x0, to identify the slab in which the point lies by doing a binary search in A. Once the slab is identified, the corresponding binary tree in B storing the trapezoids within the slab is searched by performing "aboveness" comparisons.

Summary:
Space: Since O(n) binary search trees (each of size O(n)) need to be stored in B, the total space requirement is O(n²).
Query Time: To locate a point we need to perform two binary searches, and hence the query time complexity is O(log n).

¹ This is the unit-time operation that is used in the complexity measure. This operation can be implemented either by checking the point against the equation of the supporting line or by checking whether the vertices of the edge e = (v1, v2) and the query point v3 make a "left turn" or "right turn" in the order v1, v2, and v3.
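As a hedged illustration of the two-level search, the sketch below assumes the slab structure has already been built: a sorted list of slab boundaries stands in for tree A, and per-slab edge lists sorted by aboveness stand in for the trees in B (with the left endpoint of each edge listed first). All names are illustrative.

```python
import bisect

def left_turn(v1, v2, p):
    # > 0 iff v1, v2, p make a left turn; this is exactly the unit-time
    # "aboveness" test described in the footnote above.
    return (v2[0]-v1[0])*(p[1]-v1[1]) - (v2[1]-v1[1])*(p[0]-v1[0])

def locate(boundaries, slab_edges, p):
    """boundaries: sorted x-coordinates of the vertical slab lines.
    slab_edges[i]: edges of slab i as ((x1,y1),(x2,y2)) pairs, left endpoint
    first, sorted bottom-to-top; there are len(boundaries)+1 slabs.
    Returns (slab index, number of edges of that slab lying below p)."""
    i = bisect.bisect_right(boundaries, p[0])   # binary search in tree A
    edges = slab_edges[i]
    lo, hi = 0, len(edges)
    while lo < hi:                              # binary search in tree B_i
        mid = (lo + hi) // 2
        v1, v2 = edges[mid]
        if left_turn(v1, v2, p) > 0:            # p lies strictly above edge mid
            lo = mid + 1
        else:
            hi = mid
    return i, lo    # p lies between edges lo-1 and lo of slab i
```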


3.2. The trapezoid method

The slab method was improved by Preparata [56] to achieve an O(log n) query time with only O(n log n) space. His method is commonly referred to as the trapezoid method, and is in principle the same as the slab method, as it also partitions the subdivision into simple trapezoids. But rather than drawing O(n) vertical lines, a recursive structure is built. The method for constructing a point location data structure using the trapezoid method is as follows. One inductively assumes one is given a trapezoid r with vertical sides containing the subdivision (initially we can use a bounding rectangle). Identify in r the "spanning" edges, i.e., the edges which intersect both vertical boundaries of r. If there are no spanning edges, then partition r again vertically at the median vertex and recurse on each side. If there are spanning edges, then partition the slab into a number of trapezoids using the spanning edges, i.e., each trapezoid has a spanning edge as its top and bottom boundaries and the vertical boundaries of the slab as its two side boundaries. Then order these trapezoids by the "above" relationship discussed in the previous section and recursively partition each such non-trivial trapezoid. Thus, one recursively partitions the subdivision, first vertically and then using the spanning edges. The vertical cuts are global and can be organized using a single binary search tree. The trapezoids within a container trapezoid r are organized using a biased search tree [4], which has the property that the depth of an item i with weight wi is O(log(W/wi)), where W is the sum of the weights of all the items in the tree. Each trapezoid is assigned a weight proportional to the number of vertices within the trapezoid. Hence the resulting structure storing the subdivision is a compound structure in which the primary tree is a balanced binary search tree organizing the vertical cuts and the secondary structure is a


Fig. 7. Trapezoid method: The vertical cuts are shown in the tree as triangular nodes and the spanning cuts are shown as circular nodes.


biased search tree storing the trapezoids. Figure 7 shows an example subdivision and the resulting data structure. The worst-case depth of a leaf node u in this compound structure is calculated as follows: the depth in the primary structure is O(log n), since vertical cuts are always at median x-coordinates. Suppose the depths in the different levels of secondary structures from the leaf u (weight of u = w0 = 1) to the root are O(log(W1/w0)), O(log(W2/W1)), ..., O(log(W/Wk)). These values form a telescoping sum that reduces to O(log W). But the total weight W at the root is O(n log n); hence, the worst-case depth of a leaf node in the compound structure is O(log n).

Summary:
Space: O(n log n), as there are O(log n) levels and in each level structures of total size O(n) are stored.
Query Time: To locate a point we need to perform alternating binary searches in the primary tree and the secondary trees. As argued above, this is bounded by O(log n).
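Written out explicitly, the telescoping depth bound used above is (with the conventions W0 = w0 = 1 and Wk+1 = W, the total weight at the root):

```latex
\sum_{i=0}^{k} \log\frac{W_{i+1}}{W_i}
  = \log \prod_{i=0}^{k} \frac{W_{i+1}}{W_i}
  = \log\frac{W}{w_0}
  = \log W
  = O(\log(n \log n))
  = O(\log n).
```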

3.3. The chain method

Lee and Preparata [45] introduced an alternative approach. Unlike the slab method and the trapezoid method, which partition the subdivision into trapezoids, their method partitions the subdivision into regions separated by "chains". A chain is a sequence of edges that either forms a cycle or a path such that the end vertices belong to the boundary of the unbounded region. This way the chain, whether it is a cycle or a path, partitions the subdivision into two parts. The method then is to first find a "median" chain so as to partition the subdivision into roughly two equal parts. Then one recursively finds chains in each of the two parts to construct a "balanced tree" of chains. The point location method then proceeds by discriminating the point against the chain at the root to find the partition containing the point and then recursing at the appropriate child in the tree. The important work involved in searching such a data structure is therefore the discrimination of a point against a chain. It is easy to see that discrimination of a point against a general chain is as difficult as point location in a simple polygon and hence is not really any simpler. So Lee and Preparata restricted each chain to be monotone, thus restricting their method to monotone subdivisions (which really is not a big restriction, since we can convert a general subdivision to a monotone subdivision by a vertical decomposition construction). For any monotone chain there exists a straight line such that any line orthogonal to it intersects the chain at most once, and this property can be exploited for point location. Figure 8 shows the partition of a subdivision into monotone chains. The edges are shared by different chains, but the data structure stores only one copy of each edge. Lee and Preparata's point location data structure is a compound data structure that has as its primary data structure a binary tree with a separating chain associated with each vertex and as its secondary data structure the description of each chain. The primary and secondary data structures are both balanced binary search trees, actually. During a point


Fig. 8. Chain method: Partition of the subdivision into monotone chains (with respect to the y-axis).

location, discrimination of a point versus a monotone chain is done using the description of the chain in the secondary data structure. The discrimination of a point versus a monotone chain (say, monotone with respect to a vertical line) is done in the obvious way. The projections of the y-coordinates of the vertices of the chain onto the vertical line partition the line into intervals, which enables a binary search; the query point is located within one of the intervals (so that the point can then be compared with the straight line supporting the edge corresponding to the interval to find out which side of the chain the point lies in). Thus, each chain discrimination can be done in O(log n) time.

Summary:
Space: Each edge potentially belongs to more than one chain. But by assigning each edge to the highest chain in the primary data structure to which the edge belongs, the space required for the data structure can be made O(n).
Query Time: The discrimination of a point with respect to a monotone chain takes O(log n), and the depth of the primary data structure is O(log n). Thus, point location takes O(log² n) time using the chain method.
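A single point-versus-monotone-chain discrimination is thus one binary search plus one orientation test. The following sketch is illustrative only; it assumes the chain is monotone with respect to the y-axis, has at least two vertices, and has its y-coordinates precomputed once per chain.

```python
import bisect

def side_of_chain(chain, chain_ys, p):
    """chain: vertices of a y-monotone chain in increasing y;
    chain_ys: their y-coordinates (precomputed once per chain).
    Returns > 0 if p lies to the left of the chain, < 0 if to the right,
    and 0 if p lies on the supporting line of the relevant edge."""
    j = bisect.bisect_left(chain_ys, p[1])   # interval containing p's y
    j = min(max(j, 1), len(chain) - 1)       # clamp to a valid edge index
    v1, v2 = chain[j - 1], chain[j]          # edge spanning that interval
    # orientation of p relative to the upward-directed edge v1 -> v2
    return (v2[0]-v1[0])*(p[1]-v1[1]) - (v2[1]-v1[1])*(p[0]-v1[0])
```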

3.4. Improving the chain method via fractional cascading

Edelsbrunner, Guibas, and Stolfi [36] propose an improvement to the chain method using a technique called fractional cascading. This technique is applicable in any general situation where there are repeated similar searches along the nodes of a path in a directed graph in which the degree of each node is bounded by a constant and the set of items searched in each node is drawn from the same universe. Under these conditions one can do better than performing several independent binary searches. The list stored in each node of the graph is augmented with extra elements from the lists in the successors so as to correlate the searches in a node and its successors. We now describe the fractional cascading technique and show how to improve the chain method. In the chain method, a sequence of O(log n) point-versus-monotone-chain discriminations is performed. In each such discrimination, a search is performed using the same query point, but against different chains. Moreover, the set of y-coordinates against which the comparisons are made is a fixed set, i.e., the y-coordinates of


the vertices of the subdivision. Hence the conditions are favorable for applying the fractional cascading technique. To be concrete, let us therefore review in detail how the fractional cascading technique can be applied in this case. Consider a node u in the primary structure with children v and w. Let Cu, Cv, and Cw be the chain lists in the nodes u, v, and w, respectively, before augmentation. The problem is to compute appropriate augmented lists Tu, Tv, and Tw stored in the nodes u, v, and w, respectively. We perform this augmentation bottom-up. Assume Tv and Tw are already constructed. For an appropriate constant d (d = 4 works, for example), we select every d-th element from Tv and Tw, respectively, and copy the elements into the list Cu to form the augmented list Tu. These copied elements are referred to as bridge elements in Tu. Moreover, each element i in Tu stores additional pointers as follows:
Left Bridge (Pl): Pointer to the closest bridge element (not to the left of i) copied from the left child.
Right Bridge (Pr): Pointer to the closest bridge element (not to the left of i) copied from the right child.
Proper (Pp): Pointer to the closest element from Cu not to the left of i.
Predecessor: Pointer to the predecessor element in Tu.
Bridge (Pb): Pointer to the corresponding element in Tv or Tw (only for elements copied into Cu).
The distance between two bridge elements in Tu is at most d. To perform the search, we first do a binary search with the given query point in the list stored at the root to identify which child to search. Suppose we need to search the left child. We then follow the Left Bridge pointer of the successor element of the query point in the T list of the root. From the Left Bridge we follow the Bridge pointer into the T list of the left child. We then use the Predecessor pointers to find the two elements in the T list of the left child which bracket the given query point. We select the successor and then use the Proper pointer to identify the element in the C list so as to perform comparisons for branching to the next level. Figure 9 shows an example where the list at a parent node is augmented with elements from the lists in two children. Every fourth element is copied from the child to the parent. For the sake of exposition, we use lists of integers. The total number of pointers traversed for crossing one level is d + 3, i.e., one Left Bridge pointer, one Bridge pointer, at most d Predecessor pointers, and one Proper pointer. The initial search takes O(log n) time and the subsequent searches together take O(d log n) time. If d is chosen constant, then the total search time is O(log n).

Summary:
Space: Edelsbrunner, Guibas, and Stolfi [36] show the total space requirement is O(n) for an appropriate constant value of d.
Query Time: O(log n), as argued above. The preprocessing time is O(n log n).


Fig. 9. Fractional cascading: Augmentation of list in a parent node with elements from the lists in two children. The left bridge and right bridge pointers for element 20 are shown. Similarly the proper pointer for element 9 is shown. Also the bridge pointers are shown.
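The following sketch illustrates the core of the technique under strong simplifying assumptions: a path of catalogs with one child per node (the real structure has two children and separate Left/Right Bridge pointers), distinct keys, and the Proper/Predecessor pointers collapsed into a short leftward walk. All names are illustrative.

```python
import bisect

D = 4  # bridge spacing; "d = 4 works", as noted above

def augment(own_keys, child_aug):
    # Build the augmented list T of a node from its own catalog C and the
    # already-augmented list of its child: every D-th child element is
    # copied up as a bridge remembering its index in the child list.
    entries = [(k, None) for k in sorted(own_keys)]
    entries += [(child_aug[i][0], i) for i in range(0, len(child_aug), D)]
    entries.sort(key=lambda e: e[0])
    aug, nearest = [], None
    for key, child_ix in reversed(entries):   # record nearest bridge to the right
        if child_ix is not None:
            nearest = child_ix
        aug.append((key, nearest))
    aug.reverse()
    return aug

def cascade_search(aug_path, x):
    # Locate x in every catalog on a root-to-leaf path: one binary search
    # at the root, then O(D) predecessor steps per level.
    pos = bisect.bisect_left([k for k, _ in aug_path[0]], x)
    positions = [pos]
    for level in range(1, len(aug_path)):
        parent, child = aug_path[level - 1], aug_path[level]
        bridge = parent[pos][1] if pos < len(parent) else None
        pos = bridge if bridge is not None else len(child)
        while pos > 0 and child[pos - 1][0] >= x:   # at most ~D steps back
            pos -= 1
        positions.append(pos)
    return positions

# Illustrative use: catalogs along a root-to-leaf path, built bottom-up.
catalogs = [[10, 40], [5, 25, 60], [1, 2, 3, 20, 30, 50, 70]]
aug_path, below = [], []
for cat in reversed(catalogs):
    below = augment(cat, below)
    aug_path.append(below)
aug_path.reverse()
print(cascade_search(aug_path, 22))   # successor position of 22 at each level
```

Only the root lookup is a full binary search; each later level is entered through a bridge and fixed up with a bounded number of predecessor steps, which is the whole point of the technique.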

Subdivision hierarchies. Historically, the method of Edelsbrunner, Guibas, and Stolfi is not the first to simultaneously achieve O(n) space and an O(log n) query time. Kirkpatrick [44] discovered earlier an elegant method, based upon a technique we call the subdivision hierarchy method, for performing point location. This method is also applicable for searching in higher-dimensional structures and is even amenable to parallelization [23-25]. The method requires that the subdivision is triangulated and that the outer face is a triangle. The triangulation, however, can be done in linear time using Chazelle's method [14] if the original subdivision is connected. The method then proceeds as follows:
1. Identify a maximal independent set in the PSLG representing the subdivision using a greedy heuristic, with the condition that the degree of the vertices in the independent set is bounded by a constant c. Also, the independent set should not include any vertices of the outer face.
2. Remove the vertices of the independent set and the edges attached to them. Retriangulate each of the star polygons which contain the vertices of the independent set.
3. Repeat the process until only the outer face remains.
Figure 10 shows the process of removing an independent set of vertices and retriangulating for an example subdivision. Now the point location data structure is constructed as follows: We have a layered directed acyclic graph in which each layer represents an intermediate triangulation of the subdivision in the above recursive process. We assign one node of the dag to each triangle. After removing the independent set and retriangulating, some of the triangles are destroyed and new triangles are introduced inside each star polygon. We introduce a node for each new triangle and introduce pointers from the node representing the new triangle to all the nodes representing the triangles which were destroyed within the star polygon. In the limiting case we have one node representing the outer face and three nodes representing the


Fig. 10. A subdivision hierarchy: The figure shows removal of independent set and retriangulation. The independent set vertices selected at a step are shown as hollow circles.

Fig. 11. A subdivision hierarchy: Organizing triangles in a dag for searching. The last three levels of a hierarchy are shown.

three triangles destroyed in the previous step. The resulting data structure thus is a layered dag, as shown in Figure 11. Kirkpatrick shows that in the layered dag:
• the degree of each node is constant, and
• a fraction of the vertices is removed at each step.
Hence the total depth of the dag is O(log n). This follows from a theorem in graph theory which states that there are "large" independent sets of constant-degree vertices in a planar graph, which allows one to find maximal independent sets whose size is a fraction of the current number of vertices. The point location algorithm proceeds by first locating the point inside the outer triangle.


At any step in the search algorithm one has located the query point in a triangle t on some level of the dag. One then follows the pointers in the dag to search all the triangles in the next level that were destroyed to form t. The query point can be located within the unique triangle among these candidates in O(1) time, and this establishes the invariant for the next level. Since there are only O(log n) levels in the dag, point location takes O(log n) time.

Summary:
Space: The space occupied by the dag is O(n).
Query Time: O(log n), as argued above. The preprocessing time is O(n) given the triangulated subdivision, as it takes only constant time to retriangulate each star polygon. Indeed, using Chazelle's triangulation method [14], the preprocessing for Kirkpatrick's method can be implemented in linear time whenever the original subdivision is connected.

The sweep method and persistence. There is actually one more well-known method for achieving O(log n)-time queries and O(n) space for planar point location. In particular, Sarnak and Tarjan [63] use the idea of persistence to build such a space-efficient point location data structure. Intuitively, they combine the techniques of the slab method, plane sweeping, and persistence to build a very elegant point location data structure. A similar method was also discovered by Cole [22]. A persistent data structure allows one to perform updates and queries. Updates must be performed on the most recent version of the data structure (in so-called partial persistence), but queries can be done on past versions, i.e., any previous version of the data structure that was modified by updates. Sarnak and Tarjan [63] modified the slab method as follows: given a subdivision, construct slabs by dropping vertical lines through each of the vertices. Now perform a plane sweep from the leftmost slab to the rightmost. The event points for the sweep are the vertices of the subdivision. A search structure is built for the edges within each slab, which is maintained during the sweep. The structure changes at each event point with the insertion and deletion of edges. Sarnak and Tarjan [63] interpreted these changes at the event points as persistent updates. As a result they maintain a single persistent structure during the sweep which is updated at every event point. This persistent structure need be only partially persistent, as updates occur only on the latest version during the sweep. The point location query is then performed as follows: using the x-coordinate of the query point, the appropriate version of the persistent structure (slab) is identified for searching. Then a persistent search is performed on the corresponding "past" version to identify the face. By implementing the sweep using a partially persistent red-black tree, Sarnak and Tarjan show how to construct an O(n)-space data structure that allows O(log n)-time persistent searches and updates. Moreover, they show that each update adds only O(1) amortized space to the data structure.


Summary:
Space: The space occupied by the persistent structure is O(n), since there are O(n) updates and each requires O(1) amortized space.
Query Time: O(log n), as mentioned above.
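Path copying is the simplest way to see partial persistence at work: each insertion copies only the nodes on its search path and shares everything else, so every older version stays intact and queryable. The sketch below is illustrative only; it uses an unbalanced tree and costs O(depth) space per update, whereas the Sarnak-Tarjan node-copying scheme on red-black trees achieves the O(1) amortized space bound quoted above.

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Return the root of a NEW version; only the nodes on the search path
    are copied, so all earlier versions remain valid."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def search(root, key):
    """An ordinary search, usable on any version, past or present."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

# One version per sweep event point; versions[i] can still be queried
# after later events have produced versions[i+1], versions[i+2], ...
versions = [None]
for event in [5, 2, 8, 6]:          # toy event sequence
    versions.append(insert(versions[-1], event))
assert search(versions[2], 8) is None       # 8 not yet present at version 2
assert search(versions[3], 8) is not None   # but present from version 3 on
```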

4. Dynamic point location

An important variant of the point location problem is to allow for environments that request incremental updates to the subdivision. In this variant one studies how best to reflect changes to the subdivision (e.g., deletion and insertion of edges or vertices) in the data structure storing the subdivision. The goal, of course, is to continue to support point location queries efficiently while also performing updates in an efficient manner. In addition, the space required by the data structure should be kept as small as possible. In this section we review some dynamic data structures used for performing dynamic planar point location, where edges are allowed to be inserted into or deleted from a subdivision. The goal here is to efficiently maintain the data structure under the update operations and allow for fast point location queries. The basic problems in performing updates are the following:
1. Propagation of new information, or modification of old information, efficiently. Suppose one is deleting an edge. If the edge is represented in several nodes of the data structure, then one needs to remove that information from all the nodes. For example, in the slab method, if an edge spans multiple slabs, then one needs to remove that information from each of the binary trees storing that edge. A reverse problem occurs when such a long edge is inserted into the subdivision.
2. Restructuring of the data structure to maintain the invariants assumed by the algorithm. For example, in the trapezoid method we have the invariant that each node represents a trapezoid and adjacent nodes are separated by a "spanning edge". Suppose a spanning edge is deleted. Then the two adjacent trapezoids (which are represented as two different nodes) must now be collapsed into a single trapezoid represented by a single node. Similarly, if a new spanning edge is inserted, then the corresponding node must be split into two different nodes.
3. Dynamization of the techniques (fractional cascading, persistence, etc.) used to achieve good space bounds.
Preparata and Tamassia [59] present a method for dynamic point location in a convex subdivision. They construct the data structure by dynamizing the structure used in the static trapezoid method. They place the restriction that the vertices lie on a fixed set of N horizontal lines, however. The data structure in this case uses space O(N log N). Still, they achieve an impressive time of O(log n + log N) for queries and an update time of O(log n log N). The data structure consists of a primary component that is any balanced binary search tree and secondary components that are biased binary search trees [4]. Preparata and Tamassia [58] also dynamized the chain method for performing point location on a monotone subdivision. They construct a data structure that allows for insertion and deletion of vertices, insertion and deletion of monotone chains of edges, and


point location queries. The update operation is required to leave the subdivision monotone. They achieve an O(log² n) query time, O(log n) time for inserting or deleting a vertex, and O(log² n + k) time for inserting or deleting a monotone chain of k edges. The space requirement is O(n). Chiang and Tamassia [21] later improved the above results for dynamic point location in a monotone subdivision. They further dynamized the trapezoid method to achieve O(log n) time for queries and O(log² n) time for updates. The space required for the data structure is O(n log n). Their data structure is a compound structure with the primary component being a BB[α] tree [10,48,51] and the secondary components being biased binary search trees [4]. Mehlhorn and Näher [49] dynamize fractional cascading to support insertions and deletions in O(log log n) amortized time and queries in O(log n + k log log n) time, where k is the length of the path traversed. Hence their dynamization adds an O(log log n) overhead to the static method. Dietz and Raman [27] improve the update time from amortized to worst-case. Cheng and Janardan [18] present methods for dynamic point location in any connected subdivision. They have two schemes. In one scheme, they achieve an O(log² n) query time, O(log n) time for insertion and deletion of vertices, and O(k log(n + k)) and O(k log n) times for insertion and deletion, respectively, of an arbitrary k-edge chain inside a region. The space requirement is O(n). In their other scheme they speed up the insertion and deletion of a k-edge monotone chain to O(log² n log log n + k), but increase the other bounds slightly. Their general approach is based on a new search strategy using priority search trees [47], taken together with the technique of dynamization proposed by Willard and Lueker [46,66] and Overmars [53]. The main idea of this technique is that rather than updating the data structure immediately with each update request, they perform only local updates and spread the restructuring over a sequence of future operations. They perform global rebuilding of the entire data structure periodically so that the structure does not go too far out of balance. They also make use of BB[α] trees. Goodrich and Tamassia [38] present a method for dynamic point location in monotone subdivisions. They improve the update times by paying a penalty on the query time. In particular, they achieve O(log n) time for insertion and deletion of vertices, O(log n + k) for the insertion and deletion of a monotone chain of k edges, and O(log² n) time for queries. The space requirement is O(n). Their data structure consists of two inter-laced spanning trees, each of which is represented using link-cut trees [65]. In order to be concrete about one dynamic point location method, we briefly review their method.

4.1. The inter-laced trees technique

Let S be a PSLG that is connected and monotone (and will remain that way throughout the update process). Goodrich and Tamassia [38] first construct a spanning tree T for the triangulation with the property that its root-to-leaf paths are monotone with respect to the y-axis. Then they construct a graph-theoretic dual of the triangulation such that it excludes the edges dual to the monotone spanning tree constructed above. This defines the spanning tree D for the dual graph of S, and these two trees "inter-lace."
Each node of D represents a triangle of S and each edge of D corresponds to a non-tree edge of S, with respect to T (since edges dual to edges of T are ignored while constructing


D), and hence determines a unique cycle in S. Moreover, this cycle partitions S into two regions, one inside the cycle and the other outside the cycle. This property allows one to search the subdivision by point-versus-cycle discriminations. The main idea behind point location is as follows: since each edge of D represents a cycle of S, it partitions S into two regions. Given a query point, we can compare it against the edges of the cycle to determine whether it is inside or outside the cycle in O(log n) time, since T is monotone. Depending on this test, we proceed to an appropriate edge of D, which is either inside or outside the cycle of S, for further discrimination. To perform such a search efficiently, one needs to balance the tree D. The authors use the link-cut tree data structure [65] to implement a recursive centroid search [13,40] in D that eliminates a constant fraction of the triangular faces with each cycle test. This allows one to perform searches in O(log² n) time, since the depth of the centroid decomposition tree is O(log n) and in each step one needs to perform an O(log n)-time point-versus-chain discrimination.

Summary:
Space: The space occupied by the structure is O(n) (again using the fractional cascading method).
Query Time: O(log² n), as mentioned above.
Update Time: The authors show how to use link-cut tree primitives to implement updates in O(log n) time.

5. Convex hulls and convex polytopes

Convex hulls and convex polytopes are fundamental geometric data structures and have been well studied. In this section we discuss data structure representations of convex hulls and polytopes and also how they are maintained dynamically as points are inserted and deleted.

5.1. d-dimensional representations

Given a collection of points in d dimensions, where d is a fixed constant, there are algorithms (both randomized and deterministic) for computing the convex hull of the points in this collection. Alternately, some of the algorithms exploit the duality between a point and a hyperplane in d dimensions, and compute the intersection of a collection of halfspaces determined by the origin and a set of hyperplanes, which by this duality directly gives the information about the convex hull of the input set of (primal) points. A convex polytope is represented by information about its faces, edges, and vertices and the relationships between them. Each face of the convex polytope is a convex set. The (d−1)-dimensional faces of a d-dimensional polytope are called facets, its (d−2)- and lower-dimensional faces are called subfacets, its one-dimensional faces are edges, and


its zero-dimensional faces are vertices. If a convex polytope arises from a convex hull computation, then each vertex is a point in the input set. A d-dimensional convex polytope is generally represented using an incidence graph [35]. Dobkin and Laszlo [30] define such a representation for 3-dimensional convex polytopes and Brisson [11] extends this to d-dimensional convex polytopes, for fixed d ≥ 2. In addition to the above definitions, we refer to a (d−2)-dimensional face as a ridge. In the incidence graph, for −1 ≤ k < d−1, a k-face f and a (k+1)-face g are incident upon each other if f belongs to the boundary of g. In this case, f is called a subface of g and g is called the superface of f.

5.2. 2-dimensional dynamic maintenance

Let us now consider the problem of maintaining the convex hull of points in the plane when the underlying point set changes. The online problem, where points may only be inserted, is easier than the dynamic problem, where one allows deletion of points as well. In the online case the convex hull can only expand in area. If a new point is determined to fall inside the existing convex hull, then one does not need to do any additional work. But if the point falls outside, then one must compute the tangents from the new point to the current convex hull. These tangents are added to the convex hull and the chain of points on the convex hull between the two tangents is deleted. This can be done in amortized O(log n) time, where n is the number of points on the convex hull, as shown by Preparata [55]. When the deletion of points is allowed, things get more complicated, since one needs to maintain convexity information about points that are not currently on the convex hull. Overmars and van Leeuwen [54] present an elegant solution to this problem. They maintain the convex hull as a union of two monotone chains, the upper and lower hulls, partitioned at the points with the largest and smallest x-coordinates, respectively. Each hull is then maintained as a compound tree structure, where each internal node stores the convex hull of the points in its subtree and the parent node adds the supporting tangent to the convex hulls stored at its two children to maintain the convex hull of all the points in its subtree. The insert and delete operations modify these lists at the nodes appropriately. They show that the update operations take O(log² n) time (where n is the current number of points in the set), while the query operation of asking for the current convex hull involves just reading the list from the root of the tree. In addition, they show that one can still perform tangent queries in O(log n) time.
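For the online case, the upper hull alone already shows the pattern described above. The sketch below keeps the upper-hull vertices in a Python list sorted by x (distinct x-coordinates assumed); a point falling inside is discarded, and otherwise the vertices between the two tangents are deleted. Note that Python list insertion is itself linear time, so the amortized O(log n) bound requires storing the hull in a balanced search tree instead; this illustrates only the logic, and all names are ours.

```python
import bisect

def cross(o, a, b):
    # z-component of (a - o) x (b - o); negative means a right turn
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def insert_upper(hull, p):
    # hull: upper-hull vertices in increasing x (distinct x assumed)
    i = bisect.bisect_left(hull, p)
    hull.insert(i, p)
    if 0 < i < len(hull) - 1 and cross(hull[i-1], p, hull[i+1]) >= 0:
        del hull[i]               # p lies on or below the hull: discard
        return hull
    while i >= 2 and cross(hull[i-2], hull[i-1], hull[i]) >= 0:
        del hull[i-1]; i -= 1     # left neighbor leaves the hull
    while i <= len(hull) - 3 and cross(hull[i], hull[i+1], hull[i+2]) >= 0:
        del hull[i+1]             # right neighbor leaves the hull
    return hull

hull = []
for q in [(0, 0), (4, 0), (2, 3), (1, 1), (3, 4)]:
    insert_upper(hull, q)
print(hull)   # upper hull of the points inserted so far
```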

5.3. 3-dimensional subdivision hierarchies

Representing 3-dimensional convex polytopes is considerably harder. We know of no efficient dynamic schemes, for example. Still, Dobkin and Kirkpatrick [28,29] present a beautiful static data structure for representing 3-dimensional convex polyhedra so as to answer tangent and intersection queries quickly. Their structure is based upon the subdivision hierarchy technique introduced earlier (in Section 3.4). They form a hierarchy by first identifying a relatively large independent set of vertices of at most constant degree


(viewing the edges of the polyhedron as a graph). They then remove these vertices and form the convex hull of those that remain, while forming pointers between the new facets formed and the vertex in the previous level that was deleted to give rise to these new facets. They then recursively repeat this process, terminating the construction when the polyhedron has constant size. Interestingly, they show that this simple approach can be used to answer a number of types of tangent and intersection queries on the original polyhedron in O(log n) time, where n is the number of vertices.

6. Rectilinear data structures

Having reviewed some data structures for maintaining convexity information, let us now consider the organization of geometric objects so as to enable "rectilinear" types of searching. In particular, we briefly review in this section methods that partition and query the space occupied by the underlying geometric objects using axis-parallel hyperplanes. For a more complete description of these techniques, the reader is referred to Chapter 17 by Nievergelt and Widmayer of this Handbook.

6.1. k-D trees and quadtrees

First we consider two rectilinear data structures, namely the k-D tree [5,8,7] and the quadtree [61,62]. We now briefly describe the two types of partitions used to build these structures. We discuss the partitioning for point sets in d dimensions. The method can be extended to other types of geometric objects in a straightforward way. In the case of the k-D tree,² we first compute the median of the point set in one of the dimensions, say D0, and partition the point set into two sets based on the median point, i.e., all points having coordinates less than the median point along D0 are placed in one set and the remaining points are placed in the other set. This process is then recursively continued along the other dimensions in the two resulting regions. Once the partitioning is completed along all dimensions, it is repeated starting from D0 in the resulting regions. If the number of points in a particular region falls below a certain constant, the process is terminated for that region. These regions, with boundaries parallel to the axes, are organized in the form of a tree, with the partitioning of a region into two smaller regions along an axis representing the parent-child relationship. In the case of the quadtree, the bounding box of the point set is partitioned into 2^d regions by using axis-parallel hyperplanes passing through the midpoint of each of the sides of the bounding box. The partitioning is continued recursively in each of the resulting regions until the number of points falls below a certain constant. These regions are then organized in the form of a multi-ary tree. The k-D tree and quadtree occupy linear space, and the performance of the search operations depends on the application, but is in general not optimal.

² The phrase means k-dimensional or multidimensional binary search tree, but we use d to denote the dimension to be consistent with other sections.


The search algorithms proceed by intersecting the search volume with the bounding box of the region at the root node and recursively searching the regions in the children nodes whose bounding boxes intersect the search volume.
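A minimal k-D tree sketch (illustrative names; the leaf capacity and the dimension are parameters) shows both the median-split construction and the search pattern just described.

```python
class KDNode:
    def __init__(self, axis, point=None, points=None, left=None, right=None):
        self.axis, self.point = axis, point   # splitting axis and median point
        self.points = points                  # bucket of points, leaves only
        self.left, self.right = left, right

def build_kd(points, depth=0, leaf_size=4, d=2):
    """Split at the median along axis = depth mod d, cycling through the
    dimensions; stop when a region holds at most leaf_size points."""
    if len(points) <= leaf_size:
        return KDNode(None, points=points)
    axis = depth % d
    points = sorted(points, key=lambda p: p[axis])
    m = len(points) // 2
    return KDNode(axis, point=points[m],
                  left=build_kd(points[:m], depth + 1, leaf_size, d),
                  right=build_kd(points[m:], depth + 1, leaf_size, d))

def range_search(node, lo, hi, out):
    """Report points inside the axis-parallel box [lo, hi], descending only
    into children whose region can intersect the query box."""
    if node.points is not None:               # leaf: filter its bucket
        out += [p for p in node.points
                if all(lo[j] <= p[j] <= hi[j] for j in range(len(lo)))]
        return
    a, split = node.axis, node.point[node.axis]
    if lo[a] < split:
        range_search(node.left, lo, hi, out)
    if hi[a] >= split:
        range_search(node.right, lo, hi, out)

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
found = []
range_search(build_kd(pts), (3, 0), (9, 5), found)   # axis-parallel box query
```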

6.2. Segment trees

Given a set of n segments in the plane, a segment tree [6] allows for efficient storage of and searching operations on the underlying set. The x-coordinates of the segment endpoints are projected onto the real line so as to partition the line into several intervals (if the endpoints are in general position, there will be 2n + 1 intervals). These intervals are then organized in the form of a tree structure. The intervals represent the leaf nodes of the tree. Each internal node represents the interval that is the union of all intervals in the leaf nodes of its subtree. We store a "cover list" of segments at each internal node (typically sorted by the "above" relationship). Formally, we say a segment covers a node u if it spans the interval at u but does not span the interval at the parent of u. One can show that a segment is stored in the cover lists of at most two nodes in each level, and hence in at most O(log n) different nodes, where n is the number of segments. A query operation, such as finding all the segments stabbed by a vertical query ray, can be answered by searching the cover list at each level while proceeding down the tree. The segment tree requires O(n log n) space and the vertical ray-intersection query can be answered in O(log n + k) time, where k is the output size.
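The sketch below builds a segment tree over the elementary intervals, inserts each segment into the O(log n) cover lists it spans, and answers a stabbing query by collecting cover lists along one root-to-leaf path. It assumes segment endpoints are drawn from the projected coordinate set and leaves out the aboveness ordering of the cover lists; names are illustrative.

```python
class SegTreeNode:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi    # x-interval [lo, hi] of this node
        self.cover = []              # segments covering this node
        self.left = self.right = None

def build(xs, lo, hi):
    """Build over the elementary intervals induced by sorted endpoints xs."""
    node = SegTreeNode(xs[lo], xs[hi])
    if hi - lo > 1:
        mid = (lo + hi) // 2
        node.left, node.right = build(xs, lo, mid), build(xs, mid, hi)
    return node

def insert(node, seg):
    """seg = (x1, x2, data), x1 <= x2: store seg at the nodes whose interval
    it spans but whose parent's interval it does not span."""
    x1, x2, _ = seg
    if x1 <= node.lo and node.hi <= x2:
        node.cover.append(seg)       # seg covers this node
    elif node.left is not None:
        if x1 < node.left.hi:
            insert(node.left, seg)
        if x2 > node.right.lo:
            insert(node.right, seg)

def stab(node, x, out):
    """Report every segment whose x-range contains x: collect the cover
    lists along the root-to-leaf search path for x."""
    out += node.cover
    if node.left is not None:
        stab(node.left if x < node.left.hi else node.right, x, out)

segs = [(1, 4, "a"), (2, 6, "b"), (5, 8, "c")]
xs = sorted({x for s in segs for x in s[:2]})
root = build(xs, 0, len(xs) - 1)
for s in segs:
    insert(root, s)
hits = []
stab(root, 3, hits)   # reports segments "a" and "b"
```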

6.3. Range trees

Range trees allow for efficient storage of point sets for rectangular range searching. Given a set of n points in the plane, for example, the 2-dimensional range tree is constructed as a compound tree structure. The primary tree is a balanced binary search tree on the x-coordinates of the points, so that each node in the primary tree represents an interval on the x-axis. We associate with each internal node all the points within its interval and organize those points in the form of a search tree, but ordered by their y-coordinates. To perform a range search, one first searches the primary tree and locates the two leaf intervals containing the bounding x-values of the query range. Then we walk up the tree to the least common ancestor of the two leaves, and along the paths we search the secondary structures stored in the nodes that are siblings (nodes not on the paths) for the points that are within the range in the y-axis. The space occupied by the range tree is O(n log n), since there are O(log n) levels, each storing O(n) points. The complexity of a range search is O(log^2 n + k), where k is the output size. This can be improved to O(log n + k) by using techniques like fractional cascading.
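The following Python sketch, with names of our choosing, shows the idea: sorted lists stand in for the secondary search trees, and the query decomposes the x-range into O(log n) canonical subtrees, searching only their secondary structures for the y-range.

    import bisect

    class RTNode:
        def __init__(self, pts):
            # pts arrive sorted by x; each node keeps its subtree's points
            # sorted by y (a sorted list stands in for the secondary tree).
            self.xmin, self.xmax = pts[0][0], pts[-1][0]
            self.by_y = sorted(pts, key=lambda p: p[1])
            self.ykeys = [p[1] for p in self.by_y]
            if len(pts) > 1:
                mid = len(pts) // 2
                self.left, self.right = RTNode(pts[:mid]), RTNode(pts[mid:])
            else:
                self.left = self.right = None

    def range_query(node, x1, x2, y1, y2, out):
        # Report points in [x1, x2] x [y1, y2] in O(log^2 n + k) time.
        if node is None or x2 < node.xmin or node.xmax < x1:
            return
        if x1 <= node.xmin and node.xmax <= x2:
            # The whole subtree lies inside the x-range: search only the
            # secondary structure for the y-range.
            lo = bisect.bisect_left(node.ykeys, y1)
            hi = bisect.bisect_right(node.ykeys, y2)
            out.extend(node.by_y[lo:hi])
            return
        range_query(node.left, x1, x2, y1, y2, out)
        range_query(node.right, x1, x2, y1, y2, out)

    tree = RTNode(sorted([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]))
    res = []
    range_query(tree, 3, 8, 1, 5, res)
    assert sorted(res) == [(5, 4), (7, 2), (8, 1)]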

7. General techniques

In this section we briefly outline some of the transformation and construction techniques used to improve the performance of searches and updates on a data structure.


7.1. Fractional cascading

As mentioned earlier, fractional cascading [16,17] is a very powerful data structure transformation technique that can improve the query performance of a data structure. We outlined the method in detail in Section 3.4. Given a graph-based data structure consisting of nodes of bounded degree, in which search operations proceed along a path in the graph and compare the information stored in each node of the path with the "same" key, one can improve the performance by augmenting the information stored in each node. This eliminates the need for an independent search at each node and makes the searches dependent. One important restriction is that all the information stored must come from the same universal set.
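As a small illustration of the idea on the simplest possible catalog graph, a path of sorted lists, the sketch below augments each list with every second element of its successor; a single binary search then locates the query in every list with O(1) additional work per list. The bridge pointers are built here with binary searches purely for brevity (a real implementation builds them during the merge), and all names are ours.

    import bisect

    class CascadeLists:
        def __init__(self, lists):
            self.lists = [sorted(l) for l in lists]
            k = len(self.lists)
            self.M = [None] * k     # augmented lists
            self.own = [None] * k   # own[i][j]: successor of M[i][j] in lists[i]
            self.nxt = [None] * k   # nxt[i][j]: successor of M[i][j] in M[i+1]
            self.M[k - 1] = list(self.lists[k - 1])
            for i in range(k - 2, -1, -1):
                sample = self.M[i + 1][1::2]        # every second element
                self.M[i] = sorted(self.lists[i] + sample)
            for i in range(k):
                self.own[i] = [bisect.bisect_left(self.lists[i], v)
                               for v in self.M[i]] + [len(self.lists[i])]
                if i + 1 < k:
                    self.nxt[i] = [bisect.bisect_left(self.M[i + 1], v)
                                   for v in self.M[i]] + [len(self.M[i + 1])]

        def successors(self, q):
            # One binary search in M[0]; every later list costs O(1) extra.
            res = []
            p = bisect.bisect_left(self.M[0], q)
            for i in range(len(self.lists)):
                j = self.own[i][p]
                res.append(self.lists[i][j] if j < len(self.lists[i]) else None)
                if i + 1 < len(self.lists):
                    p = self.nxt[i][p]
                    while p > 0 and self.M[i + 1][p - 1] >= q:
                        p -= 1  # the sampling guarantees O(1) backup steps
            return res

    c = CascadeLists([[2, 7, 9], [1, 3, 8], [4, 5, 6]])
    assert c.successors(4) == [7, 8, 4]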

7.2. Persistence

The data structures we study are normally ephemeral in nature, i.e., once updates are performed on a structure, the previous information is lost. A persistent data structure maintains information about its past versions. Structures that allow one to perform queries in past versions are called partially persistent structures. Sometimes one would also like to allow updates in past versions; this becomes quite complicated, as an update to one of the past versions creates a new chain of data structures. Such structures are referred to as fully persistent structures. We refer the reader to an excellent paper by Driscoll et al. [33] for complete details on how to make structures persistent. As shown in Section 3.4, persistent structures can maintain, in a simple way, the information computed during a plane sweep, which yields an efficient planar point location algorithm.
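A minimal way to obtain persistence for a linked tree is path copying, sketched below in Python: an update copies only the nodes on the search path and shares the rest with the older version. This is simpler, but less space-efficient, than the node-copying technique of Driscoll et al. [33]; all names are ours.

    class Node:
        __slots__ = ("key", "left", "right")
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def insert(root, key):
        # Persistent BST insertion by path copying: nodes on the search
        # path are copied, everything else is shared with the old version.
        if root is None:
            return Node(key)
        if key < root.key:
            return Node(root.key, insert(root.left, key), root.right)
        return Node(root.key, root.left, insert(root.right, key))

    def contains(root, key):
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    # Keep one root per version; querying an old version is just a search
    # from the corresponding root.
    versions = [None]
    for k in [5, 2, 8, 1]:
        versions.append(insert(versions[-1], k))
    assert contains(versions[4], 8) and not contains(versions[2], 8)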

7.3. Static to dynamic conversions

Saxe and Bentley [64,9] propose general techniques for converting static data structures into dynamic ones. They consider a class of problems called decomposable searching problems and present general techniques for dynamizing static data structures for such problems. Decomposable searching problems have the property that a query about the complete set of objects can be decomposed into queries involving subsets of the objects, whose results can be combined in a certain way to obtain the solution to the original query. Examples include membership querying, nearest neighbor querying, farthest point querying, and intersection querying. The transformation to online structures (i.e., ones that allow only insertions) maintains a collection of static structures of appropriate sizes and periodically merges them to build larger structures. The sizes and the times at which new structures are built are typically determined by geometric progressions, such as powers of two or the Fibonacci series. When deletions are allowed, they advocate the use of a shadow structure in which the deleted elements are maintained; one can then answer queries by searching both the shadow and the actual structures.
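For intuition, here is a Python sketch of the logarithmic method for one decomposable problem, membership: the set is kept as static sorted blocks whose sizes are distinct powers of two, an insertion merges blocks the way binary addition carries, and a query combines the per-block answers. The class name and the use of re-sorting instead of a linear merge are ours.

    import bisect

    class LogarithmicMethod:
        def __init__(self):
            self.blocks = []        # blocks[i] has size 2^i, or is None

        def insert(self, x):
            # Like binary addition: merge equal-size blocks and carry up.
            carry = [x]
            i = 0
            while True:
                if i == len(self.blocks):
                    self.blocks.append(None)
                if self.blocks[i] is None:
                    self.blocks[i] = sorted(carry)   # rebuild a static block
                    return
                carry = self.blocks[i] + carry
                self.blocks[i] = None
                i += 1

        def member(self, x):
            # Decomposability: the answer is the "or" over all blocks.
            for b in self.blocks:
                if b is not None:
                    j = bisect.bisect_left(b, x)
                    if j < len(b) and b[j] == x:
                        return True
            return False

    s = LogarithmicMethod()
    for v in [5, 3, 9, 1, 7]:
        s.insert(v)
    assert s.member(7) and not s.member(4)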


Overmars [53] introduced another class, called order decomposable set problems, and presented general techniques for their dynamization. This is a generalization of the method of Overmars and van Leeuwen [54] for the dynamic maintenance of the convex hull of a planar point set. The technique has applications in maintaining the contour of the maximal elements of a two-dimensional point set, maintaining the intersection of a set of halfplanes in the plane, etc.

7.4. Internal-memory to external-memory conversions

When the input data for a given problem is huge, one would like to design algorithms that optimize the number of I/O accesses. Because of the orders-of-magnitude difference in access times between disks and internal memory, algorithms dealing with large inputs should pay particular attention to the organization of the underlying data structure so as to minimize the number of disk accesses. The main task in the external-memory organization of a data structure is to determine which substructures share the same block or page on the disk. The blocking of the nodes of the internal-memory data structure is crucial, and different blocking schemes lead to different space requirements and different I/O performance of the query algorithms. Goodrich et al. [39] present external-memory techniques for solving computational geometry problems dealing with large inputs. They present four general techniques and show how these can be applied to obtain efficient external-memory algorithms for problems such as computing the pairwise intersections of orthogonal segments, constructing the 2-d and 3-d convex hull of a point set, answering batched range queries on points, answering point location queries in a planar subdivision, finding all nearest neighbors, etc. There are several other works related to external-memory computational geometry; we refer the reader to some of the recent papers on this topic [1,12,19,52,20]. In addition, we highlight a recent paper by Arge [2], in which he introduces a general technique, based on a data structure he calls the buffer tree, for converting certain types of internal-memory computations into efficient external-memory computations.

References

[1] L. Arge, External-storage data structures for plane-sweep algorithms, Technical Report RS-94-16, BRICS, Aarhus Univ., Denmark (1994).
[2] L. Arge, The buffer tree: A new technique for optimal I/O-algorithms, Proc. 4th Workshop Algorithms Data Struct., Lecture Notes in Comput. Sci. 955 (1995), 334-345.
[3] B.G. Baumgart, A polyhedron representation for computer vision, Proc. AFIPS Natl. Comput. Conf., Vol. 44 (1975), 589-596.
[4] S.W. Bent, D.D. Sleator and R.E. Tarjan, Biased search trees, SIAM J. Comput. 14 (1985), 545-568.
[5] J.L. Bentley, Multidimensional binary search trees used for associative searching, Comm. ACM 18 (9) (1975), 509-517.
[6] J.L. Bentley, Solutions to Klee's rectangle problems, Report ??, Carnegie-Mellon Univ., Pittsburgh, PA (1977).
[7] J.L. Bentley, Multidimensional binary search trees in database applications, IEEE Trans. Softw. Eng. SE-5 (1979), 333-340.
[8] J.L. Bentley, Multidimensional divide-and-conquer, Comm. ACM 23 (4) (1980), 214-229.
[9] J.L. Bentley and J.B. Saxe, Decomposable searching problems I: Static-to-dynamic transformation, J. Algorithms 1 (1980), 301-358.


[10] N. Blum and K. Mehlhorn, On the average number of rebalancing operations in weight-balanced trees, Theoret. Comput. Sci. 11 (1980), 303-320.
[11] E. Brisson, Representing geometric structures in d dimensions: Topology and order, Discrete Comput. Geom. 9 (1993), 387-426.
[12] P. Callahan, M.T. Goodrich and K. Ramaiyer, Topology B-trees and their applications, Proc. 4th Workshop Algorithms Data Struct., Lecture Notes in Comput. Sci. 955, Springer-Verlag (1995), 381-392.
[13] B. Chazelle, A theorem on polygon cutting with applications, Proc. 23rd Annu. IEEE Sympos. Found. Comput. Sci. (1982), 339-349.
[14] B. Chazelle, Triangulating a simple polygon in linear time, Discrete Comput. Geom. 6 (1991), 485-524.
[15] B. Chazelle, H. Edelsbrunner, M. Grigni, L. Guibas, J. Hershberger, M. Sharir and J. Snoeyink, Ray shooting in polygons using geodesic triangulations, Proc. 18th Internat. Colloq. Automata Lang. Program., Lecture Notes in Comput. Sci. 510, Springer-Verlag (1991), 661-673.
[16] B. Chazelle and L.J. Guibas, Fractional cascading: I. A data structuring technique, Algorithmica 1 (1986), 133-162.
[17] B. Chazelle and L.J. Guibas, Fractional cascading: II. Applications, Algorithmica 1 (1986), 163-191.
[18] S.W. Cheng and R. Janardan, New results on dynamic planar point location, SIAM J. Comput. 21 (1992), 972-999.
[19] Y.-J. Chiang, Experiments on the practical I/O efficiency of geometric algorithms: Distribution sweep vs. plane sweep, Proc. 4th Workshop Algorithms Data Struct., Lecture Notes in Comput. Sci. 955, Springer-Verlag (1995), 346-357.
[20] Y.-J. Chiang, M.T. Goodrich, E.F. Grove, R. Tamassia, D.E. Vengroff and J.S. Vitter, External-memory graph algorithms, Proc. 6th ACM-SIAM Sympos. Discrete Algorithms (1995), 139-149.
[21] Y.-J. Chiang and R. Tamassia, Dynamization of the trapezoid method for planar point location in monotone subdivisions, Internat. J. Comput. Geom. Appl. 2 (3) (1992), 311-333.
[22] R. Cole, Searching and storing similar lists, J. Algorithms 7 (1986), 202-220.
[23] N. Dadoun and D.G. Kirkpatrick, Parallel processing for efficient subdivision search, Proc. 3rd Annu. ACM Sympos. Comput. Geom. (1987), 205-214.
[24] N. Dadoun and D.G. Kirkpatrick, Cooperative subdivision search algorithms with applications, Proc. 27th Allerton Conf. Commun. Control Comput. (1989), 538-547.
[25] N. Dadoun and D.G. Kirkpatrick, Parallel construction of subdivision hierarchies, J. Comput. Syst. Sci. 39 (1989), 153-165.
[26] M. de Berg, Ray Shooting, Depth Orders and Hidden Surface Removal, Lecture Notes in Comput. Sci. 703, Springer-Verlag, Berlin, Germany (1993).
[27] P.F. Dietz and R. Raman, Persistence, amortization and randomization, Proc. 2nd ACM-SIAM Sympos. Discrete Algorithms (1991), 78-88.
[28] D.P. Dobkin and D.G. Kirkpatrick, A linear algorithm for determining the separation of convex polyhedra, J. Algorithms 6 (1985), 381-392.
[29] D.P. Dobkin and D.G. Kirkpatrick, Determining the separation of preprocessed polyhedra - a unified approach, Proc. 17th Internat. Colloq. Automata Lang. Program., Lecture Notes in Comput. Sci. 443, Springer-Verlag (1990), 400-413.
[30] D.P. Dobkin and M.J. Laszlo, Primitives for the manipulation of three-dimensional subdivisions, Algorithmica 4 (1989), 3-32.
[31] D.P. Dobkin and R.J. Lipton, The complexity of searching lines in the plane, Technical Report, Dept. Comput. Sci., Yale Univ., New Haven, CT (1976).
[32] D.P. Dobkin and R.J. Lipton, Multidimensional searching problems, SIAM J. Comput. 5 (1976), 181-186.
[33] J.R. Driscoll, N. Sarnak, D.D. Sleator and R.E. Tarjan, Making data structures persistent, J. Comput. Syst. Sci. 38 (1989), 86-124.
[34] H. Edelsbrunner, A new approach to rectangle intersections, Part I, Internat. J. Comput. Math. 13 (1983), 209-219.
[35] H. Edelsbrunner, Algorithms in Combinatorial Geometry, EATCS Monographs on Theoretical Computer Science 10, Springer-Verlag, Heidelberg, West Germany (1987).
[36] H. Edelsbrunner, L.J. Guibas and J. Stolfi, Optimal point location in a monotone subdivision, SIAM J. Comput. 15 (1986), 317-340.
[37] I. Fáry, On straight lines representation of planar graphs, Acta Sci. Math. Szeged. 11 (1948), 229-233.


[38] M. Goodrich and R. Tamassia, Dynamic trees and dynamic point location, Proc. 23rd Annu. ACM Sympos. Theory Comput. (1991), 523-533.
[39] M.T. Goodrich, J.-J. Tsay, D.E. Vengroff and J.S. Vitter, External-memory computational geometry, Proc. 34th Annu. IEEE Sympos. Found. Comput. Sci. (FOCS 93) (1993), 714-723.
[40] L.J. Guibas, J. Hershberger, D. Leven, M. Sharir and R.E. Tarjan, Linear-time algorithms for visibility and shortest path problems inside triangulated simple polygons, Algorithmica 2 (1987), 209-233.
[41] L.J. Guibas and J. Stolfi, Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams, ACM Trans. Graph. 4 (1985), 74-123.
[42] F. Harary, Graph Theory, Addison-Wesley, Reading, MA (1972).
[43] J. Hershberger and S. Suri, Offline maintenance of planar configurations, Proc. 2nd ACM-SIAM Sympos. Discrete Algorithms (1991), 32-41.
[44] D.G. Kirkpatrick, Optimal search in planar subdivisions, SIAM J. Comput. 12 (1983), 28-35.
[45] D.T. Lee and F.P. Preparata, Location of a point in a planar subdivision and its applications, SIAM J. Comput. 6 (1977), 594-606.
[46] G.S. Lueker and D.E. Willard, A data structure for dynamic range searching, Inform. Process. Lett. 15 (1982), 209-213.
[47] E.M. McCreight, Priority search trees, SIAM J. Comput. 14 (1985), 257-276.
[48] K. Mehlhorn, Sorting and Searching, Data Structures and Algorithms, Vol. 1, Springer-Verlag, Heidelberg, West Germany (1984).
[49] K. Mehlhorn and S. Näher, Dynamic fractional cascading, Algorithmica 5 (1990), 215-241.
[50] D.E. Muller and F.P. Preparata, Finding the intersection of two convex polyhedra, Theoret. Comput. Sci. 7 (1978), 217-236.
[51] J. Nievergelt and E. Reingold, Binary search trees of bounded balance, SIAM J. Comput. 2 (1973), 33-43.
[52] M.H. Nodine, M.T. Goodrich and J.S. Vitter, Blocking for external graph searching, Proc. 12th Annu. ACM Sympos. Principles Database Syst. (PODS '93) (1993), 222-232.
[53] M.H. Overmars, Dynamization of order decomposable set problems, J. Algorithms 2 (1981), 245-260. Corrigendum in 4 (1983), 301.
[54] M.H. Overmars and J. van Leeuwen, Maintenance of configurations in the plane, J. Comput. Syst. Sci. 23 (1981), 166-204.
[55] F.P. Preparata, An optimal real-time algorithm for planar convex hulls, Comm. ACM 22 (1979), 402-405.
[56] F.P. Preparata, A new approach to planar point location, SIAM J. Comput. 10 (1981), 473-482.
[57] F.P. Preparata and M.I. Shamos, Computational Geometry: An Introduction, Springer-Verlag, New York, NY (1985).
[58] F.P. Preparata and R. Tamassia, Fully dynamic point location in a monotone subdivision, SIAM J. Comput. 18 (1989), 811-830.
[59] F.P. Preparata and R. Tamassia, Dynamic planar point location with optimal query time, Theoret. Comput. Sci. 74 (1990), 95-114.
[60] J.-R. Sack and J. Urrutia, eds, Handbook of Computational Geometry, Elsevier Science B.V., Amsterdam, The Netherlands (1999).
[61] H. Samet, Applications of Spatial Data Structures, Addison-Wesley, Reading, MA (1990).
[62] H. Samet, Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS, Addison-Wesley (1990).
[63] N. Sarnak and R.E. Tarjan, Planar point location using persistent search trees, Comm. ACM 29 (1986), 669-679.
[64] J.B. Saxe and J.L. Bentley, Transforming static data structures to dynamic structures, Proc. 20th Annu. IEEE Sympos. Found. Comput. Sci. (1979), 148-168.
[65] D.D. Sleator and R.E. Tarjan, A data structure for dynamic trees, J. Comput. Syst. Sci. 26 (3) (1983), 362-381.
[66] D.E. Willard and G.S. Lueker, Adding range restriction capability to dynamic data structures, J. ACM 32 (1985), 597-617.


CHAPTER 11

Polygon Decomposition

J. Mark Keil

Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada, S7N 5A9

Contents
1. Introduction
2. Decomposing general polygons
   2.1. Polygon partitioning
   2.2. Polygon covering
3. Orthogonal polygons
   3.1. Partitioning orthogonal polygons
   3.2. Covering orthogonal polygons
References




1. Introduction

Practitioners frequently use polygons to model objects in applications where geometry is important. In polygon decomposition we represent a polygon as the union of a number of simpler component parts. Polygon decomposition has many theoretical and practical applications and has received attention in several previous surveys [26,68,108,129,134].

Pattern recognition is one area that uses polygon decomposition as a tool [41,111-113,134]. Pattern recognition techniques extract information from an object in order to describe, identify or classify it. An established strategy for recognizing a general polygonal object is to decompose it into simpler components, then identify the components and their interrelationships, and use this information to determine the shape of the object [41,112].

Polygon decomposition is also useful in problems arising in VLSI artwork data processing. Layouts are represented as polygons, and one approach to preparation for electron-beam lithography is to decompose these polygonal regions into fundamental figures [6,42,101,103]. Polygon decomposition is also used in the process of dividing the routing region into channels [83].

In computational geometry, algorithms for problems on general polygons are often more complex than those for restricted types of polygons, such as convex or star-shaped ones. The point inclusion problem is one example [115]; for other examples see [4] or [109]. A strategy for solving some of these types of problems on general polygons is to decompose the polygon into simple component parts, solve the problem on each component using a specialized algorithm, and then combine the partial solutions. Other applications of polygon decomposition include data compression [93], database systems [91], image processing [98], and computer graphics [132].

Although much work has been done on decomposing polyhedra in three or higher dimensions [17,10], we restrict the scope of this survey to the decomposition of polygons in the plane. Triangulation, the partitioning of the interior of a polygon into triangles, is a central problem in computational geometry, and many algorithms for polygons begin by triangulating the polygon. As early as 1978, Garey et al. [45] provided an O(n log n) time algorithm, but no matching lower bound was known. The importance of the problem led to a significant amount of research on algorithms [39,133], culminating in Chazelle's linear-time algorithm [27]. Although it certainly is an example of a polygon decomposition problem, the triangulation problem has taken on a life of its own, and we consider a systematic study of the triangulation problem, as well as related mesh generation work, to be outside the scope of this survey. For a good survey of mesh generation and optimal triangulation see [17] or Chapter 6.

There is a wide variety of types of component subpolygons that are useful for polygon decomposition. These include subpolygons that are convex, star-shaped, spiral or monotone, as well as fixed shapes such as squares, rectangles and trapezoids. Before proceeding further we provide definitions for some of these restricted types of polygons. A point x in a polygon P is visible from a point y in P if the line segment joining x and y lies entirely inside P. We treat a polygon as a closed set; thus a visibility line may touch the boundary of P. A polygon P is convex if every pair of points in P are visible from each other.


Fig. 1. (a) A star-shaped polygon, (b) a spiral polygon, and (c) a polygon monotone with respect to the y-axis.

A polygon P is star-shaped if there exists at least one point x inside P from which the entire polygon is visible. The set of all points in P from which P is visible is called the kernel of P. A polygonal chain in a polygon P is a sequence of consecutive vertices of P. A spiral polygon is a polygon whose boundary chain contains precisely one concave subchain.


Fig. 2. A polygon with 9 vertices and 3 reflex vertices partitioned into convex subpolygons (a) with a Steiner point and (b) without Steiner points.

A polygonal chain is monotone with respect to a line l if the projections of the vertices of the chain onto l occur in exactly the same order as the vertices appear in the chain. A polygon P is monotone if there exists a line l such that the boundary of P can be partitioned into two polygonal chains which are monotone with respect to l. See Figure 1.

It is also useful to classify the type of polygon that is being decomposed. Polygons may be simply connected, or they may contain holes. Holes are nonoverlapping "island" simple polygons inside the main polygon; some authors also allow degenerate holes such as line segments or points. The complexity of a decomposition problem usually increases if the polygon contains holes.
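As a concrete illustration of the definition, the small Python sketch below (names are ours, and ties in x are handled loosely) tests whether a polygon is monotone with respect to the x-axis by checking that the x-direction of the boundary reverses exactly twice around the cycle, splitting it into one increasing and one decreasing chain.

    def is_x_monotone(polygon):
        # Collect the x-direction of each non-vertical edge around the cycle.
        n = len(polygon)
        dirs = []
        for i in range(n):
            dx = polygon[(i + 1) % n][0] - polygon[i][0]
            if dx != 0:
                dirs.append(1 if dx > 0 else -1)
        # Count cyclic sign changes; a monotone boundary has exactly two.
        reversals = sum(1 for i in range(len(dirs)) if dirs[i] != dirs[i - 1])
        return reversals <= 2

    # The quadrilateral below is x-monotone; a plus-shaped polygon is not.
    assert is_x_monotone([(0, 0), (2, 1), (3, 3), (1, 2)])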


A polygon is said to be orthogonal if all of its sides are either horizontal or vertical. Orthogonal polygons are relevant in many applications, and in this survey special emphasis is placed on the decomposition of orthogonal polygons.

Polygon decompositions are also classified according to how the component parts interrelate. A decomposition is called a partition if the component subpolygons do not overlap except at their boundaries. If generally overlapping pieces are allowed, we call the decomposition a cover. Decomposing a polygon into simpler components can be done with or without introducing additional vertices, which are commonly called Steiner points. While the use of Steiner points makes subsequent processing of the decomposed polygon more complex, it often allows the use of fewer component parts. See Figure 2.

In a polygon with n vertices, the interior angle is reflex (greater than 180°) at some number N of the vertices. The number N of reflex vertices of a polygon can be much smaller than n, and we analyze the complexity of decomposition algorithms with respect to both n and N, as illustrated in the code sketch at the end of this introduction. See Figure 2.

In most applications we want a decomposition that is minimal in some sense. Some applications seek to decompose the polygon into the minimum number of components of some type. Other applications use a decomposition that minimizes the total length of the internal edges used to form the decomposition (minimum "ink"). Perhaps the earliest minimum "ink" result is due to Klincsek [70], who uses dynamic programming to find the minimum "ink" triangulation of a polygon. His work was influential in that it inspired subsequent dynamic programming solutions to decomposition problems. As the example of Figure 6 shows, a minimum edge length decomposition can be quite different from a minimum number decomposition for the same component type.

In the next section we review the work that has been done on partitioning and covering general polygons. In Section 3 we turn our attention to orthogonal polygons and consider the work done on decomposing them.
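Since both n and N appear in the bounds below, the following Python sketch (our names) makes the definition of N concrete: on a counterclockwise boundary, a vertex is reflex exactly when the boundary turns right there, which a cross product detects.

    def reflex_vertices(polygon):
        # polygon: list of (x, y) vertices in counterclockwise order.
        n, reflex = len(polygon), []
        for i in range(n):
            (ax, ay), (bx, by), (cx, cy) = (
                polygon[i - 1], polygon[i], polygon[(i + 1) % n]
            )
            cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
            if cross < 0:          # right turn on a CCW boundary => reflex
                reflex.append(polygon[i])
        return reflex

    # An L-shaped polygon has N = 1 reflex vertex (at (1, 1) below).
    L = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
    assert reflex_vertices(L) == [(1, 1)]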

2. Decomposing general polygons

In this section we consider both partitioning and covering problems for general polygons.

2.1. Polygon partitioning

When partitioning a polygon into simpler subpolygons, it is the application that determines the type of subpolygon to be used. Syntactic pattern recognition uses convex, spiral and star-shaped decompositions [41,111,113,124,9,134]. VLSI applications use trapezoids [6]. In the rest of this section we consider each of these types of subpolygons in turn.

Convex subpolygons. When the polygon may contain holes, the problem of partitioning a polygon into the minimum number of convex components is NP-hard [80], whether Steiner points are allowed or not. For polygons without holes, much of the work done disallows Steiner points.


Fig. 3. An X_k-pattern.

A 1975 algorithm, due to Feng and Pavlidis [41], runs in O(N^3 n) time but does not generally yield a minimum decomposition. Schachter's 1978 partitioning algorithm [124], which runs in O(nN) time, also cannot guarantee a minimum number of components.

For polygons without holes, disallowing Steiner points, several approximation algorithms provide results guaranteed to be close to optimum. In 1982 Chazelle [25] provided an O(n log n) time algorithm that finds a partition containing fewer than 4 1/3 times the optimal number of components. Later, Greene [55], and Hertel and Mehlhorn [61], provided O(n log n) time algorithms that find a partition containing at most four times the optimal number of components. Note that, for polygons without degenerate holes, any convex partition that does not contain unnecessary edges will be within four times the size of an optimal partition. This is true because each added edge can eliminate at most two reflex vertices, and each reflex vertex requires at most two edges to eliminate it in any convex partition that does not contain unnecessary edges.

The year 1983 saw the achievement of algorithms obtaining the optimal number of convex components. Disallowing Steiner points, Greene [55] developed an O(N^2 n^2) time algorithm for partitioning a polygon into the minimum number of convex components. Independently, Keil [64,65] developed an O(N^2 n log n) time algorithm for the problem. Recently, Keil and Snoeyink [69] have achieved an O(n + N^2 min{N^2, n}) time algorithm.

Allowing Steiner points makes the problem quite different, as there are an infinite number of possible locations for Steiner points. Nevertheless, as early as 1979 Chazelle and Dobkin [24,28,29] developed an O(n + N^3) time algorithm for the problem of partitioning a polygon into the minimum number of convex components. They define an X_k-pattern to be a particular interconnection of k reflex vertices which removes all the reflex angles at these vertices and creates no new reflex angles; see Figure 3. They achieve their algorithm by further developing this idea within a dynamic programming framework.

Dobkin et al. [37] show how to extend the existing algorithms for decomposing a polygon without holes into convex components to optimally partition a splinegon into convex components (with or without Steiner points). A splinegon is a polygon whose edges have been replaced by "well behaved curves" [37].

Partitioning a polygon with holes into convex components remains hard under the minimum edge length criterion.


With Steiner points, Lingas et al. [83] show the problem to be NP-hard; disallowing Steiner points, Keil [64] shows the problem to be NP-complete. Allowing Steiner points, Levcopoulos and Lingas [77] develop approximation algorithms for the problem. For polygons without holes, they have an O(n log n) time algorithm that yields a solution of size O(p log N), where p is the length of the perimeter of the polygon. For polygons with holes, they have an O(n log n) time algorithm that produces a convex partition of size O((b + m) log N), where b is the total length of the boundary of the polygon and the holes, and m is the minimum length of a convex partition. No optimal algorithms for the problem are known when Steiner points are allowed.

For a convex polygon with point holes, without the use of Steiner points, Plaisted and Hong [114] give a polynomial time algorithm for partitioning into convex subpolygons such that the total edge length is within 12 times the minimum amount required. For this problem Levcopoulos and Krznaric [76] give a greedy-type O(n log n) time algorithm that also yields a solution within a constant factor of optimal.

The year 1983 also saw the achievement of optimal algorithms under the minimum edge length criterion. For polygons without holes, disallowing Steiner points, Keil [64] develops an O(N^2 n^2 log n) time dynamic programming algorithm for the problem of partitioning a polygon into convex subpolygons while minimizing the total internal edge length. Independently, Greene noticed that his algorithm for the convex minimum number problem [55] can be adapted to yield an O(N^2 n^2) time algorithm for the convex minimum edge length problem.

Spiral subpolygons. Recall that a spiral polygon is a simple polygon whose boundary chain contains precisely one concave subchain. Keil [64] shows that the problem of partitioning a polygon with holes into the minimum number of spiral components is NP-complete when Steiner points are disallowed. For polygons without holes, again disallowing Steiner points, Feng and Pavlidis [41] provide a polynomial time algorithm for the problem that does not generally yield the minimum number of components. Keil [65] provides an O(n^2 log n) time algorithm to partition a polygon without holes, disallowing Steiner points, into the minimum number of spiral components. He also provides an O(n^4 log n) time algorithm for the same problem under the minimum edge length optimality criterion [64]. No results are known concerning the partitioning of polygons into spiral components when Steiner points are allowed.

Star-shaped subpolygons. Steiner points are disallowed in most of the known results concerning star-shaped partitioning. Again we see the hardness of decomposing a polygon with holes, as Keil [64] shows that the problem of partitioning a polygon with holes into the minimum number of star-shaped components is NP-complete. In 1981, for polygons without holes, Avis and Toussaint [9] gave an O(n log n) time algorithm that partitions a polygon into at most n/3 star-shaped components. This algorithm does not generally yield a minimum partition. In 1984 Aggarwal and Chazelle [2] were able to partition a polygon into n/3 components in O(n) time.

In order to achieve a partition into the minimum number of star-shaped components, in 1983 Keil employed dynamic programming to develop an O(n^5 N^2 log n) time algorithm [65]. The idea is to extend the solutions for small subpolygons into solutions for larger subpolygons. In general, however, there can be an exponential number of minimum star-shaped partitions of a subpolygon.


Furthermore, there are situations where no minimum partition of a subpolygon can be extended into a global minimum partition. The solution is to introduce pseudo star-shaped polygons. A pseudo star-shaped subpolygon has the property that there exists a point x in the polygon, but not in the subpolygon, from which every point in the subpolygon can be seen. The algorithm proceeds by keeping one star-shaped or pseudo star-shaped minimum partition for each of a number of equivalence classes of partitions at each subpolygon.

Shapira and Rappoport [126] make use of a form of star-shaped partition in a new method for the computer animation task of shape blending. They seek a partition into the minimum number of star-shaped components, each of whose kernels contains a vertex of the polygon. When such a partition exists, they compute it using a restriction of Keil's algorithm [64]. Since such a partition does not always exist, they also provide a heuristic which allows Steiner points.

For the problem of partitioning a polygon into star-shaped components while minimizing the total internal edge length, Keil [64] provides a polynomial time algorithm.

Monotone subpolygons. Recall that a polygon P is monotone if there exists a line l such that the boundary of P can be partitioned into two polygonal chains, each of which is monotone with respect to l. For a polygon with holes, disallowing Steiner points, Keil [64] shows that the problem of partitioning a polygon into the minimum number of monotone subpolygons is NP-complete. For a polygon without holes, Keil [64] develops an O(Nn^2) time algorithm for the problem. The algorithm relies on the fact that there are only a polynomial number of preferred directions with respect to which a subpolygon can be monotone. If a minimum partition is not important, Garey et al. [45] provide an O(n log n) time algorithm. If all of the subpolygons in a partition are monotone with respect to the same line, then the partition is a decomposition into uniformly monotone components. Liu and Ntafos [89] give algorithms for partitioning a polygon without holes into the minimum number of uniformly monotone subpolygons, one that does not use Steiner points and one that does allow Steiner points. For the problem of partitioning a polygon into monotone components while minimizing the total internal edge length, Keil [64] gives an O(Nn^2) time algorithm.

Other subpolygons. The problem of partitioning a polygonal region into the minimum number of trapezoids with two horizontal sides arises in VLSI artwork processing systems [6]. A triangle with a horizontal side is considered to be a trapezoid with two horizontal sides, one of which is degenerate. See Figure 4. In such systems the layout is stored as a set of polygonal regions which must be partitioned into fundamental figures, since the aperture of a pattern generator is restricted; trapezoids have been used as fundamental figures. Asano et al. [6] develop an O(n^3) time algorithm, based on circle graphs, for the problem when the polygon does not contain holes. If a minimum partition is not important, Chazelle is able to partition a polygon into trapezoids in O(n) time as a by-product of his linear time triangulation algorithm [27]. In the case where the polygon does contain holes, Asano et al. [6] show the problem to be NP-complete, and they provide an O(n log n) time approximation algorithm that finds a partition containing not more than three times the number of trapezoids in a minimum partition.


Fig. 4. A partition into trapezoids.

Everett et al. [40] consider the problem of partitioning a polygon into convex quadrilaterals. They use Steiner points, and give an O(n) time algorithm that is not guaranteed to produce the minimum number of components. Another O(n) time algorithm for this problem, which limits the number of Steiner points, is given in [117]. It is not always possible to partition a polygon into convex quadrilaterals without adding Steiner points, and Lubiw [92] shows that the problem of deciding whether or not a partition without Steiner points is possible is NP-complete. Algorithms for partitioning convex polygons with point holes into quadrilaterals are given in [135,18]. Levcopoulos et al. [82,79] provide algorithms for partitioning some types of polygons into m-gons under the minimum edge length optimization criterion.

2.2. Polygon covering

Much of the work done on covering general polygons has involved convex or star-shaped components.

Convex subpolygons. The problem of covering a polygon with the minimum number of convex subpolygons finds application in syntactic pattern recognition [41,111,113,112,110], for example in the recognition of Chinese characters.


In 1982 O'Rourke was one of the first to investigate the complexity of this problem. He showed that, although it is difficult to restrict the possible locations of Steiner points [106], the problem is nevertheless decidable [105,104]. For polygons with holes, O'Rourke and Supowit show that the problem is NP-hard [110], with or without Steiner points, and for this problem O'Rourke [107] provides an algorithm which runs in exponential time. Several years later, in sharp contrast to the partitioning situation, Culberson and Reckhow showed that even if the polygon does not contain holes, the problem of covering a polygon with the minimum number of convex components remains NP-hard [35].

The difficulty of the problem motivates the consideration of covering a polygon with a fixed number of convex subpolygons. Shermer [130] provides a linear time algorithm for recognizing polygons that can be covered with two convex subpolygons. Belleville provides a linear time algorithm for recognizing polygons that can be covered with three convex subpolygons [13,14].

A more general type of polygon decomposition allows set difference, as well as union, as an operator to apply to the components. This additional operator may allow for a smaller number of component pieces. Batchelor [12] investigates a procedural approach to convex sum/difference decompositions. This type of decomposition has been applied to the automatic transformation of sequential programs for efficient execution on parallel computers [95]. Also, Tor and Middleditch [132] give an O(n^2) time algorithm for finding a convex sum/difference decomposition that does not necessarily use the minimum number of components.

Star-shaped subpolygons. The problem of covering a polygon with star-shaped subpolygons has often been investigated as the problem of guarding an art gallery [9,108,129]: the region visible from a guard is a star-shaped subpolygon, and the polygon models the art gallery. Knowledge of this problem helps with the understanding of visibility problems within polygons.

General satisfactory solutions are not known for the minimum star-shaped covering problem. In 1983, for polygons with holes, O'Rourke and Supowit showed the problem to be NP-hard [110]. Later Lee and Lin [71] showed that the problem remains NP-hard even without holes if the kernel of each star-shaped subpolygon must contain a vertex. Aggarwal [1] then showed that the unrestricted problem is NP-hard for polygons without holes. More recently, other variations of the problem have been shown to be NP-hard by Hecker and Herwig [59], and by Nilsson [102].

In 1987 Ghosh [48] developed a polynomial time approximation algorithm that finds a cover within a factor of O(log n) of optimal if the kernels of the subpolygons are restricted to contain vertices. His algorithm works whether or not the polygon contains holes. In 1988 Aggarwal et al. [3] considered a restricted problem, for polygons without holes, where subpolygon sides must be contained in edges, edge extensions, or segments of lines passing through two vertices of the polygon. For this restricted problem they developed a polynomial time approximation algorithm that produces a cover within a factor of O(log n) of optimal; they also showed that the restricted problem remains NP-hard.

Belleville [15] investigates the problem of recognizing polygons that can be covered by two star-shaped subpolygons and gives a polynomial time algorithm for recognizing such polygons.


Shermer [128] contributes to knowledge of related problems by giving bounds on the number of generalized star-shaped components required in a generalized cover.

Other subpolygons. Spiral polygons and rectangles are two other types of component subpolygons that have been used to cover a polygon. For polygons with holes, O'Rourke and Supowit [110] show that covering with the minimum number of spiral subpolygons is NP-hard. Levcopoulos and Lingas [78] consider covering acute polygons, whose interior angles are all greater than 90°, by rectangles. They show that for convex polygons the minimum number of rectangles needed in a cover is O(n log(r(P))), where r(P) is the ratio of the length of the longest edge of the polygon to the length of the shortest edge. Later Levcopoulos [72,75] extended this result and gave an algorithm that covers such an acute polygon with O(n log n + m(P)) rectangles in O(n log n + m(P)) time, where m(P) is the number of rectangles in an optimal cover.

3. Orthogonal polygons

In this section we turn our attention to the problem of decomposing orthogonal polygons. An orthogonal polygon is a polygon whose edges are either horizontal or vertical; orthogonal polygons are also referred to as rectilinear polygons. They arise in applications, such as image processing and VLSI design, where a polygon is stored relative to an implicit grid. The set of orthogonal polygons is a subset of the set of all polygons; thus any polynomial time algorithm developed for general polygons applies to orthogonal polygons, but problems NP-complete for general polygons may become tractable when restricted to orthogonal polygons. There are also natural subpolygons for orthogonal polygons, such as axis-aligned rectangles or squares, that are less relevant to general polygons. In the next two subsections we treat partitioning and covering problems for orthogonal polygons.

3.1. Partitioning orthogonal polygons

Rectangles are the most important type of component to consider in relation to orthogonal polygons, and the problem of partitioning orthogonal polygons into axis-aligned rectangles has many applications. Image processing is often more efficient when the image is rectangular. For example, Ferrari et al. [42] indicate that convolving an image with a point spread function can be made particularly efficient by specifying the nonnegative values of the point spread function over a rectangular domain and requiring the function to be zero outside that domain. They suggest handling a nonrectangular orthogonal image by partitioning it into the minimum number of rectangular subregions.

In VLSI design, two variations of the problem arise. The first occurs in optimal automated VLSI mask fabrication [84,101,103]. In mask generation a figure is usually engraved on a piece of glass using a pattern generator.


Fig. 5. Horizontal and vertical chords between reflex vertices.

A traditional pattern generator has a rectangular opening; thus the figure must be partitioned into rectangles so that the pattern generator can expose each such rectangle. The entire figure can be viewed as an orthogonal polygon, and since the time required for mask generation depends on the number of rectangles, the problem of partitioning an orthogonal polygon into the minimum number of rectangles becomes relevant.

Another VLSI design problem is that of dividing the routing region into channels. Lingas et al. [83] suggest that partitioning the orthogonal routing region into rectangles, while minimizing the total length of the lines used to form the decomposition, will produce large "natural-looking" channels with a minimum of channel-to-channel interaction. Thus the minimum "ink" criterion is also relevant. Other application areas for the problem of partitioning orthogonal polygons into the minimum number of rectangles include database systems [91] and computer graphics [54].

At this point we should note that the use of Steiner points is inherent in the solution of the problem of partitioning into rectangles. For example, for an orthogonal polygon with one reflex vertex, a partition can be formed by adding a horizontal line segment from the reflex vertex to the polygon boundary. In fact, a generalization of this idea forms the basis for most partitioning algorithms. The following theorem [88,103,42] expresses this. See Figure 5.

THEOREM 1. An orthogonal polygon can be minimally partitioned into N - L - H + 1 rectangles, where N is the number of reflex vertices, H is the number of holes and L is the maximum number of nonintersecting chords that can be drawn either horizontally or vertically between reflex vertices.
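The quantity L can be computed graph-theoretically: the horizontal and vertical chords form a bipartite intersection graph, and a maximum set of pairwise nonintersecting chords is a maximum independent set in that graph, which by König's theorem equals the number of chords minus a maximum matching. The Python sketch below (our names; chord extraction and the actual cutting are omitted) evaluates the bound of Theorem 1 this way.

    def min_rectangle_partition_size(h_chords, v_chords, crosses, N, H):
        # Bipartite intersection graph: horizontal chords on one side,
        # vertical chords on the other, adjacent when they intersect.
        adj = {h: [v for v in range(len(v_chords))
                   if crosses(h_chords[h], v_chords[v])]
               for h in range(len(h_chords))}
        match = {}                 # v-chord index -> h-chord index

        def augment(h, seen):
            # Kuhn's augmenting-path search for maximum bipartite matching.
            for v in adj[h]:
                if v not in seen:
                    seen.add(v)
                    if v not in match or augment(match[v], seen):
                        match[v] = h
                        return True
            return False

        matching = sum(augment(h, set()) for h in range(len(h_chords)))
        # Koenig's theorem: maximum independent set = vertices - matching.
        L = len(h_chords) + len(v_chords) - matching
        return N - L - H + 1

    # Example: two horizontal chords, both crossed by one vertical chord;
    # the lambda stands in for a real segment-intersection test.
    crosses = lambda h, v: True
    assert min_rectangle_partition_size(["h1", "h2"], ["v1"], crosses, 6, 0) == 5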


The theorem implies that a key step in the decomposition problem is finding a maximum independent set in the intersection graph of the vertical and horizontal chords between reflex vertices. This problem can in turn be solved by finding a maximum matching in a bipartite graph. In 1979 Lipski et al. [88] exploited this approach to develop an O(n^{5/2}) time algorithm for partitioning orthogonal polygons with holes; algorithms for the same problem running within the same time bound were also developed in [103] and [42]. In the early 1980s the special structure of the bipartite graph involved allowed the development of improved algorithms for the problem, running in O(n^{3/2} log n) time [62,86,87]. These algorithms have recently been extended by Soltan and Gorpinevich [131] to run in the same time bound even if the holes degenerate to points. It is open whether or not faster algorithms can be developed; the only known lower bound for the problem with holes is Ω(n log n) [84].

If the polygons do not contain holes, then faster algorithms are possible. In 1983 Gourley and Green [54] developed an O(n^2) time algorithm that partitions an orthogonal polygon without holes into within 3/2 of the minimum number of rectangles. In 1988 Nahar and Sahni [101] also developed an algorithm that partitions into less than 3/2 of the minimum number of rectangles, but their algorithm runs in O(n log n) time. Finally, in 1989, Liou et al. [84] produced an O(n) time algorithm to optimally partition an orthogonal polygon without holes into the minimum number of rectangles; the O(n) time is achieved assuming that the polygon is first triangulated using Chazelle's linear time triangulation algorithm. Note that the three-dimensional version of the problem is NP-complete [36].

Minimizing the total length of the line segments introduced in the partitioning process is the other optimization criterion that arises in the applications. See Figure 6. Lingas et al. [83] were the first to investigate this criterion. They present an O(n^4) time algorithm for the problem of partitioning an orthogonal polygon without holes into rectangles using the minimum amount of "ink". If the polygon contains holes, they show that the problem becomes NP-complete.

In applications holes do occur, and thus the search was on for approximation algorithms for the problem. The first algorithm of this type was given by Lingas [81]: in 1983 he presented a polynomial time algorithm to partition an orthogonal polygon with holes into rectangles such that the amount of "ink" used is within a constant factor of the minimum amount possible. Unfortunately, the constant for this algorithm is large (41). In 1986 Levcopoulos [74] was able to reduce the constant to five while also producing a faster algorithm, and in the same year [73] he further reduced the time to O(n log n), but at the expense of a large increase in the size of the constant.

The restriction of the problem in which the orthogonal polygon becomes a rectangle and the holes become points is also NP-complete [83]. Gonzalez and Zheng [51] show how to adapt any approximation algorithm for the restricted problem to yield an approximation algorithm for the more general problem, in which the boundary polygon need not be a rectangle. Their method is to use the algorithm given in [83] to partition the boundary orthogonal polygon into rectangles; each of these rectangles, along with the point holes inside it, then becomes an instance of the restricted version of the problem.
In 1985 Gonzalez and Zheng [51] gave a polynomial time approximation algorithm that partitions a rectangle with point holes into disjoint rectangles using no more than 3 + √3 times the minimum amount of "ink" required.


Fig. 6. A partition using (a) the minimum number of rectangles, and (b) the minimum amount of "ink".

The next year Levcopoulos [73] improved the running time to O(n log n) while maintaining the same bound. Later Gonzalez and Zheng [53] gave an algorithm that produces a solution within 3 times optimal. They also use a so-called "guillotine" partition to develop an approximation algorithm within 1.75 times optimal [52], at the cost of a higher polynomial running time; see Figure 7. A recent paper [49] provides a simpler proof that the "guillotine" partition is within 2 times optimal. If time is more important, Gonzalez et al. [50] give an algorithm that runs in O(n log n) time but only finds a solution guaranteed to be within four times optimal.

If Steiner points are disallowed, then quadrilaterals rather than rectangles become the natural component type for the decomposition of orthogonal polygons. Kahn, Klawe and Kleitman [63] show that it is always possible to partition an orthogonal polygon into convex quadrilaterals, something that is not always possible for arbitrary polygons. This partitioning of a polygon into convex quadrilaterals is referred to as quadrilateralization. Sack and Toussaint develop an O(n log n) time algorithm for quadrilateralizing an orthogonal polygon [120,122]. They use a two-step process: first the orthogonal polygon is partitioned into a specific type of monotone polygon, and these monotone polygons are in turn partitioned into quadrilaterals in linear time [121]. Lubiw [92] also provides an O(n log n) time quadrilateralization algorithm for orthogonal polygons. Arbitrary monotone or star-shaped orthogonal polygons can be quadrilateralized in linear time [122].


Fig. 7. A guillotine partition.

Let us now turn to the problem of finding the minimum edge length quadrilateralization of an orthogonal polygon. For this problem Keil and Sack [68] give a polynomial time algorithm, whose running time Conn and O'Rourke [32] later improve.

There are other known results concerning orthogonal partitioning. Liu and Ntafos [90] show how to partition a monotone orthogonal polygon into star-shaped components; their algorithm runs in O(n log n) time, allows Steiner points, and yields a decomposition within four times optimal. Gunther [56] gives a polynomial time algorithm for partitioning an orthogonal polygon into orthogonal polygons with k or fewer vertices; in most cases this algorithm finds a partition that is within a factor of two of optimal. Gyori et al. [58] also have some results on partitioning orthogonal polygons into subpolygons with a fixed number of vertices.


3.2. Covering orthogonal polygons

Tools from graph theory are useful when developing algorithms for covering orthogonal polygons. If each edge of an orthogonal polygon is extended to a line, a rectangular grid results. Based on this grid, a graph can be associated with a covering problem as follows: the vertices of the graph are the grid squares that lie within the polygon, and two such vertices are adjacent if the associated grid squares can be covered by a subpolygon lying entirely within the polygon. Depending upon the type of subpolygon, there can be a correspondence between covering the graph with the minimum number of cliques and the original polygon covering problem. For example, in Figure 8, if two grid squares are joined by an edge whenever they lie in a common rectangle inside the polygon, then the problem of covering the polygon with rectangles corresponds to covering the derived graph with cliques. When such a correspondence exists, the tractability of both problems depends upon the properties of the derived graph. This graph-theoretic approach underlies several of the algorithms we shall encounter in this section. The types of subpolygons that have been studied include rectangles, squares, orthogonally convex, orthogonally star-shaped and others.

Rectangles. The problem of covering an orthogonal polygon with the minimum number of axis-aligned rectangles has found application in data compression [91], the storing of graphic images [96], and the manufacture of integrated circuits [23,60]. As early as 1979 Masek [96] showed that if the orthogonal polygon contains holes, then the problem is NP-complete.


Fig. 8. Each grid region can be associated with the vertex of a graph.


Later Conn and O'Rourke [31] showed that for an orthogonal polygon with holes the problem is also NP-complete if only the boundary, or only the reflex vertices, need to be covered. Attention then turned to the case where the polygon does not contain holes.

In 1981 Chaiken et al. [23] initiated the graph-theoretic approach mentioned above. They define a graph G whose vertices are the grid squares, with two vertices adjacent if there is a rectangle, lying entirely within the polygon, that contains both associated grid squares. They show that the cliques of this graph correspond to the rectangles in the polygon whose sides lie on grid lines. The rectangle cover problem then corresponds to the problem of covering the vertices of the graph with the minimum number of cliques. This clique problem is NP-complete in general but polynomially solvable for the class of perfect graphs. Unfortunately, the graph derived from the rectangle problem is not perfect, even if the polygon does not contain holes [23].

In the search for a solvable restriction of the problem, attention turned to restricted types of orthogonal polygons. An orthogonal polygon is called horizontally (vertically) convex if its intersection with every horizontal (vertical) line is either empty or a single line segment; for an example see Figure 9. An orthogonal polygon is called orthogonally convex if it is both horizontally and vertically convex. Chaiken et al. [23] have an example showing that even for orthogonally convex polygons the derived graph is not perfect, so for the rectangle covering problem the graph approach has not yielded efficient algorithms. Note, however, that the intersection graph of the maximal rectangles in an orthogonal polygon without holes is perfect [127].

To develop a polynomial time algorithm for the special case of covering an orthogonally convex polygon with the minimum number of rectangles, in 1981 Chaiken et al. [23] used an approach that reduces the problem to the same problem on a smaller polygon. Later Liou et al. [85] developed an O(n) time algorithm for this problem. Brandstadt also contributed a linear algorithm for the restricted case of 2-staircase polygons [19]. In 1984 Franzblau and Kleitman [43] handled the larger class of horizontally convex polygons, giving a polynomial time algorithm for covering this class with the minimum number of rectangles; see also [57]. In 1985 Lubiw [93] provided a polynomial time algorithm for another restricted class of orthogonal polygons: those that do not contain a rectangle touching the boundary of the polygon only at two opposite corners of the rectangle.

In spite of these efforts on special cases, the general problem of covering an orthogonal polygon without holes with the minimum number of rectangles remained open for some time. Finally, in 1988, Culberson and Reckhow [35] settled the issue by showing the problem to be NP-complete. Later Berman and DasGupta [16] went further and showed that no polynomial time approximation scheme for the problem exists unless P = NP.

The difficulty of the problem led Cheng et al. [30], in 1984, to develop a linear time approximation algorithm that is guaranteed to find a solution within four times optimal for hole-free polygons. Then in 1989 Franzblau developed an O(n log n) time approximation algorithm that yields a covering containing O(θ log θ) rectangles, where θ is the minimum number of rectangles required in a covering [44]. She also shows that an optimal partition contains at most 2θ + H - 1 rectangles, where H is the number of holes contained in the polygon.


Fig. 9. A horizontally convex polygon.

Recently, Keil [67] introduced a type of rectangle decomposition which is intermediate between partitioning and covering. This non-piercing covering allows rectangles to overlap, but if two rectangles A and B overlap, then either A - B or B - A must be connected. Keil provides an O(n log n + mn) time algorithm for finding an optimal non-piercing covering of an orthogonal polygon P without holes, where m is the number of edges in the visibility graph of P that are either horizontal, vertical, or form the diagonal of an empty rectangle.

Squares. Covering polygons with axis-aligned squares has application in the construction of data structures used in the storage and processing of digital images. For example, the digital medial axis transform (MAT) [136] is based on representing an image by a union of squares; simple images may be covered by few squares and may be easily reconstructed from the MAT. Scott and Iyengar [125] define the Translation Invariant Data Structure (TID) as a method for representing images.


An image is considered to be a grid of "black" and "white" pixels, and the TID for a given image consists of a list of maximal squares covering all black regions within the image. In order to reduce the cost of storing and manipulating a TID, the underlying list of squares should be as small as possible. Scott and Iyengar [125] give a heuristic for finding a small covering set of squares as part of their TID construction algorithm.

Albertson and O'Keefe [5] investigate a graph associated with the square covering problem. A unit square in the plane whose corners are integer lattice points is called a block; a polygon with integer vertices then contains a set of N blocks. Albertson and O'Keefe define a graph whose vertices are the blocks, with two vertices adjacent if the corresponding blocks can be covered by a square lying entirely within the polygon. They show that for polygons without holes this graph is perfect. They further show that the blocks corresponding to a clique in the graph form a set of blocks entirely contained within a single square lying in the polygon. Aupperle et al. [8] investigate this graph further and show that for polygons without holes the graph is chordal. This allows an algorithm for covering chordal graphs by cliques to serve as an O(N^2.5) time algorithm for the problem of covering an orthogonal polygon without holes with the minimum number of squares; by further exploiting the geometry, Aupperle [7] adapts this approach to produce an O(N^1.5) time algorithm for the problem. The fastest algorithm based on the blocks lying in the polygon runs in O(N) time and is due to Moitra [97,98].

The number N of blocks lying in the polygon can be much larger than n, the number of vertices defining the polygon; even when the block side is optimized, N may be Ω(n^2). In light of this, Bar-Yehuda and Ben-Chanoch [11] consider the alternative approach of covering the polygon one square at a time, and achieve an O(n + θ) time algorithm for covering an orthogonal polygon without holes, where θ is the minimum number of squares required in a cover. If the polygon contains holes, the square coverage problem becomes NP-complete [8,7].

Orthogonally convex and star-shaped subpolygons. When restricting the polygons to be orthogonal, it is natural to also restrict the notion of visibility. We consider two notions of orthogonal visibility [33]. Two points of a polygon are said to be r-visible if there exists a rectangle inside the polygon that contains both points [66]. Under r-visibility an r-convex polygon is just a rectangle, and the decomposition of an orthogonal polygon into rectangles has been discussed in the previous subsections. An r-star-shaped polygon P is an orthogonal polygon for which there exists a point q of P such that every other point p of P is r-visible to q (i.e., lies in a common rectangle with q). Recall that an orthogonally convex polygon is an orthogonal polygon that is both horizontally and vertically convex. Two points of an orthogonal polygon are said to be s-visible (staircase visible) if there exists an orthogonally convex subpolygon containing both points. Note that an s-convex polygon is simply an orthogonally convex polygon, and an s-star-shaped polygon contains a point q such that, for every point p in the polygon, there is an orthogonally convex subpolygon containing both p and q. In this subsection we consider the problems of covering an orthogonal polygon with the minimum number of r-stars, s-stars and orthogonally convex polygons.
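The r-visibility test is easy to state on a unit-grid representation of the polygon, as the following Python sketch (our names and representation) shows: two cells are r-visible exactly when every cell of the axis-parallel rectangle they span lies inside the polygon.

    def r_visible(cells, p, q):
        # cells: set of (column, row) unit cells inside the orthogonal
        # polygon; p and q are cells. They are r-visible exactly when the
        # rectangle they span is entirely made of interior cells.
        x1, x2 = sorted((p[0], q[0]))
        y1, y2 = sorted((p[1], q[1]))
        return all((x, y) in cells
                   for x in range(x1, x2 + 1)
                   for y in range(y1, y2 + 1))

    # An L-shaped polygon of three unit cells: the two "arm" cells are not
    # r-visible to each other, but each is r-visible to the corner cell.
    L_cells = {(0, 0), (1, 0), (0, 1)}
    assert r_visible(L_cells, (1, 0), (0, 0))
    assert not r_visible(L_cells, (1, 0), (0, 1))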
A classification of orthogonal polygons, due to Reckhow and Culberson [119], based on the types of "dents" encountered, has been useful in the work on these problems [34,33,100,99,116,118,119].


Fig. 10. A class 3 polygon containing N, W and S dents.

If the boundary of the orthogonal polygon is traversed in the clockwise direction, at each corner either a right 90° (outside corner) or a left 90° (inside corner) turn is made. A dent is an edge of P both of whose endpoints are inside corners. If the polygon is aligned so that north (N) corresponds with the positive y axis, then dents can be classified according to compass directions. For example, an N dent is traversed from west to east in a clockwise traversal of the polygon. An orthogonal polygon can then be classified according to the number and the types of dents it contains. A class k orthogonal polygon contains dents of k different orientations. Class 0 orthogonal polygons are the orthogonally convex polygons. A vertically or horizontally convex polygon (class 2a) is a class 2 orthogonal polygon which has only opposing pairs of dent types (i.e., N and S, or E and W). Class 2b orthogonal polygons have two dent orientations that are orthogonal to one another (i.e., W and N, N and E, E and S, or S and W). For an example of a class 3 polygon see Figure 10. A procedure for classifying dents is sketched below.

The graph theory approach has been important in the development of an understanding of these problems. By extending the dent edges across the polygon, a partition into O(n²) basic regions results. These basic regions correspond to vertices in the definition of several relevant graphs. Motwani et al. [100,99] define an s-convex visibility graph, using the basic regions as vertices, where two vertices are adjacent in the graph if the corresponding basic regions can be covered by a single orthogonally convex subpolygon. They define an r-star (s-star) visibility graph, again using the basic regions as vertices, where two vertices u and v are adjacent if there is a region w that is r-visible (s-visible) to the regions corresponding to u and v. Related directed graphs were defined by Culberson and Reckhow [119,33,118,34].
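
As promised above, the following sketch labels the dents of an orthogonal polygon given as a clockwise vertex list. The mapping from traversal direction to compass label follows the stated convention for N dents; the labels for the remaining three directions are our extrapolation by symmetry.

```python
def dent_types(vertices):
    """vertices: clockwise list of (x, y) corners of an orthogonal polygon.
    Returns the list of dent orientations, e.g. ['N'] for one N dent."""
    n = len(vertices)

    def is_inside_corner(i):
        ax, ay = vertices[i - 1]
        bx, by = vertices[i]
        cx, cy = vertices[(i + 1) % n]
        # Cross product of the incoming and outgoing edge vectors; for a
        # clockwise polygon, a left turn (positive cross) is an inside corner.
        return (bx - ax) * (cy - by) - (by - ay) * (cx - bx) > 0

    # Traversal direction -> compass label: an N dent is traversed west to
    # east; the other three labels follow by symmetry (our extrapolation).
    label = {(1, 0): 'N', (-1, 0): 'S', (0, 1): 'W', (0, -1): 'E'}
    dents = []
    for i in range(n):
        j = (i + 1) % n
        if is_inside_corner(i) and is_inside_corner(j):
            (ax, ay), (bx, by) = vertices[i], vertices[j]
            dx = (bx > ax) - (bx < ax)
            dy = (by > ay) - (by < ay)
            dents.append(label[(dx, dy)])
    return dents

# A rectangle with one notch cut into its top (north) edge:
notched = [(0, 0), (0, 3), (2, 3), (2, 2), (3, 2), (3, 3), (5, 3), (5, 0)]
assert dent_types(notched) == ['N']
```
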


In 1986, Keil [66] provides the first algorithm for minimally covering with orthogonally convex components. He provides an O(n²) time algorithm for covering horizontally convex orthogonal polygons. In 1987, for this problem, Reckhow and Culberson [119,34] give an Ω(n²) lower bound on actually listing the vertices of all the subpolygons in the output, but provide an O(n) time algorithm for finding the minimum number of orthogonally convex polygons in an optimal cover of a horizontally convex orthogonal polygon. Culberson and Reckhow also provide an O(n²) algorithm for minimally covering class 2b type orthogonal polygons, and they give a complex algorithm for handling a larger class. Later, for class 3 polygons, Reckhow [118] provides an O(n²) time algorithm.

For the problem of covering with orthogonally convex components, the relevant convex visibility graph is formed by connecting two grid squares if they can be covered by a single orthogonally convex subpolygon [99,116,118,119]. Motwani et al. [99] prove that a minimum clique cover of this visibility graph corresponds exactly to a minimum cover of the corresponding orthogonal polygon by orthogonally convex polygons. Thus we may solve the polygon covering problem using existing graph clique cover algorithms. The complexity of the available clique cover algorithms depends upon the properties of the convex visibility graph. For class 2 polygons, the convex visibility graph is a permutation graph [99,116]. For class 3 polygons, the graph turns out to be weakly triangulated [99,118]. Although these graph classes allow polynomial time clique cover algorithms, the known geometric algorithms are still the most efficient algorithms to solve the problem of covering an orthogonal polygon with the minimum number of orthogonally convex subpolygons. For class 4 polygons (general orthogonal polygons) the convex visibility graph is not perfect [99,116], and the complexity of the general problem of covering orthogonal polygons with the minimum number of orthogonally convex subpolygons remains open.

For covering with r-stars, Keil [66] provides an O(n²) time algorithm for optimally covering a horizontally convex orthogonal polygon. This is later improved to O(n) time in [46]. For general class 2 polygons, class 3 polygons and general orthogonal polygons, the problem of covering with the minimum number of r-stars remains open.

For covering with s-stars, Culberson and Reckhow [33] provide O(n²) time algorithms for optimally covering horizontally convex orthogonal polygons and general class 2 polygons. For class 3 and class 4 polygons, the development of algorithms has depended upon properties of the s-star visibility graph. Motwani et al. [100] show that for class 3 polygons the derived s-star visibility graph is chordal. They show that a minimum clique cover algorithm for chordal graphs can be used to s-star cover class 3 polygons in O(n³) time; a sketch of this clique cover subroutine is given below. For class 4 polygons they show that the derived s-star graph is weakly triangulated. This then leads to a polynomial time algorithm for the general s-star covering problem [100].
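
The graph-theoretic subroutine just mentioned does not depend on the geometry, so it can be sketched on its own: given the (chordal) s-star visibility graph, a minimum clique cover is obtained by a greedy sweep along a perfect elimination ordering, which in turn can be produced by maximum-cardinality search. The adjacency-dictionary interface below is our own illustration; constructing the visibility graph from the basic regions is the geometric part and is omitted.

```python
def peo_by_mcs(adj):
    """Maximum-cardinality search on a chordal graph.
    adj: dict mapping each vertex to the set of its neighbours.
    Returns a perfect elimination ordering (the reverse of the visit order)."""
    weight = {v: 0 for v in adj}
    visited = set()
    order = []
    for _ in range(len(adj)):
        # Visit an unvisited vertex with the most visited neighbours.
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        visited.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                weight[w] += 1
    return order[::-1]

def clique_cover_chordal(adj):
    """Greedy minimum clique cover along a PEO (Gavril): whenever a vertex
    is not yet covered, cover it together with its later PEO neighbours,
    which form a clique in a chordal graph.  The triggering vertices form
    an independent set, which certifies that the cover is minimum."""
    peo = peo_by_mcs(adj)
    pos = {v: i for i, v in enumerate(peo)}
    covered, cliques = set(), []
    for v in peo:
        if v not in covered:
            clique = {v} | {w for w in adj[v] if pos[w] > pos[v]}
            cliques.append(clique)
            covered |= clique
    return cliques

# Example: a triangle {a, b, c} with a pendant vertex d attached to c.
G = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
print(clique_cover_chordal(G))  # a minimum cover by two cliques
```
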
Other covering subpolygons. There are some other known results related to the covering of orthogonal polygons with subpolygons. Bremner and Shermer [20,21] studied an extension of orthogonal visibility, called O-visibility, in which two points of the polygon are O-visible if there is a path between them whose intersection with every line in the set O of orientations is either empty or connected. For orthogonal visibility, O = {0°, 90°}. A polygon P is O-convex if every two points of P are O-visible, and O-star-shaped if there is a point of P from which every other point of P is O-visible. Bremner and Shermer [20,21] were able to characterize classes of orientations for which minimum covers of a (not necessarily orthogonal) polygon by O-convex or O-star-shaped components can be found in polynomial time.

Regular star-shaped (nonorthogonal) polygons have also been studied as covering subpolygons. Edelsbrunner et al. [38] give an O(n log n) time algorithm that covers an orthogonal polygon with ⌊r/2⌋ + 1 star-shaped subpolygons, where r is the number of reflex vertices in the polygon. This is improved by Sack and Toussaint [123], who give an O(n) time algorithm for covering an orthogonal polygon with ⌊r/2⌋ star-shaped components. Carlsson et al. [22] are able to produce an optimal star-shaped cover for histograms in linear time. Gewali and Ntafos [47] consider covering with a variation of r-stars where periscope vision is allowed. They give an O(n²) algorithm for optimally covering a restricted type of orthogonal polygon. A different variation of r-stars is considered by Maire [94]. His stars consist of a union of a vertical and a horizontal rectangle and look like "plus" signs. He defines a type of star graph and shows that it is weakly triangulated, implying an algorithm for optimally solving the corresponding covering problem.

Acknowledgement

The author wishes to thank the Natural Sciences and Engineering Research Council of Canada for financial support.

References

[1] A. Aggarwal, The Art Gallery Problem: Its Variations, Applications, and Algorithmic Aspects, PhD thesis, Johns Hopkins Univ., Baltimore, MD (1984).
[2] A. Aggarwal and B. Chazelle, Efficient algorithm for partitioning a polygon into star-shaped polygons, Report, IBM T.J. Watson Res. Center, Yorktown Heights, NY (1984).
[3] A. Aggarwal, S.K. Ghosh and R.K. Shyamasundar, Computational complexity of restricted polygon decompositions, Computational Morphology, G.T. Toussaint, ed., North-Holland, Amsterdam (1988), 1-11.
[4] A. Aggarwal, L.J. Guibas, J. Saxe and P.W. Shor, A linear-time algorithm for computing the Voronoi diagram of a convex polygon, Discrete Comput. Geom. 4 (6) (1989), 591-604.
[5] M. Albertson and C.J. O'Keefe, Covering regions by squares, SIAM J. Algebraic Discrete Methods 2 (3) (1981), 240-243.
[6] Ta. Asano, Te. Asano and H. Imai, Partitioning a polygonal region into trapezoids, J. ACM 33 (1986), 290-312.
[7] L.J. Aupperle, Covering regions by squares, M.Sc. thesis, Dept. Comput. Sci., Univ. Saskatchewan, Saskatoon, Sask. (1987).
[8] L.J. Aupperle, H.E. Conn, J.M. Keil and J. O'Rourke, Covering orthogonal polygons with squares, Proc. 26th Allerton Conf. Commun. Control Comput. (October 1988), 97-106.
[9] D. Avis and G.T. Toussaint, An efficient algorithm for decomposing a polygon into star-shaped polygons, Pattern Recogn. 13 (1981), 395-398.
[10] C. Bajaj and T.K. Dey, Convex decompositions of polyhedra and robustness, SIAM J. Comput. 21 (1992), 339-364.
[11] R. Bar-Yehuda and E. Ben-Chanoch, A linear time algorithm for covering simple polygons with similar rectangles, Internat. J. Comput. Geom. Appl. 6 (1996), 79-102.
[12] B.G. Batchelor, Hierarchical shape description based upon convex hulls of concavities, J. Cybern. 10 (1980), 205-210.
[13] P. Belleville, On restricted boundary covers and convex three-covers, Proc. 5th Canad. Conf. Comput. Geom., Waterloo, Canada (1993), 467-472.
[14] P. Belleville, A Study of Convex Covers in Two or More Dimensions, PhD thesis, School of Computing Science, Simon Fraser University (1995).
[15] P. Belleville, Computing two-covers of simple polygons, Master's thesis, School of Computer Science, McGill University (1991).
[16] P. Berman and B. DasGupta, Approximating the rectilinear polygon cover problems, Proc. 4th Canad. Conf. Comput. Geom. (1992), 229-235.


[17] M. Bern and D. Eppstein, Mesh generation and optimal triangulation, Computing in Euclidean Geometry, D.-Z. Du and F.K. Hwang, eds, Lecture Notes Series on Comput. 1, World Scientific, Singapore (1992), 23-90.
[18] P. Bose and G. Toussaint, Generating quadrangulations of planar point sets, Comput. Aided Geom. Design 14 (1997), 763-785.
[19] A. Brandstadt, The jump number problem for biconvex graphs and rectangle covers of rectangular regions, Lecture Notes in Comput. Sci. 380 (1989), 68-77.
[20] D. Bremner, Point visibility graphs and restricted-orientation polygon covering, M.Sc. thesis, School of Computing Science, Simon Fraser University, Burnaby, BC (April 1993).
[21] D. Bremner and T. Shermer, Point visibility graphs and restricted-orientation convex cover, Technical Report CMPT TR 93-07, School of Computing Science, Simon Fraser University (1993).
[22] S. Carlsson, B.J. Nilsson and S. Ntafos, Optimum guard covers and m-watchmen routes for restricted polygons, Proc. 2nd Workshop Algorithms Data Struct., Lecture Notes in Comput. Sci. 519, Springer-Verlag (1991), 367-378.
[23] S. Chaiken, D. Kleitman, M. Saks and J. Shearer, Covering regions by rectangles, SIAM J. Algebraic Discrete Methods 2 (1981), 394-410.
[24] B. Chazelle, Computational geometry and convexity, PhD thesis, Dept. Comput. Sci., Yale Univ., New Haven, CT (1979). Carnegie-Mellon Univ. Report CS-80-150.
[25] B. Chazelle, A theorem on polygon cutting with applications, Proc. 23rd Annu. IEEE Sympos. Found. Comput. Sci. (1982), 339-349.
[26] B. Chazelle, Approximation and decomposition of shapes, Advances in Robotics 1: Algorithmic and Geometric Aspects of Robotics, J.T. Schwartz and C.-K. Yap, eds, Lawrence Erlbaum Associates, Hillsdale, NJ (1987), 145-186.
[27] B. Chazelle, Triangulating a simple polygon in linear time, Discrete Comput. Geom. 6 (1991), 485-524.
[28] B. Chazelle and D.P. Dobkin, Decomposing a polygon into its convex parts, Proc. 11th Annu. ACM Sympos. Theory Comput. (1979), 38-48.
[29] B. Chazelle and D.P. Dobkin, Optimal convex decompositions, Computational Geometry, G.T. Toussaint, ed., North-Holland, Amsterdam, Netherlands (1985), 63-133.
[30] Y. Cheng, S. Iyengar and R. Kashyap, A new method of image compression using irreducible covers of maximal rectangles, IEEE Trans. on Software Eng. 14 (1988), 651-658.
[31] H. Conn and J. O'Rourke, Some restricted rectangle covering problems, Technical Report JHU 87-13, Dept. Comput. Sci., Johns Hopkins Univ., Baltimore, MD (1987).
[32] H.E. Conn and J. O'Rourke, Minimum weight quadrilaterization in O(n log n) time, Proc. 28th Allerton Conf. Commun. Control Comput. (October 1990), 788-797.
[33] J. Culberson and R.A. Reckhow, Dent diagrams: A unified approach to polygon covering problems, Technical Report TR 87-14, Dept. of Computing Sci., University of Alberta (1987).
[34] J. Culberson and R.A. Reckhow, Orthogonally convex coverings of orthogonal polygons without holes, J. Comput. Syst. Sci. 39 (1989), 166-204.
[35] J. Culberson and R.A. Reckhow, Covering polygons is hard, J. Algorithms 17 (1994), 2-44.
[36] V.J. Dielissen and A. Kaldewaij, Rectangular partition is polynomial in two dimensions but NP-complete in three, Inform. Process. Lett. 38 (1991), 1-6.
[37] D.P. Dobkin, D.L. Souvaine and C.J. Van Wyk, Decomposition and intersection of simple splinegons, Algorithmica 3 (1988), 473-486.
[38] H. Edelsbrunner, J. O'Rourke and E. Welzl, Stationing guards in rectilinear art galleries, Comput. Vision Graph. Image Process. 28 (1984), 167-176.
[39] H. ElGindy and G.T. Toussaint, On geodesic properties of polygons relevant to linear time triangulation, Visual Comput. 5 (1-2) (1989), 68-74.
[40] H. Everett, W. Lenhart, M. Overmars, T. Shermer and J. Urrutia, Strictly convex quadrilateralizations of polygons, Proc. 4th Canad. Conf. Comput. Geom. (1992), 77-83.
[41] H.Y.F. Feng and T. Pavlidis, Decomposition of polygons into simpler components: Feature generation for syntactic pattern recognition, IEEE Trans. Comput. C-24 (1975), 636-650.
[42] L. Ferrari, P.V. Sankar and J. Sklansky, Minimal rectangular partitions of digitized blobs, Comput. Vision Graph. Image Process. 28 (1984), 58-71.


[43] D. Franzblau and D. Kleitman, An algorithm for covering polygons with rectangles, Inform. Control 63 (1984), 164-189.
[44] D.S. Franzblau, Performance guarantees on a sweep-line heuristic for covering rectilinear polygons with rectangles, SIAM J. Discrete Math. 2 (1989), 307-321.
[45] M.R. Garey, D.S. Johnson, F.P. Preparata and R.E. Tarjan, Triangulating a simple polygon, Inform. Process. Lett. 7 (1978), 175-179.
[46] L. Gewali, J.M. Keil and S. Ntafos, On covering orthogonal polygons with star shaped polygons, Inform. Sci. 65 (1992), 45-63.
[47] L. Gewali and S. Ntafos, Minimum covers for grids and orthogonal polygons by periscope guards, Proc. 2nd Canad. Conf. Comput. Geom. (1990), 358-361.
[48] S.K. Ghosh, Approximation algorithms for art gallery problems, Proc. Canadian Inform. Process. Soc. Congress (1987).
[49] T. Gonzalez, M. Razzazi, M.-T. Shing and S.-Q. Zheng, On optimal guillotine partitions approximating optimal d-box partitions, Comput. Geom. 4 (1994), 1-12.
[50] T. Gonzalez, M. Razzazi and S.-Q. Zheng, An efficient divide-and-conquer approximation algorithm for partitioning into d-boxes, Internat. J. Comput. Geom. Appl. 3 (1993), 417-428.
[51] T. Gonzalez and S.-Q. Zheng, Bounds for partitioning rectilinear polygons, Proc. 1st Annu. ACM Sympos. Comput. Geom. (1985), 281-287.
[52] T. Gonzalez and S.-Q. Zheng, Improved bounds for rectangular and guillotine partitions, J. Symbolic Comput. 7 (1989), 591-610.
[53] T. Gonzalez and S.-Q. Zheng, Approximation algorithms for partitioning a rectangle with interior points, Algorithmica 5 (1990), 11-42.
[54] K.D. Gourley and D.M. Green, A polygon-to-rectangle conversion algorithm, IEEE Comput. Graphics 3 (1983), 31-36.
[55] D.H. Greene, The decomposition of polygons into convex parts, Computational Geometry, F.P. Preparata, ed., Advances in Computing Research, Vol. 1, JAI Press, London, England (1983), 235-259.
[56] O. Gunther, Minimum k-partitioning of rectilinear polygons, J. Symbolic Comput. 9 (1990), 457-483.
[57] E. Gyori, Covering simply connected regions by rectangles, Combinatorica 5 (1985), 53-55.
[58] E. Gyori, F. Hoffmann, K. Kriegel and T. Shermer, Generalized guarding and partitioning for rectilinear polygons, Proc. 6th Canad. Conf. Comput. Geom. (1994), 302-307.
[59] H.-D. Hecker and D. Herwig, Some NP-hard polygon cover problems, J. Inform. Process. Cybern. 25 (1989), 101-108.
[60] A. Hegedus, Algorithms for covering polygons by rectangles, Comput. Aided Geom. Design 14 (1982), 257-260.
[61] S. Hertel and K. Mehlhorn, Fast triangulation of the plane with respect to simple polygons, Inform. Control 64 (1985), 52-76.
[62] H. Imai and Ta. Asano, Efficient algorithms for geometric graph search problems, SIAM J. Comput. 15 (1986), 478-494.
[63] J. Kahn, M. Klawe and D. Kleitman, Traditional galleries require fewer watchmen, SIAM J. Algebraic Discrete Methods 4 (1983), 194-206.
[64] J.M. Keil, Decomposing a polygon into simpler components, PhD thesis, Univ. of Toronto, Toronto, Canada (1983). Report 163/83.
[65] J.M. Keil, Decomposing a polygon into simpler components, SIAM J. Comput. 14 (1985), 799-817.
[66] J.M. Keil, Minimally covering a horizontally convex orthogonal polygon, Proc. 2nd Annu. ACM Sympos. Comput. Geom. (1986), 43-51.
[67] J.M. Keil, Covering orthogonal polygons with non-piercing rectangles, Internat. J. Comput. Geom. Appl. (1996).
[68] J.M. Keil and J.-R. Sack, Minimum decompositions of polygonal objects, Computational Geometry, G.T. Toussaint, ed., North-Holland, Amsterdam, Netherlands (1985), 197-216.
[69] J.M. Keil and J. Snoeyink, On the time bound for convex decomposition of simple polygons, Proceedings of the Tenth Canadian Conference on Computational Geometry (1998).
[70] G.T. Klincsek, Minimal triangulations of polygonal domains, Discrete Math. 9 (1980), 121-123.
[71] D.T. Lee and A. Lin, Computational complexity of art gallery problems, IEEE Trans. Inform. Theory 32 (1986), 276-282.


[72] C. Levcopoulos, A fast heuristic for covering polygons by rectangles, Proc. Fundamentals of Comput. Theory, Lecture Notes in Comput. Sci. 199, Springer-Verlag (1985).
[73] C. Levcopoulos, Fast heuristics for minimum length rectangular partitions of polygons, Proc. 2nd Annu. ACM Sympos. Comput. Geom. (1986), 100-108.
[74] C. Levcopoulos, Minimum length and thickest-first rectangular partitions of polygons, Report LITH-IDA-R-86-01, Dept. of Computer and Information Sci., Linkoping University, Linkoping, Sweden (1986).
[75] C. Levcopoulos, Improved bounds for covering general polygons with rectangles, Proc. Conf. Found. Softw. Tech. Theoret. Comput. Sci., Lecture Notes in Comput. Sci. 287, Springer-Verlag (1987), 95-102.
[76] C. Levcopoulos and D. Krznaric, Quasi-greedy triangulations approximating the minimum weight triangulation, Proc. 7th Annu. ACM-SIAM Symp. on Discrete Algorithms (1996), 392-401.
[77] C. Levcopoulos and A. Lingas, Bounds on the length of convex partitions of polygons, Proc. 4th Conf. Found. Softw. Tech. Theoret. Comput. Sci., Lecture Notes in Comput. Sci., Springer-Verlag (1984), 279-295.
[78] C. Levcopoulos and A. Lingas, Covering polygons with minimum number of rectangles, Proc. 1st Sympos. Theoret. Aspects Comput. Sci., Lecture Notes in Comput. Sci., Springer-Verlag (1984), 63-72.
[79] C. Levcopoulos, A. Lingas and J.-R. Sack, Heuristics for optimum binary search trees and minimum weight triangulation problems, Theoret. Comput. Sci. 66 (2) (August 1989), 181-203.
[80] A. Lingas, The power of non-rectilinear holes, Proc. 9th Internat. Colloq. Automata Lang. Program., Lecture Notes in Comput. Sci. 140, Springer-Verlag (1982), 369-383.
[81] A. Lingas, Heuristics for minimum edge length rectangular partitions of rectilinear figures, Proc. 6th GI Conf. Theoret. Comput. Sci., Lecture Notes in Comput. Sci. 145, Springer-Verlag (1983), 199-210.
[82] A. Lingas, C. Levcopoulos and J.-R. Sack, Algorithms for minimum length partitions of polygons, BIT 27 (1987), 474-479.
[83] A. Lingas, R. Pinter, R. Rivest and A. Shamir, Minimum edge length partitioning of rectilinear polygons, Proc. 20th Allerton Conf. Commun. Control Comput. (1982), 53-63.
[84] W.T. Liou, J.J.M. Tan and R.C.T. Lee, Minimum partitioning simple rectilinear polygons in O(n log log n) time, Proc. 5th Annu. ACM Sympos. Comput. Geom. (1989), 344-353.
[85] W.T. Liou, C.Y. Tang and R.C.T. Lee, Covering convex rectilinear polygons in linear time, Internat. J. Comput. Geom. Appl. 1 (2) (1991), 137-185.
[86] W. Lipski, Jr., Finding a Manhattan path and related problems, Networks 13 (1983), 399-409.
[87] W. Lipski, Jr., An O(n log n) Manhattan path algorithm, Inform. Process. Lett. 19 (1984), 99-102.
[88] W. Lipski, Jr., E. Lodi, F. Luccio, C. Mugnai and L. Pagli, On two-dimensional data organization II, Fund. Inform. 2 (1979), 245-260.
[89] R. Liu and S. Ntafos, On decomposing polygons into uniformly monotone parts, Inform. Process. Lett. 27 (1988), 85-89.
[90] R. Liu and S. Ntafos, On partitioning rectilinear polygons into star-shaped polygons, Algorithmica 6 (1991), 771-800.
[91] E. Lodi, F. Luccio, C. Mugnai and L. Pagli, On two-dimensional data organization I, Fund. Inform. 2 (1979), 211-226.
[92] A. Lubiw, Decomposing polygonal regions into convex quadrilaterals, Proc. 1st Annu. ACM Sympos. Comput. Geom. (1985), 97-106.
[93] A. Lubiw, The Boolean basis problem and how to cover some polygons by rectangles, SIAM J. Discrete Math. 3 (1990), 98-115.
[94] F. Maire, Polyominos and perfect graphs, Inform. Proc. Lett. 50 (1994), 57-61.
[95] M. Manjuathaiah and D. Nicole, Accurately representing the union of convex sections, Manuscript (1995).
[96] W.J. Masek, Some NP-complete set covering problems, Manuscript (1979).
[97] D. Moitra, Efficient parallel algorithms for covering binary images, PhD thesis, Dept. Comput. Sci., Cornell Univ., Ithaca, NY (1989). Technical Report TR-89-1013.
[98] D. Moitra, Finding a minimal cover for binary images: An optimal parallel algorithm, Algorithmica 6 (1991), 624-657.
[99] R. Motwani, A. Raghunathan and H. Saran, Perfect graphs and orthogonally convex covers, SIAM J. Discrete Math. 2 (1989), 371-392.
[100] R. Motwani, A. Raghunathan and H. Saran, Covering orthogonal polygons with star polygons: The perfect graph approach, J. Comput. Syst. Sci. 40 (1990), 19-48.


[101] S. Nahar and S. Sahni, Fast algorithm for polygon decomposition, IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems 7 (1988), 473-483.
[102] B. Nilsson, Guarding art galleries, PhD thesis, Dept. Comput. Sci., Lund Univ., Lund, Sweden (1994).
[103] T. Ohtsuki, Minimum dissection of rectilinear regions, Proceedings of the 1982 IEEE International Symposium on Circuits and Systems, Rome (1982), 1210-1213.
[104] J. O'Rourke, The complexity of computing minimum convex covers for polygons, Proc. 20th Allerton Conf. Commun. Control Comput. (1982), 75-84.
[105] J. O'Rourke, The decidability of covering by convex polygons, Report JHU-EECS 82-4, Dept. Elect. Engrg. Comput. Sci., Johns Hopkins Univ., Baltimore, MD (1982).
[106] J. O'Rourke, Minimum convex cover for polygons: Some counter examples, Report JHU-EE 82-1, Dept. Elect. Engrg. Comput. Sci., Johns Hopkins Univ., Baltimore, MD (1982).
[107] J. O'Rourke, Polygon decomposition and switching function minimization, Comput. Graph. Image Process. 18 (1982), 382-391.
[108] J. O'Rourke, Art Gallery Theorems and Algorithms, Oxford University Press, New York, NY (1987).
[109] J. O'Rourke, C.-B. Chien, T. Olson and D. Naddor, A new linear algorithm for intersecting convex polygons, Comput. Graph. Image Process. 19 (1982), 384-391.
[110] J. O'Rourke and K.J. Supowit, Some NP-hard polygon decomposition problems, IEEE Trans. Inform. Theory IT-30 (1983), 181-190.
[111] T. Pavlidis, Structural Pattern Recognition, Springer-Verlag, Berlin-Heidelberg (1977).
[112] T. Pavlidis, Survey: A review of algorithms for shape analysis, Comput. Graph. Image Process. 7 (1978), 243-258.
[113] T. Pavlidis and H.-Y. Feng, Shape discrimination, Syntactic Pattern Recognition, K.S. Fu, ed., Springer-Verlag, New York (1977), 125-145.
[114] D. Plaisted and J. Hong, A heuristic triangulation algorithm, J. Algorithms 8 (1987), 405-437.
[115] F.P. Preparata and M.I. Shamos, Computational Geometry: An Introduction, Springer-Verlag, New York, NY (1985).
[116] A. Raghunathan, Polygon decomposition and perfect graphs, PhD thesis, University of California at Berkeley, Berkeley, California (1988).
[117] S. Ramaswami, P. Ramos and G. Toussaint, Converting triangulations to quadrangulations, Proc. 7th Annu. Canadian Conf. on Comput. Geom. (1995), 297-302.
[118] R. Reckhow, Covering orthogonal convex polygons with three orientations of dents, Report TR87-17, Department of Computing Sci., Edmonton, Alberta (1987).
[119] R.A. Reckhow and J. Culberson, Covering a simple orthogonal polygon with a minimum number of orthogonally convex polygons, Proc. 3rd Annu. ACM Sympos. Comput. Geom. (1987), 268-277.
[120] J.-R. Sack, An O(n log n) algorithm for decomposing rectilinear polygons into convex quadrilaterals, Proc. 20th Allerton Conf. Commun. Control Comput. (1982), 64-75.
[121] J.-R. Sack, Rectilinear computational geometry, PhD thesis, School Comput. Sci., Carleton Univ., Ottawa, ON (1984). Report SCS-TR-54.
[122] J.-R. Sack and G.T. Toussaint, A linear-time algorithm for decomposing rectilinear polygons into convex quadrilaterals, Proc. 19th Allerton Conf. Commun. Control Comput. (1981), 21-30.
[123] J.-R. Sack and G.T. Toussaint, Guard placement in rectilinear polygons, Computational Morphology, G.T. Toussaint, ed., North-Holland, Amsterdam, Netherlands (1988), 153-175.
[124] B. Schachter, Decomposition of polygons into convex sets, IEEE Trans. Comput. C-27 (11) (1978), 1078-1082.
[125] D.S. Scott and S.S. Iyengar, TID - a translation invariant data structure for storing images, Comm. ACM 29 (5) (1986), 418-429.
[126] M. Shapira and A. Rappoport, Shape blending using the star-skeleton representation, Manuscript (1994).
[127] J. Shearer, A class of perfect graphs, SIAM J. Algebraic Discrete Methods 3 (3) (1982), 281-284.
[128] T.C. Shermer, Covering and guarding polygons using Lk-sets, Geometriae Dedicata 37 (1991), 183-203.
[129] T.C. Shermer, Recent results in art galleries, Proc. IEEE 80 (9) (September 1992), 1384-1399.
[130] T.C. Shermer, On recognizing unions of two convex polygons and related problems, Pattern Recognition Lett. 14 (9) (1993), 737-745.
[131] V. Soltan and A. Gorpinevich, Minimum dissection of rectilinear polygon with arbitrary holes into rectangles, Discrete Comput. Geom. 9 (1993), 57-79.


[132] S.B. Tor and A.E. Middleditch, Convex decomposition of simple polygons, ACM Trans. Graph. 3 (1984), 244-265.
[133] G. Toussaint, Efficient triangulation of simple polygons, Visual Comput. 7 (1991), 280-295.
[134] G.T. Toussaint, Pattern recognition and geometrical complexity, Proc. Fifth Inter. Conf. on Pattern Recognition (1980), 1324-1347.
[135] G.T. Toussaint, Quadrangulations of planar sets, Proc. 4th Workshop on Algorithms and Data Structures (1995), 218-227.
[136] A.Y. Wu, S.K. Bhaskar and A. Rosenfeld, Computation of geometric properties from the medial axis transform in O(n log n) time, Comput. Vision, Graph. Image Process. 34 (1986), 76-92.

CHAPTER 12

Link Distance Problems

Anil Maheshwari*,†
Carleton University, School of Computer Science, Ottawa, ON, Canada K1S 5B6
E-mail: maheshwa@scs.carleton.ca

Jörg-Rüdiger Sack‡
Carleton University, School of Computer Science, Ottawa, ON, Canada K1S 5B6
E-mail: sack@scs.carleton.ca

Hristo N. Djidjev
Department of Computer Science, University of Warwick, Coventry CV4 7AL, England
E-mail: hristo@dcs.warwick.ac.uk

Contents
1. Introduction
   1.1. Motivation and applications
   1.2. Notation and definitions
   1.3. Organization of this chapter
2. Sequential algorithms for link distance problems
   2.1. Simple polygons
   2.2. Link paths for polygons with holes
   2.3. Rectilinear polygons
   2.4. Rectilinear link paths among obstacles
   2.5. Robustness issues in link distance computations
3. Parallel algorithms for link distance problems
   3.1. Simple polygons
   3.2. Rectilinear polygons
4. Applications and extensions
   4.1. Movement of robot arms in bounded regions

* This work was supported in part by the Environmental Protection Agency grant R82-5207-01-0.
† Part of this work was done at the Tata Institute of Fundamental Research, Mumbai, India.
‡ This research was supported in part by NSERC (Natural Sciences and Engineering Research Council of Canada) and Almerco Inc.



   4.2. c-oriented paths
   4.3. Approximating polygons and subdivisions with minimum link paths
   4.4. Separation problems
   4.5. Bicriteria shortest path problems
   4.6. Nested polygon problems
   4.7. Central link segment
   4.8. Walking in an unknown street
   4.9. Miscellaneous
References


1. Introduction

1.1. Motivation and applications

The study of link distance problems is motivated by applications in several areas of computer science, such as Computer Graphics, Geographical Information Systems, Robotics, Image Processing, Cartography, VLSI, Computer Vision, and Solid Modeling. The link distance, defined with respect to a planar region R, sets the distance between a pair of points (s, t) in R to be the minimum number of line segments needed to construct a path in R that connects s to t. A path connecting s and t whose length equals the link distance between s and t is called a minimum link path. Frequently, in these application areas it is important to produce paths of low combinatorial complexity. In the following we list some applications; the details can be found in [7,25,40,55,60,63,68,72,80,83]. We also refer the reader to Section 4.

1. Robotics/Motion Planning: Consider a robot that can move forward and rotate, and that is given the task to navigate in a constrained planar region. Suppose that straight-line motion is "free", but rotation is "expensive". One motion planning task is to move the robot collision-free between two specified positions inside the region such that the total number of turns is minimized. For point robots, the task corresponds to computing a minimum link path. (Chapter 15 of this Handbook [75] mentions how non-point robots can be reduced to point robots.)

2. Communication Systems Design: In the design of communication systems using microwaves or light as the communication medium, direct communication is possible only if transmitter and receiver are in direct line-of-sight, a condition rarely met. Therefore, special devices, such as repeaters, are installed. Finding a minimum link path between two points inside a polygonal region means minimizing the number of repeaters required for the communication inside that region. (For details, see [72].)

3. Placement of telescoping manipulators: Suppose that an environment for a manipulator with telescoping links is described by a polygonal region. A telescoping link is a link of flexible size. The task here is to determine the minimum number of telescoping links that will allow the manipulator to reach every point in the environment. (For details, see [55].)

4. Curve compression: Minimum link paths are also used for curve compression in solid modeling. A typical problem in solid modeling is to compute the intersection of surfaces. The intersection of two surfaces can be quite complex even if the surfaces themselves are of relatively low degree. To avoid the combinatorial explosion inherent in the closed form solution, it is convenient to compute a piecewise linear approximation to the curve of intersections. Natarajan [68] proposes to replace an intersection curve A by its minimum link analog, which is computed inside an ε-fattening of A. The related work on approximating simple polygons and polygon maps is discussed in [40,63].

5. VLSI Layouts: In graph layout and VLSI layout problems, it is desirable to minimize the number of bends. Moreover, in VLSI layout problems, wires typically run along orthogonal axes, and this motivates the study of link path problems in rectilinear settings. (For details, see [83,23].)


An interesting aspect of the study of link distance problems is that the algorithms are often more complex and/or have higher time complexities than those solving the corresponding problems for the Euclidean distance. One of the reasons for this is that in simple polygons the Euclidean shortest path connecting two points is unique, whereas this is usually not the case for a minimum link path.

1.2. Notation and definitions

We use standard notation and definitions. We assume that a simple polygon P is given as a clockwise sequence of vertices with their respective coordinates. The symbol P is used to denote the region of the plane enclosed by P, and bd(P) denotes the boundary of P. If p and q are two points on bd(P), then the portion of bd(P) traversed clockwise from p to q is denoted by bd(p, q). Two points are said to be visible if the line segment joining them does not intersect the exterior of P (but might intersect bd(P)). The visibility polygon from a point x ∈ P consists of all points y ∈ P that are visible from x. A subpolygon of P is said to be a weak visibility polygon from a segment s if every point of the subpolygon is visible from some point of s. (Note that it is not necessarily the same point on s that sees all points of the subpolygon.)

Let SP(u, v) denote a Euclidean shortest path inside P from a point u to another point v. The shortest path tree of a polygon rooted at a vertex u, denoted by SPT(u), is a tree containing Euclidean shortest paths from u to all other vertices of P. A Euclidean shortest path between any two vertices of a polygon is a polygonal path whose turning points are vertices of the polygon. If the polygon is simple, there is a unique shortest path between any pair of vertices. For details on shortest path characteristics, see Chapter 15 in this Handbook [75], or [41,56] and [84].

Throughout this chapter we assume that the simple polygons considered are triangulated. If required, an arbitrary simple polygon can be triangulated in linear time using the algorithm of Chazelle [12].

1.3. Organization of this chapter

In this chapter, we survey various link distance problems set in simple polygons, rectilinear polygons, and polygonal regions with holes. Sections 2 and 3 deal with sequential and parallel algorithms, respectively, for link distance problems in Euclidean and rectilinear metric spaces. Section 4 deals with various applications and extensions of link distance problems.

2. Sequential algorithms for link distance problems

In this section, we survey sequential algorithms for link distance problems in simple polygons (with and without holes), in rectilinear polygons, and among rectilinear obstacles. In Section 2.1, we describe algorithms for computing minimum link paths from a fixed source, for computing the link diameter, the link radius and the link center, and for answering link distance and minimum link path queries in a simple polygon. In Section 2.2, we survey the literature on link path problems in polygonal domains in the presence of polygonal obstacles. In Section 2.3, we discuss algorithms for rectilinear link distance problems, including the computation of the rectilinear histogram partition, the rectilinear link diameter and the link center. In Section 2.4, we survey the literature on rectilinear link distance problems in rectilinear domains. Finally, in Section 2.5, we study robustness issues arising in link distance computations. (General robustness issues are discussed in Chapter 14 of this Handbook [75].)

2.1. Simple polygons

2.1.1. Link path problems for a fixed source. Let P be a simple polygon on n vertices and let s, a fixed source, be a point inside P. In this section we study the following three problems:
• Link Distance Query Problem: Given an arbitrary point t ∈ P, compute dL(s, t), i.e., the link distance between s and t.
• Minimum Link Path Problem: Given an arbitrary point t, compute a minimum link path between s and t.
• Single-Source Multiple-Destination Minimum Link Path Problem: Compute minimum link paths from s to all vertices v of P.

The problem of finding minimum link paths was first studied by ElGindy [30] and Suri [78]. An O(n) time algorithm is presented in [78] for the minimum link path problem. Ghosh [35] proposes an alternative linear time algorithm for solving the minimum link path problem. After an initial preprocessing of P, the link distance query problem can be solved in O(log n) time by the algorithms of ElGindy [30] and Reif and Storer [72]. Their preprocessing step takes Θ(n log n) time. Suri [78] presents an optimal O(n) algorithm for preprocessing P while maintaining O(log n) query time. Using the algorithm of ElGindy [30], one can solve the single-source multiple-destination problem in O(n log n) time; this solution was further improved by Suri [78] to linear time.

Suri [80,81,78] presents a general technique that can be used to solve a number of link path problems. The technique consists of a partitioning of the polygon, called a window partitioning, into regions of equal link distance from s. Next we survey Suri's algorithm [80,81,78] for computing such a window partition of P. The main idea in window partitioning a polygon P with respect to the point s ∈ P is to partition P into regions over which the link distance from s is equal. To achieve this, first compute the visibility polygon from s and call it V(s). Every point in V(s) is reachable from s by one link. Removing V(s) from P leaves the set of points that are at least two links away from s. Of these, the points at link distance two are those visible from some point in V(s). We define the windows of the visibility polygon V(s) of P from s as the boundary edges of V(s) that do not belong to the boundary of P. It is easy to see that the set of points of P at link distance two from s consists of exactly those points of P − V(s) that are (weakly) visible from some window edge of V(s). The procedure of computing visibility polygons from window edges is repeated until P is entirely covered (see Figure 1).

Fig. 1. A window partition. The numbers indicate the link distance from s to any point inside that region. W(s) is the corresponding window tree.
Such a visibility polygon from a window edge is referred to as a window-induced polygon or, simply, a window polygon. This procedure divides P into regions of equal link distance from s. The resulting partition of P is called a window partition of P with respect to s. The window tree W(s) denotes the planar dual of the window partition of P with respect to s, i.e., W(s) has a node for each region of the partition and an arc between two nodes whose regions share a window edge (see Figure 1).

Before proceeding further, we analyze the complexity of computing a window partition and a window tree. In order to compute a window partition, we need to invoke visibility computations inside P from the point s and weak visibility polygon computations from several window edges in P. Several O(n) time algorithms are known for computing a visibility polygon from a vertex (see ElGindy and Avis [31] and Lee [57]) and for computing a weak visibility polygon from a segment (see Guibas et al. [41]) inside a simple polygon (see Chapter 19 in this Handbook [75]). A brute force approach requires computing a weak visibility polygon from each of the O(n) window edges (in the worst case). Thus the worst case complexity of this approach is O(n²). Suri [78] presents a more efficient method for computing all visibility polygons from the window edges. In his approach, only those triangles of the given triangulation that (at least partially) belong to the visibility polygon are processed. In particular, if e is a window edge, then the visibility polygon of e, V(e), is computed in O(k_e) time, where k_e is the number of triangles of the given triangulation that have nonempty intersection with V(e). His algorithm is a modification of the algorithm of Guibas et al. [41] for computing shortest paths. Using Suri's algorithm, it is easy to see that the window partition is computed in linear time, since each triangle of the given triangulation can intersect at most three regions of the window partition. The algorithm of Suri, while computing visibility information, maintains the adjacency information, and thus the window tree can be computed in additional O(n) time from the window partition. The above result is summarized in the following theorem.

THEOREM 2.1 (Suri [78]). Let P be a simple polygon on n vertices. Let s be a point in P. The window partition and the window tree of P with respect to s can be computed in optimal time and space Θ(n).
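
The window-partition computation can be organized as a breadth-first search over window edges. In the hedged sketch below, the visibility subroutines (visibility polygon from a point; weak visibility polygon of a window edge, restricted to the yet-uncovered side) and the extraction of window edges are assumed to be supplied as parameters by a geometric library, as in the algorithms cited above; only the bookkeeping that produces the partition and the window tree is shown.

```python
from collections import deque

def window_partition(polygon, s, vis_from_point, vis_from_edge, window_edges):
    """Assumed primitives (not implemented here):
      vis_from_point(P, s)         -> visibility polygon V(s) inside P
      vis_from_edge(P, region, e)  -> weak visibility polygon of window edge
                                      e, clipped to the uncovered side of e
      window_edges(P, region)      -> boundary edges of region not on bd(P)
    Returns (regions, parent): regions[0] = V(s), and parent[i] is the index
    of the region whose window edge generated regions[i]; the link distance
    from s to any point of regions[i] is the depth of i in this tree plus 1."""
    regions = [vis_from_point(polygon, s)]
    parent = [None]
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for e in window_edges(polygon, regions[i]):
            regions.append(vis_from_edge(polygon, regions[i], e))
            parent.append(i)
            queue.append(len(regions) - 1)
    return regions, parent
```
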

Before we describe the solution to the three problems mentioned in this section, we need the following result on planar point location (see Chapter 10 in this Handbook [75]). A triangulated planar subdivision containing n triangles can be preprocessed in linear time by the algorithm of Kirkpatrick [51] to obtain a data structure such that, given a query point t, the unique triangle of the subdivision containing t can be located in O(log n) time.

Using the window partition and the window tree data structures, the three problems formulated at the beginning of this section can be solved as follows. Let s be a fixed source point inside P.

(i) Link distance query: given a query point t ∈ P, determine the link distance between s and t. Find the region R(t) containing t in O(log n) time by applying the planar point-location algorithm. Using the window tree data structure, compute the depth of the node corresponding to R(t) in additional O(1) time. Thus, the link distance query between s and t can be answered in O(log n) time.

(ii) Minimum link path: find a minimum link path between s and a query point t ∈ P. We first solve the link query problem with respect to t. Once we know which region, say R(t), of the window partition contains t, we determine the window edge e of that region and the segment q of e that is visible from t. Using the window tree data structure, the link path between a point of q and s can be computed in O(k_t) time, where k_t is the depth of the node corresponding to the region R(t). Hence the link path between s and t can be computed in O(log n + k_t) time.

(iii) Single-source multiple-destination problem: compute the link paths between s and all vertices of P. Compute the window partition from s and, additionally, for each vertex v in P, store a pointer to the window polygon containing v. This implicitly determines the minimum link paths as well.

These results are summarized in the following theorem.

THEOREM 2.2 (Suri [81]). Let P be a simple polygon on n vertices and s be a point in P. P can be preprocessed in linear time so that: (1) Given a query point t, the link distance between s and t can be computed in O(log n) time. (2) A minimum link path between s and a query point t can be computed in O(k + log n) time, where k is the link distance between s and t. (3) Minimum link paths from s to all vertices of P can be constructed in time proportional to the total number of links in all the minimum link paths.
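
On top of the regions and parent pointers produced by the previous sketch, the three query procedures reduce to one point location followed by table lookups and a walk up the tree. In the hypothetical interface below, `locate` stands in for Kirkpatrick-style planar point location on the partition, and `first_visible_point` stands in for the geometric step that picks, on the window edge of a region, a point visible from the current point; both are our own assumptions, not published interfaces.

```python
def region_depths(parent):
    """Precompute the link distance from s for every region; in the BFS
    order produced above, a parent always precedes its children."""
    depth = [0] * len(parent)
    for i, p in enumerate(parent):
        depth[i] = 1 if p is None else depth[p] + 1
    return depth

def link_distance_query(depth, locate, t):
    # (i) O(log n): locate t's region, then read off the stored depth.
    return depth[locate(t)]

def minimum_link_path(parent, locate, first_visible_point, s, t):
    # (ii) Walk up the window tree, collecting one turning point per region;
    # the final link joins the last turning point to s inside V(s).
    i = locate(t)
    turning_points = [t]
    while parent[i] is not None:
        turning_points.append(first_visible_point(i, turning_points[-1]))
        i = parent[i]
    turning_points.append(s)
    return turning_points[::-1]
```
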


Ghosh [35,36] proposes an alternate algorithm to solve the minimum link path problem. His algorithm is based on the notion of visibility from a convex set of points inside a simple polygon. This is another generalization of point visibility inside simple polygons. Ghosh's approach has been the basis for the parallel link path algorithm of Chandru et al. [10]; it is sketched in Section 3.1.

2.1.2. Link diameter, link radius and link center. In this section we survey the literature on link diameter, link radius and link center algorithms.

The link diameter of P is defined as the maximum of the link distances between any two points of P. It can be shown that the link diameter is realized by a pair of vertices of P. This leads to a simple O(n²) time algorithm for computing the link diameter, by computing the link distances, in linear time, from each vertex to all vertices. Suri [81] presents a simple algorithm to compute an approximate link diameter which differs from the actual link diameter by at most 2. An approximate link diameter is computed by computing the furthest neighbor for each node in the window tree. A furthest neighbor of a node x in a tree is a node y whose tree distance from x is maximum over all nodes of the tree. Let a and b be two points in P, belonging to the two nodes w1 and w2, respectively, of the window tree. Let dL(a, b) be the link distance between a and b and let g(w1, w2) denote the tree distance between w1 and w2. It can be shown that g(w1, w2) − 1 ≤ dL(a, b) ≤ g(w1, w2) + 3, and this results in an approximation of the link distance. This method can also be used to determine approximately the link distance between any two query points as follows: locate the nodes of the window tree containing the query points by performing point location in the regions which correspond to the nodes of the tree, and then compute the tree distance between them.

THEOREM 2.3 (Suri [81]). An approximate link diameter of a simple polygon can be computed in linear time, and it differs from the exact link diameter by at most two.
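
Since the window tree is an ordinary unweighted tree, the maximum tree distance g(w1, w2) over all pairs of nodes, i.e., its diameter, can be computed exactly by two breadth-first searches, and by the inequality above this pins the link diameter down to within a small additive constant. A minimal sketch, reusing the parent array of the earlier window-partition code:

```python
from collections import deque

def window_tree_diameter(parent):
    """Exact diameter of the (unweighted) window tree by double BFS."""
    children = [[] for _ in parent]
    for v, p in enumerate(parent):
        if p is not None:
            children[p].append(v)

    def farthest_from(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            v = queue.popleft()
            neighbours = children[v] + ([parent[v]] if parent[v] is not None else [])
            for w in neighbours:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        far = max(dist, key=dist.get)
        return far, dist[far]

    u, _ = farthest_from(0)         # farthest node from the root ...
    _, diameter = farthest_from(u)  # ... and the farthest node from it
    return diameter
```
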

The link diameter problem is an example of a number of link distance problems for which it is often much easier, and usually more efficient, to provide an approximate solution than an exact one. Computing the exact link diameter takes O(n log n) time, using a fairly complex algorithm developed by Suri [80].

THEOREM 2.4 (Suri [81]). The link diameter of a simple polygon can be computed in O(n log n) time.

It remains an open problem to determine whether the link diameter can be computed more efficiently, or whether Ω(n log n) time is a lower bound for that problem.

Now we turn our attention to the computation of the link radius and the link center. Suppose that, conceptually, window trees are computed from each point in P. Then there is at least one point for which the depth of its window tree is minimum over all depths. This depth is called the link radius of P. It is interesting to note that, while there are direct algorithms to compute the link diameter, the link radius determination is carried out indirectly, as will be described below. The relationship between link diameter and link radius is studied by Lenhart, Pollack, Sack, Seidel, Sharir, Suri, Toussaint, Whitesides and Yap [58]:

LEMMA 2.1 (Lenhart et al. [58]). The link radius R and the link diameter D of a simple polygon are related as ⌈D/2⌉ ≤ R ≤ ⌈D/2⌉ + 1.

In general, there is not only one point whose window tree depth equals the link radius but an entire region; this region is known as the link center. The link center is connected and has the following convexity property: the link path of shortest Euclidean length between any two points in the link center lies in it (as does the Euclidean shortest path). This is not difficult to see if we look at the construction of the link center and its underlying properties discussed below. After a brief discussion of the motivation, we will first discuss an O(n²) time algorithm to compute the link center and link radius, and then sketch an O(n log n) time algorithm for these problems.

The study of the link center problem is not only algorithmically interesting and used to solve other geometric problems, e.g., to compute the link radius; it is also motivated by the following potential applications. It could be applied to locate a transmitter so that the maximum number of retransmissions needed to reach any point in a polygonal region is minimized. It could also be used to choose the best location of a mobile unit, minimizing the maximum number of turns required to reach any point inside a polygonal region. The link radius gives the maximum number of retransmissions or turns required between any pair of points.

To compute the link center, rather than computing window trees from each point in P, it suffices to compute window trees from vertices of P. More precisely, it suffices to compute window partitions from convex vertices v of P to a depth of R in the associated window tree. Let us call the polygon arising from computing a window tree for v to depth d the d-neighborhood polygon of v. As Lenhart et al. [58] show, the link center is the intersection of all R-neighborhood polygons of convex vertices of P. Based on this fact, they show that the link center can be computed in O(n²) time. Note that their algorithm (and the link center algorithm discussed below) is based on exact knowledge of the link radius. As there are only two choices for the link radius, given the link diameter, the link center algorithm can first be executed with the smaller value. If that fails, it is rerun with the larger value. Here, failing means that the computed intersection is empty (the link center is, by definition, non-empty). Therefore, as a side-effect, one obtains an O(n²) algorithm to compute the link radius.

THEOREM 2.5 (Lenhart et al. [58]). The link center and link radius of a simple n-vertex polygon can be determined in O(n²) time.

Next we describe how to reduce the time complexities of the link center and link radius problems from O(n²) to O(n log n). Three challenges arise:
• The total number of vertices of the R-neighborhood polygons of all convex vertices may be quadratic in n.
• The actual computation of the O(n) neighborhood polygons must be carried out more efficiently than by computing them individually in Θ(n) time each.
• Finally, their intersection must be carried out efficiently.

We sketch the approach taken by Djidjev, Lingas and Sack [25]. (An alternate approach has been proposed independently by Ke [49].)

Firstly, a region is identified which contains the link center. This region is the 2-neighborhood polygon of a particular diagonal which is, in some sense, "central" to P. To define the notion of a central diagonal, consider that every diagonal induces two subpolygons of P, and denote by c_l, c_r the maximum link distances to any point in the induced subpolygons. The quantities c_l, c_r are called covering radii. A diagonal of P is called central if it minimizes (the absolute value of) the difference between c_l and c_r among all diagonals of a given triangulation of P. The following property has been derived for central diagonals:

LEMMA 2.2 (Djidjev, Lingas and Sack [25]). In any simple polygon with link radius R, a central diagonal exists for which both covering radii, c_l and c_r, are at least R − 1. Furthermore, such a diagonal can be found in O(n log n) time.

Before we describe the method for finding a central diagonal, we motivate this concept. For this, observe that convex vertices that are too close to the link center are irrelevant, as their R-neighborhood polygons would equal P and they therefore do not contribute to the link center when intersected. "Too close" is quantifiable by using the link distance from the central diagonal, which can be computed using Suri's window tree algorithm. We thus assume that irrelevant convex vertices are omitted from further consideration. Now, consider any j-neighborhood polygon of a (fixed) convex vertex of P, for some j, 1 ≤ j ≤ R − 2. In general, this polygon contains several window edges in P and therefore defines an equal number of window polygons. Only one of those window polygons contains the link center as well as the central diagonal. Consequently, given knowledge of the location of a central diagonal, one can identify the unique window polygon which is relevant for the link center computation and ignore all others.

A procedure for finding a central diagonal is based on the fact that an edge of the triangulation exists which splits the polygon into two pieces, each containing at least 1/3, but no more than roughly 2/3, of the vertices. By using Suri's linear-time window tree algorithm recursively at most log n times, the above bound is easily achievable. Having computed the window tree for P from the central diagonal, one can perform a post-order traversal of the tree and use the post-order numbering to order the triangles of the triangulation of P. We now use this ordering to compute all j-neighborhood polygons iteratively, for all relevant convex vertices of P. The reasons for choosing this order include: (1) to eliminate redundant computation, and (2) to make the computation of j-neighborhood polygons more efficient when (j − 1)-neighborhood polygons have already been constructed. We justify these reasons as follows:

1. Assume that, at some stage of the algorithm, several window edges of different (j − 1)-neighborhood polygons are incident to a common reflex vertex and that all induced window polygons are relevant. As the link center is the intersection of R-neighborhood polygons, at most one window polygon (the one that is contained in all others) can finally contribute to the intersection; only the window edge corresponding to that window polygon will be kept. This window edge is easily found, as it is angularly extremal. (Note that inside the 2-visibility polygon from the central diagonal, two window edges bounding the link center may be incident to a common vertex.)

2. Assume now that, at some stage of the algorithm, a triangle is examined which contains a reflex vertex, say r, of P. When the triangle is examined, it induces at most three subpolygons of P: one of which contains the central diagonal; the other(s) have been processed inductively and, in particular, the (j − 1)-neighborhood polygons have been constructed for all contained (relevant) convex vertices. For the reflex vertex r to be incident to one of the j-neighborhood polygons, it must see at least one relevant window edge of a (j − 1)-neighborhood polygon. To compute the corresponding window edge (incident to r), a ray-shooting technique is developed which calculates extremal shots to all window edges of (j − 1)-neighborhood polygons and then back-extends the angularly extremal shot.

The final iteration of the algorithm produces a set of reduced R-neighborhood polygons. These are intersected efficiently to produce the link center. The analysis of the entire, fairly complex, algorithm reveals that all steps can be carried out in O(n log n) time, which leads to the following theorem:

THEOREM 2.6 (Djidjev et al. [25]). The link center and link radius of a simple n-vertex polygon can be determined in O(n log n) time.
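
One ingredient of the central-diagonal search described above is repeated balanced splitting, which rests on the fact that the dual tree of a triangulation (a tree of maximum degree three) always contains an edge whose removal leaves roughly a constant fraction of the triangles on each side. The sketch below merely returns the most balanced dual edge; translating a dual edge back into the corresponding polygon diagonal is assumed to happen elsewhere.

```python
def most_balanced_edge(tree, root=0):
    """tree: dict mapping each node (triangle) to its list of neighbours.
    Returns ((u, v), smaller_side_size) for the edge whose removal splits
    the tree most evenly."""
    n = len(tree)
    parent = {root: None}
    order = []
    stack = [root]
    while stack:                     # iterative DFS; children follow parents
        v = stack.pop()
        order.append(v)
        for w in tree[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)
    size = {}
    for v in reversed(order):        # subtree sizes, bottom-up
        size[v] = 1 + sum(size[w] for w in tree[v] if w != parent[v])
    best, best_balance = None, -1
    for v in tree:
        if parent[v] is not None:
            balance = min(size[v], n - size[v])
            if balance > best_balance:
                best, best_balance = (parent[v], v), balance
    return best, best_balance

# On a path of five triangles 0-1-2-3-4, a middle edge is returned:
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(most_balanced_edge(path))  # ((1, 2), 2)
```
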

2.1.3. Arbitrary link distance queries. In this section we discuss upper and lower bounds for the following link distance query problem: given two query points p, q inside a simple preprocessed polygon P, report their link distance.

Before we discuss algorithms for this query problem, we present a lower bound, established by Arkin, Mitchell and Suri [7]. They show that binary search on n sorted real numbers, which has an Ω(log n) lower bound, can be reduced to the link distance query problem in O(1) time as follows. Let 0 < x1 < x2 < ··· < xn be the real numbers serving as input to the binary search problem. Compute a simple zigzag-shaped polygon on O(n) vertices that encloses the points (0, 0), (x1, 0), ..., (xn, 0), but that has a "zigzag" just before each of the points (xi, 0). Now, if x* > 0 is an input to the binary search problem, then the link distance query from (0, 0) to the point (x*, 0) lets us compute the index i such that xi ≤ x* < xi+1. This leads to the following theorem.

THEOREM 2.7 (Arkin et al. [7]). In the algebraic decision model of computation, Ω(log n) comparisons are necessary to compute the link distance between a fixed source point and a query point inside a (preprocessed) simple polygon on n vertices.

Now to the upper bound. Arkin et al. [7] also present a data structure for answering link path queries. The cost of preprocessing is O(n³) time and space, and link path queries can be answered in O(log n + k) time, where k is the number of links. The data structure can also be used to answer link path queries between a pair of segments in P, between a pair of convex polygons inside P, and between a pair of simple polygons. Moreover, approximation algorithms for all of these problems have been presented, for which the preprocessing requires only O(n²) time and space, and approximate link path queries can be answered within O(log n + k) time.

In order to answer link path queries between two query points s and t, first an implicit representation of the shortest path between s and t is computed using the data structure for shortest path queries developed by Guibas and Hershberger [39]. Next, extract the first polygon vertex, say v, and the last one, say w, on the shortest path. If neither v nor w exists, then the link path between s and t is a single segment and thus its link distance is one. If v = w, then the link distance between s and t is two. Assume now that v ≠ w. Determine whether there is an inflection edge (or eave) on the shortest path between s and t (see [35]). If there exists an inflection edge, say ab, on the shortest path, then the link path between s and t can be computed from the link paths between the endpoints of the inflection edge and s and t, respectively. Suppose that there is no inflection edge on the shortest path. In this case, the link path between s and t is convex and can be computed using certain bilinear functions (as described in [2,10]). The result is summarized in the following theorem:

THEOREM 2.8 (Arkin et al. [7]). A simple n-vertex polygon can be preprocessed in O(n³) time and space, so that link path queries between any pair of points can be answered in O(log n + k) time, where k is the link distance.
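
The case analysis behind Theorem 2.8 can be written down as a short skeleton. In the sketch below, the heavy machinery, i.e., the Guibas-Hershberger shortest-path oracle, the inflection-edge (eave) test, and the convex-chain solver of [2,10], appears only as stubbed parameters, and the recursive join at an eave is simplified (the actual algorithm merges links across the eave to guarantee minimality).

```python
def link_path_query(sp_query, inflection_edge, convex_link_path, s, t):
    """sp_query(s, t) is assumed to return the Euclidean shortest path as a
    point list [s, v1, ..., vk, t] (polygon vertices in between);
    inflection_edge(path) returns an eave (a, b) on the path or None;
    convex_link_path(path, s, t) handles the inflection-free convex case."""
    path = sp_query(s, t)
    turns = path[1:-1]                 # polygon vertices on the path
    if not turns:
        return [s, t]                  # s sees t: one link suffices
    v, w = turns[0], turns[-1]
    if v == w:
        return [s, v, t]               # one turning vertex: two links
    eave = inflection_edge(path)
    if eave is not None:
        a, b = eave
        # Recurse on the two sides of the eave and join them along it.
        # (Simplified: the actual algorithm merges links across the eave.)
        left = link_path_query(sp_query, inflection_edge, convex_link_path, s, a)
        right = link_path_query(sp_query, inflection_edge, convex_link_path, b, t)
        return left + right
    # No eave: the shortest path is convex; use the machinery of [2,10].
    return convex_link_path(path, s, t)
```
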

THEOREM
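The case dispatch just described can be summarized in a few lines. This is only a schematic of the query logic: the shortest-path primitives are passed in as assumed callables (to be supplied by the Guibas–Hershberger machinery [39]), and the way the two subpaths around an eave are combined is a simplification of the bookkeeping in [7].

```python
# Schematic dispatch for a link distance query; all geometric subroutines are
# assumed callables, and the eave combination below is a simplified placeholder.

def link_distance_query(s, t, first_vertex, last_vertex,
                        find_inflection_edge, link_distance, convex_case):
    v, w = first_vertex(s, t), last_vertex(s, t)
    if v is None and w is None:     # s sees t: the path is a single segment
        return 1
    if v == w:                      # exactly one corner on the geodesic
        return 2
    eave = find_inflection_edge(s, t)
    if eave is not None:            # split at the eave ab; the eave itself
        a, b = eave                 # contributes one link of the answer
        return link_distance(s, a) + 1 + link_distance(b, t)
    return convex_case(s, t)        # convex geodesic: bilinear functions [2,10]
```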


Arkin et al. [7] show that approximate link distance queries, with error at most 1, can be answered by using only O(n²) preprocessing cost. Arkin et al. extend the above data structure to report link distances between two simple polygons contained inside a polygon with n vertices. Preprocessing requires O(n³) time, and link distance queries can be answered in O(k log² n) time, where k is the total number of vertices of the two contained polygons. Chiang and Tamassia [13] show how to answer link path queries between two convex polygons inside a simple polygon. Their algorithm requires O(n³) preprocessing time, and queries can be answered in O(log k + log n) time, where k is the total number of vertices of the convex polygons.

2.2. Link paths for polygons with holes

So far we have considered link distance problems in simple polygons, i.e., polygons without holes. In this subsection we focus on polygons that may have multiple holes. Formally, a subset of the plane whose boundary is a union of finitely many line segments or half rays will be called a polygon with holes ([66]). In particular, this definition allows unbounded (outer) polygons. Holes, including the outer polygon, if any, are also called obstacles. A link path should not intersect any of the obstacle edges, but is allowed to touch them. The link path problem for polygons with holes has applications in path planning for a point robot, where a collision-free minimum link path has to be computed among obstacles in the plane. Link distance problems for polygons with holes seem more difficult than the same problems for simple polygons, and the algorithms from the previous sections do not immediately generalize. The main difficulty in applying the window partitioning technique is that the visibility polygon from an edge, which might have to be computed from any window edge and for any iteration, has a very large worst-case size. Figure 2 illustrates a polygon on n vertices with 2k = Θ(n) identical holes arranged in two rows of k holes each, parallel to a window edge e. There are k − 1 gaps between holes in the first row and k − 1 gaps in the second row. When e illuminates the obstacles, each pair of gaps produces a "ray"; there are (k − 1)² such rays and Ω(k²) intersections. Thus, the size of the visibility polygon from e is Ω(n²). Other properties of simple polygons used in the previous algorithms are that (1) the dual graph of a triangulation is a tree and (2) there is a unique path between each pair of vertices in a tree. That fact considerably restricts (and thus speeds up) the search for a shortest link path. For a triangulation of a general polygon with holes, in contrast, there might be multiple paths in the dual graph between a pair of vertices; in fact, the number of paths can be exponential in the size of the polygon. Mitchell, Rote, and Woeginger [66] describe an algorithm for computing a shortest link path between a pair of vertices s and t that does not explicitly compute the entire window partition and works in almost quadratic worst-case time. The main ideas of their algorithm are the following:


Fig. 2. Rays originating from a window edge e and passing through one of the gaps.

(i) At iteration k, a set Vis_k of points that are at link distance k from the source s is computed, starting with Vis_0 = {s}. Vis_1 is equal to the visibility polygon from s (as in the simple-polygon algorithm). At the k-th iteration, for k > 1, however, only visibility regions from "relevant" window edges of Vis_k are considered. Specifically, in case P \ Vis_k consists of several disconnected subregions (called cells), only the one that contains the target t is processed in the next iterations. For instance, up to Ω(n²) cells of the visibility region might not be adjacent to any obstacle point (point of P), see Figure 2. By this modification, all but O(n) window edges of Vis_k can be ignored when computing Vis_{k+1}. We denote by R_k the relevant extension of Vis_k. (ii) We need a method for constructing R_k without explicitly computing the edges of Vis_k, whose number can be Ω(n²). A more compact representation of the visibility region from an edge can be computed by adapting the technique of Suri and O'Rourke [79]. Their idea is to represent such a region as a union of O(n²) triangles, where the triangles can be constructed by a rotational line sweep around each obstacle vertex visible from the edge. (iii) Each region R_k can be further simplified by expanding it to its relative convex hull with respect to the obstacles (holes) of P that are not in R_k. The relative convex hull of a subpolygon Q of P with respect to a set S of holes is defined as the minimum-length simple polygon enclosing Q and disjoint from any hole in S. Mitchell et al. [66] prove the following properties of the relative convex hull.

LEMMA 2.3. A point from the cell of P \ R_k that contains t is weakly visible from the boundary of Vis_k if and only if it is weakly visible from the boundary of R_k.

LEMMA 2.4. The total number of edges of all polygons R_k is O(n).

(iv) To find the cell C_k containing t, the algorithm of Edelsbrunner, Guibas and Sharir [27] can be used. That algorithm finds a single face in an arrangement of s lines, without computing the entire arrangement, in O(s α(s) log² s) time, where α(s) is the inverse of the Ackermann function. Mitchell et al. [66] modify their algorithm so that, in addition, obstacle edges that belong to multiple cells C_k do not have to be recomputed every time. The entire algorithm works as follows: There are l iterations, where l is the link distance from s to t. We denote by P_0 the start vertex s and by P_{k−1}, for k > 1, a simple-polygon extension of Vis_{k−1} that contains all relevant visibility information needed for the k-th iteration.


Also, at the beginning of iteration k, we have a set of illumination edges on the boundary of P_{k−1} that are at link distance k − 1 from s and will be used for computing Vis_k. The procedure for constructing P_k from P_{k−1} consists of the following steps:
1. Compute Vis_k. Given P_{k−1}, describe Vis_k, the visibility polygon from the illumination edges, as a union of triangles by performing a rotational line sweep around obstacle vertices in P \ P_{k−1}.
2. Compute the cell containing t. Apply an adaptation of the algorithm of Edelsbrunner, Guibas and Sharir [27] for finding a single face in an arrangement of lines. Let C_k be the cell of P \ Vis_k containing t. Expand Vis_k by adding all other cells of P \ Vis_k except C_k, as well as any of the remaining obstacles only partially visible from the illumination edges. Let R_k be the resulting polygon.
3. Compute the relative convex hull of R_k with respect to the set of obstacles O that are not contained in the interior of R_k.
4. Find P_k. Since R_k is an extension of Vis_k, the cell of P \ R_k containing t will only be a subset of C_k and therefore need not be connected. Thus we need to find the cell C′_k in the complement of R_k containing t. Denote by P_k the complement of C′_k.

The performance of the algorithm is given in the following theorem:

THEOREM 2.9 (Mitchell et al. [66]). A minimum link path between two points in an arbitrary n-vertex polygon P (with holes) can be computed in time O(Eα(n) log² n) and space O(E), where E denotes the size of the visibility graph of P and α(n) is the inverse of the Ackermann function.

Note that the worst-case time complexity of the algorithm is O(n²α(n) log² n). It is an open problem to design a subquadratic algorithm for this problem. The best known lower bound is Ω(n log n) [66]. Mitchell, Rote and Woeginger extend in [66] their algorithm to construct the shortest path tree from a start vertex s in time O((E + ln)^{2/3} n^{2/3} l^{1/3} log^{O(1)} n + E log² n) and space O(E), where l denotes the length of a longest link path from s to a vertex of P. Note that in the worst case the time bound for the shortest path tree problem is Ω(n^{7/3}), which is worse than the worst-case time bound of O(n²α(n) log² n) for the single pair problem.

2.3. Rectilinear polygons

In this section we study link distance problems set in rectilinear polygons; in this context it is natural to use the rectilinear link distance measure. For the remainder of this section we therefore take link distance to mean rectilinear link distance. A rectilinear polygon¹ is one whose edges are all aligned with a pair of orthogonal coordinate axes, which, without loss of generality, we take to be horizontal and vertical. Rectilinear polygons are commonly used as approximations to arbitrary simple polygons, and they arise naturally in domains dominated by Cartesian coordinates, such as raster graphics, VLSI design, robotics, or architecture.


A rectilinear polygon is called trapezoided if both its vertical and horizontal visibility maps are given. Trapezoidation of a rectilinear polygon takes O(n) time. The following theorem of de Berg [23] is used as the main preprocessing step for the rectilinear link distance problems discussed. The theorem is an analogue of Chazelle's polygonal cutting theorem [11]; it is stated here for the rectilinear setting.

THEOREM 2.10 (de Berg [23]). Let P be a simple rectilinear polygon on n vertices, where each vertex is assigned a weight in {0, 1}, and let C(P) be the total weight of the vertices. There exists an axis-parallel cut segment, lying completely inside P, that cuts P into two polygons, each having weight at most (3/4)C(P). Moreover, this segment can be chosen such that it is incident upon at least one vertex, and it can be computed in O(n) time.

¹ Rectilinear polygons have also been called orthogonal polygons, isothetic polygons and rectanguloid polygons in the literature.

The proof given by de Berg [23] is simple, but requires case analysis; it is based on Chazelle's proof. The main idea is to determine first whether or not a vertical cut segment exists that satisfies the statement of the theorem. This is done by sweeping the polygon vertically. If one fails to find such a vertical cut segment, then de Berg shows that a horizontal segment exists that achieves the desired bounds. In the proof, one must address degeneracies, since several vertices can lie on the same horizontal or vertical segment. It turns out that it suffices to consider vertex-edge visible pairs, of which there are only O(n) in number. The required cut segment can then be computed in linear time. Next, we turn our attention to the problem of answering arbitrary rectilinear link distance queries in a (preprocessed) rectilinear simple polygon. The above theorem is used recursively to subdivide the polygon. If the source point, s, and the destination point, t, fall into different subpolygons induced by a cut segment, then the link path crosses this cut segment, say c. Suppose that we know the link distances d_s and d_t and minimum link paths from s to c and from t to c, respectively. Can we paste the two minimum link paths together to get a minimum link path from s to t? The answer is: it depends. It depends on how the paths approach c and which portions of c are reachable by link paths of distances d_s and d_t from s and t, respectively. Suppose that there exists a common point on c that can be reached at distances d_s and d_t from s and t, respectively, and that the paths are either both horizontal or both vertical. Then the paths can be pasted optimally together. If the orientations of the paths do not match, an additional turn is required at c if we perform a paste operation. If there is no common point on c, a link path (not necessarily a minimum one) would possibly turn upon reaching c from s, take an additional link traveling along c, and possibly turn again to join the path to t. Suboptimal paths may result from these paste operations. Consequently, more information is required about link paths from points to c. As de Berg shows, the set of points on c reachable from some point within a certain link distance forms an interval (i.e., a consecutive set of points on c). As can be seen from the above discussion, we require for each vertex "fast" and "slow" intervals on c, which are determined by how, i.e., horizontally or vertically, the cut segment is approached. Fast intervals are at minimum link distance; slow intervals need one more link. If we know the fast and slow intervals for s and t on the cut segment c, it is trivial to perform the paste operation, in constant time, by analyzing the cases arising from the combination of the two pairs of intervals.
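To make the paste step concrete, here is a deliberately simplified sketch (our own illustration, not de Berg's full case analysis [23]: it uses only the fast intervals and ignores degeneracies), showing how an adjustment term in {−1, 0, +1} arises.

```python
# Each side contributes its link distance to the cut segment c, a reachable
# interval on c, and the orientation of its final link; the pasted distance
# is d_s + d_t + delta with delta in {-1, 0, +1}.

def paste(ds, fast_s, orient_s, dt, fast_t, orient_t):
    """fast_* are (lo, hi) intervals on c; orient_* in {'H', 'V'}."""
    lo = max(fast_s[0], fast_t[0])
    hi = min(fast_s[1], fast_t[1])
    if lo <= hi:                       # the fast intervals share a point on c
        if orient_s == orient_t:       # final links are collinear and merge
            return ds + dt - 1
        return ds + dt                 # one extra turn at c
    return ds + dt + 1                 # one extra link traveling along c

print(paste(3, (0.0, 2.0), 'H', 4, (1.0, 5.0), 'H'))   # -> 6
```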


Keeping, for each vertex of P, slow and fast intervals on each cut segment would, however, be too expensive. To address this, de Berg develops the notion of a "next-vertex" for each vertex in P. Let v be some vertex of P at a link distance greater than two from a cut segment. Then there exists another vertex v′ such that any point on the cut segment can be reached optimally from v by going through v′. Vertex v′ is called the "next-vertex" for v. He shows that "next-vertices" for all vertices v can be computed in linear time. The following theorem summarizes the results for link distance queries between two arbitrary vertices:

THEOREM 2.11 (de Berg [23]). A data structure exists such that the rectilinear link distance between two query vertices of a rectilinear polygon P on n vertices can be computed in O(1) time, and a shortest path can be reported in O(1 + k) time, where k is the number of links in the path. The data structure can be constructed in O(n log n) time.

The above result is extended to arbitrary query points as follows: Using a planar point location algorithm, locate the appropriate subpolygons (namely the corresponding rectangles at the bottom level of the recursion) P′ and P″ of P containing s and t, respectively. If P′ = P″, then it is trivial to compute a minimum link path. Otherwise, using a lowest common ancestor computation on the underlying recursion tree, compute the subpolygon corresponding to the lowest common ancestor of the tree nodes corresponding to P′ and P″. For the query points s and t, locate the "next vertex" of s and t. Then, use the above result for answering link path queries between two arbitrary vertices. (We omit technical details on how to locate the "next vertex"; for this see [23].)

THEOREM 2.12 (de Berg [23]). A data structure exists such that the rectilinear link distance between two query points of a rectilinear polygon P on n vertices can be computed in O(log n) time, and a shortest path can be reported in O(log n + k) time, where k is the number of links in the path. The data structure can be computed in O(n log n) time and requires O(n log n) storage.

The cost of preprocessing can be reduced to linear, as shown independently by Lingas, Maheshwari and Sack [59] and by Schuierer [77]. (The algorithm of [59], discussed in Section 3, was developed to solve this problem optimally in parallel; as a side-effect, it also improves on de Berg's sequential preprocessing.)

2.3.1. Link diameter, link radius and link center. Using the above cutting theorem, de Berg [23] presents a simple divide-and-conquer algorithm to compute the rectilinear link diameter of a rectilinear polygon P. Denote the link diameter of P by diam(P). The main steps in the algorithm are:
1. If P is a rectangle then diam(P) = 2.
2. Compute a cut segment e of P that partitions P into two subpolygons P_1 and P_2 of roughly equal sizes.
3. Compute the diameters of P_1 and P_2, recursively.
4. Compute M = max{d_L(v, w) | v ∈ P_1, w ∈ P_2}.
5. Let diam(P) = max(diam(P_1), diam(P_2), M).


It is straightforward to see the correctness and complexity of this algorithm provided we know how to compute M. In [23] de Berg shows that M can be computed in linear time as follows: Let e be the cut segment of P, subdividing P into two subpolygons P_1 and P_2. Let d_1 = max{d_L(v, e) | v ∈ P_1} and d_2 = max{d_L(w, e) | w ∈ P_2}. It turns out that M = d_1 + d_2 + ε, where ε ∈ {+1, 0, −1}, depending upon how the link paths from vertices at distance d_1 in P_1 and from vertices at distance d_2 in P_2 can be combined. The result is summarized in the following theorem:

THEOREM 2.13 (de Berg [23]). The rectilinear link diameter of a simple rectilinear polygon on n vertices can be computed in O(n log n) time.
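In outline, the recursion reads as follows. This is a schematic sketch only: the geometric primitives (is_rectangle, cut_segment, split, dist_to, and de Berg's adjustment eps) are passed in as assumed callables standing for the machinery described above.

```python
# Schematic of the divide-and-conquer link diameter computation; the helper
# callables are assumed, not implemented here (they encapsulate Theorem 2.10
# and the linear-time computation of d1, d2 and eps described above).

def link_diameter(P, is_rectangle, cut_segment, split, dist_to, eps):
    if is_rectangle(P):
        return 2                                   # step 1: base case
    e = cut_segment(P)                             # step 2: balanced cut
    P1, P2 = split(P, e)
    d1 = max(dist_to(v, e) for v in P1.vertices)   # farthest vertex from e in P1
    d2 = max(dist_to(w, e) for w in P2.vertices)
    M = d1 + d2 + eps(P1, P2, e)                   # eps in {-1, 0, +1}
    rec = lambda S: link_diameter(S, is_rectangle, cut_segment, split, dist_to, eps)
    return max(rec(P1), rec(P2), M)                # step 5
```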

Nilsson and Schuierer [70] present the first optimal linear-time algorithm for solving the rectilinear link diameter problem, by modifying the algorithm of de Berg [23]. They show that it is sufficient to recurse on one of the subpolygons. For the other subpolygon it can be shown that its diameter is either smaller than the link diameter of the polygon, or the diameter of the subpolygon can be computed explicitly, in linear time, without resorting to recursion. Their result is summarized in the following:

THEOREM 2.14 (Nilsson and Schuierer [70]). The rectilinear link diameter of a simple rectilinear polygon on n vertices can be computed in linear time.

In [23] de Berg also presents a linear-time approximation algorithm for the link diameter problem, where the path reported could be off by at most three links. Nilsson and Schuierer [69] present an optimal linear-time algorithm for computing the link center of a simple rectilinear polygon. Unlike in the case of a simple polygon, the rectilinear link center may be disconnected. Lemma 2.1 also holds in the rectilinear setting. The main steps of their algorithm are the same as those of Djidjev et al. [25]. We omit the details and summarize the result in the following:

THEOREM 2.15 (Nilsson and Schuierer [69]). The rectilinear link center of a simple rectilinear polygon on n vertices can be computed in linear time.

2.4. Rectilinear link paths among obstacles

In this section, we discuss link path problems among rectilinear obstacles. In this context it is natural to require link paths also to be rectilinear. Such problems can be studied in static and in query versions, i.e., without or with preprocessing. We first discuss the case in which the objects are rectangles, followed by the case of rectilinear polygons.

2.4.1. Link path queries among rectangles. De Rezende et al. [24] consider fixed-source query problems for the special case of rectangular, axis-parallel obstacles (also called isothetic rectangles). For a given source point s in the plane and a query point t, determine the link distance and report a minimum link path from s to t, where the path avoids the obstacles.


They design a simple algorithm which answers a link distance query in O(log n) time and reports the link path in additional time proportional to the number of links on the path, after spending O(n log n) preprocessing time. A key observation is that, in this setting, all shortest paths from s are monotone in (at least) one of the four rectilinear directions (i.e., the −x, +x, −y, or +y direction). Moreover, the plane can be subdivided into four connected rectilinear regions (depending on s and the set of obstacles), so that all paths in any of those regions are monotone with respect to a common rectilinear direction. This subdivision can be constructed in O(n log n) time [24]. Each region induced by the subdivision is further subdivided into O(n) (possibly semi-infinite) rectangular regions in O(n log n) time. The distances from s to all vertices of the subdivision are now easily computed by applying four plane sweeps (one for each of the four rectilinear directions), resulting in an O(n log n) time algorithm. Finally, the partition is preprocessed for point location in O(n log n) time [51]. This completes the discussion of the preprocessing phase. To answer a link distance query to a target point t, first the subregion containing t is determined by planar point location in O(log n) time. Then, in additional O(1) time, the distance to t is computed as a function of the distances from s to the vertices of the rectangle containing t. When the query asks for the path to be reported, the path can be produced in additional time proportional to its number of links.

THEOREM 2.16 (De Rezende, Lee and Wu [24]). In O(n log n) time a data structure can be constructed so that any rectilinear link distance query from a fixed point s to a query point t outside rectangular obstacles can be answered in O(log n) time. Minimum link path queries can be answered in O(log n + k) time, where k is the link distance between s and t.

2.4.2. Link path queries among rectilinear obstacles. Das and Narasimhan [20] study link distance problems between two points s, t in the free space defined by a set of rectilinear polygonal obstacles. They achieve the following results, where s is a fixed source point and t a query point:

THEOREM 2.17 (Das and Narasimhan [20]). (1) A minimum link rectilinear path between two points amidst rectilinear polygonal obstacles (of total size O(n)) can be computed in O(n log n) time and O(n) space. (2) In O(n log n) time and O(n) space a data structure can be constructed so that a minimum rectilinear link path from a fixed point s to a query point t outside the rectilinear obstacles can be reported in O(log n + k) time, where k is the link distance between s and t.

It is not difficult to show that Ω(n log n) is a lower bound for the problem of computing a minimum rectilinear link path between two points. To see this bound, consider the problem of sorting n distinct integers a_i, for 1 ≤ i ≤ n. Construct, for each a_i, an axis-parallel square of height and width a_i centered at (0, 0). Now create a small opening, say at the right-hand corner, and double the lines so as to create a polygon from the resulting polygonal chain. A set of rectilinear polygons is created that are nested inside each other, so that a_i < a_j iff the polygon constructed for a_j contains that for a_i.


Any minimum link path from (0, 0) to a point (B, 0), where B is greater than the largest integer, will visit each polygon in order of increasing values a_i and thus sort the input. The upper bounds for the static and the query version of the problem are obtained by a reduction to a graph-theoretic problem. The problem is stated as follows: consider a set of n orthogonal line segments; the intersection graph is defined so that each vertex corresponds to a segment and each edge corresponds to a pair of intersecting segments. While this graph can potentially have Ω(n²) edges, the authors show that a breadth-first search of all vertices (not visiting all edges) of the graph can be performed in O(n log n) time using O(n) space. This implies that the graph distance from a particular segment can be obtained in the same time. To see now the ideas behind the reduction, construct a generalized "trapezoidation" of the input, extending each polygonal edge of the input polygons until it intersects another polygonal edge. This defines an intersection graph. Note now that any rectilinear link path alternates between horizontal and vertical edges. So two subgraphs of the intersection graph are relevant: those defined by using either only the vertical edges as vertices together with their incident edges, or analogously the horizontal edges. Now, to find a minimum rectilinear link distance between two points, one performs two breadth-first searches: first in the vertical and then in the horizontal edge graph. The depth in the breadth-first search is used to determine the link distance and, since the path could start either horizontally or vertically, two searches are required. Constructing the actual path is somewhat more involved and requires the addition of pointers to the above breadth-first search. This yields the desired bounds, which improve the space bound of O(n log n) derived earlier by Imai and Asano [46] (while keeping the optimal time complexity). Link distance queries from a fixed source point s are obtained by considering s to be a point obstacle and using it in both the horizontal and vertical subgraphs of the intersection graph. In each subgraph, the regions induced by the partitions are labeled according to link distance from s. A planar point location algorithm can (as in the rectangular case) be used to achieve the desired query time and preprocessing costs.

2.5. Robustness issues in link distance computations

Kahan and Snoeyink study in [47] the bit complexity of link distance problems for a computational model with finite-precision arithmetic. They show that if a link path needs to be represented as a polygonal line, then Ω(n² log n) bits might be needed to store the exact coordinates of the vertices of that polygonal line (assuming the coordinates of each vertex of the original polygon are O(log n)-bit numbers). In the worst-case scenario, a shortest link path might consist of a sequence of alternating vertices from the original polygon and Steiner (non-polygon) points, where for the computation of the k-th Steiner point the coordinates of the (k − 1)-st are used (k > 1), leading to an accumulation of the error. Such a shortest link path will have a spiral-like shape, as depicted in Figure 3 [47]. In order to handle the problem, Kahan and Snoeyink study link paths of restricted type, in which each Steiner point is an intersection of two lines, each determined by a pair of original polygon vertices.
They show in [47] that a link path of that type exists between each pair of points s, t in a simple polygon, and that it is at most twice as long as the shortest link path between s and t. Moreover, such a path can be found in linear time.
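The point of the restriction is that such Steiner points never depend on previously computed Steiner points: each one is an intersection of two lines through original (small-coordinate) vertices, so its coordinates are small rationals and no error accumulates. The snippet below is our own minimal exact-arithmetic illustration of that observation, not code from [47].

```python
# Exact intersection of two lines, each through a pair of integer input points.
# Because the inputs are original polygon vertices, the rational coordinates
# of the intersection stay small -- unlike in the spiraling chains of Fig. 3.

from fractions import Fraction

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1p2 with line p3p4; returns None if parallel."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:
        return None                       # parallel or coincident lines
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    x = Fraction(a * (x3 - x4) - (x1 - x2) * b, den)
    y = Fraction(a * (y3 - y4) - (y1 - y2) * b, den)
    return (x, y)

print(line_intersection((0, 0), (4, 2), (0, 3), (3, 0)))
# -> (Fraction(2, 1), Fraction(1, 1))
```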

Fig. 3. A spiraling shortest link path.

They also consider link paths between s and t whose vertex positions are restricted to points of an N × N grid and describe an algorithm that produces such a path of length O(log N) times the link distance between s and t. The time complexity of that algorithm is O(n + l), where l is the length of the constructed path. That study indicates the need for more research in the area of error analysis and robustness for link distance problems. Better approximation factors seem possible for the restricted versions of the link distance problem discussed above. Note also that the lower bound on the bit complexity does not rule out the existence of exact subquadratic algorithms for link distance problems in the limited precision model. For instance, alternative descriptions of shortest link paths are possible, e.g., by the sequence of the polygon vertices on the shortest link path (omitting the Steiner points) or some other combinatorial representation. Another possible line of research is to appropriately adapt the existing link distance RAM algorithms so that the error accumulation problem is taken into account. Furthermore, implementation and experimental work will indicate how well the existing link distance RAM algorithms work on "typical" polygons.

3. Parallel algorithms for link distance problems

All parallel algorithms discussed in this section are expressed with respect to the Parallel Random Access Machine (PRAM) model of computation. More specifically, the results are for the concurrent read exclusive write (CREW-PRAM) and exclusive read exclusive write (EREW-PRAM) models of computation. See Karp and Ramachandran [48] for details on PRAM models. We also refer the reader to Chapters 4 and 18 of this Handbook [75] for a more general discussion of parallel computational geometry. First we present parallel algorithms for minimum link path problems and the link center problem inside simple polygons. Then we survey the literature on parallel algorithms for rectilinear link distance problems.


3.1. Simple polygons

Two problems set inside simple polygons are addressed in this section: computing a minimum link path and constructing the link center.

3.1.1. Minimum link path algorithms. First we sketch a parallel algorithm of Chandru et al. [10] that requires O(log n log log n) time and O(n) space using O(n / log log n) CREW-PRAM processors for the minimum link path problem, i.e., to compute a minimum link path between two points s and t inside a simple polygon P on n vertices. The complexity results are stated with respect to the CREW-PRAM model of computation. The time × processor product of this algorithm is within a polylogarithmic factor of the best known sequential algorithm for this problem. The main difficulty in designing a parallel algorithm for this problem is that the sequential algorithms for link path problems use "greedy" constructions as a partial strategy and appear to be inherently sequential in nature (e.g., the computation of the window partition by successively computing visibility polygons). Furthermore, some graph problems which can be solved sequentially by using the "greedy" construction as a strategy have been shown to be P-complete [6]. We now sketch the main ideas behind the parallel algorithm of Chandru et al. [10], which is based on the sequential method of Ghosh [35]. In order to obtain an effective parallelization, the "greedy construction" of link paths is parallelized. Fractional linear transforms were introduced by Aggarwal et al. [2] to resolve the optimality of greedy solutions to the minimum nested polygon problem. Chandru et al. use these fractional linear transforms to capture the combinatorial structure of the greedy paths, and to set up divide-and-conquer strategies that emulate the greedy methods. Let SP(s, t) = (s, u_1, ..., u_k, t) be the Euclidean shortest path inside P between s and t (see Figure 4). An edge u_i u_{i+1} of SP(s, t) is an eave or an inflection edge if u_{i−1} and u_{i+2} lie on opposite sides of the line passing through u_i and u_{i+1}. Ghosh [35] shows that there exists a minimum link path between s and t that contains all eaves of SP(s, t). This is done by transforming any link path L between s and t to another link path L′ that contains all the eaves in SP(s, t) and has at most as many links as L. This observation suggests the following simple strategy to compute a minimum link path:
1. Extend each eave at both ends to the boundary of P. The extensions of eaves decompose P into subpolygons so that the minimum link path restricted to any subpolygon contains no eaves.
2. If the extensions of two consecutive eaves intersect at a point z, then z is a turning point of the minimum link path containing the eaves. Otherwise, find a minimum link path connecting the extensions of every pair of consecutive eaves on SP(s, t) to form the minimum link path.
Consider one such subpolygon P_ij formed by extending two consecutive eaves u_i u_{i+1} and u_{j−1} u_j of SP(s, t), where the extensions of the eaves do not intersect. Let v_{i+1} and v_{j−1} be the extension points of the eaves u_i u_{i+1} and u_{j−1} u_j on bd(P), respectively. The boundary of P_ij (in clockwise order) consists of bd(v_{i+1}, v_{j−1}), the segment v_{j−1} u_{j−1}, SP(u_{i+1}, u_{j−1}) and the segment u_{i+1} v_{i+1}. The task is to compute a minimum link path L_ij between u_{i+1} v_{i+1} and u_{j−1} v_{j−1}. Observe that L_ij is a convex path.

Fig. 4. Computation of a minimum link path between s and t: the inner chain SP(s, t), the outer chain, and the left and right limiting edges.

have "left" and "right tangents" io SP{uij^\,Uj-\) lying completely inside Pij. This region can be computed using the shortest path trees rooted at w/+i and Uj-i. In the sequential setting, it is now a trivial matter to compute Lij. Let zi be the first turning point of L/y. If f/+i belongs to CVij then zi = u/+i. Otherwise, the next clockwise vertex of w/4-1 in CVij is zi. Draw the right tangent of zi to 5'P(w/+i, wj_i) and extend it to the boundary of CVij meeting it at a point Z2- Similarly, draw the right tangent of Z2 to SP(ui^\,Uj-\) and extend it to the boundary of CVij meeting it at a point Z3, and so on until a point Zr is found on Uj-\Vj-\. This gives the greedy path ziZ2, Z2Z3,..., Zr-iZr between M/+Ii;/+i and Uj-iVj-i.lt can be shown that the greedy path is also a minimum link path. Chandru et al. [10] propose the following scheme to compute the greedy path in parallel. Each of derived subpolygons, CVij, has the following structure: an inner convex chain and two limiting edges, called left and right limiting edges. The objective is to find a greedy (link) path connecting the limiting edges. Observe that the greedy path alternatively touches a vertex of the inner chain and an edge of the outer chain. Call this alternating sequence a link sequence; it captures the combinatorial structure of the greedy path. Assume that we can determine the link sequence for each point on the outer chain. Then the greedy path is easily computed by successively traversing the alternating sequence of vertices and edges. To determine link sequences note that around any point on the outer chain there is an interval (on the outer chain) of points with equal link sequences. Now observe that the link sequence of any two adjacent intervals is almost identical, but for either one of the


Now observe that the link sequences of any two adjacent intervals are almost identical, but for either one of the tangent vertices on the inner chain or one of the edges on the outer chain. Chandru et al. show that with each interval a unique bilinear function can be associated, such that for any point in the interval the last turning point of the greedy path can be computed. Moreover, they establish that it is sufficient to maintain a total of O(n log n) such intervals and that they can be computed in a divide-and-conquer fashion. Their result is summarized in the following theorem.

THEOREM 3.1 (Chandru et al. [10]). A minimum link path between two points in a simple polygon on n vertices can be computed in O(log n log log n) time and O(n) space using O(n / log log n) CREW PRAM processors.

3.1.2. Link center. Recall that the link center of a simple polygon P is the set of points x inside P such that the maximum link distance from x to any other point in P is minimized. Ghosh and Maheshwari [33] show that minimum link paths from a point to all vertices of a simple polygon P can be computed in O(log² n log log n) time using O(n) processors and that, applying this result and the approach of Lenhart et al. [58], the link center of P can be computed in O(log² n log log n) time using O(n²) processors. The results are summarized in the following:

THEOREM 3.2 (Ghosh and Maheshwari [33]). Minimum link paths from a given point to all vertices in a simple polygon on n vertices can be computed in O(log² n log log n) time using O(n) CREW PRAM processors. Moreover, the link center can be computed in O(log² n log log n) time using O(n²) CREW PRAM processors.

3.2. Rectilinear polygons

As we have seen throughout this survey, a fundamental tool for solving a number of link distance problems is the window partition developed by Suri and described in Section 2.1. Problems whose solutions are based on that tool include computing the link path, link center and central link segment, answering link distance queries, and, outside the area of link distance problems, constructing bounded Voronoi diagrams [54], for example. The analogue of the window partition in the context of rectilinear polygons is the rectilinear window partition or histogram partition introduced by Levcopoulos [17], who uses it in the design of approximation algorithms for optimal polygon decompositions. For the parallel setting, an optimal algorithm exists for computing a histogram partition from any segment inside a rectilinear polygon. The algorithm, presented in [59], is discussed in the next subsection. Given the range of applications of the window partition, it is likely that this result will find application in the parallelization of other known sequential algorithms. It has already been used, e.g., to compute the link diameter, to answer rectilinear link distance queries, and to solve other related problems, as discussed in the following subsections.

3.2.1. Rectilinear histogram partition. We discuss the optimal parallel algorithm of Lingas et al. [59] for computing a histogram partition in a trapezoided rectilinear polygon P.


Recall that a polygon is trapezoided if both of its horizontal and vertical visibility maps are provided (see [38] for a parallel algorithm for trapezoidation). By horizontal (or vertical) visibility maps we mean that each edge is extended (possibly to both sides) towards the polygon interior until the boundary of the polygon is reached. In the histogram partitioning of a rectilinear polygon P, we partition P with respect to a diagonal d in P into regions of equal link distance from d (see Figure 5). To familiarize the reader with histogram partitions, we describe a sequential method for their construction (analogous to the window tree construction). First compute the rectilinear visibility polygon from d in P, which is a histogram with base d, denoted by H(d). The histogram H(d) is the set of points which can be reached from d by one link. Remove the histogram H(d) from P; this partitions P into several subpolygons. Link distance two from d is realized by those points of P − H(d) which are visible from some boundary edge of H(d). So, for each boundary edge of H(d) which is not an edge of bd(P), i.e., for each window edge, compute the histogram in P − H(d). This procedure of partitioning P into histograms is repeated until P is completely covered and a partition of P into histograms is obtained. The above procedure for computing a histogram partition looks inherently sequential. A parallel algorithm must therefore use a different idea. We sketch the idea and omit most of the slightly involved technical issues arising from the parallelization. Assume we are given a histogram partition of a rectilinear polygon P from some diagonal d. This induces a set of Steiner points arising from the intersection of window edges and edges of P. For ease of discussion, let us add those Steiner points to the vertex set of P. Assume further that all distances to vertices from the diagonal d are available. (Recall that the link distance between a line segment d and a point p in P is defined as the minimum among all link distances between p and x, where x ∈ d.) The diagonal d splits the polygon into two subpolygons, P_1 and P_2. We restrict the discussion to one of the two subpolygons, say P_1. While all window edges are extensions of edges of P_1 to the boundary of P_1, not all such extensions are window edges for d. All extensions are available through the visibility maps. We need to identify those extensions that are part of the histogram partition, i.e., that are window edges, and construct the window edges in parallel, i.e., match each vertex to the corresponding Steiner point. Finally, we will have to construct the histogram partition as a planar subdivision and we will need to know the link distance for each region of the subdivision. Let P_1 = a_1, ..., b_1, where a_1, b_1 are the endpoints of d. Note that we keep the added Steiner points as vertices of P_1. Traverse P_1 (clockwise) starting at a_1 and suppose that we encounter a window edge w. Two facts can be stated:
1. The clockwise traversal of P_1, from the first endpoint of w to the second, traverses the entire subpolygon defined by w. If the window partition is computed recursively, it can be augmented by w to obtain a window partition for the entire subpolygon.
2. Observe that when a subpolygon is entered through a window edge, the link distance of the next vertex increases by one, and when it is exited, the link distance of the previous vertex decreases by one.
Based on these facts we perform a labeling of the vertices by brackets as follows:

(1) Assign an open bracket to the first endpoint of d and a close bracket to the second endpoint.
(2) Assign an open bracket to a vertex p_i if the link distance of p_{i+1} is greater than that of p_i.
(3) Assign a close bracket to a vertex p_i if the link distance of p_{i−1} is greater than that of p_i.
Note that vertices for which the link distance to the predecessor (or successor) is equal are not assigned a bracket. An example of a well-formed bracket sequence determining a window partition is given in Figure 5.

Fig. 5. Histogram partition of a rectilinear polygon from a diagonal d. The numbers denote the rectilinear link distance from d. The well-formed bracket sequence determining the histogram partition of the subpolygon above d is: (((())(())())(((())))).

We can now state the key lemma, whose correctness follows by induction:

LEMMA 3.1 (Lingas et al. [59]). Let P be a rectilinear polygon and d a diagonal of P. Then a clockwise traversal of each of the two subpolygons induced by d, combined with the above labeling, produces a well-formed bracket sequence. Moreover, a matching bracket pair corresponds to the base of a histogram of P from d, and vice versa.

This correspondence allows us to compute histogram partitions by using the matching parenthesis algorithm of [8] or [18].
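The labeling and matching are easy to express in code. Below is a runnable toy version (our own rendering: it works on a list of precomputed link-distance labels rather than on the polygon itself, and a vertex that is both exited and entered receives a close bracket followed by an open bracket).

```python
# Bracket labeling (rules (1)-(3) above) and parenthesis matching; matching
# pairs delimit the bases of the histograms in the partition.

def bracket_sequence(dist):
    """dist: link distances of one subpolygon's boundary vertices, clockwise,
    with the endpoints of the diagonal d first and last."""
    seq = [(0, '(')]                                # rule (1): first endpoint
    for i in range(1, len(dist) - 1):
        if dist[i - 1] > dist[i]:
            seq.append((i, ')'))                    # rule (3): leaving a region
        if dist[i + 1] > dist[i]:
            seq.append((i, '('))                    # rule (2): entering a region
    seq.append((len(dist) - 1, ')'))                # rule (1): second endpoint
    return seq

def histogram_bases(seq):
    stack, bases = [], []
    for i, c in seq:                                # match parentheses
        if c == '(':
            stack.append(i)
        else:
            bases.append((stack.pop(), i))          # one histogram base per pair
    return bases

dist = [0, 1, 2, 1, 2, 2, 1, 0]                     # toy link-distance labels
seq = bracket_sequence(dist)
print(''.join(c for _, c in seq))                   # -> (()())
print(histogram_bases(seq))                         # -> [(1, 3), (3, 6), (0, 7)]
```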


THEOREM 3.3 (Lingas et al. [59]). A histogram (or window) partition of an n-vertex trapezoided rectilinear polygon with respect to a diagonal can be computed in optimal O(log n) time using O(n/log n) EREW PRAM processors.

What remains is to compute the link distance from d to all vertices. The algorithm to compute the link distance to each vertex of P from d is fairly complex. It uses several tools which have been developed in parallel computing, such as lowest common ancestor computation, tree traversals and tree contraction methods. The main idea of the algorithm is as follows: Define two types of (rectilinear) diagonals in P, called minimal and maximal diagonals. A minimal diagonal is one which does not contain any other diagonal, and a maximal diagonal is one which is not contained in any other diagonal (see Figure 6). Let d be a minimal horizontal diagonal that splits the input polygon into two subpolygons. Consider the bottom subpolygon, say P_1. We wish to compute the link distances from d to all vertices of P_1. First compute the link distances from d to all maximal horizontal diagonals of P_1 as follows. Compute the dual of the horizontal trapezoidation, which is a tree T, and root T at the trapezoid incident to d. Using a lowest common ancestor computation, for each maximal diagonal u, compute the furthest ancestor f(u) in T that is (rectilinearly) visible from it. Construct another tree T′ on the maximal diagonals, where for each maximal diagonal u its parent is f(u) if it is defined; otherwise, the parent of u is d. In T′, let the distance of any maximal diagonal u to the root be d(u). For each maximal diagonal u in T′ define its link distance md(u) to the root diagonal d as follows: if f(u) is defined then md(u) := 2d(u) − 1; otherwise set md(u) = 1. Now the distance from each maximal horizontal diagonal to d is known. The link distance from any vertex v to d can then be computed by an argument based upon that information and upon whether a minimum link path from v to d is obtained by starting vertically or horizontally from v. The result is summarized in the following:

THEOREM 3.4 (Lingas et al. [59]). Let d be a (horizontal/vertical) diagonal of a trapezoided rectilinear polygon P. The link distances from d to all vertices of P can be computed in optimal O(log n) time using O(n/log n) EREW PRAM processors.

3.2.2. Applications of histogram partitions. Using the histogram partition of the polygon, a number of problems have already been solved. We list the main results. First we discuss the single-source multiple-destination problem (defined in Section 2), i.e., the problem of computing the link distances from a point to all vertices of a rectilinear polygon P. From a given point p in P we first shoot in the four rectilinear directions towards the boundary of P, thereby computing the horizontal and vertical chords containing p. Using Theorem 3.4, we compute the link distances from the horizontal and the vertical chord containing p to all vertices of P. If a user is satisfied with an approximation of the link distances from p, we are done. Otherwise, observe that the correct distance from p to a vertex v can differ by at most one from the value obtained for v to one of the chords. The determination of the exact value is harder. We need to compute the intervals for v on the chord(s). Given that information and the corresponding value for the link distance, the problem is solved.

Fig. 6. Computation of rectilinear link distance from d: the dual tree T, with maximal and minimal diagonals.

THEOREM 3.5 (Lingas et al. [59]). Let p be a point (or a rectilinear line segment) in a trapezoided rectilinear polygon P. The link distance from p to all vertices of P can be computed in optimal O(log n) time using O(n/log n) EREW PRAM processors.

In Section 2.3 we described de Berg's algorithm [23] for solving the problem of computing the link distance and a minimum link path between two arbitrary query points inside a rectilinear polygon P. The following parallel result is based on de Berg's sequential approach; however, the processor-time product of the parallel algorithm represents an improvement over de Berg's complexity by a factor of O(log n).

THEOREM 3.6 (Lingas et al. [59]). A data structure which supports rectilinear link distance queries in an n-vertex trapezoided rectilinear polygon can be constructed in O(log n) time using O(n/log n) processors on the EREW PRAM. Using this data structure, a single processor can answer link distance queries between two points in O(log n) time.

For computing the rectilinear link diameter of a rectilinear polygon, de Berg [23] presents an O(n log n) sequential algorithm (see Section 2.3). His algorithm can be parallelized in a straightforward manner. Due to the recursive nature of the algorithm, the resulting parallel time complexity is O(log² n). By observing that in de Berg's recurrences it is sufficient to recurse on one of the subpolygons, Nilsson and Schuierer [69] are able to obtain a linear-time algorithm for the rectilinear link diameter problem.


Unfortunately, a straightforward parallelization of this algorithm does not lead to an improvement in the parallel running time. To obtain an algorithm with optimal processor-time product, Lingas et al. [59] provide a generic technique in which a subset of the subproblems arising in the recursion are presolved to speed up the recursive calls. This leads to the following theorem:

THEOREM 3.7 (Lingas et al. [59]). The rectilinear link diameter of an n-vertex simple rectilinear polygon can be computed in O(log* n log n) time using O(n/(log* n log n)) CREW-PRAM processors. A minimum link path connecting two vertices of P realizing the diameter can be found in O(log n) time using O(n/log n) processors on an EREW-PRAM.

The rectilinear link radius, central diagonal and link center (recall Section 2.1.2) can be computed efficiently in parallel by parallelizing each of the steps in the sequential algorithm of Nilsson and Schuierer [69]. The results are summarized in the following:

THEOREM 3.8 (Lingas et al. [59]). A central diagonal, the rectilinear link center and the link radius of an n-vertex simple rectilinear polygon can be computed in O(log* n log n) time using O(n/(log* n log n)) processors on the CREW PRAM.

3.2.3. Minimum link path between two points. McDonald and Peters [62] present a parallel algorithm for computing a minimum rectilinear link path joining two points inside a simple trapezoided rectilinear polygon. Furthermore, they show that this path is also of minimum length measured in the L_1-norm. Their algorithm runs in O(log n) time using O(n/log n) EREW PRAM processors. It proceeds by first finding a rectilinear path connecting the source and target points and then locally improving it to produce an optimal path. Local improvements are made on the basis of five different transformations. Maheshwari and Sack [61] propose a conceptually simpler algorithm for the above problem. Their algorithm proceeds by first trimming the polygon, using the dual of the trapezoidation, to determine a subpolygon containing a minimum link path. The subpolygon is monotone (i.e., has a staircase appearance). A simple and efficient greedy algorithm is then used to compute minimum link paths in this subpolygon. Furthermore, their path is also shortest in the L_1-norm.

4. Applications and extensions

In this section we study several problems that involve the link metric, and investigate problems that have motivated the study of link distance problems.

4.1. Movement of robot arms in bounded regions

Hopcroft et al. [44] consider the problem of motion of planar linkages from the point of view of complexity. A linkage is defined as a collection of rigid rods called links. The endpoints of various links are connected by joints, each joint connecting two or more links. The links rotate freely about the joints.


Hopcroft et al.'s main result is to show that the problem of deciding whether a planar linkage in some initial configuration can be moved so that a designated joint reaches a given point in the plane is PSPACE-hard. Reif [74,73] studies a similar problem in 3-dimensional space and shows that the reachability problem is PSPACE-hard even for a particular hinged, tree-like linkage required to move in a nonconvex region. Hopcroft et al. [45] study the following problem: A carpenter's ruler consists of a sequence of n links L_1, ..., L_n that are hinged together at their endpoints. These links may rotate freely about their joints and are allowed to cross. By providing a reduction from the set partition problem, they show that the following problem is NP-complete: Given positive integers n, l_1, ..., l_n, and k, can a carpenter's ruler with lengths l_1, ..., l_n be folded, so that each pair of consecutive links forms either an angle of 0° or 180°, and so that its folded length is at most k? They show that a ruler with lengths l_1, ..., l_n can always be folded into a length of at most 2m, where m = max{l_i, 1 ≤ i ≤ n}. In two dimensions, where the arms have unrestricted movements, they show that the set of points that the last joint A_n can reach is a disc of radius r centered at A_0, i.e., the first joint, provided that no link has a length l_i greater than r − l_i, where r is the sum of the lengths of all links. They provide polynomial-time solutions for moving the arm from its initial position to a final position under the restriction that the endpoint of the arm reaches a specified point within the circle.

4.2. c-oriented paths

Adegeest et al. [1] consider problems on paths composed of a minimum number of line segments parallel to a fixed set of c orientations that avoid a set of obstacles in the plane. Notice that the rectilinear link distance problems are special cases where c = 2 and the orientations are orthogonal to each other. They preprocess a set of obstacles with disjoint interiors, consisting of n line segments, and a starting point s. The resulting data structure is of size O(cn) and allows: (1) minimum link distance queries from s to query points t to be answered in O(c log n) time, and (2) a corresponding minimum link path to be reported in additional O(k) time, where k is the number of links. The data structure can be computed in O(c²n log n) time and space. The algorithm proceeds by computing all points that are reachable using one link from s, two links from s, and so on. However, this has to be done carefully, since any of the regions reachable by the same number of links could have quadratic complexity.

4.3. Approximating polygons and subdivisions with minimum link paths

Guibas, Hershberger, Mitchell, and Snoeyink [40] consider the problem of simplifying a polygon or a polygonal subdivision. Their approach is to fatten the given object, by convolving it with a disk, and then compute an approximation in the fattened region. The minimum-link subdivision problem is stated as follows: Given a subdivision S in a polygonal region P, compute a subdivision S′ homeomorphic to S in P that is composed of the minimum number of line segments. Guibas et al. [40] show that the decision version of this problem is NP-hard. This is achieved by providing a reduction from the planar maximum 2-SAT problem. They also establish that the problem of finding a minimum-link simple polygon of a given homotopy type is NP-hard, again by a reduction from maximum 2-SAT.


They consider the problem of finding a minimum link simple curve enclosing all holes inside a simple polygon P. They provide an approximation algorithm that runs in linear time and reports a simple closed curve enclosing all holes that has at most O(h) segments more than the minimum link curve of the same homotopy class, where h is the number of holes (see Guibas et al. [40] for precise definitions). Consider the following problem: Compute a polygonal chain consisting of a minimum number of line segments that visits a given ordered sequence of n disjoint convex objects O_1, O_2, ..., O_n, in order. Egyed and Wenger [29] consider the problem of stabbing disjoint convex objects in order with a line, and Guibas et al. [40] provide a simple algorithm that computes the longest possible prefix O_1, O_2, ..., O_i in O(i) time and space. Guibas et al. discuss several variations of this problem. They provide algorithms for computing a stabbing line for a sequence of possibly intersecting unit disks or translates of constant-size convex polygons. They also study stabbing problems for an ordered sequence of objects by a polygonal chain.

4.4. Separation problems

Given a set of disjoint polygons P_1, ..., P_k in the plane, and an integer parameter m, it is NP-complete to decide if the P_i's can be separated by a polygonal family consisting of m edges, i.e., whether there exist polygons R_1, ..., R_k with pairwise-disjoint boundaries such that P_i ⊆ R_i and Σ_i |R_i| ≤ m (see [19]). A separating family R is called an f(n)-approximation if the ratio between the number of facets in R and the number of facets in a minimum separating family is bounded by f(n). Edelsbrunner, Robison and Shen [28] present an approximation algorithm for pairwise disjoint convex polygons. Das provides approximate solutions for several problems in the rectilinear metric setting. Mitchell and Suri [67] consider the problem of separating polyhedral objects. They design an O(n log n) time algorithm for constructing a 7-approximation of the minimum separating family for a set of disjoint polygons in two dimensions, where n is the number of edges in the input family of polygons. The algorithm proceeds by computing a triangulation of the free space; then the dual graph of the triangulation is simplified, by first removing the triangles corresponding to degree-1 vertices and then collapsing all vertices of degree 2. This results in a 3-regular planar graph with k + 1 faces, 2k − 2 vertices and 3k − 3 arcs. Now the free space is split into polygonal regions, one for each edge of the graph, each of which is further simplified using a minimum link path computation. We note that there is a large body of literature on variants of separation problems. In the linear separability problem, the question is to determine whether two sets of points can be separated by a hyperplane in d dimensions. A solution is obtained by reducing the linear separability problem to a linear programming problem. It is NP-complete to decide whether two planar point sets are separable by k lines (Megiddo [64]).

4.5. Bicriteria shortest path problems


Arkin et al. [7] present an algorithm to obtain a path inside a simple polygon that is at most √2 times the length of the shortest path and has at most twice the number of links of the minimum link path. They first compute a minimum link path between s and t and then transform it to obtain the desired path. In a polygonal chain (u, v, w), define the turning angle at v to be the absolute value of the angle between the directed line uv and the directed line vw. The following observations on turning angles are useful. If the turning angle at each bend in the minimum link path is less than 90 degrees, then it can be shown that its total length is at most √2 times the length of the shortest path. For the bends where the turning angle is greater than 90 degrees, add an extra link so that each turning angle becomes smaller than 90 degrees. This construction can at most double the number of links, but makes the total length at most √2 times the optimal. Mitchell et al. [65] study the problem of finding a shortest polygonal path from s to t within a simple polygon P, subject to the restriction that the path can have at most k links. They present an algorithm that runs in O(n²k² log(Nk/ε)) time and produces a k-link path whose length is at most (1 + ε) times that of the shortest k-link path, for any error tolerance ε > 0, where N is the largest integer coordinate among the n vertices of P. The algorithm uses a combination of dynamic programming and a form of binary search. The polygon P is split into subpolygons using extensions of eaves as in Ghosh [35], and in each of the subpolygons a convex path is computed for different values of the number of links. They also address the problem of approximating a shortest k-link path in polygons with holes. They propose an approximation algorithm that runs in O(kE²) time and returns a path that has at most 2k links and whose length is at most that of the shortest k-link path, where E is the number of edges in the visibility graph.
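The turning-angle test that drives the transformation of Arkin et al. is easy to compute. The helper below is our own illustration: it measures the turning angle at every bend of a given path and flags those exceeding 90 degrees, i.e., the bends at which the transformation above would insert an extra link.

```python
# Turning angles along a polygonal path; bends sharper than 90 degrees are
# the ones where an extra link keeps the length within sqrt(2) of optimal.

import math

def turning_angles(path):
    """path: list of (x, y) vertices; returns the turning angle at each bend."""
    angles = []
    for (ux, uy), (vx, vy), (wx, wy) in zip(path, path[1:], path[2:]):
        a = math.atan2(vy - uy, vx - ux)          # direction of link uv
        b = math.atan2(wy - vy, wx - vx)          # direction of link vw
        t = abs(math.remainder(b - a, math.tau))  # absolute turning angle
        angles.append(t)
    return angles

path = [(0, 0), (2, 0), (2, 2), (0, 1)]
for i, t in enumerate(turning_angles(path), start=1):
    sharp = t > math.pi / 2                       # bend needs an extra link
    print(f"bend {i}: {math.degrees(t):.1f} deg, extra link: {sharp}")
```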

4.6. Nested polygon problems

In general terms, this class of problems is motivated by simplifying the complexity of describing certain object configurations through approximation, especially by minimizing their combinatorial complexity. The problem considered here is that of finding a polygon K nested between two given polygons P and Q, where Q is contained in K and K is contained in P, that has a minimum number of vertices. We discuss the problem instances in which the polygons P and Q are convex, arbitrary simple, and rectilinear; we also mention the parallel setting. (See Edelsbrunner and Preparata [26] for the related problem on point sets.)

Aggarwal et al. [2] consider the problem instance where both P and Q are convex. Their algorithm runs in O(n log k) time, where n is the total number of vertices of the input polygons P and Q, and k is the number of vertices of the output polygon K. Observe that if K is a minimal nested polygon, then it is convex. Define a supporting line segment in the region P - Q to be a line segment with both of its endpoints on the boundary of P that is tangent to Q. Define a supporting polygon K' = (v_1, ..., v_l) to be a polygon formed by supporting segments v_i v_{i+1}, 1 <= i < l, except perhaps for the last segment. Observe that K' is either a minimal nested polygon or has at most one more vertex than a minimal nested polygon, since any minimal nested polygon must have a vertex in the region bounded by each supporting line segment and the polygon P (excluding the side containing Q). The polygon K' can be computed in linear time by a simple greedy algorithm: start from a point v_1 on the boundary of P, draw a tangent to Q, and walk along this tangent until the boundary of P is reached at a point v_2. Now repeat


the same steps from v_2 and so on, until the walk wraps around Q and reaches v_1. Observe that any minimal nested polygon can be transformed into another minimal nested polygon K formed only of supporting line segments (except possibly one), and Aggarwal et al. compute such a minimum nested polygon K. They first compute an approximate polygon K' as discussed above, which may have one more vertex than optimal, and whose last link may fail to be a supporting line segment (if it is one, then K' is optimal). They then slide the polygon K' to see whether the last link can be made to vanish. The sliding process is discretized by keeping track of the changes in the combinatorial structure of the contact points of K' with respect to P and Q (namely the tangential vertices of Q and the supporting edges of P) as K' slides. Aggarwal et al. show that this combinatorial structure can be maintained using certain bilinear functions in O(n log k) time. (These functions were already discussed in the data structure for answering arbitrary link queries and in the parallel algorithm for computing a minimum link path.) Aggarwal et al. compute the fixed point of these functions to determine whether the last link of the greedy polygon will ever collapse. Their result is summarized in the following theorem.

THEOREM 4.1 (Aggarwal et al. [2]). A minimum polygon nested between two convex polygons can be computed in O(n log k) time, where n is the total number of vertices of the input polygons and k is the number of vertices of the minimum nested polygon.
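The greedy construction above repeatedly draws a tangent from a point on the boundary of P to Q. A minimal helper for that primitive is sketched below, under the stated assumptions (counterclockwise vertex order, p strictly outside Q, generic position); the names are ours, not from [2].

    def cross(o, a, b):
        """Signed area of triangle (o, a, b); positive when b lies left of o->a."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def tangent_vertex(p, Q):
        """Vertex of the convex polygon Q (counterclockwise vertex list) touched
        by the tangent from the external point p that keeps Q to the left of the
        ray p -> q: both neighbours of q must lie (weakly) to the left.  Linear
        scan, hence O(|Q|) per call."""
        n = len(Q)
        for i in range(n):
            if (cross(p, Q[i], Q[(i - 1) % n]) >= 0 and
                    cross(p, Q[i], Q[(i + 1) % n]) >= 0):
                return i
        raise ValueError("p appears to lie inside Q")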

A natural question arises: What happens when P and Q are non-convex? Suri and O'Rourke [82] present an O(pq) time algorithm, where p and q are the numbers of vertices of polygons P and Q, respectively. Wang and Chan [85] propose an O((p+q) log(p+q)) time algorithm for this problem. Using the notion of complete visibility, Ghosh [35] describes an algorithm that also runs in O((p+q) log(p+q)) time. Furthermore, Ghosh shows that the minimum nested polygon is convex if and only if the boundary of P does not intersect the convex hull of Q. Using the relative convex hull of Q with respect to P, it can be determined in linear time whether the nested polygon K is convex or nonconvex. Ghosh and Maheshwari [32] establish that if K is nonconvex, then it can be computed in linear time, since a "starting point" of the polygon K can then be easily determined. It is counterintuitive that the convex case turns out to be computationally harder than the nonconvex case; the reason is that it is hard to find the "starting point" in the convex case. Chandru et al. [10] develop parallel algorithms for computing minimum nested polygons; their algorithms run in O(log n log log n) time using O(n) CREW PRAM processors. DasGupta and Veni Madhavan [22] present approximation algorithms for computing nested polygons for special classes of polygons. Bhadury et al. [9] study the nested polygon problem in the context of art gallery problems. They present several polynomial time algorithms for these problems by reducing them to a circle covering problem and then solving them as integer linear programs with the circular ones property. Nested polygon problems in the rectilinear setting are solved by Maheshwari and Sack [61]: given two rectilinear polygons P and Q, where Q is contained in P, compute a rectilinear polygon K, nested between Q and P, that has the minimum number of vertices. They [61] provide optimal sequential and parallel algorithms for this problem.


4.7. Central link segment

We next discuss a problem related to the notion of visibility of a polygon from a segment. Recall that a polygon P is called visible (or weakly visible) from a line segment s in P if each point of P can be seen from some point on s. As Sack and Suri [76] show, deciding whether there exists a polygon edge e such that a simple polygon P is visible from e takes linear time; all edges from which a given input polygon is visible can be reported in the same time. Deciding whether there exists a segment from which P is visible, and, if the answer is positive, finding a shortest such segment, can be accomplished in O(n) time by the algorithm of Das and Narasimhan [21].

Generalizing the notion of segment visibility, a polygon is defined to be k-visible from some segment s in P if the link distance from each point p in P to some point on s (depending on p) is at most k. This means that the number of mirrors required to illuminate any point p from the (segment) light source s is at most k - 1. The k-visibility problem is to find the minimum value k* for which a given polygon P is k*-visible from some segment in P, and to determine such a segment s of minimum length. This problem can be formulated as a link distance problem as follows. The covering eccentricity of a segment s in P is defined as

c(s, P) = max_{v ∈ P} min_{w ∈ s} d_L(v, w),

where d_L(v, w) is the link distance between v and w. A central link segment of P is a segment in P with minimum covering eccentricity among all segments in P. The covering radius R_c of P is the covering eccentricity of any central link segment. Therefore, an algorithm for finding a shortest central link segment and the covering radius of a polygon solves the k-visibility problem.

The central link segment problem for a simple polygon was studied by Ke [50], who gave an incorrect solution to the problem, and by Aleksandrov et al. [3]. The algorithm of [3] runs in O(n log n) time and is based on ideas of the algorithm from [25] for computing the link center. Here we sketch the main ideas of this approach. We assume that P is an n-vertex simple polygon. Recall that the link center and the link radius R of P can be found in O(n log n) time (see Section 2.1.2). The following relation between the link radius R and the covering radius R_c of P is easy to verify:

LEMMA 4.1. The covering radius R_c of P equals either R - 1 or R.

On the other hand, if R_c = R, then any segment intersecting the link center is a central link segment, and any point inside the link center is a shortest central link segment. Thus, except for the determination problem (that is, whether R_c equals R or not), the interesting case arises when R_c equals R - 1. We assume hereafter that R_c = R - 1. If that assumption happens to be incorrect, our algorithm will output an empty set of segments with eccentricity R - 1, and in that case the algorithm for finding the link center also solves the problem of finding a shortest central link segment. We have the following characterization of a central link segment:


LEMMA 4.2. A segment s* is a central link segment with covering radius R - 1 if and only if s* intersects the (R-1)-neighborhood polygons of all vertices of P.

Recall that the link center algorithm computes "relevant" portions of the k-neighborhood polygons of the vertices of P, for k = 1, ..., R - 1, in total O(n log n) time. Lemma 4.2 implies that in order to find a shortest central link segment of P, it suffices to construct a shortest segment intersecting the relevant portions of the (R-1)-neighborhood polygons of the vertices of P. This can be reduced to the problem of intersecting O(n) subpolygons of P, each determined by a diagonal of P. Let V denote the set of those subpolygons. Next, the properties of a central diagonal defined in Section 2.1.2 are used to further reduce this problem (of finding a shortest link segment intersecting all subpolygons in V) to the problem of finding shortest segments joining O(n) pairs of convex polygonal chains inside P. Any of those segments is a central link segment. Intuitively, each of these pairs of chains consists of a chain entirely to the "left" of the central diagonal and a chain entirely to the "right" of it. The algorithm for finding the shortest segments for all pairs is technically fairly complicated, since any of the O(n) chains may have size O(n), although the union of all chains has size O(n) only. (The reason is that a segment may belong to multiple chains.) The algorithm uses the algorithm of Kirkpatrick and Snoeyink [52] which, given an X-shaped polygon X consisting of four outward convex chains X_1, X_2, X_3, X_4 listed in clockwise order, finds a shortest segment inside X with endpoints on X_1 and X_3 in O(log n) time (assuming the representation of X_1, X_2, X_3 and X_4 allows binary search). Finally, the shortest central link segment of P is determined as the shortest among all candidate segments found for the individual pairs. We have the following result:

THEOREM 4.2 (Aleksandrov et al. [3]). A shortest central link segment of any simple n-vertex polygon can be found in O(n log n) time.
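For experimentation, the covering eccentricity defined above can be evaluated by brute force once a link-distance oracle for P is available. The sketch below is purely illustrative (it discretizes P and s into finite sample sets, and link_dist is an assumed oracle); it is not the O(n log n) algorithm of Theorem 4.2.

    def covering_eccentricity(polygon_samples, segment_samples, link_dist):
        """Brute-force estimate of c(s, P) = max over v in P of the minimum
        link distance from v to a point of the segment s, with P and s
        replaced by finite sample sets and `link_dist` an assumed oracle
        for the link distance d_L inside P."""
        return max(min(link_dist(v, w) for w in segment_samples)
                   for v in polygon_samples)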

4.8. Walking in an unknown street

So far in this chapter we have presented link path algorithms under the assumption that the object space (a polygon or a polygon with holes) is known in advance. In the field of robotics, a robot may have to move without complete knowledge of its operating environment, taking steps only on the basis of local information provided by its sensors. It is therefore natural to study path problems in unknown environments. We assume that the robot has an on-board vision system, is placed at a starting point s, and searches for a target point t; furthermore, the robot recognizes t as soon as it sees it. The objective is to design efficient on-line algorithms which a robot can use to search for the target. Recently, in computational geometry, several researchers have studied on-line path problems under the Euclidean metric; see, for example, [53]. The efficiency of an on-line link path algorithm is measured by its competitive ratio, defined as the ratio of the number of links in the path computed by the on-line algorithm to the link length of a minimum link path. Ghosh and Saluja [34] study the on-line computation of a minimum link path between two specified points inside a simple polygon. It is easy to construct examples showing that


for arbitrary n-vertex simple polygons, no on-line algorithm can achieve a competitive ratio better than n/4. Therefore, Ghosh and Saluja [34] consider this problem for a special class of polygons, called street polygons, first introduced by Klein [53]. A simple polygon P with two distinguished vertices s and t is said to be a street polygon if the clockwise and counterclockwise boundary chains of P from s to t are mutually weakly visible. They [34] show that an optimal competitive ratio of 2 - 1/m can be achieved, where m is the link distance between s and t. If the street is rectilinear and the robot moves in rectilinear directions, then an optimal competitive ratio of 1 + 1/m can be achieved.

4.9. Miscellaneous

Among the variations considered for link distance problems we mention first the problem of computing a minimum link length watchman tour: given a polygonal art gallery and a set of distinguished points in the gallery, determine the link length of a minimum link length path for a watchman visiting all the points. (For an inspiring treatment of art gallery problems we refer the reader to O'Rourke's book [71] on that subject.) The problem for arbitrary polygonal galleries is NP-complete (Kranakis et al. attribute the result to Clote), since the edge embedding on a grid problem [15] can be reduced to it. Restricted versions of the problem have therefore been considered. Kranakis et al. [16] consider the problem for complete d-dimensional grid-like galleries. They give matching lower and upper bounds of 2n - 1 for the 2-dimensional complete grid of size n, and they establish bounds for d-dimensional grids, for d > 2. The lower bounds for the latter case have recently been improved to (1 + 1/(2d)) * n^(d-1), for all d > 2 (see [14]).

A somewhat related question is studied by Győri et al. [42]. In particular, they solve a generalized guard problem set in a rectilinear polygon P. They define the notion of a T_k-guard as a tree of diameter k completely contained in P; the guard G is said to cover a point x if x is visible from some point on G. Győri et al. discuss generalized guard problems, establishing upper and lower bounds on the number of guards required to cover any rectilinear polygon with or without holes by T_k-guards. Relating to the topics of this chapter, they state that the bounds for T_{2k}-guards using normal visibility can be interpreted as bounds on point guards with (k+1)-link visibility. (Recall that two points are l-link visible to each other if their link distance is l.)

Alsuwaiyel and Lee [4] show that the problem of computing a minimum link path inside a simple polygon on n vertices, such that the interior of the polygon is weakly visible from the path, is NP-hard. They provide an approximation algorithm that runs in O(n^2 log n) time; the resulting path may have at most three times more links than an optimal path. Alsuwaiyel and Lee [5] give an approximation algorithm for computing a watchman route (tour) that runs in O(n^2) time and produces a route (tour) having at most four times more links than an optimal watchman route (tour); a watchman route is a route from which the entire polygon (with holes) is weakly visible. They show that the performance ratio can be improved to 3.5 by increasing the running time to O(n^3). For computing bounded curvature paths inside a simple polygon, Ghosh et al. [37] use link path computations. Finally, Hershberger and Snoeyink [43] consider the problem of computing minimum length paths of a given homotopy class.


Acknowledgements

We thank the referees as well as L.G. Aleksandrov and P. Morin for providing valuable input into this document.

References

[1] J. Adegeest, M. Overmars and J. Snoeyink, Minimum-link c-oriented paths: Single-source queries, Internat. J. Comput. Geom. Appl. 4 (1) (1994), 39-51.
[2] A. Aggarwal, H. Booth, J. O'Rourke, S. Suri and C.K. Yap, Finding minimal convex nested polygons, Inform. Comput. 83 (1) (October 1989), 98-110.
[3] L. Aleksandrov, H. Djidjev and J.-R. Sack, An O(n log n) algorithm for finding a shortest central link segment, Internat. J. Comput. Geom. Appl. (1999), accepted for publication.
[4] M.H. Alsuwaiyel and D.T. Lee, Minimal link visibility paths inside a simple polygon, Comput. Geom. Theory Appl. 3 (1) (1993), 1-25.
[5] M.H. Alsuwaiyel and D.T. Lee, Finding an approximate minimum-link visibility path inside a simple polygon, Inform. Process. Lett. 55 (2) (1995), 75-79.
[6] R. Anderson and E. Mayr, Parallelism and greedy algorithms, Report 1003, Dept. Comput. Sci., Stanford Univ., USA (1984).
[7] E.M. Arkin, J.S.B. Mitchell and S. Suri, Logarithmic-time link path queries in a simple polygon, Internat. J. Comput. Geom. Appl. 5 (4) (1995), 369-395.
[8] O. Berkman, B. Schieber and U. Vishkin, Some doubly logarithmic optimal parallel algorithms based on finding all nearest smaller values, Technical Report UMIACS-TR-88-79, University of Maryland (1988).
[9] J. Bhadury, V. Chandru, A. Maheshwari and R. Chandrasekaran, Art gallery problems for convex nested polygons, INFORMS J. Comput. 9 (1) (1997), 100-110.
[10] V. Chandru, S.K. Ghosh, A. Maheshwari, V.T. Rajan and S. Saluja, NC-algorithms for minimum link path and related problems, J. Algorithms 19 (1995), 173-203.
[11] B. Chazelle, A theorem on polygonal cutting with applications, Proc. 23rd Annual IEEE Symposium on Foundations of Computer Science (1982), 339-349.
[12] B. Chazelle, Triangulating a simple polygon in linear time, Discrete Comput. Geom. 6 (1991), 485-524.
[13] Y.-J. Chiang and R.T. Tamassia, Optimal shortest path and minimum-link path queries between two convex polygons inside a simple polygonal obstacle, Internat. J. Comput. Geom. Appl. (1995).
[14] M.J. Collins and B.M.E. Moret, Improved lower bounds for the link length of rectilinear spanning paths in grids, Manuscript (1998).
[15] F. Gavril, Some NP-complete problems on graphs, Proc. 11th Conf. on Information Sciences and Systems, Johns Hopkins University, Baltimore, MD (1977), 91-95.
[16] E. Kranakis, D. Krizanc and L. Meertens, Link length of rectilinear watchman tours in grids, Ars Combinatoria 38 (1994), 177.
[17] C. Levcopoulos, On approximation behavior of the greedy triangulation, PhD thesis, No. 74, Linköping Studies in Science and Technology, Linköping University, Sweden (1986).
[18] C. Levcopoulos and O. Petersson, Matching parentheses in parallel, Discrete Appl. Math. (1992).
[19] G. Das, Approximation schemes in computational geometry, PhD thesis, University of Wisconsin (1990).
[20] G. Das and G. Narasimhan, Geometric searching and link distances, Proc. 2nd Workshop Algorithms Data Struct., Lecture Notes in Comput. Sci. 519, Springer-Verlag (1991), 261-272.
[21] G. Das and G. Narasimhan, Optimal linear-time algorithm for the shortest illuminating line segment in a polygon, Proc. 10th ACM Symp. Comp. Geom. (1994), 259-266.
[22] B. DasGupta and C.E. Veni Madhavan, An approximate algorithm for the minimal vertex nested polygon problem, Inform. Process. Lett. 33 (1989), 35-44.
[23] M. de Berg, On rectilinear link distance, Comput. Geom. 1 (1) (1991), 13-34.
[24] P.J. de Rezende, D.T. Lee and Y.F. Wu, Rectilinear shortest paths in the presence of rectangular barriers, Discrete Comput. Geom. 4 (1989), 41-53.


[25] H.N. Djidjev, A. Lingas and J.-R. Sack, An O(n log n) algorithm for computing the link center of a simple polygon, Discrete Comput. Geom. 8 (1992), 131-152.
[26] H. Edelsbrunner and F.P. Preparata, Minimum polygonal separation, Inform. Comput. 77 (1988).
[27] H. Edelsbrunner, L.J. Guibas and M. Sharir, The complexity of many faces in arrangements of lines and of segments, Discrete Comput. Geom. 5 (1990), 197-216.
[28] H. Edelsbrunner, A.D. Robison and X. Shen, Covering convex sets with non-overlapping polygons, Discrete Math. 81 (1990), 153-164.
[29] P. Egyed and R. Wenger, Ordered stabbing of pairwise disjoint convex sets in linear time, Discrete Appl. Math. 32 (1991), 133-140.
[30] H. ElGindy, Hierarchical decomposition of polygons with applications, PhD thesis, School of Computer Science, McGill Univ., Montreal, Canada (1985).
[31] H. ElGindy and D. Avis, A linear algorithm for computing the visibility polygon from a point, J. Algorithms 2 (2) (1981), 186-197.
[32] S.K. Ghosh and A. Maheshwari, An optimal algorithm for computing a minimum nested nonconvex polygon, Inform. Process. Lett. 36 (1990), 277-280.
[33] S.K. Ghosh and A. Maheshwari, Parallel algorithms for all minimum link paths and link center problems, Proc. 3rd Scand. Workshop Algorithm Theory, Lecture Notes in Comput. Sci. 621, Springer-Verlag (1992), 106-117.
[34] S.K. Ghosh and S. Saluja, Optimal on-line algorithms for walking with minimum number of turns in unknown streets, Comput. Geom. 8 (5) (1997), 241-266.
[35] S.K. Ghosh, Computing the visibility polygon from a convex set and related problems, J. Algorithms 12 (1991), 75-95.
[36] S.K. Ghosh, A note on computing the visibility polygon from a convex chain, J. Algorithms 21 (1996), 657-662.
[37] S.K. Ghosh, J.-D. Boissonnat and S. Lazard, A linear time algorithm for computing a convex path of bounded curvature in a simple polygon, Technical Report TCS-97/1, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India (1997).
[38] M.T. Goodrich, Planar separators and parallel polygon triangulation, Proc. 24th Annu. ACM Symp. Theory Comput. (1992), 507-516.
[39] L.J. Guibas and J. Hershberger, Optimal shortest path queries in a simple polygon, J. Comput. Syst. Sci. 39 (1989), 126-152.
[40] L.J. Guibas, J.E. Hershberger, J.S.B. Mitchell and J.S. Snoeyink, Approximating polygons and subdivisions with minimum link paths, Internat. J. Comput. Geom. Appl. 3 (4) (1993), 383-415.
[41] L.J. Guibas, J. Hershberger, D. Leven, M. Sharir and R.E. Tarjan, Linear-time algorithms for visibility and shortest path problems inside a triangulated simple polygon, Algorithmica 2 (1987), 209-233.
[42] E. Győri, F. Hoffmann, K. Kriegel and T. Shermer, Generalized guarding and partitioning for rectilinear polygons, Comput. Geom. Theory Appl. 6 (1996), 21-44.
[43] J. Hershberger and J. Snoeyink, Computing minimum length paths of a given homotopy class, Comput. Geom. Theory Appl. 4 (1994), 63-98.
[44] J.E. Hopcroft, D.A. Joseph and S.H. Whitesides, Movement problems for 2-dimensional linkages, SIAM J. Comput. 13 (1984), 610-629.
[45] J.E. Hopcroft, D.A. Joseph and S.H. Whitesides, On the movement of robot arms in 2-dimensional bounded regions, SIAM J. Comput. 14 (1985), 315-333.
[46] H. Imai and T. Asano, Efficient algorithms for geometric graph search problems, SIAM J. Comput. 15 (2) (1986), 478-494.
[47] S. Kahan and J. Snoeyink, On the bit complexity of minimum link paths: Superquadratic algorithms for problems solvable in linear time, Proc. 12th Symp. Comp. Geom. (1996), 151-158.
[48] R.M. Karp and V. Ramachandran, Parallel algorithms for shared memory machines, Handbook of Theoretical Computer Science, J. van Leeuwen, ed., Elsevier/The MIT Press, Amsterdam (1990), 869-941.
[49] Y. Ke, An efficient algorithm for link distance problems inside a simple polygon, Proc. 5th ACM Symp. on Comp. Geom. (1989), 69-78.
[50] Y. Ke, Polygon visibility algorithms for weak visibility and link distance problems, PhD thesis, Johns Hopkins University (1989).
[51] D.G. Kirkpatrick, Optimal search in planar subdivisions, SIAM J. Comput. 12 (1983), 28-35.


[52] D.G. Kirkpatrick and J. Snoeyink, Computing constrained shortest segments: Butterfly wingspans in logarithmic time, Proc. 5th Canadian Conf. on Comp. Geom. (1993), 163-168.
[53] R. Klein, Walking an unknown street with bounded detour, Comput. Geom. 1 (1992), 325-351.
[54] R. Klein and A. Lingas, Manhattonian proximity in a simple polygon, Internat. J. Comput. Geom. Appl. 5 (1995), 53-74.
[55] K. Kolarov and B. Roth, On the number of links and placement of telescoping manipulators in an environment with obstacles, Proc. International Conference on Advanced Robotics (1991).
[56] D.T. Lee and F.P. Preparata, Euclidean shortest paths in the presence of rectilinear barriers, Networks 14 (1984), 393-410.
[57] D.T. Lee, Visibility of a simple polygon, Computer Vision, Graphics and Image Process. 22 (1983), 207-221.
[58] W. Lenhart, R. Pollack, J.-R. Sack, R. Seidel, M. Sharir, S. Suri, G.T. Toussaint, S. Whitesides and C.K. Yap, Computing the link center of a simple polygon, Discrete Comput. Geom. 3 (1988), 281-293.
[59] A. Lingas, A. Maheshwari and J.-R. Sack, Optimal parallel algorithms for rectilinear link-distance problems, Algorithmica 14 (1995), 261-289.
[60] A. Maheshwari, Parallel algorithms for minimum link path and related problems, PhD thesis, Tata Institute of Fundamental Research, Bombay, India (1993).
[61] A. Maheshwari and J.-R. Sack, Simple optimal algorithms for rectilinear link path and polygon separation problems, Parallel Process. Lett. 9 (1) (1999).
[62] K.M. McDonald and J.G. Peters, Smallest paths in simple rectilinear polygons, IEEE Trans. Computer-Aided Design 11 (1992), 864-875.
[63] R.B. McMaster, Automated line generalization, Cartographica 26 (1987), 74-111.
[64] N. Megiddo, Linear-time algorithms for linear programming in R^3 and related problems, SIAM J. Comput. 12 (1983), 759-776.
[65] J.S.B. Mitchell, C. Piatko and E.M. Arkin, Computing a shortest k-link path in a polygon, Proc. 33rd Annu. IEEE Symp. Found. Comput. Sci. (1992), 573-582.
[66] J.S.B. Mitchell, G. Rote and G. Woeginger, Minimum-link paths among obstacles in the plane, Algorithmica 8 (1992), 431-459.
[67] J.S.B. Mitchell and S. Suri, Separation and approximation of polyhedral objects, Comput. Geom. 5 (1995), 95-114.
[68] B. Natarajan, On comparing and compressing piece-wise linear curves, Report, Hewlett-Packard (1991).
[69] B.J. Nilsson and S. Schuierer, An optimal algorithm for the rectilinear link center of a rectilinear polygon, Comput. Geom. 6 (1996), 169-194.
[70] B.J. Nilsson and S. Schuierer, Computing the rectilinear link diameter of a polygon, Computational Geometry — Methods, Algorithms and Applications: Proc. Internat. Workshop Comput. Geom. CG '91, Lecture Notes in Comput. Sci. 553, Springer-Verlag (1991), 203-215.
[71] J. O'Rourke, Art Gallery Theorems and Algorithms, The International Series of Monographs on Computer Science, Oxford University Press (1987).
[72] J. Reif and J.A. Storer, Minimizing turns for discrete movement in the interior of a polygon, IEEE J. Robotics and Automation 3 (1987), 182-193.
[73] J.H. Reif, Complexity of the mover's problem and generalizations, Proc. 20th Annu. IEEE Symp. Found. Comput. Sci. (1979), 421-427.
[74] J.H. Reif, Complexity of the generalized mover's problem, J. Hopcroft, J. Schwartz and M. Sharir, eds, Planning, Geometry and Complexity of Robot Motion, Ablex Pub. Corp., Norwood, NJ (1987), 267-281.
[75] J.-R. Sack and J. Urrutia (eds), Handbook of Computational Geometry, Elsevier Science B.V., Amsterdam, The Netherlands (1999).
[76] J.-R. Sack and S. Suri, An optimal algorithm for detecting weak visibility of a polygon, IEEE Transactions on Computers 39 (10) (1990), 1213-1219.
[77] S. Schuierer, An optimal data structure for shortest rectilinear path queries in a simple rectilinear polygon, Internat. J. Comput. Geom. Appl. 6 (1996), 205-226.
[78] S. Suri, A linear time algorithm for minimum link paths inside a simple polygon, Comput. Vision Graph. Image Process. 35 (1986), 99-110.
[79] S. Suri and J. O'Rourke, Worst-case optimal algorithms for constructing visibility polygons with holes, Proc. 2nd ACM Symp. on Comp. Geom. (1986), 14-23.


[80] S. Suri, Minimum link paths in polygons and related problems, PhD thesis, Dept. Comput. Sci., Johns Hopkins Univ., Baltimore, MD (1987).
[81] S. Suri, On some link distance problems in a simple polygon, IEEE Trans. Robot. Autom. 6 (1990), 108-113.
[82] S. Suri and J. O'Rourke, Finding minimal nested polygons, Proc. 23rd Allerton Conf. Commun. Control Comput. (1985), 470-479.
[83] R.T. Tamassia, On embedding a graph in the grid with minimum number of bends, SIAM J. Comput. (1986).
[84] G.T. Toussaint, Shortest path solves edge-to-edge visibility in a polygon, Pattern Recognition Lett. 4 (1986), 165-170.
[85] C.A. Wang and E.P.F. Chan, Finding the minimum visible vertex distance between two non-intersecting simple polygons, Proc. 2nd ACM Symp. Comp. Geom. (1986), 34-42.

CHAPTER 13

Derandomization in Computational Geometry

Jiří Matoušek*

Department of Applied Mathematics, Charles University, Malostranské nám. 25, 118 00 Praha 1, Czech Republic

Contents
1. Randomized algorithms and derandomization 561
2. Basic types of randomized algorithms in computational geometry 562
3. General derandomization techniques 566
4. Deterministic sampling in range spaces 570
   4.1. ε-nets and ε-approximations 570
   4.2. A deterministic algorithm for ε-approximations 573
   4.3. ε-approximations via geometric partitions 576
   4.4. Higher moment bounds and seminets 578
5. Derandomization of basic computational geometry algorithms 581
   5.1. Cuttings 581
   5.2. Convex hull, diameter, and other problems 586
Appendix: The parametric search technique 589
References 590

Abstract

We survey techniques for replacing randomized algorithms in computational geometry by deterministic ones with a similar asymptotic running time.

* Supported by grants GAČR 0194 and GAUK 193, 194. Part of this survey was written during a visit at ETH Zürich.




1. Randomized algorithms and derandomization

A rapid growth of knowledge about randomized algorithms stimulates research in derandomization, that is, in replacing randomized algorithms by deterministic ones with as small a decrease in efficiency as possible. Related to the problem of derandomization is the question of reducing the number of random bits needed by a randomized algorithm while retaining its efficiency; derandomization can be viewed as the ultimate case. Randomized algorithms are also related to probabilistic proofs and constructions in combinatorics (which came first historically), whose development has similarly been accompanied by the effort to replace them by explicit, non-random constructions whenever possible.

Derandomization of algorithms can be seen as part of an effort to map the power of randomness and explain its role. Another, more practical motivation is that contemporary computers include no equipment for generating truly random bits. The random bits are simulated by pseudo-random generators; these can produce long sequences of bits which satisfy various statistical criteria of randomness, but the real randomness in the computation is in fact restricted to the initial setting of the generator. It is thus important to gain theoretical results on limiting the amount of randomness in an algorithm.

In the sequel, the word 'derandomization' will usually be used in a narrower sense, where the deterministic algorithm arises by closely imitating the randomized one, by adding special subroutines that replace the random resources in a manner sufficient for that particular algorithm. Although it is difficult to strictly distinguish when a deterministic algorithm imitates a randomized one and when it becomes a brand new algorithm, in most specific cases the distinction appears quite clearly. One typical feature of derandomized algorithms is that they can hardly be understood without appealing to their underlying randomized counterpart; taken alone as deterministic algorithms they would appear completely mysterious.

Derandomization turns out to be a surprisingly powerful methodology for designing deterministic algorithms. For such a basic problem as computing the convex hull of n points in a fixed dimension d, the only known worst-case optimal deterministic algorithm arises by a (quite complicated) derandomization of a simple randomized algorithm (Chazelle [33]).

Bibliography and remarks. Excellent books on probabilistic methods in combinatorics are Spencer [117] and Alon and Spencer [14]; the latter has a lot of material concerning explicit constructions and derandomization.

The present survey was submitted for publication at the end of 1995. Some small updates and additions were made in June 1997, but certainly not as thorough as the progress in the field would perhaps deserve. Many of the cited papers first appeared in conference proceedings and, sometimes many years later, in a journal. Where available, we refer to the journal version (presumably more polished and containing fewer mistakes), although the chronology of the results may be considerably distorted by this.



Fig. 1. Randomized divide-and-conquer for the intersection of half planes.

2. Basic types of randomized algorithms in computational geometry

Derandomization in computational geometry can be declared relatively successful compared to other fields; for most randomized algorithms one can produce deterministic ones with only a small loss in asymptotic efficiency, and the open problems usually concern further, relatively small improvements. The main reason for this is probably that the space dimension is assumed to be fixed and the constants of proportionality in the asymptotic notation are ignored. These constants grow considerably under derandomization, and currently most of the derandomized algorithms seem unsuitable for practical use, especially if the dimension is not quite small.

Before we start discussing derandomization, let us recall two basic paradigms for designing randomized algorithms in computational geometry: randomized divide-and-conquer and randomized incremental construction. Our presentation is sketchy and is illustrated on a very simple artificial example, so simple that both approaches are a clear overkill for solving the problem. We assume that the reader can learn more about randomized geometric algorithms from other sources (for instance, in Chapter 16). We consider the following problem:

PROBLEM 2.1. Given a set H of n lines in the plane, compute the intersection I(H) of the halfplanes determined by the lines of H and containing the origin.
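As a baseline, here is a hedged brute-force sketch of Problem 2.1 in Python; the encoding of halfplanes and the boundedness assumption are ours, not part of the problem statement.

    import math

    def halfplane_intersection_bruteforce(H):
        """Direct O(|H|^3) solution of Problem 2.1.  Each halfplane is encoded
        as (a, b, c) meaning a*x + b*y <= c, with c > 0 so that the origin lies
        inside; the common intersection is assumed bounded.  Returns the
        vertices of I(H) sorted counterclockwise around the origin."""
        eps = 1e-9
        pts = set()
        for i in range(len(H)):
            for j in range(i + 1, len(H)):
                (a1, b1, c1), (a2, b2, c2) = H[i], H[j]
                det = a1 * b2 - a2 * b1
                if abs(det) < eps:
                    continue                      # parallel boundary lines
                x = (c1 * b2 - c2 * b1) / det     # Cramer's rule
                y = (a1 * c2 - a2 * c1) / det
                if all(a * x + b * y <= c + eps for a, b, c in H):
                    pts.add((round(x, 9), round(y, 9)))
        return sorted(pts, key=lambda p: math.atan2(p[1], p[0]))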

Randomized divide-and-conquer. For this paradigm, we choose a random sample S ⊆ H of r lines, where r is a suitable parameter; in a simple version, it is a large constant. We construct the intersection polygon I(S) by some straightforward method, say in time O(r^2). Then we triangulate the polygon I(S) by connecting each of its vertices to the origin; see Figure 1. Let T(S) denote the set of triangles of the resulting triangulation. For each triangle Δ ∈ T(S), we compute the list H_Δ of lines of H intersecting Δ. The portion of I(H) within each Δ is equal to the portion of I(H_Δ) within Δ. We may thus construct each I(H_Δ) recursively by the same method, clip it by Δ and finally glue these pieces together. The recursion stops when the current set of lines is small (smaller than r^2, say); then we may construct the intersection by some inefficient direct method.
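The divide step can be made concrete by reusing the brute-force routine above; a minimal sketch with illustrative names, assuming I(S) is bounded (e.g., by including three bounding halfplanes in every sample).

    import random

    def crosses(line, tri):
        """True iff the boundary line a*x + b*y = c meets the closed triangle."""
        a, b, c = line
        signs = [a * x + b * y - c for x, y in tri]
        return min(signs) <= 0 <= max(signs)

    def sample_and_split(H, r, rng=random.Random(0)):
        """Divide step of the randomized divide-and-conquer scheme: sample S,
        build I(S), fan-triangulate it from the origin, and attach to every
        triangle its conflict list H_tri of input lines crossing it."""
        S = rng.sample(H, r)
        poly = halfplane_intersection_bruteforce(S)
        pieces = []
        for k in range(len(poly)):
            tri = ((0.0, 0.0), poly[k], poly[(k + 1) % len(poly)])
            pieces.append((tri, [h for h in H if crosses(h, tri)]))
        return pieces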


What can be said about the running time of this algorithm? We may assume that the construction of I(S) and T(S) takes time O(r^2). If we test every line against each triangle, the computation of the lists H_Δ needs O(nr) time. This time is also amply sufficient for gluing the polygon I(H) together from the portions of the recursively computed polygons I(H_Δ).

Let n_Δ denote the size of H_Δ; the key question is how large the n_Δ's are. It turns out that one gains the right feeling about the problem by assuming that each n_Δ is roughly Cn/r, where C ≥ 1 is some absolute constant independent of r. If we assume, e.g., n_Δ ≤ 2n/r for each Δ, we get the recursion T(n) ≤ O(r^2 + nr) + r·T(2n/r), where T(n) denotes the running time for n lines. Together with the initial condition T(n) = O(1) for n ≤ r^2, this yields a bound of T(n) = O(n^{1+δ}), where δ > 0 is a constant which tends to 0 as r increases (but the multiplicative constant increases with r). Indeed, the total size of the subproblems doubles with each level of the recursion. Hence the total work done at the bottom of the recursion (which turns out to dominate the overall work) will be n·2^i, where i ≈ log_r n is the depth of the recursion; thus n·2^i ≈ n^{1 + 1/log_2 r}, and we have δ ≈ 1/log_2 r.

Unfortunately we cannot assume that n_Δ ≤ Cn/r for each Δ. First of all, the n_Δ's are determined by the random choice of S, and some choices apparently give much worse values. One could still hope that the bounds are true at least with probability 1/2, say, but even this need not be the case. Valid bounds are the following:
• ("Pointwise" bound.) For a randomly chosen S, the following holds with high probability: for all Δ ∈ T(S),

n_Δ ≤ C (n/r) log r,     (1)

with an absolute constant C (independent of r).
• ("Higher moments" bound.) For any constant c > 1 there exists a constant C = C(c) (independent of r) such that the expectation

E[ Σ_{Δ ∈ T(S)} (n_Δ)^c ] ≤ C (n/r)^c · E[|T(S)|].     (2)

(In our case clearly E[|T(S)|] ≤ r.) The higher moments estimate says that, as far as the cth degree average is concerned, the quantities n_Δ behave as if they were O(n/r). The bound T(n) = O(n^{1+δ}) for the worst-case expected running time can be established using either of these two estimates; only the dependence of δ on r is better for the second one. The difference shows up more significantly in other algorithms of this type, where we do not want to choose r just a constant but rather a suitable power of n, say n^{1/10}. Then the pointwise (or "trianglewise") bound (1) brings in an extra polylogarithmic factor with


each level of recursion, while (2) brings in only a multiplicative constant. Why should a larger value of r help? With larger r, the problem is subdivided into a larger number of subproblems of appropriately smaller size, and thus the depth of the recursion is smaller, sometimes only constant. Since each level of recursion brings a multiplicative "excess" factor into the running time bound, this means that a smaller extra factor accumulates. Using a larger r usually requires that various auxiliary computations be done in a more clever way, however. In our simple example, we cannot increase r to n^{1/10}, since then the nr term (coming from the straightforward computation of the lists H_Δ) would already become too large. But if we could compute these lists more cleverly, we could afford to increase r and would get a faster algorithm.

In an algorithm of this type for a more general problem, we usually deal with some collection H of hyperplanes or more general surfaces. In the divide step, we choose a random sample S ⊆ H of size r, and we partition all relevant regions of the arrangement of S into simple cells such as triangles, simplices, trapezoids, etc.; the important feature is that each such cell Δ can be described by a constant-bounded number of parameters. We let T(S) denote the set of the resulting cells. Each Δ ∈ T(S) defines one subproblem, dealing with the set H_Δ of surfaces intersecting Δ. These subproblems are solved independently, and from their solutions a global solution for H is recovered. This scheme of course cannot capture all the subtleties of specific applications. With a constant r, this scheme usually gives complexity at least an n^δ factor off from optimal, but it is extremely robust and it has been efficiently derandomized. Playing with larger values of r usually brings improvements (sometimes up to optimal algorithms), but the derandomization becomes more delicate. Let us now move to the second paradigm.

Randomized incremental construction. To solve Problem 2.1 with this approach, we insert the lines one by one in a random order and maintain the current intersection. We start with the whole plane as the current polygon, and with every inserted line we cut off an appropriate portion, so that when we finish we have the desired intersection. How do we find which portion should be cut off for a newly inserted line? One seemingly sloppy but in fact quite efficient method is the maintenance of so-called conflict lists. (This method is also easy to generalize for more complicated problems.) Let S_r denote the set of the first r already inserted lines; together with the polygon I(S_r) we also maintain the triangulation T(S_r) (defined as above), and for each Δ ∈ T(S_r) we store the set H_Δ of lines intersecting it; it is called the conflict list of Δ in this context. Finally, we maintain the list of intersected triangles for every line not yet inserted. When inserting the (r+1)st line we thus know the triangles it intersects, and we remove these from the current triangulation (if no triangle is intersected we do nothing); see Figure 2. In order to complete the triangulation T(S_{r+1}) of the new polygon, it suffices to add 3 new triangles; see Figure 2. The conflict lists of these new triangles are found by inspecting the conflict lists of the deleted triangles, and then the conflict lists of the lines are updated accordingly. The total update time is proportional to the sum of the conflict list sizes of the deleted triangles.
Since each triangle must be created before it can be deleted, the total running time is proportional to the sum of sizes of the conflict lists of all triangles ever created.


Fig. 2. Retriangulating after the insertion of the 5th line.
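The quantity just described is easy to instrument. The following sketch reuses the helpers from the earlier blocks; the polygon is recomputed by brute force at each step purely for illustration, and it is assumed that the first three lines in the shuffled order already bound the intersection.

    import random

    def incremental_conflict_cost(H, rng=random.Random(1)):
        """Randomized incremental construction, instrumented: insert the lines
        in random order and sum the conflict-list sizes of all fan triangles
        ever created, the quantity that governs the running time."""
        order = H[:]
        rng.shuffle(order)
        total, prev_tris = 0, set()
        for r in range(3, len(order) + 1):
            poly = halfplane_intersection_bruteforce(order[:r])
            tris = {(poly[k], poly[(k + 1) % len(poly)])
                    for k in range(len(poly))}
            for (u, v) in tris - prev_tris:       # triangles created in this step
                tri = ((0.0, 0.0), u, v)
                total += sum(1 for h in order[r:] if crosses(h, tri))
            prev_tris = tris
        return total          # expected to grow like O(n log n)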

The initial intuition for analyzing this quantity is similar to that for the previous paradigm. Namely, the first r inserted lines form a random sample S_r, and we expect each Δ ∈ T(S_r) to have a conflict list of size about Cn/r. Therefore, the 3 newly created triangles should contribute about O(n/r) to the running time, and the total should be about Σ_{r=1}^{n} O(n/r) = O(n log n).

Using the bound (1), we get only a weaker result of O(n log^2 n). A more sophisticated analysis not relying on this bound shows that the expected contribution of the rth step of the algorithm is indeed only O(n/r), and thus we get an optimal algorithm in this way.

Let us sketch a general scheme of a geometric randomized incremental algorithm. We are given a set H of objects (hyperplanes, surfaces, but perhaps also points, in the case of computing a Voronoi diagram). Our goal is to compute a certain geometric partition T(H) of space induced by these objects. In order for the analysis to work properly, one needs (similarly as in the divide-and-conquer scheme) that the cells in this partition have constant description complexity. If the desired partition does not have this property, we first define a suitable refinement and work with this refinement. The algorithm inserts the objects in a random order, maintaining the partition T(S_r) for the set S_r of the already inserted objects. In order to perform the updates quickly, it usually maintains some auxiliary information, such as the conflict lists of the cells of T(S_r).

The randomized incremental construction also has a vast potential for generalization, and if applicable, it usually gives better and simpler algorithms than the divide-and-conquer approach, mainly because no error accumulation phenomenon is present. It is much more difficult to derandomize, and only few special results are known. An intuitive explanation for this difference is as follows: the basic form of the divide-and-conquer approach described above requires only a small random sample, for which we can at least easily verify that it has the required properties. On the other hand, the randomized incremental method needs a whole random permutation, and there does not seem to be any easy method of checking whether a given permutation is good, other than actually running the considered incremental algorithm with that permutation. Or, put another way, the divide-and-conquer approach creates subproblems and then operates locally on each of them (until a final merging phase, which is usually easy and involves no randomization anymore), while the


incremental method always remains on a global level. It seems that in many cases the most useful strategy for derandomization is to merge the local and global approaches: one uses a randomized algorithm of the divide-and-conquer type, but complemented with some global mechanism to prevent the accumulation of excess factors (more concrete examples will be discussed later).

Bibliography and remarks. A survey on randomized algorithms is Karp [66]; a recent book is Motwani and Raghavan [70]. Randomized algorithms in computational geometry (mainly incremental ones) are treated extensively in Mulmuley [99]; other good sources are Guibas and Sharir [60], Seidel [116], Agarwal [5], Clarkson [39].

As was mentioned above, most algorithms in computational geometry are designed under the assumption that the space dimension is a (small) constant. One problem where the dependence of the running time on the dimension has been studied intensively is the linear programming problem, and here the inadequacy of the current derandomization methods for coping with large dimension stands out clearly: randomized algorithms of the incremental type with a subexponential dependence on the dimension were found (Kalai [65], Matoušek et al. [75]), but the best known deterministic algorithms remain exponential. (These algorithms work in the infinite-precision computation model common in computational geometry, in contrast to the known (weakly) polynomial algorithms for linear programming in the bit model.)

The randomized geometric divide-and-conquer strategy appears in the pioneering works of both Clarkson (e.g., [37]) and of Haussler and Welzl [63]. It has been elaborated in numerous subsequent works, mainly concerning computations with arrangements of lines, segments, etc.; see, e.g., Agarwal and Sharir [13], Clarkson et al. [25], Chazelle et al. [28]. The bounds of type (1) appeared in Clarkson [36] and Haussler and Welzl [63]. The observation that one can get rid of the log r factor by using the bounds on higher moments is heavily used in Clarkson [37]. A technically different approach, which effectively brings similar results as an application of a bound of type (2), is due to Chazelle and Friedman [31]. A more detailed exposition of the randomized geometric divide-and-conquer paradigm, with examples, can be found in Agarwal's survey paper [5].

A randomized incremental construction is used in Chew's early paper [35], together with an elegant technique nowadays called backwards analysis. Clarkson and Shor [45] give several simple optimal randomized incremental algorithms for problems considered very difficult at that time. Mulmuley [76-78] solves numerous other problems by algorithms of this type; his analysis is relatively complicated (using probabilistic games). The methodology of backwards analysis was elaborated by Seidel [94,95], and it allowed him to give very simple proofs, e.g., for Mulmuley's algorithms.

3. General derandomization techniques

Before explaining the specific methods developed for computational geometry algorithms, let us briefly summarize the general approaches to derandomization.

The method of conditional probabilities. This is perhaps the most significant general derandomization method (implicitly used by Erdős and Selfridge [56], formulated and popularized by Spencer [97], and further enriched by Raghavan [86]). It is so well known and nicely explained in many places that we allow ourselves to be very brief. To be specific, consider the probability space Ω of n-component 0/1 vectors, where the entries of a random vector are set to 1 with probability p, the choices being independent for distinct entries. Let Φ : Ω → R be some function (that is, a random variable on Ω), and suppose that our goal is to find a specific vector x ∈ Ω such that, say, Φ(x) ≤ E[Φ].
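A minimal sketch of the method on the space Ω just described, with the conditional expectations computed by brute-force enumeration (exponential, for illustration only; real applications compute or estimate them in polynomial time). The names are ours.

    from itertools import product

    def cond_expectation(phi, prefix, n, p):
        """E[phi(X) | first entries fixed to `prefix`], by enumerating the
        remaining independent 0/1 entries."""
        total = 0.0
        for tail in product((0, 1), repeat=n - len(prefix)):
            weight = 1.0
            for bit in tail:
                weight *= p if bit else 1.0 - p
            total += weight * phi(list(prefix) + list(tail))
        return total

    def fix_entries(phi, n, p=0.5):
        """Method of conditional probabilities: walk down the binary tree of
        partial assignments, always taking the child whose conditional
        expectation is no larger; since each parent value is a weighted
        average of its children, the final vector x has phi(x) <= E[phi]."""
        prefix = []
        for _ in range(n):
            if (cond_expectation(phi, prefix + [0], n, p)
                    <= cond_expectation(phi, prefix + [1], n, p)):
                prefix.append(0)
            else:
                prefix.append(1)
        return prefix

For example, fix_entries(lambda x: (sum(x) - 2) ** 2, 4) returns a vector whose squared deviation from the mean is no larger than the variance of the sum.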


We cannot expect the existence of small ε-nets for an arbitrary set system. A property guaranteeing small ε-nets, satisfied by many natural geometric set systems, is a bounded VC-dimension; a related concept is a polynomially bounded shatter function. For a set system (X, R) and a natural number m ≤ |X|, let π_R(m) denote the maximum possible number of sets in a set system of the form R|_A, for an m-point subset A ⊆ X. To understand this concept, the reader is invited to check that if X = R^2 is the plane and R is the system of all (closed) halfplanes, then π_R(m) ≤ Cm^2 for an absolute constant C and for all m. Shatter functions of set systems arising in computational geometry are usually polynomially bounded, and this implies the existence of small ε-nets, as we will see below.

First we introduce one more concept, the Vapnik-Chervonenkis dimension (or VC-dimension for short). The VC-dimension of (X, R) can be defined as sup{d : π_R(d) = 2^d}. In other words, it is the maximum d such that there exists a d-element shattered subset A ⊆ X, i.e. a subset such that each B ⊆ A can be expressed as B = A ∩ R' for some R' ∈ R. It turns out that the shatter function of a set system of VC-dimension d is bounded by Σ_{i=0}^{d} (m choose i), and this estimate is tight in the worst case (hence the shatter function of any set system is either exponential or polynomially bounded). Conversely, if the shatter function is bounded by a fixed polynomial, then the VC-dimension is bounded by a constant, but there are many natural geometric set systems of VC-dimension d whose shatter function has a considerably smaller order of magnitude than m^d (for instance, the set system defined by halfplanes in the plane has VC-dimension 3, while its shatter function is quadratic).

With these definitions, a basic theorem on the existence of ε-nets can be stated as follows.

THEOREM 4.1. For any d ≥ 1 there exists a C(d) such that for any r > 1 and for any set system (X, R) with X finite and of VC-dimension at most d there exists a (1/r)-net of size at most C(d) r log r. In fact, a random sample S ⊆ X of this size is a (1/r)-net with positive probability (even with probability whose complement to 1 is exponentially small in r). This size is in general best possible up to the value of C(d).

This implies, among others, the bound (1) stated in Section 2. As the theorem says, the log r factor in general cannot be removed from the bound. A challenging and probably very


difficult open problem is whether some improvement in the (1/r)-net size is possible for set systems defined geometrically, such as the one defined by triangles in the plane.

A related notion to the ε-net is the ε-approximation. A subset A ⊆ X is an ε-approximation for (X, R) provided that

| |A ∩ R'| / |A| - |R'| / |X| | ≤ ε  for every R' ∈ R.

THEOREM 4.2. For a set system (X, R) of VC-dimension at most d and any r > 1, one can compute a (1/r)-approximation of size O(r^2 log r) and a (1/r)-net of size O(r log r) in time O(n r^c), c = c(d) a constant, n = |X|.

In particular, if r is a constant, both the (1/r)-net and the (1/r)-approximation can be computed in time O(|X|). This in itself suffices for derandomizing a large number of geometric algorithms (those using constant-size samples) in a quite straightforward manner. Most of the other divide-and-conquer algorithms can be modified to use constant-size samples, with some decrease in efficiency, typically of an n^δ factor. Finally, for many problems where one finds mainly randomized incremental algorithms in the literature, it is not too difficult to design divide-and-conquer type solutions, again with somewhat worse efficiency.

We describe the basic ingredients of an algorithm for the deterministic computation of ε-approximations.


Polynomial-time sampling. As a first ingredient, we need a polynomial-time algorithm for the following problem: given an n-element set Y, a system R of subsets of Y, and a parameter r, compute a (1/r)-approximation of size s = O(r^2 log r). This is much weaker than Theorem 4.2, since we only want time polynomial in r and n, with possibly large exponents. (R may be given by the list of its sets here.) As we know (by a result of Vapnik and Chervonenkis), an s-element random subset of Y is a (1/r)-approximation with positive probability. This gives a straightforward randomized algorithm for our problem, which can be derandomized by the method of conditional probabilities. This yields the required polynomial-time algorithm.
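A Las Vegas version of this sampling step is easy to write down; the deterministic algorithm replaces the retry loop below by the method of conditional probabilities. Names are illustrative; the set system is given as a list of Python sets.

    import random

    def is_eps_approximation(A, X, sets, eps):
        """Check | |A ∩ R| / |A| - |R| / |X| | <= eps for every set R."""
        return all(abs(len(A & R) / len(A) - len(R) / len(X)) <= eps
                   for R in sets)

    def sample_approximation(X, sets, eps, s, rng=random.Random(2)):
        """Draw s-element subsets until one is an eps-approximation; this
        terminates since a random subset of this size works with positive
        probability."""
        universe = sorted(X)
        while True:
            A = set(rng.sample(universe, s))
            if is_eps_approximation(A, set(X), sets, eps):
                return A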

A merge-reduce scheme. The algorithm for Theorem 4.2 is based on the above-described polynomial-time sampling subroutine and on the following two easy observations:

OBSERVATION 4.3. Let X_1, ..., X_m ⊆ X be disjoint subsets of equal cardinality, and let A_i be an ε-approximation of cardinality s for (X_i, R|_{X_i}), i = 1, 2, ..., m. Then A_1 ∪ ... ∪ A_m is an ε-approximation for the subsystem induced by R on X_1 ∪ ... ∪ X_m.

OBSERVATION 4.4. Let A be an ε-approximation for (X, R), and let A' be a δ-approximation for (A, R|_A). Then A' is an (ε + δ)-approximation for (X, R).

The algorithm starts by partitioning the point set X into small pieces of equal size (arbitrarily), and then it alternates between steps of two types: merging and reduction. At the beginning of the ith step, we have a partition Π_i of X into pieces of equal size, and for each such piece P we have an ε_i-approximation of size s_i for (P, R|_P). If the ith step is a merging step, we group the pieces of the partition Π_{i-1} into groups of g_i pieces each (g_i a suitable parameter), and we merge the pieces in each group into a single new piece. (The simplest sequential version has g_i = 2.) The ε_i-approximation for a merged piece is obtained by simply merging the ε_{i-1}-approximations of the pieces being merged; we thus get s_i = g_i · s_{i-1}, and by Observation 4.3 we may take ε_i = ε_{i-1}. If the ith step is a reduction step, we leave the partition of X unchanged (i.e. Π_i = Π_{i-1}), but we replace the ε_{i-1}-approximation A of each piece P in Π_{i-1} by a smaller ε_i-approximation A'. To this end, we use polynomial-time sampling to obtain a δ_i-approximation A' of size s_i for the set system (A, R|_A), with δ_i chosen suitably. By Observation 4.4, such an A' is also an ε_i-approximation for the piece P, with ε_i = ε_{i-1} + δ_i.

Let H be a collection of n hyperplanes in R^d and ε > 0 a parameter. A cutting Ξ is called an ε-cutting for H provided that |H_Δ| ≤ εn for every Δ ∈ Ξ, that is, the interior of no simplex of Ξ is intersected by more than εn hyperplanes of H. The above considerations can be restated by saying that a (1/r)-cutting for H must have Ω(r^d) simplices, and that O((r log r)^d) simplices can actually be achieved. The promised stronger result is:

THEOREM 5.1. For any collection H of n hyperplanes in R^d and a parameter r, 1 < r ≤ n, a (1/r)-cutting consisting of O(r^d) simplices exists, and it can be computed deterministically in O(n r^{d-1}) time, together with the lists of hyperplanes intersecting each simplex of the cutting. The algorithm can be implemented in parallel on an EREW PRAM, with O(log n) running time and O(n r^{d-1}) work. For a certain fixed a = a(d) > 0 and r ≤ n^a, a (1/r)-cutting of size O(r^d) can be computed in O(n log r) time.
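In the plane, the defining condition of an ε-cutting is immediate to check; a small verifier reusing `crosses` from Section 2 (a 2-dimensional stand-in, with closed triangles, so boundary touches are counted conservatively):

    def is_eps_cutting(triangles, H, eps):
        """Verify the ε-cutting condition: no triangle is crossed by more
        than eps * n of the n input lines."""
        n = len(H)
        return all(sum(1 for h in H if crosses(h, tri)) <= eps * n
                   for tri in triangles)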

(For a convex polygon, a bottom-vertex triangulation means drawing all diagonals incident to the bottom vertex, the one with the lexicographically smallest coordinate vector. For convex polytopes of higher dimension, it is defined inductively, by first bottom-vertex triangulating all lower-dimensional faces and then lifting each simplex in these triangulations towards the bottommost vertex of the polytope. There may also be some vertices on simplex boundaries, but it turns out that these cannot play an important role. Here a simplex means an intersection of d + 1 halfspaces, hence also "simplices" going to infinity are allowed.)

We have already remarked that the O(r^d) size is asymptotically optimal. The (sequential) running time O(n r^{d-1}) is also optimal in the following sense: if we want also the


lists H_Δ to be output, then the output size alone is already of order n r^{d-1}, as the lower bound argument shows.

Cuttings via seminets. One way of constructing a (1/r)-cutting of size O(r^d) is as follows. Take a random subset S ⊆ H of size r, and let T(S) be the bottom-vertex triangulation of its arrangement. Consider a simplex Δ ∈ T(S), and put t_Δ = |H_Δ| · (r/n) (the excess of Δ, i.e. how many times |H_Δ| exceeds the "right" value n/r). If t_Δ ≤ 1, leave Δ untouched; otherwise construct some (1/t_Δ)-cutting Ξ_Δ for the collection H_Δ of hyperplanes. Intersect all simplices of Ξ_Δ with the simplex Δ; discard the intersections that are empty, and triangulate the intersections that are not simplices. Denote by Ξ_Δ the resulting collection of simplices. The promised (1/r)-cutting Ξ is the union of all the Ξ_Δ over Δ ∈ T(S). Using the ε-net argument as above, we can guarantee that the size of each Ξ_Δ is O(t_Δ^d log^d t_Δ) = O(t_Δ^{2d}). Then the total number of simplices in Ξ is bounded by

Σ_{Δ ∈ T(S)} O(t_Δ^{2d}),

and the expectation of this sum is O(r^d) by Theorem 4.7(ii) (here it is important that the triangulation T(S) is not arbitrary but the bottom-vertex one; from this the validity of Axiom 2' can be derived). This provides a simple O(nr^{d-1}) randomized algorithm, which can be derandomized in polynomial time. Using sampling from an ε-approximation, one can get an O(nr^{d-1}) deterministic algorithm by this approach if r is not too large, namely if r < n^{1-δ} for some fixed δ > 0 (see [64]). The part of Theorem 5.1 with O(n log r) time for r < n^a requires additional techniques.

The first near-linear deterministic algorithm for the three-dimensional diameter problem runs in O(n^{1+ε}) time for any fixed ε > 0. This was improved by Matousek and Schwarzkopf [93], then by Ramos [107], by Amato et al. [7] to O(n log^3 n) (Bronnimann et al. [20], who claimed the O(n log^3 n) bound earlier, have an error in the diameter computation part; this could probably be fixed, but the resulting algorithm is more complicated than the one of [7]), and finally by Ramos [110] to the current best time O(n log^2 n). Among these, Ramos' O(n log^2 n) algorithm is the most elementary one.

Parametric search is an ingenious algorithmic technique; roughly speaking, under certain favorable circumstances it allows one to convert decision algorithms into search algorithms. It was formulated by Megiddo [87]. A technical improvement, which sometimes reduces the running time by a logarithmic factor, was suggested by Cole [44] (see also Cole et al. [48],


Cohen and Megiddo [41], Norton et al. [104] for a higher-dimensional generalization). Parametric search is usually not considered a derandomization technique, but sometimes it helps considerably in constructing deterministic algorithms.

Linear programming. A straightforward derandomization using ε-nets is applied to Clarkson's algorithm [40] by Chazelle and Matousek [43], which surprisingly yields a linear-time deterministic algorithm with the best known dependence on the dimension. This algorithm is also applicable to other problems similar to linear programming, extending previous results of Dyer [52]. Agarwal et al. [16] give another derandomized linear programming algorithm with a similar efficiency but based on Megiddo's original approach; this algorithm is applicable to yet another class of search and optimization problems. Their technique of searching can also speed up some applications of multidimensional parametric search. A very fast parallel deterministic implementation on a CRCW PRAM was first found by Ajtai and Megiddo [10] using expanders; their algorithm has O((log log n)^d) running time with O(n(log log n)^d) work. Goodrich [59] applies their ideas, ideas of Dyer [53], and a fast parallel computation of ε-nets to give algorithms with O(n) work and O((log log n)^{d+1}) running time on a CRCW PRAM, resp. O(log n (log log n)^{d-1}) time on an EREW PRAM (previously, [53] gave an EREW PRAM algorithm with the same running time and slightly worse work).

Segment arrangements and generalizations. Optimal randomized algorithms for constructing a segment arrangement, with O(n log n + k) expected running time, were discovered by Clarkson and Shor [45] and by Mulmuley [96]. Amato et al. [8] give an O(log^2 n) time, work-optimal EREW PRAM version of their algorithm. An optimal-time deterministic algorithm with O(n) space was also found by Balaban [17]; this one doesn't seem to parallelize easily. [8] can also compute a single face in an arrangement of n segments in O(n α^2(n) log n) deterministic time (compared to the best known O(n α(n) log n) randomized algorithm of Chazelle et al. [26]). They use similar methods to construct a point location structure for an arrangement of n (d - 1)-dimensional possibly intersecting simplices in R^d, d ≥ 3, with O(log n) query time and O(n^{d-1} log^{O(1)} n + k) deterministic preprocessing time and storage, where k is the complexity of the arrangement; this is based on derandomizing Pellegrini's work [105]. Deterministic computation of the lower envelope of n bounded-degree univariate algebraic functions has been studied by Ramos [110] and used as a subroutine in his 3-dimensional diameter algorithm.

Appendix: The parametric search technique

Parametric search is a general strategy for algorithm design. Roughly speaking, it produces algorithms for searching from algorithms for verification, under suitable assumptions. Let us consider a problem in which the goal is to find a particular real number, t*, which depends on some input objects. We consider these input objects fixed. Suppose that we have two algorithms at our disposal: First, an algorithm O, which for a given number t decides among the possibilities t < t*, t = t* and t > t* (although it does not explicitly know t*, only the input objects); let us call such an algorithm O the oracle. Second, an algorithm G (called the generic algorithm), whose computation depends on the input objects and on a real parameter t, and for which it is guaranteed that its computation for t = t* differs from the computation for any other t ≠ t*. We can use algorithm O also in the role of G, but often it is possible to employ a simpler algorithm for G. Under certain quite weak assumptions about algorithm G, parametric search produces an algorithm for finding t*.


The main idea is to simulate the computation of algorithm G for the (yet unknown) parameter value t = t*. The computation of G of course depends on t, but we assume that all the required information about t is obtained by testing the signs of polynomials of small (constant bounded) degree in t. The coefficients in each such polynomial may depend on the input objects of the algorithm and on the outcomes of the previous tests, but not directly on t. The sign of a particular polynomial p can be tested also at the unknown t*: We find the roots t_1, ..., t_k of the polynomial p, we locate t* among them using the algorithm O, and we derive the sign of p(t*) from it. In this way we can simulate the computation of the algorithm G at t*. If we record all tests involving t made by algorithm G during its computation, we can then find the (unique) value t* giving appropriate results in all these tests, thereby solving the search problem. In this version we need several calls to the oracle for every test performed by algorithm G. The second idea is to do many tests at once whenever possible. If algorithm G executes a group of mutually independent tests with polynomials p_1(t), ..., p_m(t) (meaning that the polynomial p_i does not depend on the outcome of the test involving another polynomial p_j), we can answer all of them by O(log m) calls of the oracle: We compute the roots of all the polynomials p_1, ..., p_m and we locate the position of t* among them by binary search. Parametric search will thus be particularly efficient for algorithms G implemented in parallel, with a small number of parallel steps, since the tests in one parallel step are necessarily independent in the above-mentioned sense. Parametric search was formulated by Megiddo [87]; the idea of simulating an algorithm at a generic value appears in [57,61,86]. A technical improvement, which sometimes reduces the running time by a logarithmic factor, was suggested by Cole [44]. A generalization of parametric search to higher dimension, where the parameter t is a point in R^d and the oracle can test the position of t* with respect to a given hyperplane, appears in [48,41,104,83]. Currently parametric search is a quite popular technique in computational geometry; from numerous recent works we select more or less randomly [47,1,15,29]. Algorithms based on parametric search, although theoretically elegant, appear quite complicated for implementation. In many specific problems, parametric search can be replaced by a randomized algorithm (see [51,80,34]) or by other techniques (e.g., [19,72]) with a similar efficiency.
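To make the batched tests concrete, here is a minimal sketch (ours, not from the survey) of locating t* among the roots of a batch of mutually independent test polynomials with O(log m) oracle calls; the oracle and the collected roots are assumed to be given.

#include <algorithm>
#include <cstdio>
#include <functional>
#include <vector>

// oracle(t) returns -1, 0, +1 according to t < t*, t = t*, t > t*.
// Given the roots of m mutually independent test polynomials, t* is
// located among them by binary search with O(log m) oracle calls; the
// sign of every polynomial at t* then follows, since each sign is
// constant between consecutive roots.
int locate_among_roots(std::vector<double> roots,
                       const std::function<int(double)>& oracle) {
    std::sort(roots.begin(), roots.end());
    int lo = 0, hi = static_cast<int>(roots.size());
    while (lo < hi) {                    // invariant: roots[0..lo-1] < t*,
        int mid = (lo + hi) / 2;         //            roots[hi..] > t*
        int s = oracle(roots[mid]);
        if (s == 0) return mid;          // t* hit a root exactly
        if (s < 0) lo = mid + 1; else hi = mid;
    }
    return lo;                           // t* in (roots[lo-1], roots[lo])
}

int main() {
    const double tstar = 0.618;          // hidden inside the oracle
    auto oracle = [&](double t) { return (t < tstar) ? -1 : (t > tstar ? 1 : 0); };
    std::vector<double> roots = {-2.0, -0.5, 0.25, 0.75, 1.5, 3.0};
    std::printf("t* lies just before root index %d\n",
                locate_among_roots(roots, oracle));
    return 0;
}

In a real application the oracle is itself a full decision algorithm and each call is expensive, which is why batching the tests of one parallel step of G pays off.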

Acknowledgment

I am grateful to Bernard Chazelle, to Edgar Ramos, and to two anonymous referees for very useful comments on earlier versions of this paper.

References

[1] P.K. Agarwal, B. Aronov, M. Sharir and S. Suri, Selecting distances in the plane, Algorithmica 9 (1993), 495-514.
[2] N. Alon, L. Babai and A. Itai, A fast and simple randomized algorithm for the maximal independent set problem, J. Algorithms 7 (1986), 567-583.


[3] P.K. Agarwal and J. Erickson, Geometric range searching and its relatives, Advances in Discr. and Comput. Geom., B. Chazelle, J.E. Goodman and R. Pollack, eds, Amer. Math. Soc., Providence (1998).
[4] P.K. Agarwal, Partitioning arrangements of lines: I. An efficient deterministic algorithm, Discrete Comput. Geom. 5 (1990), 449-483.
[5] P.K. Agarwal, Geometric partitioning and its applications, Computational Geometry: Papers from the DIMACS Special Year, J.E. Goodman, R. Pollack and W. Steiger, eds, Amer. Math. Soc. (1991).
[6] N. Alon, O. Goldreich, J. Hastad and R. Peralta, Simple construction of almost k-wise independent random variables, Random Struct. Algorithms 3 (1992), 289-304.
[7] N.M. Amato, M.T. Goodrich and E.A. Ramos, Parallel algorithms for higher-dimensional convex hulls, Proc. 35th Annu. IEEE Sympos. Found. Comput. Sci. (1994), 683-694.
[8] N.M. Amato, M.T. Goodrich and E.A. Ramos, Computing faces in segment and simplex arrangements, Proc. 27th Annu. ACM Sympos. Theory Comput. (1995), 672-682.
[9] N. Alon, D. Haussler and E. Welzl, Partitioning and geometric embedding of range spaces of finite Vapnik-Chervonenkis dimension, Proc. 3rd Annu. ACM Sympos. Comput. Geom. (1987), 331-340.
[10] M. Ajtai and N. Megiddo, A deterministic poly(log log n)-time n-processor algorithm for linear programming in fixed dimensions, Proc. 24th Annu. ACM Sympos. Theory Comput. (1992), 327-338.
[11] P.K. Agarwal and J. Matousek, On range searching with semialgebraic sets, Discrete Comput. Geom. 11 (1994), 393-418.
[12] P.K. Agarwal, J. Matousek and O. Schwarzkopf, Computing many faces in arrangements of lines and segments, SIAM J. Comput. 27 (1998), 491-505.
[13] P.K. Agarwal and M. Sharir, Red-blue intersection detection algorithms, with applications to motion planning and collision detection, SIAM J. Comput. 19 (2) (1990), 297-321.
[14] N. Alon and J. Spencer, The Probabilistic Method, Wiley, New York, NY (1993).
[15] P.K. Agarwal, M. Sharir and S. Toledo, Applications of parametric searching in geometric optimization, Proc. 3rd ACM-SIAM Sympos. Discrete Algorithms (1992), 72-82.
[16] P.K. Agarwal, M. Sharir and S. Toledo, An efficient multidimensional searching technique and its applications, Tech. Report CS-1993-20, Dept. of Computer Science, Duke University (1993).
[17] I. Balaban, An optimal algorithm for finding segment intersections, Proc. 11th ACM Sympos. Comput. Geom. (1995), 211-219.
[18] J. Beck and W. Chen, Irregularities of Distribution, Cambridge University Press (1987).
[19] H. Bronnimann and B. Chazelle, Optimal slope selection via cuttings, Comput. Geom. 10 (1998), 23-29.
[20] H. Bronnimann, B. Chazelle and J. Matousek, Product range spaces, sensitive sampling, and derandomization, Proc. 34th Annu. IEEE Sympos. Found. Comput. Sci. (1993), 400-409. Revised version to appear in SIAM J. Comput.
[21] H. Bronnimann and M. Goodrich, Almost optimal set covers in finite VC-dimension, Discrete Comput. Geom. 14 (1995), 463-484.
[22] B. Berger and J. Rompel, Simulating (log^c n)-wise independence in NC, J. ACM 38 (4) (1991), 1028-1046.
[23] B. Berger, J. Rompel and P.W. Shor, Efficient NC algorithms for set cover with applications to learning and geometry, J. Comput. Syst. Sci. 49 (1994), 454-477.
[24] B. Chazelle and H. Edelsbrunner, An optimal algorithm for intersecting line segments in the plane, J. ACM 39 (1992), 1-54.
[25] K. Clarkson, H. Edelsbrunner, L. Guibas, M. Sharir and E. Welzl, Combinatorial complexity bounds for arrangements of curves and spheres, Discrete Comput. Geom. 5 (1990), 99-160.
[26] B. Chazelle, H. Edelsbrunner, L. Guibas, M. Sharir and J. Snoeyink, Computing a face in an arrangement of line segments and related problems, SIAM J. Comput. 22 (1993), 1286-1302.
[27] B. Chazelle, H. Edelsbrunner, L. Guibas and M. Sharir, A singly-exponential stratification scheme for real semi-algebraic varieties and its applications, Proc. 16th Internat. Colloq. Automata Lang. Program., Lecture Notes Comput. Sci. 372, Springer-Verlag (1989), 179-192.
[28] B. Chazelle, H. Edelsbrunner, L.J. Guibas and M. Sharir, Lines in space: Combinatorics, algorithms, and applications, Proc. 21st Annu. ACM Sympos. Theory Comput. (1989), 382-393.
[29] B. Chazelle, H. Edelsbrunner, L. Guibas and M. Sharir, Diameter, width, closest line pair and parametric searching, Discrete Comput. Geom. 10 (1993), 183-196.
[30] K.L. Clarkson, D. Eppstein, G.L. Miller, C. Sturtivant and S.-H. Teng, Approximating center points with iterated Radon points, Internat. J. Comput. Geom. Appl. 6 (1996), 357-377.


[31] B. Chazelle and J. Friedman, A deterministic view of random sampling and its use in geometry, Combinatorica 10 (3) (1990), 229-249.
[32] B. Chazelle, Cutting hyperplanes for divide-and-conquer, Discrete Comput. Geom. 9 (2) (1993), 145-158.
[33] B. Chazelle, An optimal convex hull algorithm in any fixed dimension, Discrete Comput. Geom. 10 (1993), 377-409.
[34] T.M. Chan, Fixed-dimensional linear programming queries made easy, Proc. 12th Annu. ACM Sympos. Comput. Geom. (1996), 284-290.
[35] L.P. Chew, Building Voronoi diagrams for convex polygons in linear expected time, Technical Report PCS-TR90-147, Dept. Math. Comput. Sci., Dartmouth College, Hanover, NH (1986).
[36] K.L. Clarkson, New applications of random sampling in computational geometry, Discrete Comput. Geom. 2 (1987), 195-222.
[37] K.L. Clarkson, Applications of random sampling in computational geometry, II, Proc. 4th Annu. ACM Sympos. Comput. Geom. (1988), 1-11.
[38] K.L. Clarkson, A randomized algorithm for closest-point queries, SIAM J. Comput. 17 (1988), 830-847.
[39] K.L. Clarkson, Randomized geometric algorithms, Computing in Euclidean Geometry, D.-Z. Du and F.K. Hwang, eds, Lecture Notes Series on Computing, Vol. 1, World Scientific, Singapore (1992), 117-162.
[40] K.L. Clarkson, Las Vegas algorithms for linear and integer programming, J. ACM 42 (1995), 488-499.
[41] E. Cohen and N. Megiddo, Strongly polynomial-time and NC algorithms for detecting cycles in dynamic graphs, J. ACM 40 (1993), 791-832.
[42] B. Chazelle and J. Matousek, Derandomizing an output-sensitive convex hull algorithm in three dimensions, Comput. Geom. 5 (1994), 27-32.
[43] B. Chazelle and J. Matousek, On linear-time deterministic algorithms for optimization problems in fixed dimension, J. Algorithms 21 (1996), 116-132.
[44] R. Cole, Slowing down sorting networks to obtain faster sorting algorithms, J. ACM 34 (1987), 200-208.
[45] K.L. Clarkson and P.W. Shor, Algorithms for diametral pairs and convex hulls that are optimal, randomized, and incremental, Proc. 4th Annu. ACM Sympos. Comput. Geom. (1988), 12-17.
[46] K.L. Clarkson and P.W. Shor, Applications of random sampling in computational geometry, II, Discrete Comput. Geom. 4 (1989), 387-421.
[47] R. Cole, J. Salowe, W. Steiger and E. Szemeredi, An optimal-time algorithm for slope selection, SIAM J. Comput. 18 (1989), 792-810.
[48] R. Cole, M. Sharir and C. Yap, On k-hulls and related problems, SIAM J. Comput. 16 (1) (1987), 61-67.
[49] B. Chazelle and E. Welzl, Quasi-optimal range searching in spaces of finite VC-dimension, Discrete Comput. Geom. 4 (1989), 467-489.
[50] M. de Berg, K. Dobrindt and O. Schwarzkopf, On lazy randomized incremental construction, Discrete Comput. Geom. 14 (1995), 261-286.
[51] M.B. Dillencourt, D.M. Mount and N.S. Netanyahu, A randomized algorithm for slope selection, Internat. J. Comput. Geom. Appl. 2 (1992), 1-27.
[52] M.E. Dyer, A class of convex programs with applications to computational geometry, Proc. 8th Annu. ACM Sympos. Comput. Geom. (1992), 9-15.
[53] M. Dyer, A parallel algorithm for linear programming in fixed dimension, Proc. 11th ACM Symp. on Comput. Geom. (1995), 345-349.
[54] H. Edelsbrunner, Algorithms in Combinatorial Geometry, EATCS Monographs on Theoretical Computer Science, Vol. 10, Springer-Verlag, Heidelberg, West Germany (1987).
[55] G. Even, O. Goldreich, M. Luby, N. Nisan and B. Velikovic, Approximations of general independent distributions, Proc. 24th ACM Symp. on Theory of Computing (1992), 10-16.
[56] P. Erdos and J.L. Selfridge, On a combinatorial game, J. Comb. Theory Ser. A 14 (1973), 298-301.
[57] M. Eisner and D. Severance, Mathematical techniques for efficient record segmentation in large shared databases, J. ACM 23 (1976), 619-635.
[58] M.T. Goodrich, Geometric partitioning made easier, even in parallel, Proc. 9th Annu. ACM Sympos. Comput. Geom. (1993), 73-82.
[59] M. Goodrich, Fixed-dimensional parallel linear programming via relative ε-approximations, Proc. 7th Annual ACM-SIAM Sympos. on Discrete Algorithms (1996), 132-141.


[60] L. Guibas and M. Sharir, Combinatorics and algorithms of arrangements, New Trends in Discrete and Computational Geometry, J. Pach, ed., Algorithms and Combinatorics, Vol. 10, Springer-Verlag (1993), 9-36.
[61] D. Gusfield, Parametric combinatorial computing and a problem of program module distribution, J. ACM 30 (1983), 551-563.
[62] J. Harris, Algebraic Geometry (A First Course), Springer-Verlag, Berlin (1992).
[63] D. Haussler and E. Welzl, Epsilon-nets and simplex range queries, Discrete Comput. Geom. 2 (1987), 127-151.
[64] A. Joffe, On a set of almost deterministic k-independent random variables, Ann. Probab. 2 (1974), 161-162.
[65] G. Kalai, A subexponential randomized simplex algorithm, Proc. 24th Annu. ACM Sympos. Theory Comput. (1992), 475-482.
[66] R. Karp, An introduction to randomized algorithms, Discrete Appl. Math. 34 (1991), 165-201.
[67] D. Karger and D. Koller, (De)randomized construction of small sample spaces in NC, J. Comput. Syst. Sci. 55 (1997), 402-413.
[68] H. Karloff and Y. Mansour, On construction of k-wise independent random variables, Combinatorica 17 (1997), 91-107.
[69] D. Koller and N. Megiddo, Constructing small sample spaces satisfying given constraints, SIAM J. Discrete Math. 7 (1994), 260-274.
[70] J. Komlos, J. Pach and G. Woeginger, Almost tight bounds for ε-nets, Discrete Comput. Geom. 7 (1992), 163-173.
[71] H. Karloff and P. Raghavan, Randomized algorithms and pseudorandom numbers, J. ACM 40 (3) (1993), 454-476.
[72] M.J. Katz and M. Sharir, An expander-based approach to geometric optimization, SIAM J. Comput. 26 (1997), 1384-1408.
[73] M.J. Katz and M. Sharir, Optimal slope selection via expanders, Inform. Process. Lett. 47 (1993), 115-122.
[74] R. Karp and M. Wigderson, A fast parallel algorithm for the maximum independent set problem, J. ACM 32 (1985), 762-773.
[75] C.-Y. Lo, J. Matousek and W.L. Steiger, Algorithms for ham-sandwich cuts, Discrete Comput. Geom. 11 (1994), 433-452.
[76] M. Luby, Removing randomness in parallel computation without processor penalty, J. Comput. Syst. Sci. 47 (1993), 250-286.
[77] M. Luby and A. Wigderson, Pairwise independence and derandomization, Tech. Report UCB/CSD-95-880, Univ. of California at Berkeley (1995). Available electronically at http://www.icsi.berkeley.edu/~luby/pair_sur.html.
[78] J. Matousek, Construction of ε-nets, Discrete Comput. Geom. 5 (1990), 427-448.
[79] J. Matousek, Cutting hyperplane arrangements, Discrete Comput. Geom. 6 (1991), 385-406.
[80] J. Matousek, Randomized optimal algorithm for slope selection, Inform. Process. Lett. 39 (1991), 183-187.
[81] J. Matousek, Efficient partition trees, Discrete Comput. Geom. 8 (1992), 315-334.
[82] J. Matousek, Reporting points in halfspaces, Comput. Geom. 2 (3) (1992), 169-186.
[83] J. Matousek, Linear optimization queries, J. Algorithms 14 (1993), 432-448.
[84] J. Matousek, Approximations and optimal geometric divide-and-conquer, J. Comput. and Syst. Sci. 50 (1995), 203-208.
[85] J. Matousek, Geometric range searching, ACM Comput. Surveys 26 (1995), 421-461.
[86] N. Megiddo, Combinatorial optimization with rational objective functions, Math. Oper. Res. 4 (1979), 414-424.
[87] N. Megiddo, Applying parallel computation algorithms in the design of serial algorithms, J. ACM 30 (1983), 852-865.
[88] N. Megiddo, Linear programming in linear time when the dimension is fixed, J. ACM 31 (1984), 114-127.
[89] R. Motwani, J. Naor and M. Naor, The probabilistic method yields deterministic parallel algorithms, J. Comput. Syst. Sci. 49 (1994), 478-516.
[90] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press (1995).


[91] S. Mahajan, E.A. Ramos and K.V. Subrahmanyam, Solving some discrepancy problems in NC, Proc. Conf. Foundat. Softw. Technology and Theoret. Comput. Sci. (1997).
[92] J. Matousek and O. Schwarzkopf, On ray shooting in convex polytopes, Discrete Comput. Geom. 10 (2) (1993), 215-232.
[93] J. Matousek and O. Schwarzkopf, A deterministic algorithm for the three-dimensional diameter problem, Comput. Geom. 6 (1996), 253-262.
[94] J. Matousek, R. Seidel and E. Welzl, How to net a lot with little: Small ε-nets for disks and halfspaces, Proc. 6th Annu. ACM Sympos. Comput. Geom. (1990), 16-22.
[95] J. Matousek, M. Sharir and E. Welzl, A subexponential bound for linear programming, Algorithmica 16 (1996), 498-516.
[96] K. Mulmuley, A fast planar partition algorithm, I, J. Symbolic Comput. 10 (1990), 253-280.
[97] K. Mulmuley, A fast planar partition algorithm, II, J. ACM 38 (1991), 74-103.
[98] K. Mulmuley, On levels in arrangements and Voronoi diagrams, Discrete Comput. Geom. 6 (1991), 307-338.
[99] K. Mulmuley, Computational Geometry: An Introduction Through Randomized Algorithms, Prentice-Hall, Englewood Cliffs, NJ (1994).
[100] K. Mulmuley, Randomized geometric algorithms and pseudo-random generators, Algorithmica 16 (1996), 450-463.
[101] J. Matousek, E. Welzl and L. Wernisch, Discrepancy and ε-approximations for bounded VC-dimension, Combinatorica 13 (1993), 455-466.
[102] N. Nisan, Pseudorandom generators for space-bounded computation, Combinatorica 12 (1992), 449-461.
[103] J. Naor and M. Naor, Small-bias probability spaces: Efficient construction and applications, SIAM J. Comput. 22 (1993), 838-856.
[104] C.H. Norton, S.A. Plotkin and E. Tardos, Using separation algorithms in fixed dimensions, J. Algorithms 13 (1992), 79-98.
[105] M. Pellegrini, On point location and motion planning among simplices, SIAM J. Comput. 25 (1996), 1061-1081.
[106] P. Raghavan, Probabilistic construction of deterministic algorithms: Approximating packing integer programs, J. Comput. Syst. Sci. 37 (1988), 130-143.
[107] E. Ramos, An algorithm for intersecting equal radius balls in R^3, Tech. Report UIUCDCS-R-94-1851, Dept. of Computer Science, Univ. of Illinois at Urbana-Champaign (1994).
[108] E. Ramos, Private communication (1996).
[109] E. Ramos, Unpublished note, Max-Planck-Institut für Informatik, Saarbrücken (1997).
[110] E. Ramos, Construction of 1-d lower envelopes and applications, Proc. 13th Ann. ACM Sympos. Comput. Geom. (1997).
[111] J.T. Rompel, Techniques for computing with low-independence randomness, PhD thesis, Dept. of EECS, M.I.T. (1990).
[112] M. Sharir and P.K. Agarwal, Davenport-Schinzel Sequences and Their Geometric Applications, Cambridge University Press, Cambridge (1995).
[113] R. Seidel, Constructing higher-dimensional convex hulls at logarithmic cost per face, Proc. 18th Annu. ACM Sympos. Theory Comput. (1986), 404-413.
[114] R. Seidel, A simple and fast incremental randomized algorithm for computing trapezoidal decompositions and for triangulating polygons, Comput. Geom. 1 (1991), 51-64.
[115] R. Seidel, Small-dimensional linear programming and convex hulls made easy, Discrete Comput. Geom. 6 (1991), 423-434.
[116] R. Seidel, Backwards analysis of randomized geometric algorithms, New Trends in Discrete and Computational Geometry, J. Pach, ed., Algorithms and Combinatorics, Vol. 10, Springer-Verlag (1993), 37-68.
[117] J. Spencer, Ten Lectures on the Probabilistic Method, CBMS-NSF, SIAM (1987).
[118] A. Srivastav, Derandomized algorithms in combinatorial optimization, Habilitation thesis, Institut für Informatik, Freie Universität Berlin (1995).
[119] J. Schmidt, A. Siegel and A. Srinivasan, Chernoff-Hoeffding bounds for applications with limited independence, SIAM J. Discrete Math. 8 (1995), 223-250.


[120] V.N. Vapnik and A.Ya. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities, Theory Probab. Appl. 16 (1971), 264-280.
[121] D.E. Willard, Polygon retrieval, SIAM J. Comput. 11 (1982), 149-165.
[122] A. Wigderson and D. Zuckerman, Expanders that beat the eigenvalue bound: Explicit construction and applications, Proc. 25th ACM Symposium on Theory of Computing (1993), 245-251.
[123] A.C. Yao and F.F. Yao, A general approach to D-dimensional geometric queries, Proc. 17th Annu. ACM Sympos. Theory Comput. (1985), 163-168.


CHAPTER 14

Robustness and Precision Issues in Geometric Computation*

Stefan Schirra
Max-Planck-Institut für Informatik, Saarbrücken, Germany

Contents

1. Introduction 599
1.1. Precision, correctness, and robustness 599
1.2. Attacks on the precision problem 601
2. Geometric computation 601
2.1. Geometric predicates 602
2.2. Arithmetic expressions in geometric predicates 602
2.3. Geometric computation with floating-point numbers 603
2.4. Heuristic epsilons 605
3. Exact geometric computation 606
3.1. Exact integer and rational arithmetic 607
3.2. Adaptive evaluation 610
3.3. Interval arithmetic 612
3.4. Exact sign of determinant 612
3.5. Certified epsilons 613
4. Geometric computation with imprecision 615
4.1. Representation and model approach 616
4.2. Epsilon geometry 618
4.3. Topology-oriented approach 619
4.4. Axiomatic approach 620
4.5. Tolerance-based approach 620
4.6. Further and more specific approaches 622
5. Related issues 623
5.1. Degeneracy 623
5.2. Inaccurate data 624
5.3. Rounding 625
5.4. Robustness in geometric algorithms libraries 625
6. Conclusion 626
References 627

*Work on this survey was partially supported by the ESPRIT IV Long Term Research Project No. 21957 (CGAL).



1. Introduction

We give a survey on techniques that have been proposed and successfully used to attack robustness problems in the implementation of geometric algorithms. (This survey is based on Precision and Robustness in Geometric Computations, Chapter 9 of Algorithmic Foundations of Geographic Information Systems, Lecture Notes in Comput. Sci. 1340, Springer-Verlag (1997).) Our attention is directed to precision, more precisely, to how to deal with the notorious problems that imprecise geometric calculations can cause in the implementation of geometric algorithms. (The terms precision and accuracy are often used interchangeably; we mainly adopt the terminology used in [63]: accuracy refers to the relationship between reality and the data representing it, while precision refers to the level of detail with which (numerical) data is represented.) Precision problems can make implementing geometric algorithms very unpleasant [36,37,49,50,94] if no appropriate techniques are used to deal with imprecision.

1.1. Precision, correctness, and robustness

Geometric algorithms are usually designed and proven to be correct in a computational model that assumes exact computation over the real numbers. In implementations of geometric algorithms, exact real arithmetic is mostly replaced by the fast finite-precision floating-point arithmetic provided by the hardware of a computer system. For some problems and restricted sets of input data, this approach works well, but in many implementations the effects of squeezing the infinite set of real numbers into the finite set of floating-point numbers can cause catastrophic errors in practice. Due to rounding errors many implementations of geometric algorithms crash, loop forever, or, in the best case, simply compute wrong results for some of the inputs for which they are supposed to work. Figure 1 gives an example. The conditionals in a program are most critical because they determine the flow of control. If in every test the same decision is made as if all computations had been done over the reals, the algorithm is always in a state equivalent to that of its theoretical counterpart. In this case, the combinatorial part of the geometric output of the algorithm will be correct. Numerical data, however, computed by the algorithm might still be imprecise. In a branching step of a geometric algorithm, numerical values are compared. Without loss of generality we can assume that one of the values is zero, i.e. that the branching is on the sign of the value of an arithmetic expression. In the theoretical model of computation a real-valued expression is evaluated correctly for all real input data, but in practice only an approximation is computed. Thus a wrong sign might be computed and hence the algorithm might branch incorrectly. Such a wrong decision has been made in the computation of the "triangulation" shown in Figure 1. An incorrect result is one possible consequence of an incorrect decision. Program crashing is the other possibility. Decisions made in branching steps are usually not independent. Mutually contradicting decisions violating basic laws of geometry may take the algorithm to a state which could never be reached with correct decisions. Since the algorithm was not designed for such states, it crashes. Therefore segmentation faults and bus errors are more likely than incorrect results.


Fig. 1. Incorrect Delaunay triangulation. The error was caused by precision problems, see [130] for more details. The correct Delaunay triangulation is given in Figure 2. Courtesy of J.R. Shewchuk [130,132].

Fig. 2. Correct Delaunay triangulation. Courtesy of J.R. Shewchuk [130,132].

In general, robustness is a measure of the ability to recover from error conditions, e.g., tolerance of failures of internal components or errors in input data. Often an implementation of an algorithm is considered to be robust if it produces the correct result for some perturbation of the input. It is called stable if the perturbation is small. This terminology has been adopted from numerical analysis where backward error analysis is used to get bounds on the sizes of the perturbations. Geometric computation, however, goes beyond numerical computation. Since geometric problems involve not only numerical but also combinatorial data it is not always clear what perturbation of the input, especially of the combinatorial part, means. Perturbation of the input is justified by the fact that in many geometric problems the numerical data are real world data obtained by measuring and hence known to be inaccurate.


1.2. Attacks on the precision problem

There are two obvious approaches for solving the precision problem. The first is to change the model of computation: design algorithms that can deal with imprecise computation. For a small number of basic problems this approach has been applied successfully, but a general theory of how to design algorithms with imprecise primitives or how to adapt algorithms designed for exact computation with real numbers is still a distant goal [67]. The second approach is exact computation: compute with a precision that is sufficient to keep the theoretical correctness of an algorithm designed for real arithmetic alive. This is basically possible, at least theoretically, in almost all cases arising in practical geometric computing. The second approach is promising, because it allows exact implementations of numerous geometric algorithms developed for real arithmetic without modifications of these algorithms. However, exact computation slows down the computation, and the overhead in running time can be tremendous, especially in cascaded computations, where the output of one computation is used as input by the next.

2. Geometric computation

A geometric problem can be seen as a mapping from a set of permitted input data, consisting of a combinatorial and a numerical part, to a set of valid output data, again consisting of a combinatorial and a numerical part. A geometric algorithm solves a problem if it computes the output specified by the problem mapping for a given input. For some geometric problems the numerical data of the output are a subset of the data of the input. Those geometric problems are called selective. In other geometric problems new geometric objects are created which involve new numerical data that have to be computed from the input data. Such problems are called constructive. Geometric problems might have various facets; even basic geometric problems appear in different variants. We use two classical geometric problems for illustration: convex hull and intersection of line segments in two dimensions. In the two-dimensional convex hull problem the input is a set of points. The numerical part might consist of the coordinates of the input points; the combinatorial part is simply the assignment of the coordinate values to the points in the plane. The output might be the convex hull of the set of points, i.e., the smallest convex polygon containing all the input points. The combinatorial part of the output might be the sorted cyclic sequence of the points on the convex hull in counterclockwise order. The point coordinates form the numerical part of the output. In a variant of the problem only the extreme points among the input points have to be computed, where a point is called extreme if its deletion from the input set would change the convex hull. Note that the problem is selective according to our definition even if a convex polygon and hence a new geometric object is constructed. In the line segment intersection problem the intersections among a set of line segments are computed. The numerical input data are the coordinates of the segment endpoints; the combinatorial part of the input just pairs them together. The combinatorial part of the output might be a combinatorial embedding of a graph whose vertices are the endpoints of the segments and the points of intersection between the segments. Edges connect two


vertices if they belong to the same line segment l and no other vertex lies between them on l. Combinatorial embedding means that the set of edges incident to a vertex are given in cyclic order. The numerical part is formed by the coordinates of the points assigned to the vertices in the graph. Since the intersection points are in general not part of the input, the problem is constructive. A variant might ask only for all pairs of segments that have a point in common. This version is selective.

2.1. Geometric predicates

Geometric primitives are the basic operations in geometric algorithms. There is a fairly small set of such basic operations that cover most of the computations in computational geometry algorithms. Geometric primitives subsume predicates and constructions of basic geometric objects, like line segments or circles. Geometric predicates test properties of basic geometric objects. They are used in conditional tests that direct the control flow in geometric algorithms. Well-known examples are: testing whether two line segments intersect, testing whether a sequence of points defines a right turn, or testing whether a point is inside or on the circle defined by three other points. Geometric predicates involve the comparison of numbers which are given by arithmetic expressions. The operands of the expressions are numerical data of the geometric objects that are tested and constants, usually integers. Expressions differ by the operations used, but many geometric predicates involve arithmetic expressions over +, −, · only, or can at least be reformulated in such a way.

2.2. Arithmetic expressions in geometric predicates

One can think of an arithmetic expression as a labeled binary tree. Each inner node is labeled with a binary or unary operation. It has pointers to trees defining its operands. The pointers are ordered corresponding to the order of the operands. The leaves are labeled with constants or variables which are placeholders for numerical input values. Such a representation is called an expression tree. The numerical data that form the operands in an expression evaluated in a geometric predicate in the execution of a geometric algorithm might again be defined by previously evaluated expressions. Tracing these expressions backwards we finally get expressions on numerical input data whose values for concrete problem instances have to be compared in the predicates. Since intermediate results are used in several places in an expression we get a directed acyclic graph (dag) rather than a tree. Without loss of generality we may assume that the comparison of numerical values in predicates is a comparison of the value of some arithmetic expression with zero. The depth of an expression tree is the length of the longest root-to-leaf path in the tree. For many geometric problems the depth of the expressions appearing in the predicates is bounded by some constant [151]. Expressions over input variables involving operations +, −, · only are called polynomial, because they define multivariate polynomials in the variables. If all constants in the expression are integral, a polynomial expression is called integral. The degree of a polynomial expression is the total degree of the resulting multivariate polynomial.
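As a concrete illustration, the following sketch (ours) evaluates the classical right-turn/left-turn predicate as the sign of an integral polynomial expression of degree 2 in the point coordinates; evaluated naively in doubles like this, it is subject to exactly the sign errors discussed in this chapter.

#include <cstdio>

struct Point { double x, y; };

// Sign of the degree-2 polynomial expression
//   (q.x - p.x)*(r.y - p.y) - (q.y - p.y)*(r.x - p.x):
// +1 for a left turn p->q->r, -1 for a right turn, 0 for collinear points.
// With plain double arithmetic the computed sign can be wrong for nearly
// collinear inputs -- the precision problem this chapter is about.
int orientation(Point p, Point q, Point r) {
    double det = (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
    return (det > 0.0) - (det < 0.0);
}

int main() {
    Point p = {0.0, 0.0}, q = {2.0, 1.0}, r = {4.0, 2.0};
    std::printf("orientation = %d (0 means collinear)\n", orientation(p, q, r));
    return 0;
}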


In [19,91] the notion of the degree of an expression is extended to expressions involving square roots. An expression involving operations +, −, ·, / only is called rational.

2.3. Geometric computation with floating-point numbers

Floating-point numbers are the standard substitution for real numbers in scientific computation. In some programming languages the floating-point number type is even called real [81]. Since most geometric computations are executed with floating-point arithmetic, it is worth taking a closer look at floating-point computation. Goldberg [62] gives an excellent overview. A finite-precision floating-point system has a base B, a fixed mantissa length l (also called significand length or precision), and an exponent range [e_min .. e_max]. A floating-point number

±d_0.d_1 d_2 ... d_{l-1} · B^e,  0 ≤ d_i < B,  e_min ≤ e ≤ e_max,

represents the number ±(d_0 + d_1 · B^{-1} + d_2 · B^{-2} + ... + d_{l-1} · B^{-l+1}) · B^e. A representation of a floating-point number is called normalized iff d_0 ≠ 0. For example, the rational number 1/2 has representations 0.500 · 10^0 or 5.000 · 10^{-1} in a floating-point system with base 10 and mantissa length 4, and normalized representation 1.00 · 2^{-1} in a floating-point system with base 2 and mantissa length 3. Since an infinite set of numbers is represented by finitely many floating-point numbers, rounding errors occur. A real number is called representable if it is zero or its absolute value is in the interval [B^{e_min}, B^{e_max+1}). Let r be some real number and f_r be a floating-point representation for r. Then |r − f_r| is called the absolute error and |r − f_r|/|r| is called the relative error. The relative error of rounding a representable real toward the nearest floating-point number in a floating-point system with base B and mantissa length l is bounded by (1/2) · B^{1-l}, which is called the machine epsilon. Calculations can underflow or overflow, i.e., leave the range of representable numbers. Fortunately, the times where the results of floating-point computations could drastically differ from one machine to another, depending on the precision of the floating-point machinery, seem to be coming to an end. The IEEE standard 754 for binary floating-point computation [133] is becoming widely accepted by hardware manufacturers. The IEEE standard 754 requires that the results of +, −, ·, / and √ are exactly rounded, i.e., the result is the exact result rounded according to the chosen rounding mode. The default rounding mode is round to nearest. Ties in round to nearest are broken such that the least significant bit becomes 0. Besides rounding toward nearest, rounding toward zero, rounding toward +∞, and rounding toward −∞ are rounding modes that have to be supported according to IEEE standard 754. The standard makes reasoning about correctness of a floating-point computation machine-independent. The result of the basic operations will be the same on different machines if both support the IEEE standard and the same precision is used. Thereby code becomes portable.
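Exact rounding is easy to observe directly. The following small illustration (ours, assuming IEEE 754 double precision) shows round-to-nearest at work around 1.0, where the gap between consecutive doubles is 2^-52.

#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    // DBL_EPSILON = 2^-52 is the gap between 1.0 and the next double; the
    // machine epsilon in the sense of the text (the relative error bound
    // of round-to-nearest) is half of it, 2^-53.
    std::printf("DBL_EPSILON = %.17g\n", DBL_EPSILON);
    // 1 + 2^-53 lies exactly halfway between 1.0 and 1.0 + DBL_EPSILON;
    // round-to-nearest (ties to even) rounds it back to 1.0.
    std::printf("1 + 2^-53 == 1 ? %d\n", (1.0 + std::ldexp(1.0, -53)) == 1.0);
    std::printf("1 + 2^-52 == 1 ? %d\n", (1.0 + std::ldexp(1.0, -52)) == 1.0);
    std::printf("next double after 1.0 = %.17g\n", std::nextafter(1.0, 2.0));
    return 0;
}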


The IEEE standard 754 specifies floating-point computation in single, single extended, double, and double extended precision. Single precision is specified for a 32 bit word, double precision for two consecutive 32 bit words. In single precision the mantissa length is l = 24 and the exponent range is [−126 .. 127]. Double precision has mantissa length l = 53 and exponent range [−1022 .. 1023]. Hence the relative errors are bounded by 2^{-24} and 2^{-53}, respectively. The single and double precision formats usually correspond to the number types float and double in C++. Floating-point numbers are represented in normalized representation. Since the zeroth bit is always 1 in normalized representation with base 2, it is not stored. There are exceptions to this rule. Denormalized numbers are added to let the floating-point numbers underflow nicely and preserve the property "x − y = 0 iff x = y". Zero and the denormalized numbers are represented with exponent e_min − 1. Besides these floating-point numbers there are special quantities +∞, −∞ and NaN (Not a Number) to handle exceptional situations. For example, −1.0/0.0 = −∞, NaN is the result of 0.0/0.0, and +∞ is the result of overflow in the positive range. Due to the unavoidable rounding errors, floating-point arithmetic is inherently imprecise. Basic laws of arithmetic like associativity and distributivity are not satisfied by floating-point arithmetic. Section 13.2 in [108] gives some examples. Since the standard (almost) fixes the layout of bits for mantissa and exponent in the representation of floating-point numbers, bit-operations can be used to extract information. Naively applied floating-point arithmetic can set axioms of geometry out of order. A classical example is Ramshaw's braided lines (see Figure 3 and [108,109]). Rewriting an expression to get a numerically more stable evaluation order can already help a lot: Goldberg [62] gives the following example due to Kahan. Consider a triangle with sides of length a, b, c, respectively. The area of such a triangle is √(s(s − a)(s − b)(s − c)), where s = (a + b + c)/2. For a = 9.0, b = c = 4.53 the correct value of s in a floating-point system with base 10, mantissa length 3 and exact rounding is 9.03, while the computed value of s is 9.05. The area is 2.34; the computed area, however, is 3.04, an error of nearly 30%. Using the expression

√((a + (b + c)) · (c − (a − b)) · (c + (a − b)) · (a + (b − c))) / 4

one gets 2.35, an error of less than 1%. For a less needle-like triangle with a = 6.9, b = 3.68, and c = 3.48 the improvement is not so drastic. Using the first expression, the result computed by a floating-point system with base 10, mantissa length 3 and exact rounding is 3.36. The second expression gives 3.3. The exact area is approximately 3.11. One can show that the relative error of the second expression is at most 11 times the machine epsilon [62]. Rewriting also helps with the braided lines. If the ordinates are computed as (4.3/8.3) · x and (1.4/2.7) · x, there is no braiding anymore. The lines still do have more than one point in common, but besides the crossing at the origin there are no further crossings anymore.
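The following sketch (ours) reproduces Kahan's example by simulating the base-10, mantissa-length-3 system: the helper fl() rounds every intermediate result to three significant decimal digits.

#include <cmath>
#include <cstdio>

// Round x to 3 significant decimal digits, simulating a base-10
// floating-point system with mantissa length 3 and round-to-nearest.
double fl(double x) {
    if (x == 0.0) return 0.0;
    double e = std::floor(std::log10(std::fabs(x)));
    double scale = std::pow(10.0, e - 2);          // keep 3 significant digits
    return std::round(x / scale) * scale;
}

int main() {
    double a = 9.0, b = 4.53, c = 4.53;
    // Naive Heron formula, every operation rounded.
    double s = fl(fl(a + fl(b + c)) / 2.0);        // computed s = 9.05, exact 9.03
    double heron = fl(std::sqrt(
        fl(fl(fl(s * fl(s - a)) * fl(s - b)) * fl(s - c))));
    // Kahan's rearrangement, every operation rounded.
    double t = fl(fl(fl(a + fl(b + c)) * fl(c - fl(a - b))) *
                  fl(fl(c + fl(a - b)) * fl(a + fl(b - c))));
    double kahan = fl(fl(std::sqrt(t)) / 4.0);
    std::printf("naive: %.3g   rearranged: %.3g   (exact area ~ 2.34)\n",
                heron, kahan);                     // prints 3.04 and 2.35
    return 0;
}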





Fig. 3. Evaluation of the line equations y = 4.3 · x/8.3 and y = 1.4 · x/2.7 in a floating-point system with base 10 and mantissa length 2 and rounding to nearest suggests that the lines have several intersection points besides the true intersection point at the origin.

As the examples above show, the way a numerical value is computed influences its precision. Summation of floating-point numbers is another classical example for such effects. Rearranging the summands can help to reduce imprecision due to extinction.

2.4. Heuristic epsilons

A widely used method to deal with numerical inaccuracies is based on the rule of thumb: if something is close to zero it is zero. Some trigger value ε_magic is added to a conditional test where a numerical value is compared to zero. If the absolute value of the computed approximation is smaller than ε_magic it is treated as zero. Adding such epsilons is popular folklore. What should the ε_magic be? In practice, ε_magic is usually chosen as some fixed tiny constant and hence not sensitive to the actual sizes of the operands in a concrete expression. Furthermore, the same epsilon is often taken for all comparisons, no matter which expression or which predicate is being evaluated. Usually, no proof is given that the chosen ε_magic makes sense; ε_magic is guessed and adjusted by trial and error until the current value works for the considered inputs, i.e., until no catastrophic errors occur anymore. Yap [150] suggests calling this procedure epsilon-tweaking. Adding epsilons is justified by the following reasoning: If something is so close to zero, then a small modification of the input, i.e., a perturbation of the numerical data by a small


Fig. 4. A locally straight line.

amount, would lead to value zero in the evaluated expression. There are, however, severe problems with that reasoning. The size of the perturbation causes a problem. The justification for adding epsilons assumes that the perturbation of the (numerical) input is small. Even if such a small perturbation exists for each predicate, the existence of a global small perturbation of the input data is not guaranteed. Figure 4 shows a polyline, where every three consecutive vertices are collinear under the "close to zero is zero" rule. In each case, a fairly small perturbation of the points exists that makes them collinear. There is, however, no small perturbation that makes the whole polyline straight. The example indicates that collinearity is not transitive. Generally, equality is not transitive under epsilon-tweaking. This might be the most serious problem with this approach. Another problem is that different tests might require different perturbations; e.g., predicate P_1 might require a larger value for input variable x_56 while test P_2 requires a smaller value, such that both expressions evaluate to zero. There might be no perturbation of the input data that leads to the decisions made by the "close to zero is zero" rule. Finally, a result computed with "close to zero is zero" is not the exact result for the input data but only for a perturbation of it. For some geometric problems this might cause trouble, since the computed output and the exact output can be combinatorially very different [22].
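The following sketch (ours) makes epsilon-tweaking and its non-transitivity concrete: with a fixed trigger value, every three consecutive vertices of a nearly flat polyline pass the collinearity test, while a triple spanning the polyline fails it.

#include <cmath>
#include <cstdio>

// Folklore "close to zero is zero" with a fixed, unjustified trigger value.
const double EPS_MAGIC = 1e-9;

struct Point { double x, y; };

bool collinear(Point p, Point q, Point r) {
    double det = (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
    return std::fabs(det) < EPS_MAGIC;    // treat "close to zero" as zero
}

int main() {
    // Points on a very flat parabola: each consecutive triple has
    // |det| = 2h < EPS_MAGIC, but v0, v1, v4 has |det| = 12h > EPS_MAGIC.
    const double h = 2e-10;
    Point v[5];
    for (int i = 0; i < 5; ++i) v[i] = {double(i), h * i * i};
    for (int i = 0; i + 2 < 5; ++i)
        std::printf("v%d v%d v%d collinear: %d\n",
                    i, i + 1, i + 2, collinear(v[i], v[i + 1], v[i + 2]));
    std::printf("v0 v1 v4 collinear: %d\n", collinear(v[0], v[1], v[4]));
    return 0;
}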

3. Exact geometric computation

An obvious approach to the precision problem is to compute "exactly". In this approach the computation model over the reals is mimicked in order to preserve the theoretical correctness proof. Exact computation means to ensure that all decisions made by the algorithm are correct decisions for the actual input, not only for some perturbation of it. As we shall see, it does not mean that exact representations for all numerical values have to be computed in all calculations. Approximations that are sufficiently close to the exact value can often be used to guarantee the correctness of a decision. Empirically it turns out to be true for most of the decisions made by a geometric algorithm that approximations are sufficient. Only degenerate and nearly degenerate situations cause problems. That is why most implementations based on floating-point numbers work very well for the majority of the considered problem instances and fail only occasionally. This is made possible by the fact that the numerical input data for geometric algorithms are hardly arbitrary real numbers. In almost all cases the numerical input data are rationals given as floating-point numbers or even integers. If an implementation of an algorithm does all branchings the same way as its theoretical counterpart, the control flow in the implementation corresponds to the control flow of the algorithm proved to be correct under the assumption of exact computation over the reals,


and hence the validity of the combinatorial part of the computed output follows. Thus, for selective geometric problems, it is sufficient to guarantee correct decisions, since all numerical data are already part of the input. For constructive geometric problems, new numerical data have to be computed "exactly". A representation of a real number r should be called exact only if it allows one to compute an approximation of r to whatever precision, i.e. no information has been lost. According to Yap [150] a representation of a subset of the reals is exact if it allows the exact comparison of any two real numbers in that representation. This reflects the necessity for correct comparisons in branching steps in the exact geometric computation approach. Examples of exact representations are the representation of rationals by numerator and denominator, where both are arbitrary precision integers, and the representation of algebraic numbers by an integral polynomial P having root α and an interval that isolates α from the other roots of P. Further examples are symbolic and implicit representations. For example, rather than compute the coordinates of an intersection point of line segments explicitly, one can represent them implicitly by maintaining the intersecting segments. Another similar example is the representation of a number by an expression dag, which reflects the computation history. Allowing symbolic or implicit representation can be seen as turning a constructive geometric problem into a selective one. As suggested in the discussion above, there are different flavors of exact geometric computation. In the last decade, much progress has been made in improving the efficiency of exact geometric computation (see also [151] and [150] for an overview).
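As a minimal instance of exact comparison (ours; the general case needs the arbitrary precision integers discussed below), two rationals stored as integer numerator/denominator pairs can be compared exactly by cross-multiplication, as long as the cross products cannot overflow:

#include <cstdint>
#include <cstdio>

// Exact comparison of a/b and c/d (with b, d > 0) without division:
// sign(a/b - c/d) = sign(a*d - c*b).  For 32-bit numerators and
// denominators the cross products fit into 64 bits, so no precision is
// lost; the general case replaces int64 by arbitrary precision integers.
int compare_rational(std::int32_t a, std::int32_t b,
                     std::int32_t c, std::int32_t d) {
    std::int64_t lhs = std::int64_t(a) * d;
    std::int64_t rhs = std::int64_t(c) * b;
    return (lhs > rhs) - (lhs < rhs);
}

int main() {
    // 1/3 versus 333333333/1000000000: far too close for any fixed
    // trigger value, but distinguished exactly here.
    std::printf("%d\n", compare_rational(1, 3, 333333333, 1000000000));
    return 0;
}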

3.1. Exact integer and rational arithmetic

A number of geometric predicates in basic geometric problems include only integral expressions in their tests. Thus, if all numerical input data are integers, the evaluation of these predicates involves integers only. With the integer arithmetic provided by the hardware only overflow may occur, but no rounding errors. The problem with overflow in integral computation is abolished if arbitrary precision integer arithmetic is used. There are several software packages for arbitrary or multiple precision integers, e.g., BigNum [129], GNU MP [65], LiDIA [90], or the number type integer in LEDA [95,96]. Fortune and van Wyk [57,58] report on experiments with such packages. Since the integral input data are usually bounded in size, e.g., by the maximal representable int, there is not really a need for arbitrary precision integers. Multiple precision integer arithmetic with a fixed precision, adjusted to the maximum possible integer size in the input and the degree of the integral polynomial expression arising in the computation, is adequate. If the input integers have binary representation with at most b bits, then an integer arithmetic for integers with d·b + log m + O(1) bits suffices to evaluate an integral polynomial expression with m monomials of degree at most d, where we assume that the coefficients of the monomials are bounded by a constant. If v is the number of numerical input data involved in a polynomial expression, then m is bounded by (v + 1)^d. The degree of polynomial expressions in geometric predicates has recently gained attention as an additional measure of algorithmic complexity in the design of geometric algorithms. Liotta et al. [91] investigate the degree involved in some proximity problems in 2-


and 3-dimensional space, Boissonnat and Preparata [15] investigate the degree involved in line segment intersection. Many predicates include only expressions involving operations +, −, ·, /. In most of the problems discussed in textbooks on computational geometry [16,31,40,85,88,92,107,112,117] all predicates are of this type. Such problems are called rational [151]. A rational number can be exactly stored as a pair of arbitrary precision integers representing numerator and denominator, respectively. Let us call this exact rational arithmetic. The intermediate values computed in rational problems are often solutions to systems of linear equations, like the coordinates of the intersection point of two straight lines. Division can be avoided in rational predicates; e.g., exact rational arithmetic postpones division. With exact rational arithmetic, numerator and denominator of the result of the evaluation of a rational expression are integral polynomial expressions in the numerators and denominators of the rational operands. A sign test for a rational expression can be done by two sign tests for integral polynomial expressions. Hence rational expressions in conditional tests in geometric predicates can be replaced by tests involving integral polynomial expressions. Homogeneous coordinates, known from projective geometry and computer graphics, can be used to avoid division, too. In homogeneous representation, a point in d-dimensional affine space with Cartesian coordinates (x_0, x_1, ..., x_{d-1}) is represented by a vector (hx_0, hx_1, ..., hx_{d-1}, hx_d) such that x_i = hx_i/hx_d for all 0 ≤ i ≤ d − 1. Note that the homogeneous representation of a point is not unique; multiplication of the homogeneous representation vector with any λ ≠ 0 gives a representation of the same point. The homogenizing coordinate hx_d is a common denominator of the coordinates. For example, homogeneous representation allows division-free representation of the intersection point of two straight lines given by a·X + b·Y + c = 0 and d·X + e·Y + f = 0. The intersection point can be represented by the homogeneous coordinates (b·f − c·e, c·d − a·f, a·e − b·d). A test including rational expressions in Cartesian coordinates transforms into a test including only polynomial expressions in homogeneous coordinates after multiplication with an appropriate product of homogenizing coordinates. Since all monomials appearing in the resulting expressions have the same degree in the homogeneous coordinates, the resulting polynomial is a homogeneous polynomial. For example, the test "a·x_0 + b·x_1 + c = 0?", which tests whether the point (x_0, x_1) is on the line given by the equation a·X + b·Y + c = 0, transforms into "a·hx_0 + b·hx_1 + c·hx_2 = 0?". Many geometric predicates that do not obviously involve only integral polynomial expressions can be rewritten so that they do. Above, we have illustrated this for rational problems. Even sign tests for expressions involving square roots can be turned into a sequence of sign tests of polynomial expressions by repeated squaring [21,91]. Therefore, multiple or arbitrary precision integer arithmetic is a powerful tool for exact geometric computation, but such integer arithmetic has to be supplied by software and is therefore much slower than the hardware-supported integer arithmetic. The actual cost of an operation on arbitrary precision integers depends on the size of the operands, more precisely on the length of their binary representation.
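A sketch (ours) of the division-free line intersection just described, with integral input coordinates: every output coordinate is an integral polynomial expression of degree 2, so 32-bit inputs fit comfortably into 64-bit outputs.

#include <cstdint>
#include <cstdio>

struct HomPoint { std::int64_t hx, hy, hw; };  // the point (hx/hw, hy/hw)

// Intersection of a*X + b*Y + c = 0 and d*X + e*Y + f = 0 in homogeneous
// coordinates: no division, only degree-2 integral expressions, so 32-bit
// inputs cannot overflow the 64-bit outputs.  hw = 0 iff the lines are
// parallel (no affine intersection point).
HomPoint intersect(std::int32_t a, std::int32_t b, std::int32_t c,
                   std::int32_t d, std::int32_t e, std::int32_t f) {
    return { std::int64_t(b) * f - std::int64_t(c) * e,
             std::int64_t(c) * d - std::int64_t(a) * f,
             std::int64_t(a) * e - std::int64_t(b) * d };
}

int main() {
    // X + Y - 2 = 0 and X - Y = 0 meet in (1, 1); the homogeneous result
    // (-2, -2, -2) is one of its (equivalent) representations.
    HomPoint p = intersect(1, 1, -2, 1, -1, 0);
    std::printf("(%lld : %lld : %lld)\n",
                (long long)p.hx, (long long)p.hy, (long long)p.hw);
    return 0;
}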
If expressions of large depth are involved in the geometric calculations, the size of the operands can increase drastically. In the literature, huge slow-down factors are reported if floating-point arithmetic is simply replaced by exact


rational arithmetic. Karasick, Lieber, and Nackman [84] report slow-down factors of about 10000. While in most rational problems the depth of the involved rational expressions is a small constant, there are problems where the size of the numbers has a linear dependence on the problem size. An example is computing minimum link paths inside simple polygons [82]. Numerator and denominator of the knick-points on a minimum link path can have superquadratic bitlength with respect to the number of polygon vertices [82]. This is by the way a good example of how strange the assumption of constant time arithmetic operations in theory may be in practice. Fortune and van Wyk [57,58] noticed that in geometric computations the sizes of the integers are small to medium compared to those arising in computer algebra and number theory. Multiple precision integer packages are mainly used in these areas and hence tuned for good performance with larger integers. Consequently Fortune and van Wyk developed LN [56], a system that generates efficient code for integer arithmetic with fairly "little" numbers. LN takes an expression and a bound on the size of the integral operands as input. The generated code is very efficient if all operands are of the same order of magnitude as the bound. For much smaller operands the generated code is clearly not optimal. LN can be used to trim integer arithmetic in an implementation of a geometric algorithm for special applications. On the other hand, LN is not useful for generating general code. Chang and Milenkovic report on the use of LN in [27]. For integral polynomial expressions, modular arithmetic [2,86] is an alternative to arbitrary precision integer arithmetic. Let /?o, Pi, • • •, py^-i be a set of integers that are pairwise relatively prime and let p be the product of the pt. By the Chinese remainder theorem there is a one-to-one correspondence between the integers r with —LfJ ^ ^ < Ffl and the ktuples (ro, r i , . . . , rk-\) with — [^ J ^ri < [^"1. By the integer analog of the Lagrangian interpolation formula for polynomials [2], we have k-i

$$ r = \Bigl(\sum_{i=0}^{k-1} r_i\, s_i\, q_i\Bigr) \bmod p, $$

where r_i = r mod p_i, q_i = p/p_i, and s_i = q_i^{−1} mod p_i. Note that s_i exists because of the relative primality and can be computed with an extended Euclidean gcd algorithm [86]. To evaluate an expression, a set of relatively prime integers is chosen such that the product of the primes is at least twice the absolute value of the integral value of the expression. Then the expression is evaluated modulo each p_i. Finally, Chinese remaindering is used to reconstruct the value of the expression. Modular arithmetic is frequently used in number theory, but not much is known about its application to exact geometric computation. Fortune and van Wyk [57,58] compared modular arithmetic with multiple precision integers provided by software packages for a few basic geometric problems, without observing much of a difference in performance. Recently, however, Brönnimann et al. reported promising results concerning the use of modular arithmetic in combination with single precision floating-point arithmetic for sign evaluation of determinants [17], and Emiris reported on the use of modular arithmetic in the computation of general dimensional convex hulls [42].

Modular arithmetic is particularly useful if intermediate results can be very large, but the final result is known to be relatively small. The drawback is that a good bound on the size of the final result must be known in order to choose sufficiently many relatively prime integers, but not too many.
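The following minimal sketch illustrates the scheme for an integral expression, here the sign of a 2×2 determinant; the particular moduli, the fixed expression, and the use of 64-bit (and, via a GCC/Clang extension, 128-bit) integers are assumptions made for this illustration.

```cpp
#include <cstdint>

// Modular inverse via the extended Euclidean algorithm;
// assumes gcd(a, m) == 1.
std::int64_t inv_mod(std::int64_t a, std::int64_t m) {
    std::int64_t r0 = m, r1 = ((a % m) + m) % m, s0 = 0, s1 = 1;
    while (r1 != 0) {
        std::int64_t q = r0 / r1;
        std::int64_t t = r0 - q * r1; r0 = r1; r1 = t;
        t = s0 - q * s1; s0 = s1; s1 = t;
    }
    return ((s0 % m) + m) % m;
}

// Sign of det = a*d - b*c, evaluated modulo pairwise prime moduli and
// reconstructed by Chinese remaindering into the symmetric range
// (-p/2, p/2]. The moduli must be chosen so that p = p_0*p_1*p_2 is
// more than twice |det|; here p is about 1e18, which suffices for
// operands up to roughly 5*10^8 in absolute value.
int det2_sign_modular(std::int64_t a, std::int64_t b,
                      std::int64_t c, std::int64_t d) {
    const std::int64_t P[3] = {1000003, 1000033, 1000037};  // pairwise prime
    __int128 p = (__int128)P[0] * P[1] * P[2];  // __int128: GCC/Clang extension
    __int128 r = 0;
    for (int i = 0; i < 3; ++i) {
        std::int64_t pi = P[i];
        // r_i = det mod p_i, computed without ever forming det itself.
        std::int64_t ri = (((a % pi) * (d % pi) - (b % pi) * (c % pi)) % pi + pi) % pi;
        __int128 qi = p / pi;                                     // q_i = p / p_i
        std::int64_t si = inv_mod((std::int64_t)(qi % pi), pi);   // s_i = q_i^{-1} mod p_i
        r += (__int128)ri * si % p * qi % p;                      // accumulate r_i*s_i*q_i
    }
    r %= p;
    if (r > p / 2) r -= p;  // symmetric representative recovers the sign
    return (r > 0) - (r < 0);
}
```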

3.2. Adaptive evaluation

Replacing exact arithmetic, on which the correctness of a geometric algorithm is based, by imprecise finite-precision arithmetic usually works in practice for many of the given input data and fails only occasionally. Thus always computing exact values would put a burden on the algorithm that is rarely really needed. Adaptive evaluation is guided by the rule: why compute something that is never used? So why compute numbers to high precision before you know that this precision is actually needed?

The simplest form of adaptive evaluation is a floating-point filter. The idea of floating-point filters is to filter out those computations where floating-point computation gives the correct result. This technique has been successfully used in exact geometric computation [34,57,58,84,93,94]. Floating-point filters make use of the fast hardware-supported floating-point arithmetic. A filter simply takes a bound on the error of the floating-point computation and compares the absolute value of the computed numerical value to the error bound. If the error bound is smaller, the computed approximation and the exact value have the same sign. Only if the error bound does not certify that the floating-point evaluation has led to a correct decision is the expression considered in the branching step reevaluated, for instance, with exact arithmetic.

Error bounds can be computed a priori if specific information on the input data is available, e.g., if all input data are integers from a bounded range, for instance, the range of integers representable in a computer word. Such so-called static filters require only little additional effort at run time, just one additional test per branching, plus the refined reevaluation in the worst case. Dynamic filters compute an error bound on the fly, in parallel to the evaluation in floating-point arithmetic. Since they take the actual values of the operands into account, and not only bounds derived from the bounds on the input data, the estimates for the error involved in the floating-point computation can be much tighter than in a static filter. In the error computation one can put emphasis on speed or on precision. The former makes arithmetic operations more efficient, while the latter lets more floating-point computations pass a test. Semi-dynamic filters partially precompute the error bound a priori. Mehlhorn and Näher [93] use such semi-dynamic filters in their implementation of the Bentley-Ottmann plane sweep algorithm [13] for computing the intersections among a set of line segments in the plane.

Note the difference between static filters and heuristic epsilons. In both cases, approximations to numerical values are compared to some small values. If the computed approximation is larger than the error bound or ε_magic, respectively, the behavior is identical.
The program continues based on the (in the former case verified) assumption that the computed floating-point value has the correct sign. If, however, the computed approximate value is too small, the behavior is completely different. Epsilon-tweaking assumes that the actual value is zero, which might be wrong, while a floating-point filter invokes a more expensive computation finally leading to a correct decision.

Using only error bounds, a floating-point filter rarely works for expressions whose value is actually zero, because both the computed approximation and the error bound have to be zero to certify sign zero. To detect sign zero, one can use the "certified epsilons" described in Section 3.5, use a special procedure to test an expression for zero, e.g. [14], or use exact arithmetic.

If a filter fails, a refined filter can be used. A refined filter might compute a tighter error bound or use a floating-point arithmetic with larger mantissa and thereby get better approximations and smaller error bounds. This step can be iterated. Composition of more and more refined filters leads to an adaptive evaluation scheme. Such schemes are called adaptive because they adapt the precision used to the size of the value of the expression to be evaluated. For orientation predicates and encircle tests in two- and three-dimensional space, Shewchuk [130,131] presents such an adaptive evaluation scheme. It uses an exact representation (provided neither underflow nor overflow occurs) of values resulting from expressions over floating-point numbers involving only additions, subtractions, and multiplications, as a symbolic sum of floating-point numbers. Computation with numbers in this representation, called expanded doubles in [130], is based on the interesting results of Priest [118,119] and Dekker [33] on extending the precision of floating-point computation. An adapted combination of these techniques allows one to reuse values computed in previous filtering steps in later filtering steps. For integral expressions, scalar products delivering exactly rounded results can be used in floating-point filters to get the best possible floating-point approximations. Ottmann et al. [113] first used exactly rounding scalar products to solve precision problems in geometric computation.

Number representations supporting recomputation with higher precision are very useful for adaptive evaluation. The LEA system [12] (not to be confused with LEDA [95,96]) provides "lazy evaluation" for rational computation. In this system, numbers are represented by intervals and expression dags that reflect their creation history. Initially, only a low precision representation is calculated using interval arithmetic, cf. Section 3.3. Only if decisions can't be made with the current precision are representations with increased precision computed repeatedly, by redoing the computation along the expression dag with refined intervals for the operands. If the interval representation can't be refined anymore with floating-point evaluation, exact rational arithmetic is used to solve the decision problem. Another approach based on expression dags is described by Yap and Dube [39,150,151]. In this approach the precision used to evaluate the operands is not systematically increased, but the increase is demanded by the intended increase in the precision of the result. The data type real in LEDA [23] stores the creation history in expression dags, too, and uses floating-point approximations and error bounds as first approximations. The strategy of
repeatedly increasing the precision is similar to [39,150,151]. In both approaches, software-based multiple precision floating-point arithmetic with a mantissa length that can be chosen arbitrarily and an unbounded exponent is used to compute representations with higher precision. Furthermore, both approaches include square root operations besides +, −, ·, /. The reals now provide k-th root operations as well [96].
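As an illustration of the filter idea, the sketch below guards a two-dimensional orientation test with a semi-static error bound in the style of Shewchuk's predicates; the particular error constant and the "return 0 on failure" interface are assumptions of this sketch, not a prescribed implementation.

```cpp
#include <cmath>

// Filtered 2-d orientation test: returns +1 (counterclockwise),
// -1 (clockwise), or 0 for "filter failed, fall back to a refined
// filter or exact arithmetic". The error constant is roughly
// (3 + 16u)u for unit roundoff u = 2^-53, as in Shewchuk's analysis;
// treat the exact value as an assumption of this sketch.
int orientation_filtered(double ax, double ay, double bx, double by,
                         double cx, double cy) {
    double detleft  = (ax - cx) * (by - cy);
    double detright = (ay - cy) * (bx - cx);
    double det = detleft - detright;
    double errbound = 3.3306690738754716e-16 *
                      (std::fabs(detleft) + std::fabs(detright));
    if (det >  errbound) return +1;   // sign certified positive
    if (det < -errbound) return -1;   // sign certified negative
    return 0;                         // undecided
}
```

With integer input of known magnitude, the bound can instead be precomputed once and for all, which is exactly the static filter described above.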

3.3. Interval arithmetic

Approximation and error bound define an interval that contains the exact value. Interval arithmetic [3,104,105] is another method to get an interval with this property. In interval arithmetic, real numbers are represented by intervals whose endpoints are floating-point numbers. The interval representing the result of an operation is computed by floating-point operations on the endpoints of the intervals representing the operands. For example, the lower endpoint of the interval representing the result of an addition is the sum of the lower endpoints of the intervals of the summands. Since this floating-point addition might be inexact, either the rounding mode is changed to rounding toward −∞ before the addition or a correction term is subtracted. For interval arithmetic, the rounding modes toward ∞ and toward −∞ are very useful. See, for example, [106,137] for applications of interval methods to geometric computing. The combination of exact rational arithmetic with interval arithmetic based on fast floating-point computation was pioneered for geometric computing by Karasick, Lieber, and Nackman [84].

A refinement of standard interval arithmetic is the so-called affine arithmetic proposed by Comba and Stolfi [30]. While standard interval arithmetic assumes that the unknown values of operands and subexpressions can vary independently, affine arithmetic keeps track of first-order dependencies and takes these into account. Thereby error explosion can often be avoided and tighter bounds on the computed quantities can be achieved. An extreme example is computing x − x, where for x some interval [x.lo, x.hi] is given. Standard interval arithmetic would compute the interval [x.lo − x.hi, x.hi − x.lo], while affine arithmetic gives the true range [0, 0].
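A minimal sketch of the endpoint computation, using directed rounding via the C++ floating-point environment; the tiny Interval type and the decision to switch rounding modes per operation are assumptions of this illustration, not a recommended production design.

```cpp
#include <cfenv>

struct Interval { double lo, hi; };

// Interval addition with outward rounding: the lower endpoint is
// computed rounding toward -infinity, the upper toward +infinity,
// so the exact sum is guaranteed to lie inside the result.
// (Compilers may need #pragma STDC FENV_ACCESS ON for this to be safe.)
Interval add(Interval a, Interval b) {
    Interval r;
    std::fesetround(FE_DOWNWARD);
    r.lo = a.lo + b.lo;
    std::fesetround(FE_UPWARD);
    r.hi = a.hi + b.hi;
    std::fesetround(FE_TONEAREST);  // restore the default rounding mode
    return r;
}

// A sign is certified only if the interval excludes zero; otherwise
// the computation must be refined (higher precision or exact numbers).
int certified_sign(Interval x) {
    if (x.lo > 0) return +1;
    if (x.hi < 0) return -1;
    return 0;  // undecided
}
```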

3.4. Exact sign of determinant

Many geometric primitives can be formulated as sign computations of determinants. The classical example of such a primitive is the orientation test, which in two-dimensional space determines whether a given sequence of three points forms a clockwise or a counterclockwise turn or whether the points are colinear. Another example is the encircle test used in the construction of Voronoi diagrams of points. Recently, some effort has been focused on exact sign determination. Clarkson [29] gives an algorithm to evaluate the sign of a determinant of a d × d matrix with integer entries using floating-point arithmetic. His algorithm is a variant of the modified Gram-Schmidt orthogonalization. In his variant, scaling is used to improve the conditioning of the matrix. Since only positive scaling factors are used, the sign of the determinant does not change. Clarkson shows that only b + O(d) bits are required if all entries are b-bit integers. Hence,
for small dimensional matrices his algorithm can be used to evaluate the sign of the determinant with fast hardware floating-point arithmetic. Avnaim et al. [5] consider determinants of small matrices with integer entries, too. They present algorithms to compute the sign of 2 × 2 and 3 × 3 matrices with b-bit integer entries using precision b and b + 1 only, respectively. Brönnimann and Yvinec [18] extend the method of [5] to d × d matrices and present a variant of Clarkson's method. The new version of Clarkson's method allows for a simplified analysis. Furthermore, Shewchuk's work on adaptive evaluation [131] is focused on predicates evaluated by sign-of-determinant computation. We already mentioned the use of modular arithmetic combined with floating-point arithmetic to compute the sign of determinants of integer matrices [17].
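When the coordinates are known to be small integers, hardware integer arithmetic already yields exact determinant signs without any further machinery; the bound chosen below is an assumption making the arithmetic provably overflow-free in 64 bits, in the spirit of (but much cruder than) the precision-b methods of [5].

```cpp
#include <cstdint>

// Exact sign of a 3x3 integer determinant by cofactor expansion.
// If all entries satisfy |m[i][j]| < 2^19, every intermediate value
// fits comfortably in 64 bits (|det| <= 6 * 2^57 < 2^63), so no
// rounding occurs and the returned sign is exact.
int det3_sign(const std::int64_t m[3][3]) {
    std::int64_t c0 = m[1][1] * m[2][2] - m[1][2] * m[2][1];
    std::int64_t c1 = m[1][0] * m[2][2] - m[1][2] * m[2][0];
    std::int64_t c2 = m[1][0] * m[2][1] - m[1][1] * m[2][0];
    std::int64_t det = m[0][0] * c0 - m[0][1] * c1 + m[0][2] * c2;
    return (det > 0) - (det < 0);
}
```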

3.5. Certified epsilons

While the order of two different numbers can be found by computing sufficiently close approximations, it is not so straightforward to determine whether two numbers are equal or, equivalently, whether the value of an expression is zero. From a theoretical point of view, arithmetic expressions arising in geometric predicates are expressions over the reals. Hence the value of an expression can in general get arbitrarily close to zero if the variable operands are replaced by arbitrary real numbers. In practice, the numerical input data originate from a finite, discrete subset of the reals, namely a finite subset of the integers or a finite set of floating-point numbers, i.e., a finite subset of the rational numbers. The finiteness of such input excludes arbitrarily small absolute non-zero values for expressions of bounded depth. There is a gap between zero and the other values that a parameterized expression can take on. A separation bound for an arithmetic expression E is a lower bound on the size of this gap. Besides the finiteness of the number of possible numerical inputs, the coarseness of the input data can generate a gap between zero and the other values taken on. A straightforward example is integral expressions: if all operands are integers, the number 1 is clearly a separation bound.

Once a separation bound is available, it is clear how to decide whether the value of an expression is zero or not. Representations with repeatedly increased precision are computed until either the error bound on the current approximation is less than the absolute value of the approximation or their sum is less than the separation bound. In the phrasing of interval arithmetic, this means refining the interval until it contains neither zero nor the separation bound nor its negative.

How can we get separation bounds without computing the exact value or an approximation and an error bound? Most geometric computations are on linear objects and involve only the basic arithmetic operations over the rational numbers. In distance computations and in operations on nonlinear objects like circles and parabolas, square root operations are used as well. For the rational numerical input data arising in practice, expressions over the operations +, −, ·, /, √ take on only algebraic values. Let E be an expression involving square roots. Furthermore, we assume that all operands are integers. We use α(E) to denote the algebraic value of expression E. Computer algebra provides bounds for the size of the roots of polynomials with integral coefficients. These bounds involve quantities used to describe the complexity of an integral polynomial, e.g., degree, maximum coefficient size, or less well-known quantities like the height or the measure of a polynomial. Once an integral polynomial with root α(E) is known, the root bounds from computer algebra give us separation bounds. In general, however, we don't have a polynomial having root α(E) at hand. Fortunately, all we need to apply the root bounds are bounds on the quantities involved in the root bounds. Upper bounds on these quantities for some polynomial having root α(E) can be derived automatically from the expression E.

Recursive formulas leading to separation bounds for an expression involving square root operations are given in [151]. The formulas deliver a bound on the maximum absolute value of the coefficients of an integral polynomial having root α(E). By a result of Cauchy, this gives a separation bound. In [151], this bound is called the height-degree bound. Mignotte discusses identification of algebraic numbers given by expressions involving square roots in [97]. The measure of a polynomial [98] can also be used for automatic computation of a root bound. Table 1 gives the rules for (over)estimating the measure and degree of an integral polynomial having root α(E). We have α(E) = 0 or |α(E)| ≥ M(E)^{−1}. This bound, called the degree-measure bound, is never worse than the height-degree bound.

Table 1
Automatic computation of separation bounds for expressions involving square roots, based on the measure of a polynomial

    E            M(E)                                                    deg(E)
    integer n    |n|                                                     1
    E1 + E2      2^(deg(E1)·deg(E2)) · M(E1)^deg(E2) · M(E2)^deg(E1)     deg(E1)·deg(E2)
    E1 − E2      2^(deg(E1)·deg(E2)) · M(E1)^deg(E2) · M(E2)^deg(E1)     deg(E1)·deg(E2)
    E1 · E2      M(E1)^deg(E2) · M(E2)^deg(E1)                           deg(E1)·deg(E2)
    E1 / E2      M(E1)^deg(E2) · M(E2)^deg(E1)                           deg(E1)·deg(E2)
    √E1          M(E1)                                                   2·deg(E1)

In [24], Canny considers isolated solutions of systems of polynomial equations in several variables with integral coefficients. He gives bounds on the absolute values of the non-zero components of an isolated solution vector. The bound depends on the number of variables, the maximum total degree d of the multivariate integral polynomials in the system, and their maximum coefficient size c. Canny shows that the absolute value of a component of an isolated solution of a system of n integral polynomial equations in n variables is either zero or at least (3dc)^{−n·d^n} [24,25]. Although Canny solves a much more general problem, his bounds can be used to get fairly good separation bounds for expressions involving square roots, cf. [20]. Burnikel et al. [20] have shown that

$$ |\alpha(E)| \;\ge\; \bigl(u(E)^{2^{k(E)}-1} \cdot l(E)\bigr)^{-1} \qquad \text{whenever } \alpha(E) \neq 0, $$

where k(E) is the number of (distinct) square root operations in E and the quantities u(E) and l(E) are defined as given in Table 2. Note that u(E) and l(E) are simply the numerator and denominator of the expression obtained by replacing in E every − by + and every integer by its absolute value. If E is division-free and α(E) is non-zero, then |α(E)| ≥ u(E)^{1−2^{k(E)}}. It is shown in [20] that this bound is never worse than the degree-measure bound and the polynomial system bound for division-free expressions.

Table 2
Recursive formulas for the quantities u(E) and l(E) of an arithmetic expression involving square roots

    E            u(E)                            l(E)
    integer n    |n|                             1
    E1 + E2      u(E1)·l(E2) + l(E1)·u(E2)       l(E1)·l(E2)
    E1 − E2      u(E1)·l(E2) + l(E1)·u(E2)       l(E1)·l(E2)
    E1 · E2      u(E1)·u(E2)                     l(E1)·l(E2)
    E1 / E2      u(E1)·l(E2)                     l(E1)·u(E2)

The bound given in [20], as well as the bound given in [151], involves square root operations. Hence these bounds are not easily computable. In practice one computes ceilings of the results to get integers [151] or maintains integer bounds logarithmically [20,23]. The number type real [23,96] in LEDA and the Real/Expr package [38,114] provide exact computation (in C++) for expressions with operations +, −, ·, / and √ and initially integral operands, using the techniques described above. In particular, the recent version of the reals in LEDA [96] uses the bounds given in [20].

Note the difference between separation bounds and the ε_magic of epsilon-tweaking. In epsilon-tweaking, a test for zero is replaced by the test "|Ẽ| < ε_magic?", where Ẽ is the computed approximation. With separation bounds it becomes "|Ẽ| < sep(E) − ε_error?", where sep(E) is a separation bound and ε_error is a bound on the error accumulated in the evaluation of E. The difference is that the latter term is self-adjusting, it is based on an error bound, and it is justified: it is guaranteed that the result is zero if the condition is satisfied. While ε_magic is always positive, it might happen that the accumulated error is so large that sep(E) − ε_error is negative. Last but not least, the conclusion drawn when the test is not satisfied is different. Epsilon-tweaking concludes that the number is non-zero if it is larger than ε_magic, while the use of separation bounds allows for this conclusion only if |Ẽ| ≥ ε_error.
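The recursion of Table 2 is easy to carry along an expression evaluation. The sketch below maintains u(E) and l(E) logarithmically for the rational operations, as suggested above; it is restricted to expressions without square roots (k(E) = 0), for which the bound of [20] specializes to |α(E)| ≥ 1/l(E). The struct layout and the overestimate for sums are assumptions of this illustration.

```cpp
#include <algorithm>
#include <cmath>

// Upper bounds on log2(u(E)) and log2(l(E)) per the recursion of
// Table 2; log2(x+y) <= max(log2 x, log2 y) + 1 keeps everything an
// overestimate, which only weakens (never invalidates) the bound.
struct SepBound { double lu, ll; };  // log2 u(E), log2 l(E)

SepBound integer(long long n) {          // assumes n != 0
    return { std::log2(std::fabs((double)n)), 0.0 };
}
SepBound add(SepBound a, SepBound b) {   // also covers subtraction
    return { std::max(a.lu + b.ll, a.ll + b.lu) + 1.0, a.ll + b.ll };
}
SepBound mul(SepBound a, SepBound b) {
    return { a.lu + b.lu, a.ll + b.ll };
}
SepBound div(SepBound a, SepBound b) {
    return { a.lu + b.ll, a.ll + b.lu };
}

// For a square-root-free rational expression, the value is zero or
// at least 2^(-ll). An approximation whose absolute value plus error
// bound stays below this separation bound certifies sign zero.
double log2_separation_bound(SepBound e) { return -e.ll; }
```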

4. Geometric computation with imprecision

In this section we look at the design and implementation of geometric algorithms with imprecise calculations. With potentially imprecise computations we cannot hope to always get the exact result. But even if the result is not the exact result for the considered problem instance, it can still be meaningful. An algorithm that computes the exact result for a very similar problem instance can be sufficient for an application, since the input data might be known not to be accurate either. This observation motivates the definition of robustness given in Section 1.1 and below in Section 4.1. In addition to the existence of a perturbation of the input data for which the computed result is correct, Fortune's definition of robustness and stability [51] requires that the implementation of an algorithm would compute the exact result if all computations were precise. His definition reflects the attempt to save the (theoretical) correctness proof. It implies that all degenerate cases have to be handled. In contrast to this, Sugihara [140] avoids handling degenerate cases at all, see also Section 4.3.

Even if a degeneracy is detected, it is treated like a non-degeneracy by changing the sign from zero to positive or negative. The justification for this approach is again the inaccuracy of the input data.

The output of an algorithm might be useful although it is not a correct output for any perturbation of the input. In some situations it might be feasible to allow perturbation of the output as well. For example, for some applications it might be sufficient that the output of a two-dimensional convex hull algorithm is a nearly convex polygon, while other applications require convexity. Sometimes requirements are relaxed to allow "more general" perturbations of the input data. Robustness and stability are then defined with respect to the weaker problem formulation, cf. Section 4.1. For example, Fortune's and Milenkovic's line arrangement algorithm [55] computes a combinatorial arrangement that is realizable by pseudolines but not necessarily by straight lines. Shewchuk [130] suggests calling an algorithm quasi-robust if it computes useful information but not a correct output for any perturbation of the input.

For many implementations of geometric primitives it is easy to show that the computed result is correct for some perturbation of the input. The major problem in the implementation with imprecise predicates is their combination. The basic predicates evaluated in an execution of an algorithm operate on the same set of data and hence might be dependent. Furthermore, the results of dependent geometric predicates might be mutually exclusive, i.e., there might be no small perturbation leading to correctness for all predicates. Hence an algorithm might get into an inconsistent state, a state that could not be reached from any input with correct evaluation. That is where a relaxation of the problem helps. An illegal state can be a legal state for a similar problem with weaker restrictions; e.g., a state illegal for an algorithm computing an arrangement of straight lines can be legal for arrangements of pseudolines.

Avoiding inconsistencies among the decisions is a primary goal in achieving robustness in implementations with imprecise predicates. Consistency is a non-issue if an algorithm never evaluates a basic predicate whose outcome is implied by the results of previous evaluations of basic predicates. Such an algorithm is called parsimonious [51,87]. In general it can be very hard to achieve consistency with previous decisions by detecting whether the outcome of a predicate can be deduced from previously evaluated predicates. A well-known illustration of this fact is Pappus theorem, cf. Figure 5. Indeed, checking whether the outcome of an orientation test is implied by previous tests on the given set of points is as hard as the existential theory of the reals [51,67].

Fig. 5. Pappus theorem is an example where the result of some orientation tests for points in the plane is determined by the result of other orientation tests. Colinearity of the points on the top line and the bottom line implies colinearity of the three intersection points in the middle.

The following sections present some design principles for robustness under computation with imprecision.

4.1. Representation and model approach

The representation and model view formalizes the "compute the correct solution for a related input" idea. It distinguishes real mathematical objects, the models, from their computer representations. A geometric problem P defines a mapping between models, while a computer program A leads to a mapping between representations. For instance, subtraction maps a pair of real numbers to a real number, while its counterpart on a computer maps a
pair of computer representations of the mathematical object real number, namely floating-point numbers, to a representation, a floating-point number. For the ideal one-to-one correspondence between representations and models, a computer algorithm is correct if the model corresponding to the computed output representation is the solution to the problem for the model corresponding to the input representation. As with real numbers and floating-point numbers, the correspondence between mathematical models and computer representations is normally not one-to-one, because of the finite nature of computer representations. To take this approximation behavior into account, correctness is replaced by robustness as follows: A computer algorithm A : I_rep → O_rep for a geometric problem P : I → O is called robust if, for every computer representation x_rep in the set of inputs I_rep, there is a model x in I corresponding to x_rep such that P(x) is among the models in O corresponding to the computed output A(x_rep), see Figure 6. The obvious way to prove robustness of a computer algorithm in the sense above is to show that there is always a model for which the computer algorithm takes the correct decisions. But this is often a highly non-trivial task.

Fig. 6. A geometric problem is defined on models, while a computer algorithm works on representations.

Of course, this definition of robustness depends to a large extent on the interpretation of "correspondence" between representations and models for the input and the output part.

Generous definitions of correspondence in the output part make it easier to prove "robustness" of an algorithm. Following Shewchuk's suggestion, algorithms with a fairly generous interpretation of robustness should rather be called quasi-robust, because the output they compute might be less useful than expected.

Hoffmann, Hopcroft, and Karasick introduced the "representation and model" formalization in [74]; our exposition follows Stewart [136]. Hoffmann, Hopcroft, and Karasick gave an algorithm for the intersection of polygons and proved its robustness. However, the underlying correspondence between computer representations and models of polygons was fairly loose. The edges of a model need not be close to the edges of the representation. Furthermore, both simple and non-simple polygons could model a representation. Thus for simple polygons the computed intersection polygon(s) need not be simple. In [77], Hopcroft and Kahn consider robust intersection of a convex polyhedron with a halfspace. Again, the computed output can be arbitrarily far away from the real intersection polyhedron.

Milenkovic's hidden variable method [99] fits into the representation and model scheme as well. In the hidden variable method the representation provides a structure with certain topological properties (plus finite precision approximations of the numerical values). A corresponding model provides the hidden (infinite precision) numerical data and has the same topological structure as the representation. In [99], Milenkovic applies the hidden variable method to the computation of line arrangements. An arrangement representation in O_rep consists of combinatorial data describing the topology of the arrangement and approximate representations for the vertices of the arrangement. A model has the same topology as the corresponding representation, but may have different vertex locations. Since the computed topology might not be realizable by straight lines, the lines in a model need not be straight, but they must have certain monotonicity properties and be close to the straight lines in x_rep.

In [136], Stewart proposes local robustness as an alternative for problems for which robustness (in the representation and model sense) is inherently difficult to achieve. Local robustness no longer requires that an algorithm be robust with respect to all problem instances. An algorithm is called locally robust for a set of features if it is robust for all inputs consisting of exactly those features. Stewart claims that appropriate feature sets can be chosen such that algorithms which are locally robust with respect to these feature sets are very unlikely to fail in practice. He presents locally robust algorithms for polyhedral intersection and polyhedral arrangements.

4.2. Epsilon geometry

An interesting theoretical framework for the investigation of imprecision in geometric computation is epsilon geometry, introduced by Guibas, Salesin, and Stolfi [69]. Instead of a Boolean value, an epsilon predicate returns a real number that indicates "how much" the input satisfies the predicate. Epsilon geometry assumes that the size of a perturbation can be measured by a non-negative real number and that only the identity has size zero. If an input does not satisfy a predicate, the "truth value" of an epsilon predicate is the size of the smallest perturbation producing a perturbed input that satisfies the predicate. If
the input satisfies a predicate, the "truth value" is a non-positive number q such that the predicate is still satisfied after applying any perturbation of size at most −q. In [69], epsilon predicates are combined with interval arithmetic. Imprecise evaluations of epsilon predicates compute a lower and an upper bound on the "truth value" of an epsilon predicate. Guibas, Salesin, and Stolfi compose basic epsilon predicates into less simple predicates. Unfortunately, epsilon geometry has been applied successfully only to a few basic geometric primitives [69] and to the computation of planar convex hulls [70]. Reasoning in the epsilon geometry framework seems to be difficult.

4.3. Topology-oriented approach

In order to avoid inconsistent decisions, the topology-oriented approach places higher priority on topological and combinatorial data than on numerical values. Whenever numerical computations would lead to decisions violating the topology, the decision is replaced by a topology-conforming decision. Usually, violation of topology is not tested directly; rather, a set of rules is given and it is shown that following these rules ensures the desired topological properties. This approach guarantees topologically consistent output, i.e., valid combinatorial data of the output, but the computed numerical values of the output might not correspond to the combinatorial data. For instance, in [144] the computed graph structure representing the Voronoi diagram will always be planar, but the computed coordinates of the vertices might not give a planar embedding.

Typically, topology-oriented approaches do not treat degeneracies explicitly. They assume that sign computations never produce sign zero. If the numerical value computed in a sign computation is zero, it is replaced by a positive or a negative value, whichever is consistent with the current topology. The topology-oriented approach can lead to amazingly robust algorithms. The algorithms never crash or loop forever, and they compute output having essential combinatorial properties. For instance, the Voronoi diagram algorithm presented in [144] produces some planar graph even if in all decision steps involving sign computations the sign is chosen at random! Of course, "closeness" of the computed output to the correct solution is not guaranteed in this case. Usually it is argued that the computed output comes closer to the correct one if higher precision is used, and, furthermore, that it is the correct one if the precision is sufficiently high and there are no degeneracies. Sugihara et al. used the topology-oriented approach in several algorithms for computing Voronoi diagrams [79,111,144-146], in polyhedral modeling problems [141-143], and for the 3-dimensional convex hull [103].

Results on computation with imprecision are usually not unequivocally classifiable under the set of design principles described in Sections 4.1 to 4.5. For example, Milenkovic's hidden variable method can be seen as a topology-oriented approach, too, because the topological structure of the output representation has to be respected by every model corresponding to this representation. Thereby, topology gets priority over numerical data, which is characteristic for the topology-oriented approach as well.

4.4. Axiomatic approach

In [122,123], Schorn proposes what he calls the axiomatic approach. The idea is to investigate which properties of primitive operations are essential for a correctness proof of an algorithm and to find algorithm invariants that are based on these properties only. One of the algorithms considered in [122] is computing a closest pair of a set of points S by plane sweep [72]. Instead of a closest pair, the distance δ_S of a closest pair is computed. In his implementation, Schorn uses distance functions d(p, q), d_x(p, q), d_y(p, q), and d'_y(p, q) on points p = (p_x, p_y) and q = (q_x, q_y) in the plane. In an exact implementation these functions would compute √((p_x − q_x)² + (p_y − q_y)²), p_x − q_x, p_y − q_y, and q_y − p_y, respectively. Schorn lists properties for these functions that are essential for a correctness proof. First, they must have some monotonicity properties: d_x must be monotone with respect to the x-coordinate of its first argument, i.e., [p_x ≤ p'_x ⟹ d_x(p, q) ≤ d_x(p', q)] holds, and inversely monotone in the x-coordinate of its second argument, i.e., [q_x ≤ q'_x ⟹ d_x(p, q) ≥ d_x(p, q')] holds. Similarly, [q_y ≤ q'_y ⟹ d_y(p, q) ≥ d_y(p, q')] and [q_y ≤ q'_y ⟹ d'_y(p, q) ≤ d'_y(p, q')] must hold for d_y and d'_y, respectively. Second, d_x, d_y, and d'_y must be "bounded by d"; more precisely, [p_x ≥ q_x ⟹ d(p, q) ≥ d_x(p, q)], [p_y ≥ q_y ⟹ d(p, q) ≥ d_y(p, q)], and [p_y ≤ q_y ⟹ d(p, q) ≥ d'_y(p, q)] must hold. Finally, d must be symmetric, i.e., d(p, q) = d(q, p). These properties, called axioms in [122], are sufficient to prove that for the δ computed by Schorn's plane sweep implementation

$$ \delta = \min_{s,t \in S} d(s,t) $$

holds. No matter what d, d_x, d_y, and d'_y are, as long as they satisfy all axioms, min_{s,t∈S} d(s,t) is computed by the sweep. In particular, if exact distance functions are used, the correct distance of a closest pair is computed. Schorn uses floating-point implementations of the distance functions d, d_x, d_y, and d'_y. He shows that they have the desired properties and that they guarantee a relative error of at most 8·ε_prec in the computed approximation for δ_S, where ε_prec is the machine epsilon. Further geometric problems to which the axiomatic approach is applied in [122,123] to achieve robustness are finding pairs of intersecting line segments and computing the winding number of a point with respect to a not necessarily simple polygon. The latter involves point-in-polygon testing as a special case.
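A floating-point realization of the four distance functions is straightforward; the sketch below is an illustrative guess at such an implementation (the names and types are assumptions, and verifying that the rounded versions actually satisfy the axioms and the error bound requires a careful analysis as in [122]).

```cpp
#include <cmath>

struct Point { double x, y; };

// Floating-point versions of the four distance functions. Each is a
// composition of operations that round-to-nearest evaluates
// monotonically, which is what the monotonicity and "bounded by d"
// axioms rely on.
double d(Point p, Point q) {
    double dx = p.x - q.x, dy = p.y - q.y;
    return std::sqrt(dx * dx + dy * dy);
}
double d_x(Point p, Point q)       { return p.x - q.x; }
double d_y(Point p, Point q)       { return p.y - q.y; }
double d_y_prime(Point p, Point q) { return q.y - p.y; }   // d'_y
```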

4.5. Tolerance-based approach

This approach associates tolerances with geometric objects in order to represent uncertainties. It generalizes the representation of a numerical value by an approximation and an error bound, or by an interval. Tolerance-based approaches can be seen as a special variant of the representation and model design principle. The tolerances associated with geometric objects restrict the correspondence between representation and model: a model can correspond to a representation only if it satisfies the tolerance constraints associated with the representation.

Fig. 7. Coincidence inconsistency of points with tolerance regions. If points are considered to be coincident if their tolerance regions overlap, then p1 and p2 are coincident and so are p2 and p3, but p1 and p3 are not.

Fig. 8. Processing points with tolerance regions requires backtracking if points p2 and p3 are merged after p1 has been processed.

A goal in processing geometric data with a tolerance-based approach is to keep the data in a consistent state in order to ensure the existence of a model. For example, points with associated tolerance regions should have a coincidence relation that is reflexive and transitive, see Figure 7. If inconsistencies arise, the tolerance regions have to be adjusted, either by shrinking them through recomputation of the relevant data with higher precision, or by splitting or merging objects and their tolerance regions. Tolerance-based approaches usually maintain additional neighborhood information on the location of the objects to enable consistency checking. In the example given in Figure 8, one has to detect that after merging points p2 and p3 into one point with an enlarged tolerance region, an inconsistency with p1 arises.

Pullar [120] discusses consequences of applying tolerance circles to point coincidence and point clustering problems. Segal [125] uses a tolerance-based approach in the boundary evaluation in constructive solid geometry. Fang and Brüderlin [48] consider polyhedral modeling as well. They present two versions: a more strict version called the linear model, where the models corresponding to a representation must be linear as well, and the less strict curve model, which allows for curved models too and hence requires less effort to ensure consistency.

4.6. Further and more specific approaches

For modeling polygonal regions in the plane, Milenkovic [99] uses a technique called data normalization to modify the input such that it can be processed with imprecise arithmetic, more precisely such that all finite precision operations on the normalized data give the correct result. The permitted modification operations are vertex shifting (given a polygon P and a vertex v, move all vertices of P with distance less than a certain ε onto v) and edge cracking (given a segment s = AB and a set V of points, each point with distance at most a certain ε to s, replace s = AB by a polyline from A to B whose vertex set is V ∪ {A, B}).

For some basic geometric problems there are stable, robust, or quasi-robust computer algorithms. In Table 3 we group results on robustness with imprecise computation in a problem-oriented way.

Table 3
Some robustness results for basic geometric problems with imprecise computation. Note that exact methods are not listed here.

    Convex Hull:
        2-dimensional                 [28] [59] [70] [80] [89]
        3-dimensional                 [103]
        d-dimensional                 [9,10]

    Operations on Polygonal Objects:
        line arrangements             [55] [99] [100]
        intersection of polygons      [74] [75]
        intersection of polyhedra     [77] [136] [142]
        2-d modeling                  [48] [99] [101]
        3-d modeling                  [83] [125] [127] [135] [141] [143]
        polyhedral decomposition      [6,7] [126]
        point location                [11] [49] [122] [134]
        line segment intersection     [66] [100] [113] [122] [139]
        triangulation                 [51]

    Delaunay and Voronoi Diagrams:
        points in 2-d                 [53] [78] [79] [111] [140] [144,145] [146]
        points in 3-d                 [35] [79]

The techniques used in the algorithms cited in this section and the reasoning processes used to prove robustness are fairly problem specific and it seems unlikely that they can be easily transferred to other geometric problems.

5. Related issues

In this section we first look at some issues that are closely related to precision and robustness: degeneracies, inaccurate data, and rounding. Finally, we briefly address precision and robustness in computational geometry libraries.

5.1. Degeneracy

Degeneracy is closely related to precision and robustness, since precision problems are caused by degenerate and nearly degenerate configurations in the input. Typical cases of degeneracy are four cocircular points, three colinear points, or two points with the same ordinate. Theoretical papers on computational geometry often assume the input to be in general position and leave the "straightforward" handling of special cases to the reader. This might make the presentation of an algorithm more readable, but it can put a huge burden on the implementor, because the handling of degeneracies is often less straightforward than claimed.

In Section 2 we viewed a geometric problem as a mapping from a set of permitted input data, consisting of a combinatorial and a numerical part, to a set of valid output data, consisting of a combinatorial and a numerical part. We now assume that the combinatorial part of the input is trivial, i.e., just a sequencing of the data, such that we can view a geometric problem P as a function from R^{nd} to C_out × R^m, where n, m, and d are integers and C_out is some discrete space modeling the combinatorial part of the output, e.g., a planar graph or a face incidence lattice. A problem instance x ∈ R^{nd}, which for concreteness we view as n points in d-dimensional space, is called degenerate if P is discontinuous at x. For example, if d = 2 and P(x) is the Voronoi diagram of x, i.e., a straight-line planar graph together with coordinates for its vertices, then x is degenerate iff x contains four cocircular points defining a vertex of the diagram. An instance x is called degenerate with respect to some algorithm A if the computation of A on input x contains a sign test with outcome zero. Clearly, if A solves P and x is a degenerate problem instance, then x is also degenerate for A (this assumes that all functions evaluated in sign tests are continuous functions of the inputs).

Symbolic perturbation schemes, introduced to computational geometry by Edelsbrunner and Mücke [41], refined by Emiris and Canny [44,43] and Emiris, Canny, and Seidel [45], and extended by Yap [147,148], have been proposed to abolish the handling of degeneracies, see also [128]. With these schemes, the input is perturbed symbolically; e.g., Emiris and Canny [43] propose to replace the j-th coordinate x_{ij} of the i-th input point by x_{ij} + ε·i^j, where ε is a positive infinitesimal, and the computation is carried out on the perturbed input. All intermediate results are now polynomials in ε. It can be shown that the Emiris and Canny scheme removes many geometric degeneracies, e.g., colinearity of three points, at only a constant factor increase in running time. The same statement holds for the other perturbation schemes, although with a larger constant of proportionality. Exact computation is a prerequisite for applying these techniques [151].
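To convey the flavor in the simplest possible setting, consider sorting points by abscissa, where two equal x-coordinates are a degeneracy; perturbing x_i to x_i + ε·i for a positive infinitesimal ε amounts to comparing (x_i, i) lexicographically, so every tie disappears. The code below is only such an illustration, not the general schemes of [41,43,45], which handle arbitrary sign tests of polynomials.

```cpp
#include <algorithm>
#include <vector>

struct Point { double x, y; int index; };  // index = input rank i

// Comparison after the symbolic perturbation x_i -> x_i + eps * i:
// for an infinitesimal eps > 0, the perturbed comparison is exactly
// the lexicographic comparison of (x, index), so no two points
// compare equal and the degenerate case disappears.
bool perturbed_less(const Point& p, const Point& q) {
    if (p.x != q.x) return p.x < q.x;   // sign decided by the eps^0 term
    return p.index < q.index;           // tie broken by the eps^1 term
}

void sort_by_perturbed_x(std::vector<Point>& pts) {
    std::sort(pts.begin(), pts.end(), perturbed_less);
}
```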

The handling of degeneracies and the use of symbolic perturbation schemes are a point of controversy in the computational geometry literature, see [22,124,128]. Symbolic perturbation is a fairly general technique that abolishes the handling of degenerate cases, and it can be very useful [45]. However, for a degenerate input x, not P(x), but the limit of P(x(ε)) for ε → 0 is computed. This may or may not be sufficient. The complexity of the postprocessing required to retrieve the answer P(x) for a degenerate input x from the answer P(x(ε)) for the perturbed input x(ε) can be significant. Burnikel et al. claim in [22] that for many geometric problems, algorithms handling degeneracies directly are only moderately more complex than algorithms assuming non-degenerate inputs. Furthermore, they show that perturbation schemes may incur a significant loss in efficiency, since the computed output for the symbolically perturbed input may be significantly larger than the actual solution. Burnikel et al. use line segment intersection and convex hulls (in arbitrary dimensions) as examples.

Halperin and Shelton [71] combine a (non-symbolic) perturbation scheme with floating-point arithmetic to compute an arrangement of circles on a sphere, where the circles on the sphere result from intersecting the sphere with other spheres. They use their algorithm in molecular modeling. Since the given sphere locations are not accurate anyway in the molecular modeling application, the perturbation does no harm.

Sometimes the term robustness is also used with respect to degeneracies. Dey et al. [35] define robustness as the ability of a geometric algorithm to deal with degeneracies and "inaccuracies" during various numerical computations. The definition of robustness in [122] is similar.

5.2. Inaccurate data

In practice, much geometric data is known to be inaccurate, for instance geometric data obtained by measuring real-world data. Since imprecise arithmetic also introduces uncertainty, processing geometric objects computed with imprecise computation and processing real-world data known to be potentially inaccurate are highly related issues.

Treating inaccurate data as exact data works with exact geometric computation as long as the input data are consistent. If not, we are in a situation similar to computation with imprecision. An algorithm might get into states it was not supposed to get into and which it therefore cannot handle. This similarity has led researchers to advocate imprecise computation and to attack both inconsistencies arising from imprecise computation and inconsistencies due to inaccurate data uniformly. In this approach, however, it is not clear whether errors in the output are caused by precision problems during the computation or by inaccuracies in the data. Source errors and processing errors become indistinguishable. Exact computation, on the other hand, assures that inconsistencies are only due to faulty data. But knowing that an error was caused by a source error does not at all tell you how to proceed. Tolerance-based approaches, discussed in Section 4.5, are a natural choice for dealing with inaccurate data. As with computation with imprecision, a lot of research on modeling and handling uncertainty in geometric data is still needed.

Fig. 9. Snap-rounding line segments.

5.3. Rounding

The complexity, e.g., the bit-length of the integers, of the numerical data in the output of algorithms for constructive geometric problems is usually higher than that of the input data. Thus, cascading geometric computations can result in expensive arithmetic operations. If the cost caused by the increased precision resulting from cascaded computation is not tolerable, precision must be decreased by rounding the geometric output data. The goal in rounding is to deviate as little as possible from the original data, both with respect to geometry and to topology, while reducing the precision. Rounding geometric objects is related to simultaneous approximation of reals by rationals [138]. However, rounding geometric data is more complicated than rounding numbers and can be very difficult [102], because combinatorial and numerical data have to be kept consistent.

An intensively studied example is rounding an arrangement of line segments. Greene and Yao [66] were the first to investigate rounding line segments consistently to a regular grid. Note that simply rounding each segment endpoint to its nearest grid point can introduce new intersections and hence significantly violate the original topology. Greene and Yao break line segments into polylines such that all endpoints lie on the grid and the topology is largely preserved. "Largely" means that incidences not present in the original arrangement might arise, but it can be shown that no additional crossings are generated. Currently the most promising approach is "snap-rounding", also called "hot-pixel" rounding, usually attributed to Greene and Hobby. A pixel in the regular grid is called hot if it contains an endpoint of an original line segment or an intersection point of the original segments. In the rounding process, all line segments intersecting a hot pixel are snapped to the pixel center, cf. Figure 9. Snap-rounding is used in [64,68,73]. Rounding can be done as a postprocessing step after exact computation, but it can also be seen as part of the problem and be incorporated into the algorithmic solution, as, e.g., in [64] and [68].
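A minimal sketch of the snapping step: given the hot pixels (assumed to be precomputed from segment endpoints and intersection points), each segment is rerouted through the centers of the hot pixels it crosses, in order along the segment. The grid resolution, the types, and the brute-force pixel test are assumptions of this illustration; production snap rounding, as in [64,68], locates the hot pixels and their traversal order far more efficiently.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pixel { long ix, iy; };               // grid cell indices
struct Seg   { double x1, y1, x2, y2; };

const double kGrid = 1.0;                    // pixel side length (assumed)

Pixel pixel_of(double x, double y) {
    return { (long)std::floor(x / kGrid), (long)std::floor(y / kGrid) };
}

// Clip segment s against the pixel's box; returns false if it misses,
// otherwise sets tmin to the entry parameter (Liang-Barsky clipping).
bool hits_pixel(const Seg& s, Pixel p, double& tmin) {
    double lo[2] = { p.ix * kGrid, p.iy * kGrid };
    double hi[2] = { lo[0] + kGrid, lo[1] + kGrid };
    double o[2]  = { s.x1, s.y1 };
    double d[2]  = { s.x2 - s.x1, s.y2 - s.y1 };
    double t0 = 0.0, t1 = 1.0;
    for (int k = 0; k < 2; ++k) {
        if (d[k] == 0.0) {
            if (o[k] < lo[k] || o[k] > hi[k]) return false;
            continue;
        }
        double a = (lo[k] - o[k]) / d[k], b = (hi[k] - o[k]) / d[k];
        if (a > b) std::swap(a, b);
        t0 = std::max(t0, a); t1 = std::min(t1, b);
    }
    tmin = t0;
    return t0 <= t1;
}

// Reroute segment s through the hot pixels it crosses, ordered by the
// parameter at which the segment enters each pixel; the snapped
// polyline's vertices are the centers of the returned pixels.
std::vector<Pixel> snap(const Seg& s, const std::vector<Pixel>& hot) {
    std::vector<std::pair<double, Pixel>> crossed;
    for (Pixel p : hot) {
        double t;
        if (hits_pixel(s, p, t)) crossed.push_back({t, p});
    }
    std::sort(crossed.begin(), crossed.end(),
              [](const auto& a, const auto& b) { return a.first < b.first; });
    std::vector<Pixel> route;
    for (auto& c : crossed) route.push_back(c.second);
    return route;
}
```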

5.4. Robustness in geometric algorithms libraries

Library components should come with a precise description of what they compute and for which inputs they are guaranteed to work. Correctness means that a component behaves
according to such a specification. Exactness should not be confused with correctness in the sense of reliability. There is nothing wrong with approximation algorithms or approximate solutions as long as they do what they profess to do. Correctness can take different forms: an algorithm handling only non-degenerate cases can be correct in the above sense; an algorithm that guarantees to compute the exact result only if the numerical input data are integral and smaller than some given bound can be correct; and so can an algorithm that computes an approximation to the exact result with a guaranteed error bound. Correctness in the sense of reliability is a must for (re)usability and hence for a geometric algorithms library.

Among the library and workbench efforts in computational geometry [4,32,61,46,96,110], the XYZ GeoBench and LEDA deserve special attention concerning precision and robustness. In the XYZ GeoBench [110,121], the axiomatic approach to robustness described in Section 4.4 is used. In LEDA [95,96], arbitrary precision integer arithmetic is combined with the floating-point filter technique to yield efficient exact components for rational problems. Recently, in Europe and the US, new library projects called CGAL (Computational Geometry Algorithms Library) [26,47,115] and GeomLib [1,8] have been started. The goal of both projects is to enhance the technology transfer from theory to practice in geometric computing by providing reliable, reusable implementations of geometric algorithms.

6. Conclusion

Over the past decade, much progress has been made on the precision and robustness problem, but no satisfactory general-purpose solution has been found. If exact predicates or exact number types are available, exact geometric computation is the more convenient approach. Algorithms designed for the real RAM model [117] can be implemented in a straightforward way; a redesign to deal with imprecision is not necessary. Moreover, exact computation is a prerequisite for the use of symbolic perturbation schemes. However, even with adaptive evaluation, exact geometric computation has its costs. Concerning efficiency, practitioners often ask for the impossible. Reliable algorithms based on exact geometric computation are requested to be competitive in performance with algorithms that sometimes crash or exhibit otherwise unexpected behavior. It should be clear, however, that one has to pay for the detection of degenerate and nearly degenerate situations in order to get reliability.

Exact geometric computation is not a panacea; it has limits. For cascaded computations of large depth, i.e., computations where the result of an arithmetic operation is an operand of another arithmetic operation many times in a row, the increase in the required precision with the depth of the computation makes exact geometric computation less suited. In this case, rounding intermediate results becomes important. Next, there are applications where speed is much more of an issue than accuracy. As long as the computed outputs are useful, a fast robust algorithm dealing with imprecise computation will be more appropriate. Unfortunately, implementation with imprecision is much less straightforward. There is no general, widely applicable theory on how to deal with imprecision.

Related surveys on the problem of precision and robustness in geometric computation are given by Fortune [52], Hoffmann [76], and Yap [149]. Yap [150] and Yap and Dube
[151] address exact geometric computation. Franklin [60] especially discusses cartographic errors caused by precision problems. Dobkin and Silver [36] illustrate the effect of cascading geometric computation on the numerical accuracy of the computed result. Furthermore, robustness and precision issues were discussed at the ACM Workshop on Applied Computational Geometry at FCRC'96 in Philadelphia, see [54,67,116].

Acknowledgment

The author would like to thank Christoph Burnikel, Kurt Mehlhorn, Greg Perkins, and Michael Seel for their comments on earlier versions of this survey.

References

[1] P.K. Agarwal, M.T. Goodrich, S.R. Kosaraju, F.P. Preparata, R. Tamassia and J.S. Vitter, Applicable and robust geometric computing (1995). See http://www.cs.brown.edu/cgc/.
[2] A.V. Aho, J.E. Hopcroft and J.D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley (1974).
[3] G. Alefeld and J. Herzberger, Introduction to Interval Computation, Academic Press, New York (1983).
[4] F. Avnaim, C++GAL: A C++ Library for Geometric Algorithms, INRIA Sophia-Antipolis (1994).
[5] F. Avnaim, J.-D. Boissonnat, O. Devillers, F. Preparata and M. Yvinec, Evaluating signs of determinants using single-precision arithmetic, Algorithmica 17 (1997), 111-132.
[6] C.L. Bajaj and T.K. Dey, Robust decompositions of polyhedra, Proc. 9th FSTTCS, Lecture Notes in Comput. Sci. 405, Springer-Verlag (1989), 267-279.
[7] C.L. Bajaj and T.K. Dey, Convex decomposition of polyhedra and robustness, SIAM J. Comput. 21 (1992), 339-364.
[8] J.E. Baker, R. Tamassia and L. Vismara, GeomLib: Algorithm engineering for a geometric computing library (1997). (Preliminary report.)
[9] C.B. Barber, Computational geometry with imprecise data and arithmetic, PhD thesis, Technical Report CS-TR-377-92, Princeton University (1992).
[10] C.B. Barber, D.P. Dobkin and H. Huhdanpaa, The Quickhull algorithm for convex hulls, ACM Trans. Math. Software 22 (4) (Dec. 1996), 469-483.
[11] C.B. Barber and M. Hirsch, A robust algorithm for point in polyhedron, Proc. 5th Canad. Conf. Comput. Geom. (1993), 479-484.
[12] M. Benouamer, P. Jaillon, D. Michelucci and J.-M. Moreau, A lazy solution to imprecision in computational geometry, Proc. 5th Canad. Conf. Comput. Geom. (1993), 73-78.
[13] J.L. Bentley and T.A. Ottmann, Algorithms for reporting and counting geometric intersections, IEEE Trans. Comput. C-28 (1979), 643-647.
[14] J. Blömer, Computing sums of radicals in polynomial time, Proc. 32nd Annu. IEEE Sympos. Found. Comput. Sci. (1991), 670-677.
[15] J.-D. Boissonnat and F. Preparata, Robust plane sweep for intersecting segments, Technical Report 3270, INRIA, Sophia-Antipolis, France (September 1997).
[16] J.-D. Boissonnat and M. Yvinec, Algorithmic Geometry, Cambridge University Press, Cambridge, UK (1997).
[17] H. Brönnimann, I.Z. Emiris, V.Y. Pan and S. Pion, Computing exact geometric predicates using modular arithmetic with single precision, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 174-182.
[18] H. Brönnimann and M. Yvinec, Efficient exact evaluation of signs of determinants, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 166-173.
[19] C. Burnikel, Exact computation of Voronoi diagrams and line segment intersections, PhD thesis, Universität des Saarlandes (March 1996).
[20] C. Burnikel, R. Fleischer, K. Mehlhorn and S. Schirra, A strong and easily computable separation bound for arithmetic expressions involving square roots, Proc. of the 8th ACM-SIAM Symp. on Discrete Algorithms (1997), 702-709.
[21] C. Burnikel, K. Mehlhorn and S. Schirra, How to compute the Voronoi diagram of line segments: Theoretical and experimental results, Proc. 2nd Annu. European Sympos. Algorithms, Lecture Notes in Comput. Sci. 855, Springer-Verlag (1994), 227-239.
[22] C. Burnikel, K. Mehlhorn and S. Schirra, On degeneracy in geometric computations, Proc. 5th ACM-SIAM Sympos. Discrete Algorithms (1994), 16-23.
[23] C. Burnikel, K. Mehlhorn and S. Schirra, The LEDA class real number, Technical Report MPI-I-96-1001, Max-Planck-Institut für Informatik (1996).
[24] J.F. Canny, The Complexity of Robot Motion Planning, ACM Doctoral Dissertation Award 1987, MIT Press (1987). PhD thesis.
[25] J.F. Canny, Generalised characteristic polynomials, J. Symbolic Comput. 9 (1990), 241-250.
[26] CGAL project. See http://www.cs.uu.nl/CGAL/.
[27] J. Chang and V. Milenkovic, An experiment using LN for exact geometric computations, Proc. 5th Canad. Conf. Comput. Geom. (1993), 67-72.
[28] W. Chen, K. Wada and K. Kawaguchi, Parallel robust algorithms for constructing strongly convex hulls, Proc. 12th Annu. ACM Sympos. Comput. Geom. (1996), 133-140.
[29] K.L. Clarkson, Safe and effective determinant evaluation, Proc. 33rd Annu. IEEE Sympos. Found. Comput. Sci. (1992), 387-395.
[30] J.L.D. Comba and J. Stolfi, Affine arithmetic and its applications to computer graphics (1993). Presented at SIBGRAPI'93, Recife (Brazil), October 20-22.
[31] M. de Berg, M. van Kreveld, M. Overmars and O. Schwarzkopf, Computational Geometry, Springer-Verlag (1997).
[32] P. de Rezende and W. Jacometti, Geolab: An environment for development of algorithms in computational geometry, Proc. 5th Canad. Conf. Comput. Geom., Waterloo, Canada (1993), 175-180.
[33] T.J. Dekker, A floating-point technique for extending the available precision, Numer. Math. 18 (1971), 224-242.
[34] O. Devillers and F.P. Preparata, A probabilistic analysis of the power of arithmetic filters, Technical Report CS-96-27, Center for Geometric Computing, Dept. Computer Science, Brown Univ. (1996).
[35] T.K. Dey, K. Sugihara and C.L. Bajaj, Delaunay triangulations in three dimensions with finite precision arithmetic, Comput. Aided Geom. Design 9 (1992), 457-470.
[36] D.P. Dobkin and D. Silver, Applied computational geometry: Towards robust solutions of basic problems, J. Comput. Syst. Sci. 40 (1989), 70-87.
[37] D. Douglas, It makes me so CROSS, Introductory Readings in Geographic Information Systems, D.J. Peuquet and D.F. Marble, eds, Taylor & Francis, London (1990), 303-307.
[38] T. Dube, K. Ouchi and C.K. Yap, Tutorial for Real/Expr Package (1996).
[39] T. Dube and C.K. Yap, A basis for implementing exact computational geometry, Extended abstract (1993).
[40] H. Edelsbrunner, Algorithms in Combinatorial Geometry, Springer-Verlag (1986).
[41] H. Edelsbrunner and E. Mücke, Simulation of simplicity: A technique to cope with degenerate cases in geometric algorithms, ACM Trans. on Graphics 9 (1990), 66-104.
[42] I. Emiris, A complete implementation for computing general dimensional convex hulls, Research Report 2551, INRIA, Sophia-Antipolis, France (1996).
[43] I. Emiris and J. Canny, An efficient approach to removing geometric degeneracies, Proc. of the 8th ACM Symp. on Comput. Geom. (1992), 74-82.
[44] I. Emiris and J. Canny, A general approach to removing degeneracies, SIAM J. Comput. 24 (1995), 650-664.
[45] I.Z. Emiris, J.F. Canny and R. Seidel, Efficient perturbations for handling geometric degeneracies, Algorithmica 19 (1-2) (September 1997), 219-242.
[46] P. Epstein, J. Kavanagh, A. Knight, J. May, T. Nguyen and J.-R. Sack, A workbench for computational geometry, Algorithmica 11 (1994), 404-428.
[47] A. Fabri, G.-J. Giezeman, L. Kettner, S. Schirra and S. Schönherr, The CGAL kernel: A basis for geometric computation, Applied Computational Geometry: Towards Geometric Engineering (WACG'96), M.C. Lin and D. Manocha, eds, Springer LNCS 1148 (1996), 191-202.
[48] S. Fang and B. Briiderlin, Robustness in geometric modeling — tolerance-based methods. Computational Geometry — Methods, Algorithms and Applications: Proc. Intemat. Workshop Comput. Geom. CG '91, Lecture Notes in Comput. Sci. 553, Springer-Verlag (1991), 85-101. [49] A.R. Forrest, Computational geometry in practice. Fundamental Algorithms for Computer Graphics, R.A. Eamshaw, ed., NATO AST, Vol. F17, Springer-Verlag (1985), 707-724. [50] A.R. Forrest, Computational geometry and software engineering: Towards a geometric computing environment. Techniques for Computer Graphics, D.F. Rogers and R.A. Eamshaw, eds. Springer-Verlag (1987), 23-37. [51] S. Fortune, Stable maintenance of point set triangulations in two dimensions, Proc. 30th Annu. IEEE Sympos. Found. Comput. Sci. (1989), 494-505. [52] S. Fortune, Progress in computational geometry. Directions in Geometric Computing, R. Martin, ed.. Information Geometers Ltd. (1993), 81-128. [53] S. Fortune, Numerical stability of algorithms for 2-d Delaunay triangulations, Intemat. J. Comput. Geom. Appl. 5(1) (1995), 193-213. [54] S. Fortune, Robustness issues in geometric algorithms. Applied Computational Geometry: Towards Geometric Engineering (WACG96), M.C. Lin and D. Manocha, eds. Springer LNCS 1148 (1996), 9-14. [55] S. Fortune and V. Milenkovic, Numerical stability of algorithms for line arrangements, Proc. 7th Annu. ACM Sympos. Comput. Geom. (1991), 334-341. [56] S. Fortune and C.J. van Wyk, LN user manual (1993). [57] S. Fortune and C.J. van Wyk, Efficient exact arithmetic for computational geometry, Proc. 9th Annu. ACM Sympos. Comput. Geom. (1993), 163-172. [58] S. Fortune and C.J. van Wyk, Static analysis yields efficient exact integer arithmetic for computational geometry, ACM Trans. Graph. 15 (3) (July 1996), 223-248. [59] P.G. Franciosa, C. Gaibisso, G. Gambosi and M. Talamo, A convex hull algorithm for points with approximately known positions, Intemat. J. Comput. Geom. Appl. 4 (2) (1994), 153-163. [60] W.R. Franklin, Cartographic errors symptomatic of underlying algebra problems, Proc. Intemat. Sympos. Spatial Data Handling, Vol. 1, 20-24 August (1984), 190-208. [61] G.-J. Giezeman, PlaGeo, a Library for Planar Geometry and SpaGeo, a Library for Spatial Geometry, Utrecht University (1994). [62] D. Goldberg, What every computer scientist should know about floating-point arithmetic, ACM Comput. Surv. 32 (1) (March 1991), 5 ^ 8 . [63] M.F. Goodchild, Issues of quality and uncertainty. Advances in Cartography, J.C. MuUer, ed., Elsevier Applied Science, London (1991), 113-139. [64] M. Goodrich, L. Guibas, J. Hershberger and P. Tanenbaum, Snap rounding line segments efficiently in two and three dimensions, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 284-293. [65] T. Granlund, GNU MP, The GNU Multiple Precision Arithmetic Library, 1.Q2 edition (June 1996). [66] D.H. Greene and F.F. Yao, Finite-resolution computational geometry, Proc. 27th Annu. IEEE Sympos. Found. Comput. Sci. (1986), 143-152. [67] L. Guibas, Implementing geometric algorithms robustly, AppUed Computational Geometry: Towards Geometric Engineering (WACG96), M.C. Lin and D. Manocha, eds. Springer LNCS 1148 (1996), 15-22. [68] L. Guibas and D. Marimont, Rounding arrangements dynamically, Proc. 11th Annu. ACM Sympos. Comput. Geom. (1995), 190-199. [69] L. Guibas, D. Salesin and J. Stolfi, Epsilon geometry: Building robust algorithms from imprecise computations, Proc. 5th Annu. ACM Sympos. Comput. Geom. (1989), 208-217. [70] L. Guibas, D. Salesin and J. 
Stolfi, Constructing strongly convex approximate hulls with inaccurate primitives, Proc. 1st Annu. SIGAL Intemat. Sympos. Algorithms, Lecture Notes in Comput. Sci. 450, SpringerVerlag (1990), 261-270. [71] D. Halperin and C. Shelton, A perturbation scheme for spherical arrangements with application to molecular modeling, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 183-192. [72] K. Hinrichs, J. Nievergelt and P. Schom, An all-round sweep algorithm for 2-dimensional nearest-neighbor problems. Acta Informatica 29 (1992), 383-394. [73] J.D. Hobby, Practical line segment intersection with finite precision output. Technical Report 93/2-27, Bell Laboratories (Lucent Technologies) (1993).

630

S. Schirra

[74] CM. Hoffmann, J.E. Hopcroft and M. Karasick, Towards implementing robust geometric computations, Proc. 4th Annu. ACM Sympos. Comput. Geom. (1988), 106-117. [75] CM. Hoffmann, J.E. Hopcroft and M.T. Karasiclc, Robust set operations on polyhedral solids, IEEE Comput. Graph. Appl. 9 (6) (November 1989), 50-59. [76] CM. Hoffmann, The problem of accuracy and robustness in geometric computation, IEEE Computer (March 1989), 3 1 ^ 1 . [77] J.E. Hopcroft and P.J. Kahn, A paradigm for robust geometric algorithms, Algorithmica 7 (1992), 339380. [78] H. Inagaki and K. Sugihara, Numerically robust algorithm for constructing constrained Delaunay triangulation, Proc. 6th Canad. Conf. Comput. Geom. (1994), 171-176. [79] H. Inagaki, K. Sugihara and N. Sugie, Numerically robust incremental algorithm for constructing threedimensional Voronoi diagrams, Proc. 4th Canad. Conf. Comput. Geom. (1992), 334-339. [80] J.W. Jaromczyk and G.W. Wasilkowski, Computing convex hull in a floating point arithmetic, Comput. Geom. 4 (1994), 283-292. [81] K. Jensen and N. Wirth, PASCAL- User Manual and Report. Revised for the ISO Pascal Standard, 3rd edn. Springer-Verlag (1985). [82] S. Kahan and J. Snoeyink, On the bit complexity of minimum link paths: Superquadratic algorithms for problems solvable in linear time, Proc. 12th Annu. ACM Sympos. Comput. Geom. (1996), 151-158. [83] M. Karasick, On the representation and manipulation of rigid solids, PhD thesis, Dept. Comput. Sci., McGill Univ., Montreal (1989). [84] M. Karasick, D, Lieber and L.R. Nackman, Efficient Delaunay triangulations using rational arithmetic, ACM Trans. Graph. 10 (1991), 71-91. [85] R. Klein, Algorithmische Geometric, Addison-Wesley (1997) (in German). [86] D.E. Knuth, The Art of Computer Programming. Vol. 2: Seminumerical Algorithms, 2nd edn, AddisonWesley(1981). [87] D.E. Knuth, Axioms and Hulls, Lecture Notes in Comput. Sci. 606, Springer-Verlag, Heidelberg, Germany (1992). [88] M.J. Laszlo, Computational Geometry and Computer Graphics in C+ + , Prentice-Hall, Upper Saddle River, NJ( 1996). [89] Z. Li and V. Milenkovic, Constructing strongly convex hulls using exact or rounded arithmetic, Algorithmica 8 (1992), 345-364. [90] LiDIA-Group, Fachbereich Informatik Institut ftir Theoretische Informatik TH Darmstadt, LiDIA Manual A library for computational number theory, 1.3 edition (April 1997). [91] G, Liotta, F. Preparata and R. Tamassia, Robust proximity queries: An illustration of degree-driven algorithm design, Proc. 13th Annu. ACM Sympos. Comput. Geom. (1997), 156-165. [92] K. Mehlhom, Data Structures and Algorithms 3: Multi-dimensional Searching and Computational Geometry, Springer-Veriag (1984). [93] K. Mehlhom and S. Naher, Implementation of a sweep line algorithm for the straight line segment intersection problem, Report MPI-I-94-160, Max-Planck-Institut Inform., Saarbriicken, Germany (1994). [94] K. Mehlhom and S. Naher, The implementation of geometric algorithms, Proc. 13th World Computer Congress IFIP94, Vol. 1 (1994), 223-231. [95] K. Mehlhom and S. Naher, LEDA, a platform for combinatorial and geometric computing, Comm. ACM 38 (1995), 96-102. [96] K. Mehlhom, S. Naher and C Uhrig, The LEDA User manual, 3.5 edition (1997). See http://www.mpi-sb.mpg.de/LEDA/leda.html. [97] M. Mignotte, Identification of algebraic numbers, J. Algorithms 3 (1982), 197-204. [98] M. Mignotte, Mathematics for Computer Algebra, Springer-Verlag (1992). [99] V. Milenkovic, Verifiable implementations of geometric algorithms using finite precision arithmetic, Artif. 
Intell. 37(1988), 377^01. [100] V. Milenkovic, Double precision geometry: A general technique for calculating line and segment intersections using rounded arithmetic, Proc. 30th Annu. IEEE Sympos. Found. Comput. Sci. (1989), 500-505. [101] V. Milenkovic, Robust polygon modeling, Comput. Aided Design 25 (9) (1993). (Special issue on Uncertainties in Geometric Design.)

Robustness and precision issues in geometric computation

631

102] V. Milenkovic and L.R. Nackman, Finding compact coordinate representations for polygons and polyhedra, Proc. 6th Annu. ACM Sympos. Comput. Geom. (1990), 244-252. 103] T. Minakawa and K. Sugihara, Topology oriented vs. exact arithmetic - experience in implementing the three-dimensional convex hull algorithm, ISAAC97 (1997). 104] R.E. Moore, Interval Analysis, Prentice-Hall, Englewood Cliffs, NJ (1966). 105] R.E. Moore, Methods and Applications of Interval Analysis, SIAM, Philadelphia (1979). 106] S.P. Mudur and PA. Koparkar, Interval methods for processing geometric objects, IEEE Comput. Graph. Appl. 4 (2) (1984), 7-17. 107] K. Mulmuley, Computational Geometry: An Introduction through Randomized Algorithms, Prentice-Hall, Englewood Cliffs, NJ (1994). 108] J. Nievergelt and K.H. Hinrichs, Algorithms and Data Structures: With Applications to Graphics and Geometry, Prentice-Hall, Englewood Cliffs, NJ (1993). 109] J. Nievergelt and P. Schom, Das Rdtsel der verzopften Geraden, Informatik Spektrum 11 (1988), 163-165. (in German). 110] J. Nievergelt, P. Schom, M. de Lorenzi, C. Ammann and A. Brtingger, XYZ: Software for geometric computation. Technical Report 163, Institut fur Theorische Informatik, ETH, Zurich, Switzerland (1991). I l l ] Y. Oishi and K. Sugihara, Topology oriented divide and conquer algorithm for Voronoi diagrams. Graph. Models Image Process. 57 (4) (1995), 303-314. 112] J. O'Rourke, Computational Geometry in C, Cambridge University Press, Cambridge (1994). 113] T. Ottmann, G. Thiemt and C. Ullrich, Numerical stability of geometric algorithms, Proc. of the 3rd ACM Symp. on Computational Geometry (1987), 119-125. 114] K. Ouchi, Real/Expr: Implementation of exact computation, Master thesis, Courant Institute, New York University (1997). 115] M. Overmars, Designing the computational geometry algorithms library CGAL, Applied Computational Geometry: Towards Geometric Engineering (WACG96), M.C. Lin and D. Manocha, eds. Springer LNCS 1148(1996), 53-58. 116] F. Preparata, Robustness in geometric algorithms. Applied Computational Geometry: Towards Geometric Engineering (WACG96), M.C. Lin and D. Manocha, eds. Springer LNCS 1148 (1996), 23-24. 117] F. Preparata and M.I. Shamos, Computational Geometry, Springer-Verlag (1985). 118] D.M. Priest, Algorithms for arbitrary precision floating point arithmetic, 10th Symposium on Computer Arithmetic, IEEE Computer Society Press (1991), 132-143. 119] D.M. Priest, On properties of floating-point arithmetic: Numerical stability and the cost of accurate computations, PhD thesis. Department of Mathematics, University of California at Berkeley (1992). 120] D. PuUar, Consequences of using a tolerance paradigm in spatial overlay, Proc. of Auto-Carto 11 (1993), 288-296. 121] P. Schorn, An object-oriented workbench for experimental geometric computation, Proc. 2nd Canad. Conf. Comput. Geom. (1990), 172-175. 122] P. Schorn, Robust Algorithms in a Program Library for Geometric Computation, Informatik-Dissertationen ETH Zurich, Vol. 32, Verlag der Fachvereine, Zurich (1991). 123] P. Schom, An axiomatic approach to robust geometric programs, J. Symbolic Comput. 16 (1993), 155165. 124] P. Schorn, Degeneracy in geometric computation and the perturbation approach, Comput. J. 37 (1) (1994), 35-42. 125] M.G. Segal, Using tolerances to guarantee valid polyhedral modeling results, Comput. Graph. 24 (4) (August 1990), 105-114. 126] M.G. Segal and C.H. Sequin, Partitioning polyhedral objects into nonintersecting parts, IEEE Comput. Graph. Appl. 
8 (1) (January 1988), 53-67. 127] M.G. Segal and C.H. Sequin, Consistent calculations for solids modelling, Proc. 1st Annu. ACM Sympos. Comput. Geom. (1985), 29-38. 128] R. Seidel, The nature and meaning of perturbations in geometric computations, STACS94 (1994). 129] B. Serpette, J. Vuillemin and J.C. Herve, BigNum, a portable and efficient package for arbitrary-precision arithmetic. Technical Report 2, Digital Paris Research Laboratory (1989). [130] J.R. Shewchuk, Adaptive precision floating-point arithmetic and fast robust geometric predicates. Technical Report CMU-CS-96-140, School of Computer Science, Carnegie Mellon University (1996).

632

S. Schirra

[131] J.R. Shewchuk, Robust adaptive floating-point geometric predicates, Proc. 12th Annu. ACM Sympos. Comput. Geom. (1996), 141-150. [132] J.R. Shewchuk, Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator. Applied Computational Geometry: Towards Geometric Engineering (WACG96), M.C. Lin and D. Manocha, eds. Springer LNCS 1148 (1996), 203-222. [133] IEEE Standard, 754-1985 for binary floating-point arithmetic, SIGPLAN 22 (1987), 9-25. [134] A.J. Stewart, Robust point location in approximate polygons, Proc. 3rd Canad. Conf. Comput. Geom. (1991), 179-182. [135] A.J. Stewart, The theory and practice of robust geometric computation, or, how to build robust solid modelers, PhD thesis, Dept. Comput. Sci., Cornell Univ., Ithaca, NY (August 1991). Technical Report TR 911229. [136] A.J. Stewart, Local robustness and its application to polyhedral intersection, Intemat. J. Comput. Geom. Appl. 4(1) (1994), 87-118. [137] K.G. Suffem and E.D. Fackerell, Interval methods in computer graphics, Comput. Graphics 15 (3) (1991), 331-340. [138] K. Sugihara, On finite-precision representations of geometric objects, J. Comput. Syst. Sci. 39 (1989), 236-247. [139] K. Sugihara, An intersection algorithm based on Delaunay triangulation, IEEE Comput. Graph. Appl. 12 (2) (March 1992), 59-67. [140] K. Sugihara, A simple method for avoiding numerical errors and degeneracy in Voronoi diagram construction, lEICE Trans. Fundamentals E75-A (4) (April 1992), 468^77. [141] K. Sugihara, Topologically consistent algorithms related to convex polyhedra, Proc. 3rd Annu. Intemat. Sympos. Algorithms Comput., Lecture Notes in Comput. Sci. 650, Springer-Verlag (1992), 209-218. [142] K. Sugihara, A robust and consistent algorithm for intersecting convex polyhedra, Comput. Graph. Forum 13 (3) (1994), 45-54. Proc. EUROGRAPHICS '94. [143] K. Sugihara and M. Iri, A solid modelling system free from topological inconsistency, J. Inform. Process. 12 (4) (1989), 380-393. [144] K. Sugihara and M. Iri, Construction of the Voronoi diagram for 'one million' generators in singleprecision arithmetic, Proc. IEEE 80 (9) (September 1992), 1471-1484. [145] K. Sugihara and M. Iri, A robust topology-oriented incremental algorithm for Voronoi diagrams, Intemat. J. Comput. Geom. Appl. 4 (1994), 179-228. [146] K. Sugihara, Y. Goishi and T. Imai, Topology-oriented approach to robustness and its applications to several Voronoi-diagram algorithms, Proc. 2nd Canad. Conf. Comput. Geom. (1990), 36-39. [147] C.K. Yap, A geometric consistency theorem for a symbolic perturbation scheme, Proc. of the 4th ACM Symp. on Computational Geometry (1988), 134-141. [148] C.K. Yap, Symbolic treatment of geometric degeneracies, J. Symbohc Comput. 10 (1990), 349-370. [149] C.K. Yap, Robust geometric computation, CRC Handbook of Discrete and Computational Geometry, J.E. Goodman and J. O'Rourke, eds, CRC Press (1997), 653-668. [150] C.K. Yap, Towards exact geometric computation, Comput. Geom. 7(1-2) (1997), 3-23. Preliminary version appeared in Proc. of the 5th Canad. Conf. on Comp. Geom. (1993), 405^19. [151] C.K. Yap and T. Dube, The exact computation paradigm. Computing in Euclidean Geometry, D.-Z. Du and F.K. Hwang, eds, Lecture Notes Series on Comput., Vol. 1, World Scientific, Singapore (1995), 452^92.

CHAPTER 15

Geometric Shortest Paths and Network Optimization

Joseph S.B. Mitchell*
Department of Applied Mathematics and Statistics, State University of New York, Stony Brook, NY 11794-3600
E-mail: jsbm@ams.sunysb.edu; http://www.ams.sunysb.edu/~jsbm/

Contents

1. Introduction
   1.1. Shortest paths in graphs
   1.2. Approximation algorithms
   1.3. Geometric preliminaries
2. Geodesic paths in a simple polygon
   2.1. Special structure: Linear-time algorithms
   2.2. Other geodesic distance problems
3. Geodesic paths in a polygonal domain
   3.1. Searching the visibility graph
   3.2. Continuous Dijkstra method
   3.3. Approximation algorithms
   3.4. Two-point queries
   3.5. Other geodesic distance problems
4. Shortest paths in other metrics
   4.1. L1 metric
   4.2. Link distance
   4.3. The weighted region metric
   4.4. Minimum-time paths: Kinodynamic motion planning
   4.5. Curvature-constrained shortest paths
   4.6. Optimal motion of non-point robots
   4.7. Multiple criteria optimal paths
   4.8. Other optimal path problems
5. On-line algorithms and navigation without maps
6. Shortest paths in higher dimensions
   6.1. Complexity
   6.2. Special cases
   6.3. Approximation algorithms
   6.4. Other metrics
7. Other network optimization problems
   7.1. Optimal spanning trees
   7.2. Traveling salesperson problem
   7.3. Approximation schemes
   7.4. TSP variants and related geometric problems
References


*This research was largely conducted while the author was a Fulbright Research Scholar at Tel Aviv University. The author is also partially supported by NSF grant CCR-9504192, and by grants from Boeing Computer Services, Bridgeport Machines, Hughes Aircraft, and Sun Microsystems.






1. Introduction

A natural and well-studied problem in algorithmic graph theory and network optimization is that of computing a "shortest path" between two nodes, s and t, in a graph whose edges have "weights" associated with them, where we consider the "length" of a path to be the sum of the weights of the edges that comprise it. Efficient algorithms are well known for this problem, as briefly summarized below.

The shortest path problem takes on a new dimension when considered in a geometric domain. In contrast to graphs, where the encoding of edges is explicit, a geometric instance of a shortest path problem is usually specified by giving geometric objects that implicitly encode the graph and its edge weights. Our goal in devising efficient geometric algorithms is generally to avoid explicit construction of the entire underlying graph, since the full induced graph may be very large (even exponential in the input size, or infinite).

Computing an optimal path in a geometric domain is a fundamental problem in computational geometry, having many applications in robotics, geographic information systems (GIS) (see [135]), wire routing, etc. The most basic form of the problem is: Given a collection of obstacles, find a Euclidean shortest obstacle-avoiding path between two given points. A much broader collection of problems is defined by considering the several parameters that define the problem, including:
- the objective function: How do we measure the "length" of a path? Options include the Euclidean length, Lp length, "link distance", etc.
- constraints on the path: Are we simply to get from point s to point t, or must we also visit other points or other regions along a path or cycle?
- input geometry: What types of "obstacles" or other entities are specified in the input map?
- dimension of the problem: Are we in 2-space, 3-space, or higher dimensions?
- type of moving object: Are we moving a single point along the path, or is the robot specified by some more complex geometry?
- single shot vs. repetitive mode queries: Do we want to answer a single query, or build an effective data structure for efficient queries?
- static vs. dynamic environments: Do we allow obstacles to be inserted or deleted, or do we allow obstacles to be moving along known trajectories?
- exact vs. approximate algorithms: Are we content with an answer that is guaranteed to be within some small factor of optimal?
- known vs. unknown map: Is the complete geometry of the map known in advance, or is it discovered on-line, using some kind of sensor?

In this survey chapter, we discuss several forms of the geometric shortest path problem, primarily for a single point moving in a 2- or 3-dimensional space. We assume that the map of the environment is known, except in Section 5, where we discuss on-line path planning problems.


We also discuss other geometric network optimization problems, including minimum spanning trees, Steiner trees, and the traveling salesperson problem. Many versions of these problems are known to be NP-hard; thus, much of our attention is devoted to approximation algorithms. We focus mostly on sequential algorithms in this survey, listing only a few results on parallel algorithms. See the surveys by Atallah and Chen [45], Goodrich [179], and by Reif and Sen [335] for more extensive lists of results on parallel algorithms in geometry. We will freely use the "big-Oh" notation for upper bounds on time and space requirements. We also use "big-Omega" notation for lower bounds. (See [124] for definitions.) We use "Õ(·)" to indicate an upper bound in which we suppress polylogarithmic factors. Many of the results discussed in this survey are also reported, in a more tabular form, in a survey chapter [288] of the recently released CRC Handbook, edited by Goodman and O'Rourke [178]. Finally, we make a disclaimer that our survey concentrates primarily on theoretical results. Some of these results may well imply practical algorithms that may be implementable and useful; however, in many cases, the algorithms are too complex or have too large a constant buried in the big-Oh notation to be of practical significance. We hope that a future survey will address the important choices and issues facing practitioners in the implementation of geometric shortest path and network optimization algorithms. One of the major issues facing an implementer of any geometric algorithm is, of course, robustness; see the survey by Schirra [352] (Chapter 14 in this Handbook).

1.1. Shortest paths in graphs

Shortest paths in graphs and networks are well studied; see, e.g., Ahuja, Magnanti, and Orlin [10]. Here, we mention the case in which all edge weights are non-negative, as this is the most relevant for geometric instances. Then, a standard algorithm given by Dijkstra [139] allows one to compute a tree of shortest paths from any one source node to all other nodes of the graph. Early implementations of Dijkstra's algorithm required time O(v²) or O(e log v), where v denotes the number of vertices and e the number of edges. Using Fibonacci heaps, Fredman and Tarjan [161] gave an O(e + v log v) time implementation, and argued that this is optimal in a comparison-based model of computation. Exploiting planarity, Henzinger, Klein, and Rao [198] have obtained a linear-time algorithm for computing all shortest paths from a single source in planar graphs having nonnegative edge weights. There has been some recent progress too in devising new algorithms that differ from Dijkstra's algorithm in that they do not necessarily visit nodes in increasing order of distance from the source node. Thorup [374] has in fact obtained an optimal O(e)-time algorithm for computing a tree of shortest paths in a graph having integer edge weights; see his paper, as well as the recent article of Raman [327], for a survey of other recent results that led up to this one.
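To make the discussion concrete, here is a minimal sketch of Dijkstra's algorithm using a binary heap (the O(e log v) variant mentioned above, rather than the Fibonacci-heap implementation); the adjacency-list representation and names are illustrative assumptions, not taken from any of the cited papers.

    import heapq

    def dijkstra(adj, s):
        # Single-source shortest-path distances for nonnegative edge weights.
        # adj: dict mapping each node to a list of (neighbor, weight) pairs.
        dist = {s: 0.0}
        heap = [(0.0, s)]
        finalized = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in finalized:
                continue              # stale heap entry; u already settled
            finalized.add(u)
            for v, w in adj.get(u, ()):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w   # relax edge (u, v)
                    heapq.heappush(heap, (d + w, v))
        return dist

The lazy-deletion trick (skipping stale heap entries rather than decreasing keys) is what keeps the binary-heap version simple at the cost of the extra log factor over the Fibonacci-heap bound.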


1.2. Approximation algorithms

Several of the problems we will discuss in this survey are "provably hard" (e.g., NP-hard), meaning that no polynomial-time algorithm is known to solve them. An increasingly popular approach to "solving" NP-hard optimization problems is to obtain provably-good approximation algorithms, which are guaranteed, in polynomial time, to produce an answer that is close to optimal: say, whose objective function value is at most some factor c > 1 times optimal, for a minimization problem. Such an approximation algorithm is then called a c-approximation algorithm. (For a maximization problem, a c-approximation algorithm produces a solution whose objective function value is at least (1/c) times optimal.) A polynomial time approximation scheme (PTAS) is a method that allows one to compute a (1 + ε)-approximation to the optimal (minimum), in time that is polynomial in n, for any fixed ε > 0. (In general, the dependence on ε may be exponential in (1/ε).) The recent book edited by Hochbaum [210] contains several articles surveying the state of knowledge on approximation algorithms for NP-hard problems. In particular, the survey of Bern and Eppstein [65] gives an excellent overview of the subject of approximating NP-hard geometric optimization problems.

Approximation algorithms can also be quite useful for problems that are not necessarily NP-hard. First, an approximation algorithm may be considerably simpler and easier to implement than an algorithm that solves the problem to optimality. Further, the running time (both worst-case and average-case) for the approximation algorithm may be much better than the best known for the exact solution, even when the exact algorithm has polynomial running time. Finally, approximation algorithms are known for some problems whose complexity status is still open, such as the MAX TSP in the plane and the minimum-weight triangulation problem; see Section 7.
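Restated in symbols (no new content, just the definitions above), a c-approximation algorithm A for a minimization problem, and a PTAS, satisfy

\[
  \mathrm{OPT}(I) \;\le\; A(I) \;\le\; c \cdot \mathrm{OPT}(I)
  \qquad \text{for every instance } I,\ c > 1,
\]
\[
  A_{\varepsilon}(I) \;\le\; (1+\varepsilon)\,\mathrm{OPT}(I),
  \qquad \text{in time polynomial in } n \text{ for each fixed } \varepsilon > 0.
\]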

1.3. Geometric preliminaries

Throughout the survey, we will have need of some basic terminology, which we outline in this section. First, a path is a continuous image of an interval. A polygonal s-t path is a path from point s to point t consisting of a finite number of line segments (edges, or links) joining a sequence of points (vertices). The length of an s-t path is a nonnegative number associated with the path, measuring its total cost according to some prescribed metric. Unless otherwise specified, the length will be the Euclidean length of the path. A shortest path is then a path of minimum length among all paths that are feasible (satisfying all imposed constraints). We often refer to a shortest path also as an "optimal path" or a "geodesic path". (The word "geodesic" is sometimes used differently, to refer to paths that are "locally optimal", as defined below.) The shortest-path problem induces a metric, the shortest path metric, in which the distance between two points s and t is given by the length of a shortest s-t path; in many geometric contexts, this metric is also referred to as geodesic distance.


A simple polygon, P, having n vertices, is a closed, simply-connected region whose boundary is a union of n (straight) line segments (edges), whose endpoints are the vertices of P. A polygonal domain, P, having n vertices and h holes, is a closed, multiply-connected region whose boundary is a union of n line segments, forming h + 1 closed (polygonal) cycles. (A simple polygon is a polygonal domain with h = 0.) A triangulation of P is a decomposition of P into triangles such that any two triangles either intersect in a common vertex, a common edge, or not at all. A triangulation of a simple polygon P can be computed in O(n) time [92]; a polygonal domain can be triangulated in O(n log n) [326] or O(n + h log^{1+ε} h) [55] time. (See the chapter of Bern and Plassman [67] in this handbook, or the survey by Bern [64] for more information on triangulations.)

We will use the term obstacle to refer to any region of space whose interior is forbidden to paths. The complement of the set of obstacles is the free space. If the free space is a polygonal domain P, the obstacles are the h + 1 connected components of the complement of P (the h holes, plus the face at infinity).

A path that cannot be improved by making a small change to it that preserves its combinatorial structure (e.g., the ordered sequence of triangles visited, for some triangulation of a polygonal domain P) is called a locally shortest or locally optimal path. It is also known as a taut-string path in the case of a shortest obstacle-avoiding path.

The visibility graph, VG(P), is a graph whose nodes are the vertices of P and whose edges join pairs of nodes for which the corresponding segment lies inside P. An example is shown in Figure 2.

Given a source point, s, a shortest path tree, SPT(s, P), is a spanning tree of s and the vertices of P such that the (unique) path in the tree between s and any vertex of P is a shortest path in P. A single-source query is a type of shortest path problem in which a source point, s, is fixed, and for each query (goal) point, t, one requests the length of a shortest path from the source point s to t. The query may also require the retrieval of an actual instance of a shortest s-t path; in general, this can be reported in additional time O(k), where k is the complexity of the output (e.g., number of edges). One method of handling the single-source query problem is to construct a shortest path map, SPM(s), which is a decomposition of free space into regions (cells) according to the "combinatorial structure" of shortest paths from a fixed source point s to points in the regions. Specifically, for shortest paths in a polygonal domain, SPM(s) is a decomposition of P into cells such that for all points t interior to a cell, the sequence of obstacle vertices along an s-t path is fixed. In particular, the last obstacle vertex along a shortest s-t path is the root of the cell containing t. Each cell is star-shaped with respect to its root, which lies on the boundary of the cell, meaning that the root can "see" all points within the cell. Typically, we will store with each vertex, v, of P the geodesic distance, d(s, v), from s to v, as well as a pointer to the predecessor of v, which is the vertex (possibly s) preceding v in a shortest path from s to v. (The predecessor pointers provide an encoding of the SPT(s, P).) Note that v will appear on the boundary of the star-shaped cell rooted at its predecessor. The boundaries of cells consist of portions of obstacle edges, extension segments (extensions of visibility graph edges incident on the root), and bisector curves. The bisector curves are, in general, hyperbolic arcs that are the locus of points p that are (geodesically) equidistant
from two roots, u and v: they satisfy d(s, u) + d2(u, p) = d(s, v) + d2(v, p), where d2(·, ·) denotes Euclidean distance. (Extension segments can be considered to be degenerate cases of bisector curves.) In Figure 1, the root of the cell containing t is labeled r. If SPM(s) is preprocessed for point location (see the chapter by Goodrich [180] in this handbook), then single-source queries can be answered efficiently by locating the query point t within the decomposition: If t lies in the cell rooted at r, the geodesic distance to t is given by d(s, t) = d(s, r) + d2(r, t). A shortest s-t path can then be output in time O(k), where k is the number of vertices along the path, by simply following predecessor pointers back from r to s.

In a two-point query problem, we are asked to construct a data structure that allows us to answer efficiently a query that specifies two points, s and t, and requests the length of a shortest path between them. In all cases discussed here, an actual instance of a shortest path can be reported in additional time O(k), where k is the complexity of the output (e.g., number of edges).

A geodesic Voronoi diagram (VD) is a Voronoi diagram for a set of sites, in which the underlying metric is the geodesic distance. See the chapter of Aurenhammer and Klein [46] in this handbook for details about Voronoi diagrams.

The geodesic center of P is a point within P that minimizes the maximum of the shortest-path lengths to any other point in P. The geodesic diameter of P is the maximum of the lengths of the shortest paths joining pairs of vertices of P.

Finally, we remark that in most of the algorithmic results reported here, the model of computation assumed has been the real RAM, which assumes that exact operations on real numbers can be done in constant time per operation. We acknowledge that this model is not, in general, realistic. At a couple of places in the survey, we will point to results involving bit complexity models.
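Returning to the single-source query process described above, the following sketch shows how a query would be answered once SPM(s) is available. Here locate_root (a point-location oracle over SPM(s)), dist (storing d(s, r) for each root r), and pred (the predecessor pointers) are hypothetical names standing in for the structures just described.

    import math

    def geodesic_distance(t, locate_root, dist):
        # d(s, t) = d(s, r) + d2(r, t), where r is the root of t's cell
        # and d2 denotes Euclidean distance.
        r = locate_root(t)
        return dist[r] + math.dist(r, t)

    def report_path(t, locate_root, pred):
        # Output a shortest s-t path in O(k) time by following predecessor
        # pointers back from the root of t's cell (pred of s is None).
        path, v = [t], locate_root(t)
        while v is not None:
            path.append(v)
            v = pred[v]
        return path[::-1]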

2. Geodesic paths in a simple polygon

We begin by considering the most basic geometric shortest-path problem, that of finding a shortest s-t path inside a simple polygon, P (having no "holes"). The complement of P serves as an "obstacle" through which the path is not allowed to travel.

2.1. Special structure: Linear-time algorithms

In this case, simple local optimality arguments, based on the triangle inequality, yield:

PROPOSITION 1. There is a unique shortest s-t path in a simple polygon P; consequently, SPT(s, P) is unique.

We now sketch an O(n)-time algorithm for computing a shortest s-t path within a simple polygon P. We begin with a triangulation of P (O(n) time [92]), whose dual graph is a tree. The sleeve is comprised of the triangles that correspond to the (unique) path in the dual that joins the triangle containing s to that containing t. By considering the effect of adding


Fig. 1. A shortest path map with respect to source point s within a polygonal domain with h = 3. The heavy dashed path indicates the shortest s-t path, which reaches t via the root r of its cell. Bisector curves are shown in narrow solid curves; extension segments are shown thin and dashed.

Fig. 2. The visibility graph VG(P): Edges of VG(P) are of two types: (1) the heavy dark boundary edges of P, and (2) the edges that intersect the interior of P, shown with thin dashed segments. A shortest s-t path is highlighted.


Fig. 3. Computing a shortest path in a simple polygon: Splitting a funnel.

the triangles in order along the sleeve, [90,253] have shown how to obtain an O(n)-time algorithm for collapsing the sleeve into a shortest path. At a generic step of the algorithm, the sleeve has been collapsed to a structure called a "funnel" (with "base" ab and "root" r) consisting of the shortest path from s to a vertex r, and two (concave) shortest paths joining r to the endpoints of the segment ab that bounds the triangle abc that is about to be considered (see Figure 3). In adding triangle abc, we "split" the funnel in two according to the taut-string path from r to c, which will, in general, include a segment, uc, joining c to some (vertex) point of tangency, u, along one of the two concave chains of the funnel. After the split, we keep that funnel (with base ac or bc) that contains the s-t taut-string path. The work needed to search for u can easily be charged off to those vertices that are discarded from further consideration. The end result is that a shortest s-t path is found in time O(n), which is worst-case optimal.

In order to answer single-source query problems, we are interested in also computing the shortest path map in P. SPM(s) has a particularly simple structure, as the boundaries between cells in the map are simply (line segment) chords of P obtained by extending appropriate edges of the visibility graph VG(P). Guibas et al. [186] have shown how it can be computed in time O(n), by using somewhat more sophisticated data structures to do funnel splitting efficiently (since, in this case, we cannot discard one side of each split funnel). Then, after storing the SPM(s) in an appropriate O(n)-size point location data structure (see, e.g., [180]), single-source queries can be answered in O(log n) time. Hershberger and Snoeyink [203] have substantially simplified the original algorithm of [186].

The above result can be strengthened even further to the case of two-point queries. Guibas and Hershberger [185] have shown how a simple polygon can be preprocessed in time O(n), into a data structure of size O(n), to support shortest-path queries between any two points s, t ∈ P. In time O(log n) the length of the shortest path can be reported, and in additional time O(k), the shortest path can be reported, where k is the number of vertices
in the output path. The method has been simplified with a new data structure introduced by Hershberger [200].

THEOREM 2 ([185,200]). For a simple polygon P having n vertices, there is a data structure of size O(n) that can be built in time O(n) so that the length of the shortest path between any two points s, t ∈ P can be reported in time O(log n), and the shortest path itself can be reported in additional time proportional to its number of vertices.

We should emphasize that the above methods all rely on starting with a triangulation of the simple polygon. Given the complexity of linear-time triangulations of polygons, we pose the following open problem:

OPEN PROBLEM 1. Can one devise a simple O(n)-time algorithm for computing the shortest path between two points in a simple polygon, without resorting to a (complicated) linear-time triangulation algorithm?

In the dynamic version of the shortest path problem, one allows the polygon P to change, with the addition or deletion of edges and vertices. If the changes are always made in such a way that the set of all edges yields a connected planar subdivision of the plane into simple polygons (i.e., no "islands" are created), then one can maintain a data structure of size O(n) that supports two-point query time of O(log² n) (plus O(k) if the path is to be reported), and update time of O(log² n) for each addition/deletion of an edge/vertex [183]. (The result of [183] improves the first results on the dynamic problem, obtained by Chiang, Preparata, and Tamassia [109,110], who gave a data structure achieving O(log³ n) query and update bounds, using O(n log n) space. The same data structure also gives the best known dynamic point location solution for connected maps, with optimal O(log n) query time.)

We turn briefly to some results on parallel algorithms. ElGindy and Goodrich [148] gave a parallel algorithm to compute a shortest path in a simple polygon in time O(log n), using O(n) processors (in the CREW PRAM model). Goodrich, Shauck, and Guha [181,182] show how, with O(n/log n) processors and O(log n) time, one can compute a data structure that supports O(log n) (sequential) time shortest-path queries between pairs of points in a simple polygon. They also give an O(log n)-time algorithm using O(n) processors to compute a shortest path tree. Hershberger [201] builds on the results of [181,182] and gives an algorithm for shortest path trees requiring only O(log n) time and O(n/log n) processors (CREW); he also obtains optimal parallel algorithms for related visibility and geodesic distance problems.

2.2. Other geodesic distance problems

The geodesic Voronoi diagram of k sites inside P can be constructed in time O((n + k) log(n + k)), using O(n) space [320]; this improves an earlier result of Aronov [32] that required time O((n + k) log(n + k) log n). The furthest-site Voronoi diagram for geodesic distance can also be computed in time O((n + k) log(n + k)), and space O(n + k), using
an algorithm of Aronov, Fortune, and Wilfong [33]. Given that shortest paths in simple polygons require only linear time, it is natural to ask if the superlinear portion of the complexities of these algorithms can be moved to the "k" term; the only lower bound known is Ω(n + k log k).

OPEN PROBLEM 2. Can the geodesic Voronoi diagram (closest-site or furthest-site) for k sites within a simple polygon P be computed in time O(n + k log k)?

The geodesic diameter of a simple polygon can be computed in time O(n), using the method of "matrix searching" in the geodesic distance, as developed by Hershberger and Suri [209]. This algorithm improves an earlier O(n log n)-time solution given by Suri [366,185]. Matrix searching also provides a powerful tool for obtaining linear-time solutions to other geodesic distance problems, such as all nearest neighbors and all furthest neighbors. The geodesic center of a simple polygon P can be computed in time O(n log n) [325] (see also [42]); however, it is believed that this bound can be improved.

OPEN PROBLEM 3. Can the geodesic center of a simple polygon be computed in O(n) time?

3. Geodesic paths in a polygonal domain In contrast to the situation in simple polygons, where there is a unique taut-string path between any two points, in a general polygonal domain P, there can be an exponential number of taut-string (locally optimal) simple paths between two points. A special case of the shortest path problem in polygonal domains is that in which the "homotopy type" of the desired path is specified, e.g., by giving the sequence (possibly with repetitions) of the N visited triangles, in some triangulation of P. In this case, Hershberger and Snoeyink [203] have shown how to compute a shortest path of the given homotopy type in time 0(N), using a generalization of the linear-time methods in simple polygons. This problem is of interest in applications to VLSI routing problems; see [123, 163,256]. To compute a shortest path in general polygonal domains, with no constraints on the homotopy type, we must efficiently search over all possible "threadings" of paths. We discuss two methods below that have been used to do so: searching the visibility graph (see Figure 2), and performing a "continuous Dijkstra" search of the domain.

644

J.S.B. Mitchell

3.1. Searching the visibility graph Since we can make "point" holes in P at 5 and t, we can assume, without loss of generality, that s and t are vertices of P. Using simple local optimality arguments, it is easy to show: PROPOSITION 3. Any locally optimal s-t path in a polygonal domain P must lie on the visibility graph VG{P)\ it consists of a union of straight line segments joining pairs of visible vertices.

Early algorithms to construct the visibihty graph required time 0{rp-\ogn) [252], and were based on a radial sweep about each vertex of P. The time complexity came from the use of n independent radial sortings of the vertices. Later improvements by Welzl [385] and by Asano et al. [38] gave a time bound O(n^). These methods were based on the use of point-line duality, which allowed the n sortings to be done more efficiently, in 0{n^) time overall, by constructing the arrangement of the n lines that are dual to the vertices of P. But, given that the number. Eye, of edges in the visibility graph may be much smaller than its worst-case quadratic size (in particular. Eye may be only linear in n), researchers pursued "output-sensitive" algorithms to compute it in time that is a function of EyC' Hershberger [199] studied the special case of visibility graphs in simple polygons, obtaining an 0(£'vG)-time and 0(n)-space algorithm to compute the visibility graph of a simple polygon. Overmars and Welzl [313] obtained a relatively simple 0{EyG^ogn)time method, requiring 0{n) space. Then, Ghosh and Mount [173] obtained an algorithm with worst-case optimal running time, 0{EyG + n log^z), using OiEyc) working storage space. More recently, Pocchiola and Vegter [324] and Riviere [345] have given algorithms to compute the visibility graph in optimal time ( 0 ( £ ' V G + n\ogn)) and optimal space (0(n)). Once we have computed the graph VG{P), whose edges are weighted by their Euclidean lengths, we can use Dijkstra's algorithm' to construct a tree of shortest paths from s to all vertices of P, in time 0{EyG + n \ogn) [161,142]. Thus, Euclidean shortest paths among obstacles in the plane can be computed in time 0{EyG-\-n\ogn). This bound is worst-case quadratic in n, since Eye ^ (2)^ r»ote too that domains exist with Eye = ^(n^). If our goal is to obtain the shortest path map, then, given the tree of shortest paths from s, we can compute SPMC^y) in time 0(n \ogn) [282]. Another method based on visibility graphs leads to an algorithm whose running time is only linear in n, while being quadratic in the number, /i, of holes in P. Kapoor, Maheshwari, and Mitchell [238] have given an 0(n -h /i^ logn)-time, 0(n)-space algorithm, using visibility graph techniques developed by Rohnert [347,346] for convex obstacles, and visibility "corridor" structure developed by Kapoor and Maheshwari [237]. There has been an effort for many years to characterize which graphs correspond to visibility graphs of some geometric domain. For example, it is an interesting open problem to characterize the class of graphs that can be realized as the visibility graph of a simple ' In practice, it may be faster to apply the A* heuristic search algorithm (e.g., see Pearl [322]), using the straightline Euclidean distance as heuristic function, /i() (which is a lower bound, so it implies an "admissible" algorithm).

Geometric shortest paths and network optimization

645

polygon; see, e.g., Abello and Kumar [1], Ghosh [172], and O'Rourke and Streinu [312] for some recent results and some pointers to related work. For further information on visibility, visibility graphs, and their use in shortest path problems, we refer the reader to the survey of Alt and Welzl [20], the survey (on visibility) by O'Rourke [310], and the Chapter 19 on visibility by Asano, Ghosh, and Shermer [41] in this Handbook. 3.2. Continuous Dijkstra method Instead of searching the visibility graph (which may have quadratic size), an alternative paradigm for shortest-path problems is to construct the (linear-size) shortest path map directly. The continuous Dijkstra method [278-280,282,283,291,292] was developed for this purpose. Building on the success of the method in solving (in nearly linear time) the shortest-path problem for the L\ metric (see Section 4.1), Mitchell [284,286] developed a version of the continuous Dijkstra method applicable to the Euclidean shortest-path problem, obtaining the first subquadratic (0(n^/^+^)) time bound. Subsequently, this result was improved by Hershberger and Suri [205,206], who achieve a nearly optimal algorithm based also on the continuous Dijkstra method. They give an 0{n logn) time and 0{n logn) space algorithm, coming close to the lower bounds of Q{n-\-h log/i) time and 0{n) space. The continuous Dijkstra paradigm involves simulating the effect of a "wavefront" propagating out from the source point, s. The wavefront at distance S from s is the set of all points of P that are at geodesic distance 8 from 5. It consists of a set of curve pieces, called wavelets, which are arcs of circles, centered at obstacle vertices that have already been reached. At certain critical "events," the structure of the wavefront changes due to one of the following possibilities: (1) (2) (3) (4)

a wavelet disappears (due to the "closure" of a cell of the SPM); or a wavelet collides with an obstacle vertex; or a wavelet collides with another wavelet; or a wavelet collides with an obstacle edge at a point interior to that edge.

It is not difficult to see from the fact that SPM(^) has linear size that the total number of such events is 0{n). The challenge in applying this propagation scheme is in devising an efficient method to know what events are going to occur and in being able to process each event as it occurs (updating the combinatorial structure of the wavefront). One approach, used in [284,286], is to track a "pseudo-wavefront," which is allowed to run over itself, and "clip" only when a wavelet collides with a vertex that has already been labeled due to an earlier event. Detection of when a wavelet collides with a vertex is accomplished with range searching techniques, at a cost of 0(^^-^+^) per query. This leads to an overall running time of 0(n-^/^+^), for any fixed 6: > 0, using 0{n) space. An alternative approach, used in [205,206], simplifies the problem by first decomposing the domain P using a "conforming subdivision," which allows one to propagate an "approximate wavefront" on a cell-by-cell basis. A key property of a conforming subdivision is that for any edge (of length L) of the subdivision, there are only a constant number of (constant-sized) cells within geodesic distance L of it.

646

J.S.B. Mitchell

While the algorithm of [206] is optimal worst-case time when there are a large number of obstacles (e.g., h = Q{n)), it fails to be optimal in its space complexity {0{n\ogn)) and in its complexity as a function of n and h. One of the most intriguing open problems here is to obtain an (optimal) algorithm whose running time asymptotically matches the lower bound ofQ{n-^h log h), while using only 0{n) space. Currently, the only algorithm known that is linear in n is also quadratic in h [238]. OPEN PROBLEM 4. Can one solve the Euclidean shortest-path problem in 0{n + h \ogh) time and 0(n) spacel

3.3. Approximation algorithms Efficient methods to approximate the Euclidean shortest path, in time O(nlogn), have existed for some time. Clarkson [120] gave an algorithm that spent 0{{n\ogn)/e) time to build a data structure of size 0(n), after which a (1 -h 6:)-approximate shortest path query could be answered in time 0{n logn -h n/e). (These bounds rely also on an observation in [94].) Using a related approach, based on approximating Euclidean distance with fixed orientation distances (see Section 4.1), Mitchell [279,283] gave a method requiring 0{(n logn)/v^) time and 0{n/yfe) space to give an approximate Euclidean shortest path. Chen, Das, and Smid [96] have shown an Q{n \ogn) lower bound, in the algebraic computation tree model, on the time required to compute a (1 -h £)-approximate shortest path; they also give Q{n \ogn) lower bounds on computing various types of "r-spanners," which are graphs that, for every pair of points, contain a path whose length is at most t times the interpoint distance (Euclidean, geodesic, etc.); see the survey on spanners in this handbook by Eppstein [151], as well as [80,107,350].

3.4. Two-point queries Two-point queries in a polygonal domain are much more challenging than the case of simple polygons, where optimal algorithms are known. One approach, observed by Chen, Daescu, and Klenk [95], is to proceed as follows. Using O(n^) space, we can store the shortest path map, SPM(i;, P), rooted at all n vertices. Then, for any s and r, we can use the visibility complex of Pocchiiola and Vegter [323] to compute the set of ks vertices visible to s and kt vertices visible to f, in time 0{K\ogn), where K = min{ks, kt} (using a standard "lock step" computation of the visibility from the two points). Then, assuming that K = ks, we simply locate t in each of the ks SPM's rooted at the vertices visible from s. This permits two-point queries to be answered in time 0 ( ^ logn), which is ^(n \ogn) in the worst case, making this method no better than starting the computation from scratch. However, this approach may be effective in cases in which K may be expected to be small. A recent study by Chiang and Mitchell [108] has yielded more efficient query times, with various tradeoffs between preprocessing time and space. They use a visibility-based approach to achieve query time 0(logn + h) using 0(n^) preprocessing time and space.

Geometric shortest paths and network optimization

647

They also achieve optimal query time, O(logn), using high polynomial space (roughly n^^), and they achieve slightly sublinear query time, using 0(n^'^^) space. These results utilize an "equivalence decomposition" of the domain P, so that for all points z within a cell of the decomposition, the shortest path maps with respect to z are topologically equivalent. Then, for given query points s and t, one locates s within the decomposition, and then uses the resulting SPM, along with a parametric point location data structure, to locate t within the SPM with respect to s. Unfortunately, the complexity of the decomposition can be quite high; there can be ^(n^) topologically distinct shortest path maps with respect to points within P. Unfortunately, the upper bound is still considerably higher than this; obtaining tight bounds remains an interesting open question. Approximations have also been useful in attacking the two-point query problem. As observed in [94], the method of Clarkson [120] can be used to construct a data structure of size O(n^), in 0(n^logn) time, so that two-point (1 + 6:)-optimal queries can be answered in time O(logn), for any fixed e > 0. Chen [94] was the first to obtain nearly linear-spsice data structures for approximate shortest path queries; these were obtained, though, at the cost of a higher approximation factor. He obtains a (6 + 6^)-approximation, using 0(r?l^I log^/^ n) time to build a data structure of size 0(n logn), after which queries can be answered in time O(logn). (Within this time bound, the approximate length is reported; in additional time proportional to the number of vertices, a path can be reported that achieves the length bound.) These results have been improved recently by Arikati et al. [21], who give a family of results, based on planar spanners (see [151]), with tradeoffs among the approximation factor and the preprocessing time, storage space, and query time. One such result obtains a (3\/2 + 6:)-approximation using 0(r?l^ j log^^^ n) time to build a data structure of size 0{n logn), after which queries are performed in time O(logn). For other results, and for bounds that apply to other metrics {Lp metrics), we refer the reader to the paper. OPEN PROBLEM 5. How efficiently, and using what size data structure, can one preprocess a polygonal domain for exact two-point queries? Can exact two-point queries be done in sublinear query time using subquadratic storage? Can O(I)-approximate twopoint queries be done in polylogarithmic time, using nearly linear storagel

3.5. Other geodesic distance problems The geodesic Voronoi diagram of k sites inside P can be constructed in time 0((n -1k) \og{n + k)), using the continuous Dijkstra method, simply starting with multiple source points [206]. While the geodesic center/diameter problem has been carefully examined for the case of simple polygons (Section 2), we are unaware of results (other than brute force) for polygonal domains: OPEN PROBLEM 6. How efficiently can one compute a geodesic center/diameter for a polygonal domain!

648

J.S.B. Mitchell

4. Shortest paths in other metrics So far, we have considered only shortest path problems in the Euclidean metric. We turn now to other possible objective functions for measuring the length of a path.

4.1. The L_1 metric

The L_p metric defines the distance between q = (q_x, q_y) and r = (r_x, r_y) by d_p(q, r) = (|q_x - r_x|^p + |q_y - r_y|^p)^{1/p}. The L_p length of a polygonal path is the sum of the L_p lengths of each edge of the path. Special cases of the L_p metric include the L_1 metric (Manhattan metric) and the L_∞ metric (d_∞(q, r) = max{|q_x - r_x|, |q_y - r_y|}). A polygonal path with each edge parallel to a coordinate axis is called a rectilinear (or isothetic) path. (For a rectilinear path, the L_1 and L_2 lengths are identical.) A natural generalization of the notion of a rectilinear path is that of C-oriented paths, having each edge parallel to one of a set C of c = |C| fixed orientations. (See Widmayer, Wu, and Wong [386], who initiated the study of fixed orientation metrics in computational geometry.)

As with Euclidean shortest paths, algorithms for computing shortest paths in the L_1 metric fall into two general categories: searching a sparse "path preserving graph" (analogous to a visibility graph), or applying the continuous Dijkstra paradigm of tracking a wavefront. Clarkson, Kapoor, and Vaidya [122] showed how to construct a sparse graph, having O(n log n) nodes and O(n log n) edges, that is path preserving in that it is guaranteed to contain a shortest path between any two vertices. Applying Dijkstra's algorithm then gives an O(n log^2 n) time (O(n log n) space) algorithm for L_1 shortest paths. (Alternatively, one gets O(n log^{3/2} n) time and O(n log^{3/2} n) space.) Using observations in [98,99], the time-space tradeoff has been improved to yield somewhat improved bounds of O(n log^{3/2} n) time and O(n log n) space. The continuous Dijkstra paradigm has also been applied to the L_1 shortest path problem, resulting in the computation of the SPM(s) in time O(n log n), using O(n) space [279,283]. The special property of the L_1 metric that is exploited in this algorithm is the fact that the wavefront in this case is piecewise-linear, with "wavelets" that are line segments of slope ±1, so that the first vertex hit by a wavelet can be determined efficiently using rectangular range searching techniques (e.g., see [91]).

Two-point query problems have also been studied for the L_1 geodesic metric. In a simple rectilinear polygon, Lingas, Maheshwari, and Sack [259] and Schuierer [353] give optimal algorithms, achieving O(log n) query time (O(1) for vertex-to-vertex queries), using O(n) preprocessing time and space; an optimal path can be reported in additional O(k) time, where k is the number of links. (A previous algorithm of de Berg [131] achieved optimal query time using O(n log n) space and preprocessing.) Their methods are based upon a histogram decomposition of the polygon and yield a path that is "smallest": simultaneously optimal in both the L_1 and rectilinear link metric (see also [158,274], as well as Section 4.7). They also yield an O(n) algorithm for computing the L_1 geodesic diameter and furthest neighbors for all vertices. Further, the algorithm of Lingas, Maheshwari, and
Sack [259] is actually based on an optimal parallel (EREW PRAM) algorithm that preprocesses a polygon (with a given trapezoidization) in time O(log n), using O(n/log n) processors.

Two-point queries in a polygonal domain, under the L_1 metric, have been studied by Chen, Klenk, and Tu [98,99], who have shown how a polygonal domain can be preprocessed, using O(n^2 log^2 n) time and O(n^2 log n) space, so that two-point queries can be answered in time O(log^2 n). The special case in which obstacles are disjoint axis-aligned rectangles has been studied by Atallah and Chen [44,43] and by ElGindy and Mitra [149]; O(log n) query time is achievable, using O(n^2) preprocessing time and space, or O(√n) query time is achievable, using O(n^{3/2}) preprocessing time and space. In fact, they give parallel algorithms: with O(n^2/log n) CREW processors, a data structure of size O(n^2) can be built that permits two-point queries to be answered in time O(log^2 n) on a single processor ([43]). Mitra and Bhattacharya [298] and Chen and Klenk [97] have obtained approximation results in the special case of disjoint rectangular obstacles; [97] describe a method achieving O(log n) query time for a 3-approximate query, using O(n log n) space and O(n log^2 n) preprocessing time. (If the query points are both obstacle vertices, then the query time is only O(1).) Arikati et al. [21] have recently obtained approximation results for two-point queries in polygonal domains, as we mentioned already in the Euclidean case. Their results apply also to L_p metrics, where they obtain various tradeoffs between space and time resources, to achieve approximation factors that are c + ε, 2c + ε, or 3c + ε, where c = 2^{(p-1)/p} (so c = 1 for the L_1 metric).

Methods for finding L_1 shortest paths often generalize to the case of C-oriented paths, in which c = |C| fixed directions are given. Shortest C-oriented paths can be computed in time O(cn log n) [279,283]. Two-point queries can be answered in query time O(c^2 log^2 n), after O(c^2 n^2 log^2 n) time and space preprocessing [95]. Since the Euclidean metric is approximated to within accuracy O(1/c^2) if we use c equally spaced orientations, this results in an algorithm to compute, in time O((n/√ε) log n), a path guaranteed to have length within a factor (1 + ε) of the Euclidean shortest path length [279,283]. Clarkson [120] gave an alternative approximation algorithm, based also on discretizing directions, that computes an ε-optimal (Euclidean) shortest path in time O(n/ε + n log n), after spending O((n/ε) log n) time to build a data structure of size O(n/ε).
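
As an illustration of the definitions above (our sketch, not part of the original survey), the following short Python fragment evaluates d_p and the L_p length of a polygonal path; it checks, for instance, that a rectilinear staircase path has identical L_1 and L_2 lengths, while a straight diagonal segment does not.

    import math

    def d_p(q, r, p):
        # L_p distance: (|q_x - r_x|^p + |q_y - r_y|^p)^(1/p); p = inf gives L_inf.
        if p == math.inf:
            return max(abs(q[0] - r[0]), abs(q[1] - r[1]))
        return (abs(q[0] - r[0]) ** p + abs(q[1] - r[1]) ** p) ** (1.0 / p)

    def lp_length(path, p):
        # L_p length of a polygonal path: the sum of the L_p lengths of its edges.
        return sum(d_p(a, b, p) for a, b in zip(path, path[1:]))

    stair = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]   # rectilinear (isothetic) path
    assert abs(lp_length(stair, 1) - lp_length(stair, 2)) < 1e-12  # both equal 4
    print(lp_length([(0, 0), (2, 2)], 2), lp_length([(0, 0), (2, 2)], 1))  # 2.828..., 4.0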

4.2. Link distance

The link distance within P from s to t is the minimum number of edges in an s-t path in P. If the paths are restricted to be rectilinear or C-oriented, then we speak of the rectilinear link distance or C-oriented link distance. A min-link s-t path is a polygonal path from s to t that achieves the link distance. In many problems, the link distance provides a more natural measure of path complexity than the Euclidean length. The link distance also has applications to curve simplification [187,222,295]. Since this handbook contains a chapter by Maheshwari, Sack, and Djidjev [271] devoted entirely to the subject of link distance, we refer the reader to that survey for further information.
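
To make the definition concrete, here is a small Python sketch (ours, not from the chapter [271] cited above). Given a hypothetical predicate visible(u, v) that tests whether the segment uv lies inside P, a breadth-first search over s, t, and the polygon vertices yields the minimum number of links over all paths that turn only at vertices. Note that this is in general only an upper bound on the true link distance, since min-link paths may bend at non-vertex points; exact algorithms use window partitions instead.

    from collections import deque

    def vertex_link_upper_bound(s, t, vertices, visible):
        # BFS by links over paths that bend only at polygon vertices.
        # `visible(u, v)` is an assumed helper: does segment uv lie inside P?
        nodes = [t] + list(vertices)
        links = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if visible(u, t):
                return links[u] + 1        # one final link reaches t
            for v in nodes:
                if v not in links and visible(u, v):
                    links[v] = links[u] + 1
                    queue.append(v)
        return None                        # t is not reachable from s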

4.3. The weighted region metric

In the "weighted region problem", we are given a piecewise-constant function, f : R^2 → R, that is defined by assigning a nonnegative weight to each face of a given triangulation in the plane. The weighted length of an s-t path π is the path integral, ∫_π f(x, y) dσ, of the weight function along π. The weighted region metric associated with f defines the distance d_f(s, t) to be the infimum over all s-t paths π of the weighted length of π. The weighted region problem (WRP) asks for an s-t path of minimum weighted length. The WRP is a natural generalization of the shortest-path problem in a polygonal domain: Consider a weight function that assigns weight 1 to P and weight ∞ (or a sufficiently large constant) to the obstacles (the complement of P). The WRP models the minimum-time path problem for a point robot moving in a terrain of varied types (e.g., grassland, brushland, blacktop, bodies of water, etc.), where each type of terrain has an assigned weight equal to the reciprocal of the maximum speed of traversal for the robot. We usually assume that f is specified by a triangulation having n vertices, with each face assigned an integer weight α ∈ {0, 1, ..., W, +∞}. (We can allow edges of the triangulation to have a weight that is possibly distinct from that of the triangular facets on either side of it; in this way, "linear features" such as "roads" can be modeled.)

Using an algorithm based on the continuous Dijkstra method, Mitchell and Papadimitriou [292] show how to find a path whose weighted length is guaranteed to be within a factor of (1 + ε) of optimal, where ε > 0 is any user-specified degree of precision. The time complexity of their algorithm is O(E · S), where E is the number of "events" in the continuous Dijkstra algorithm, and S is the complexity of performing a numerical search to solve the following subproblem: Find a (1 + ε)-shortest path from s to t that goes through a given sequence of k edges of the triangulation. It is shown that E = O(n^4) and that there are examples where E can actually achieve this upper bound. The numerical search can be done using a form of binary search that exploits the local optimality condition: An optimal path bends according to "Snell's Law of Refraction" when crossing a region boundary. (The earliest reference we have found to the use of Snell's Law in optimal route planning applications is to the work of Warntz [384].) This leads to a bound of S = O(k^2 log(nNW/ε)) on the time needed to perform a search on a k-edge sequence, where N is the largest integer coordinate of any vertex of the triangulation. Since one can show that k = O(n^2), this yields an overall time bound of O(n^8 L), where L = log(nNW/ε) can be thought of as the bit complexity of the problem instance.

Various special cases of the weighted region problem admit faster and simpler algorithms. In the case that region weights are restricted to {0, 1, ∞} (while edges may have arbitrary (nonnegative) weights), an O(n^2)-time algorithm can be based on constructing a path-preserving graph similar to a visibility graph, as shown by Gewali et al. [169]. This also leads to an efficient method for performing lexicographic optimization, in which one prioritizes various types of regions according to which is most important for path length minimization.
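
Concretely, if θ_1 and θ_2 denote the angles (measured from the normal of the region boundary) before and after the crossing, local optimality requires w_1 sin θ_1 = w_2 sin θ_2. The Python sketch below (our illustration, not code from [292]) applies this rule, including the analogue of total internal reflection, where no locally optimal crossing exists:

    import math

    def refract(theta1, w1, w2):
        # Snell's law for weighted regions: w1*sin(theta1) = w2*sin(theta2),
        # angles measured from the normal of the boundary between the regions.
        s = w1 * math.sin(theta1) / w2
        if abs(s) > 1.0:
            return None   # "total internal reflection": the path does not cross
        return math.asin(s)

    # Entering a region twice as costly, the path bends toward the normal:
    print(math.degrees(refract(math.radians(45), 1.0, 2.0)))  # ~20.7 degrees
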
Lee, Yang, and Chen [254] consider the case in which the plane has weight 1, while each of a set of pairwise-disjoint rectilinear polygonal "obstacles" has a weight greater than 1, indicating that it is more costly to travel through it than to go around it. They apply the techniques of [122], searching a path-preserving graph, to obtain an algorithm for minimum-cost rectilinear paths that takes time O(n log^2 n) (with space O(n log n)) or
O(n log^{3/2} n) time (with O(n log^{3/2} n) space). A path-preserving graph approach can also be applied to the more general case of rectilinear paths in an arbitrarily weighted rectilinear subdivision, to yield efficient algorithms for single-source and two-point queries. Specifically, Chen, Klenk, and Tu [98] give an O(n log^{3/2} n)-time algorithm to construct a data structure of size O(n log n), permitting O(log n)-time single-source queries to be answered; for two-point queries, they use O(n^2 log^2 n) space and preprocessing time, and answer queries in time O(log^2 n).

In recent experimental investigations, Mata and Mitchell [273] and Lanthier, Maheshwari, and Sack [247] have shown the practicality of solving the WRP using very simple methods based on searching a discrete graph which is assured of containing an approximately optimal path. One graph is based on discretizing the edges of the subdivision, placing evenly-spaced new (Steiner) vertices along each edge, with separation at most weighted length δ. The vertices on the boundary of each (convex) facet are interconnected (possibly implicitly) with a complete graph. Searching the resulting graph for a shortest path results in an approximate shortest path; the error is at most Kδ, where K is the number of segments in the path. Another option (in [273]) is to construct a "pathnet" graph, based on tracing k evenly-spaced "refraction rays" (that obey Snell's Law) out of each original vertex, and linking that vertex to one vertex (or "critical entry point") within each of the k refraction cones defined by the rays. As k increases, the pathnet more closely approximates a complete set of optimal paths connecting pairs of vertices. The experimental studies suggest that these methods are practical and readily implementable, and that the observed dependence of the approximation factor on the algorithm parameters (δ or k) is better in practice than the worst-case bounds may suggest. Further, the graphs that are searched can be precomputed and stored, allowing reasonably efficient solutions to two-point queries. The reported path can also be postprocessed with a local optimality procedure that results in a solution even closer to optimal.

Using a slightly different discrete graph than the edge subdivision graph of [247,273], Aleksandrov et al. [11] give alternative time bounds that depend on other parameters related to the "fatness" of the triangular facets of a weighted polyhedral surface. They place Steiner points along edges in a geometric progression, as Papadimitriou [317] has done for approximating shortest paths in three dimensions (Section 6.3). This allows one to compute a (1 + ε)-approximate shortest path from s to t in time O(Mn log Mn + nM^2) (and space O(nM^2)), where M = O((1/(ε sin θ)) log(λ/(εh))), λ is the length of a longest edge, h is the minimum altitude of a triangular facet, θ is the smallest angle of any triangular facet, W (resp., w) is the maximum (resp., minimum) weight of a facet, and 0 < ε < W/w + 1. (See also Section 6.3, where the same method is mentioned in the unweighted case.) Note that, while the dependence on ε and on geometric precision parameters is substantially worse than in the algorithm of Mitchell and Papadimitriou [292], the worst-case dependence on n is much better. (If, as in [292], the coordinates have integral values at most N, then sin θ = Ω(1/N^2) and h = Ω(1/N), making the time bound roughly O(nN^4/ε^2).)
An improved variant of their result ([12]) searches a reduced subgraph, allowing them to remove the additive term nM^2 in the complexity, resulting in time bound O(Mn log Mn) (roughly O(nN^2/ε) in terms of the precision parameters above). Several other papers have also addressed practical and effective (possibly heuristic) methods for the WRP; see the work by Alexander and Rowe [13-15] and a recent pair
of papers by Kindl, Shing, and Rowe [239,240], which report practical experience with a simulated annealing approach to the WRP. Johansson [229] has implemented a version of the edge subdivision method (also investigated by [247,273]) and studied its use in fluid flow computations for injection molding. Papadakis and Perakis [315,314] have generalized the WRP to the case of time-varying maps, where both the weights and the region boundaries may change over time; they obtain generalized local optimality conditions for this case and propose a search algorithm to find good paths.
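
A minimal version of the edge-subdivision scheme of [247,273] is easy to code. The Python sketch below (ours) uses uniform Steiner spacing δ rather than the geometric progression of [11], connects the Steiner points bounding each face with a complete graph weighted by face weight times Euclidean length, and searches the result with Dijkstra's algorithm. Coordinates are rounded so that Steiner points shared by adjacent faces coincide; triangle vertices themselves appear as Steiner points, so s and t can be taken to be original vertices.

    import heapq, math

    def _pt(x, y):
        return (round(x, 9), round(y, 9))   # make shared edge points coincide

    def steiner_graph(faces, delta):
        # faces: list of (weight, (p0, p1, p2)) triangles; Steiner points are
        # placed at spacing <= delta along each edge, and all points on the
        # boundary of a face are interconnected at cost weight * length.
        adj = {}
        for w, tri in faces:
            pts = []
            for a, b in ((0, 1), (1, 2), (2, 0)):
                p, q = tri[a], tri[b]
                n = max(1, math.ceil(math.dist(p, q) / delta))
                pts += [_pt(p[0] + (q[0] - p[0]) * i / n,
                            p[1] + (q[1] - p[1]) * i / n) for i in range(n)]
            for u in pts:
                for v in pts:
                    if u != v:
                        c = w * math.dist(u, v)
                        old = adj.setdefault(u, {}).get(v, math.inf)
                        adj[u][v] = min(old, c)
        return adj

    def dijkstra(adj, s, t):
        # Standard Dijkstra over the Steiner graph; s and t must be graph nodes.
        dist, pq = {s: 0.0}, [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dist.get(u, math.inf):
                continue
            for v, c in adj.get(u, {}).items():
                if d + c < dist.get(v, math.inf):
                    dist[v] = d + c
                    heapq.heappush(pq, (d + c, v))
        return math.inf

As in the experimental studies cited above, the returned length exceeds the optimal weighted length by at most an additive error that shrinks with δ.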

4.4. Minimum-time paths: Kinodynamic motion planning

Our discussion so far has focussed on path planning problems with holonomic constraints: those that are completely specified in terms of the robot's configuration, which is described by a k-vector, if the robot has k degrees of freedom. In non-holonomic motion planning, the constraints on the robot are specified in terms of a non-integrable equation involving also the derivatives of the configuration parameters. For example, non-holonomic constraints may specify bounds on the robot's velocity, acceleration, or the curvature of its path. See Latombe [248] and Li and Canny [258] for a more detailed discussion of non-holonomic constraints and motion planning.

The kinodynamic motion planning problem (also known as the minimum-time path problem) is a non-holonomic motion planning problem in which the objective is to compute a trajectory (a time-parameterized path, (x(t), y(t))) within a domain P that minimizes the total time necessary to move from an initial configuration (position and initial velocity) to a goal configuration (position and velocity), subject to bounds on the allowed acceleration and velocity along the path. The problem formulation is intended to model the fact that real mobile robots have a bounded acceleration vector and a maximum speed. In its general form, it is a difficult optimal control problem; optimal paths will be complicated curves given by solutions to differential equations. The bounds on acceleration and velocity are most often given by upper bounds on the L_∞ norm (the "decoupled case") or the L_2 norm (the "coupled case").

Exact solutions to the kinodynamic motion planning problem are known in one dimension (O'Dunlaing [306]) and in two dimensions (Canny, Rege, and Reif [82]). The algorithm of [82] is for the decoupled case (L_∞ bounds on velocity and acceleration); it requires exponential time and polynomial space. Their method is based on characterizing a set of "canonical solutions" (related to "bang-bang" controls) that are guaranteed to include an optimal solution path. This leads to an expression in the first-order theory of the reals, which can then be solved exactly in exponential time. It remains open, however, whether or not a polynomial-time algorithm exists in two dimensions. For three or more dimensions, the problem is at least NP-hard, as implied by the lower bounds of Canny and Reif [83].

Approximation methods have been developed by Donald et al. [141], who have given a polynomial-time algorithm that produces a trajectory requiring time at most (1 + ε) times optimal, for the decoupled case. Their approach is to discretize (uniformly) the four-dimensional phase space that represents position and velocity, with special care to ensure that the size of the grid is bounded by a polynomial in 1/ε and n. They prove that shortest
paths in the induced grid graph are guaranteed to be close to optimal. The running time of their algorithm has been improved by Donald and Xavier [140]. Approximation algorithms for the coupled case have been given independently by Donald and Xavier [140] and by Reif and Tate [340]. By using a non-uniform discretization of d-dimensional configuration space, Reif and Wang [337] have obtained an approximation algorithm with a time complexity that improves that of [140], reducing the dependency on ε from O((1/ε)^{6d-1}) to O((1/ε)^{4d-2}).
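
The flavor of the uniform phase-space discretization is captured by a tiny one-dimensional Python sketch (ours; the real algorithms work in higher dimensions and carry error guarantees): states are (position, velocity) pairs, each unit time step chooses an acceleration from {-a, 0, +a}, and breadth-first search returns a minimum-time trajectory.

    from collections import deque

    def min_time_1d(x0, v0, x1, v1, xmax, vmax, a=1):
        # BFS over the (position, velocity) grid; each step applies an
        # acceleration in {-a, 0, +a}, subject to |v| <= vmax and 0 <= x <= xmax.
        start, goal = (x0, v0), (x1, v1)
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            (x, v), t = queue.popleft()
            if (x, v) == goal:
                return t                     # number of unit time steps
            for acc in (-a, 0, a):
                nv = v + acc
                nx = x + nv
                if abs(nv) <= vmax and 0 <= nx <= xmax and (nx, nv) not in seen:
                    seen.add((nx, nv))
                    queue.append(((nx, nv), t + 1))
        return None

    print(min_time_1d(0, 0, 10, 0, xmax=20, vmax=3))  # 7 time steps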

4.5. Curvature-constrained shortest paths

Related to the kinodynamic motion planning problem is the problem of finding shortest paths subject to a bound on their curvature. The curvature-constrained shortest-path problem is to compute a shortest obstacle-avoiding smooth (C^1) path joining point s, with prescribed orientation, to point t, with prescribed orientation, such that for every subinterval of the path, the average curvature is at most 1. (The average curvature of a path p : I → R^2 in the interval [u_1, u_2] ⊆ I is defined to be ||p'(u_1) - p'(u_2)||/|u_1 - u_2|, where the parameter u denotes arc length.) Placing a bound on the curvature can be thought of as a means of handling an upper bound on the acceleration vector of a point robot (e.g., an idealized aircraft) whose speed is constant, or can be thought of as the constraint imposed when modeling a car-like mobile robot having a minimum turning radius. The complexity of solving the general problem in a polygonal domain has been open until very recently; Reif and Wang [338] have shown that it is NP-hard in a polygonal domain having n vertices, each having coordinates specified by n^{O(1)} bits.

Since the general problem is difficult to solve exactly, algorithms for restricted versions of the problem, as well as approximation algorithms, have been the topic of recent investigations. Early investigations into the problem were by Dubins [145], who characterized shortest curvature constrained paths in the absence of obstacles: a shortest path consists of a sequence of at most three segments, each of which is a straight line segment ("S") or an arc of a unit radius circle ("C"), with the allowable sequences being CCC, CSC, or a subsequence of one of these two. Reeds and Shepp [333] extended this result, obtaining a characterization of shortest paths in the case in which the robot is allowed to move in reverse, as well as forward. Boissonnat, Cerezo, and Leblond [76] give an alternative method of obtaining characterizations in both cases, based on optimal control theory. (See also [368].)

Approximation algorithms for a shortest "ε-robust" path were given by Jacobs and Canny [227,228]. (See also Barraquand and Latombe [56].) Here, "ε-robust" roughly means that small perturbations of certain points along the path do not cause the path to penetrate an obstacle. They place points that discretize the boundaries of the polygonal obstacles and connect these points by paths ("jumps") of standard shapes (circular arcs and straight segments); the resulting algorithm takes time O((n/δ)^2 log n + n^2), where δ is the spacing of the discretization points on the boundary; δ controls the robustness of the path as well as the degree of approximation. They also give an alternative quadtree-based algorithm, having complexity O(n^2 log n + (n/δ)^2). Wang and Agarwal [383] give time bounds that do not depend on the length parameter δ: they give (1) an O((n/ε)^2 log n)-time algorithm that
produces a feasible path (not necessarily ε-robust) that is at most (1 + ε) times the length of a shortest ε-robust path; and (2) an O((n/ε)^{5/2} log n)-time algorithm that produces a feasible path that is (ε/2)-robust, with length at most (1 + ε) times the length of a shortest ε-robust path.

For the special case in which the obstacles are "moderate" (have differentiable boundary curves, with radius of curvature at least 1), Agarwal, Raghavan, and Tamaki [8] give an algorithm requiring time O(n^2 log n) to compute exactly a shortest curvature-constrained path from a starting configuration (position-orientation pair) to a goal location (no orientation specified), and an algorithm requiring time O(n^2 log n + 1/ε) for computing an approximate shortest path (having length at most ε greater than optimal) between two configurations. Boissonnat and Lazard [78] obtain exact algorithms between two configurations for moderate obstacles whose boundaries consist of unit-radius circular arcs and straight segments. If the boundary arcs (straight or curved) are each of length at least some constant, then their algorithm requires time O(n^2 log n); otherwise, the complexity is O(n^4 log n). (Their algorithm remains polynomial even if the obstacles are not pairwise disjoint.)

Sellen [357] uses a simple discretization of the unit square to search, in time polynomial in 1/ε, for a path among a set of constant-complexity obstacles that is "ε-approximate" (which roughly means that it is within factor (1 + ε) of being shortest, while maintaining an ε-clearance from obstacles and obeying an approximate (up to ε) curvature constraint). He also provides a decision procedure to determine the existence of a curvature-constrained path, in time polynomial in the reciprocal of a parameter that measures the difference between the radius of curvature in the constraint and the supremum of all radii for which a constrained path exists.

For the special case of curvature-constrained paths inside a convex polygon having n vertices, Agarwal et al. [4] use a careful characterization of the structure of shortest paths to obtain an algorithm with running time O(n log^2 n). Their result may be an important first step towards the solution of the more general problem inside a simple polygon:

OPEN PROBLEM 7. How efficiently can one compute a curvature-constrained shortest path in a simple polygon?
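
In the obstacle-free setting, Dubins' CSC candidates (described above) can be evaluated in closed form. The Python sketch below (ours) computes only the two same-direction words, LSL and RSR, for unit turning radius; a complete Dubins solver would also try LSR, RSL, and the two CCC words and take the overall minimum.

    import math

    TWO_PI = 2 * math.pi

    def _csc(p0, th0, p1, th1, turn):
        # LSL (turn = +1) or RSR (turn = -1) with unit radius: arc on the
        # start circle, straight segment, arc onto the goal heading.
        c0 = (p0[0] - turn * math.sin(th0), p0[1] + turn * math.cos(th0))
        c1 = (p1[0] - turn * math.sin(th1), p1[1] + turn * math.cos(th1))
        dx, dy = c1[0] - c0[0], c1[1] - c0[1]
        d = math.hypot(dx, dy)                 # straight-segment length
        phi = math.atan2(dy, dx)               # heading along the segment
        arc0 = (turn * (phi - th0)) % TWO_PI   # swept angle, start circle
        arc1 = (turn * (th1 - phi)) % TWO_PI   # swept angle, goal circle
        return arc0 + d + arc1

    def dubins_csc(p0, th0, p1, th1):
        # Best of the LSL and RSR candidates.
        return min(_csc(p0, th0, p1, th1, +1), _csc(p0, th0, p1, th1, -1))

    # Driving straight ahead requires no turning:
    print(dubins_csc((0, 0), 0.0, (5, 0), 0.0))  # 5.0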

Boissonnat et al. [77] examine curvature-constrained motion in a convex polygon (with m vertices), having a single simple polygonal hole (with n vertices). They compute, in time O(m + n), a cycle surrounding the hole having the minimum possible curvature.

Wilfong [387,388] considers the case in which the robot is to follow a given network of lanes, specified by a set of m line segments in free space, among a set of obstacles (having a total of n vertices). The robot is allowed to turn from one segment to another along a circular arc, of radius ≥ r_min, if the two lanes intersect and the robot does not collide with the obstacles. In Wilfong [387], a polynomial-time (O(m^2(n^2 + log m))) algorithm is given for preprocessing, after which, in O(m^2) time, one can report a path (if one exists) having a minimum number of turns. (See also Mirtich and Canny [277].) Wilfong [388] shows that the problem of finding a minimum-length curvature-constrained path on a set of lanes is NP-complete; however, he also gives a dynamic programming algorithm to compute a shortest path (in time O(m^2 n^2)) for a given (feasible) sequence of turns (e.g., to optimize, locally, the path produced by the algorithm in [387]).

Fortune and Wilfong [160] give an exponential-time algorithm for determining if a curvature-constrained path exists between two configurations, assuming the robot is not allowed to reverse; their algorithm solves this reachability question in time and space 2^{O(poly(n,m))}, where n is the number of vertices in the polygonal obstacles, and m is the total number of bits required to specify the vertices. Sellen [356] shows that the existence of a curvature-constrained path can be decided in time that is polynomial in 1/d_min and 1/W, where d_min is the smallest distance between obstacle features and W = |R - R_c|/R is the "relative width" of the problem, relating the maximal curvature, R^{-1}, with the critical curvature, R_c^{-1}, which is the infimum over the curvatures r^{-1} for which a curvature-constrained path (with constraint r^{-1}) exists. Sellen also shows how to approximate the critical curvature R_c^{-1} to within any relative error ε > 0, and to produce a corresponding path; the algorithm is polynomial in n and R_c/ε.

If the robot following the path is allowed to reverse direction, then Laumond [249] has shown that it is always possible to obtain a curvature-constrained path from s to t if s and t lie in the same open, path-connected component of free space. Further, when allowing reversals, Laumond et al. [250] give an algorithm that determines a path (if one exists), producing a path having a local optimality property. Desaulniers [138] shows that, in the presence of reversals, it may be that no shortest path exists, even when there is a feasible path. Svestka and Overmars [369] also study problems of planning routes for car-like robots, using a "probabilistic learning paradigm."

All of the discussion so far has been for paths in a two-dimensional environment. For three-dimensional spaces, Sussmann [367] gives a characterization of curvature-constrained shortest paths. Polynomial-time approximation algorithms for three and higher dimensions are given by Reif and Wang [337], by applying their discretization techniques developed for the kinodynamic motion planning problem.

Another interesting open area of research on curvature-constrained optimal paths is to consider network optimization problems in the curvature-constrained model. For example, we may desire a traveling salesperson tour (cycle) of minimum length, subject to the curvature constraint (see Section 7.2):

OPEN PROBLEM 8. What is the complexity of the curvature-constrained TSP for points in the unit square? What is the best approximation algorithm that can be given for the problem?

4.6. Optimal motion of non-point robots

So far, we have considered only the problem of optimally moving a point robot. If the robot is modeled as a circle, or as a nonrotating polygon, then many of the results carry over by simply applying the standard configuration space approach in motion planning: "shrink" the robot to a (reference) point, and "grow" the obstacles (using a Minkowski sum) so that the complement of the grown obstacles models the region of the plane for which there is no collision with an obstacle if the robot has its reference point placed there. Chew [106] has examined the specific case of a circular robot; Hershberger and Guibas [202] have
considered more general convex robots, obtaining essentially quadratic-time algorithms for optimal paths under translation.

Optimal motion of rotating non-circular robots is a much harder problem. Even the simplest case of moving a (unit) line segment (a ladder) in the plane is highly nontrivial. One notion of "optimal" motion requires that we minimize the average distance traveled by a set of k fixed points, evenly distributed along the ladder. This "d_k-distance" in fact defines a metric (for k ≥ 2). The special case of k = 2 is the well-known Ulam's problem, for which optimal motions have been fully characterized, in the absence of obstacles, by Icking et al. [217]. The case of k = ∞ is an especially interesting case, requiring that we compute a minimum work motion of a ladder; however, no results are known yet for this problem. (The work measures the integral (over λ ∈ [0, 1]) of the path length, L(λ), for each infinitesimal subsegment of length dλ.) O'Rourke [308] has studied a restricted case of the d_∞-optimal motion problem.

OPEN PROBLEM 9. Characterize the d_∞-optimal (minimum-work) motion for a ladder that is allowed to translate and rotate in the plane. What if it is restricted to move within a polygonal domain?
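
For a motion given as a discrete sequence of ladder placements, the d_k cost defined above is straightforward to evaluate; the Python sketch below (ours) tracks k evenly spaced points and averages the distances they travel. For k = 2, this is a discretized version of the cost in Ulam's problem.

    import math

    def dk_cost(placements, k):
        # placements: list of (a, b) endpoint pairs of the ladder; k >= 2.
        fractions = [i / (k - 1) for i in range(k)]    # k evenly spaced points
        total = 0.0
        for (a0, b0), (a1, b1) in zip(placements, placements[1:]):
            for f in fractions:
                p0 = (a0[0] + f * (b0[0] - a0[0]), a0[1] + f * (b0[1] - a0[1]))
                p1 = (a1[0] + f * (b1[0] - a1[0]), a1[1] + f * (b1[1] - a1[1]))
                total += math.dist(p0, p1)
        return total / k                               # average over the k points

    # Translating a unit ladder one unit to the right: every point moves 1.
    print(dk_cost([((0, 0), (1, 0)), ((1, 0), (2, 0))], 5))  # 1.0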

While d_1 does not define a metric, several cases of d_1-motion, and its generalization of measuring the distance traveled by any fixed "focus" F on the ladder, have been studied. In particular, if F is restricted to move on the visibility graph of a polygonal environment, Papadimitriou and Silverberg [318] (see also Sharir [360]) have obtained polynomial-time algorithms. Without restrictions, minimizing the d_1-distance, for any F not at an endpoint of the ladder, is NP-hard, but there exists an approximation algorithm; see Asano, Kirkpatrick, and Yap [40].

OPEN PROBLEM 10. Does minimizing the d_1-distance of a ladder endpoint remain NP-hard? Also, is it NP-hard to obtain a d_1-optimal motion of a ladder in a polygonal domain?

Chen and Ierardi [101] have studied a velocity-constrained version of the problem of moving a ladder, such that no point of the ladder is allowed to have its speed exceed a given bound, and the objective is to minimize the time required to move the ladder from one configuration to another. For the case of no obstacles, they give a complete characterization of the optimal motion and give an explicit construction. See also the related work of Reister and Pin [343], who study time-optimal motion of mobile robots having independently controlled wheels.

4.7. Multiple criteria optimal paths

The standard shortest-path problem asks for paths that minimize a single objective (length) function. Frequently, however, an application requires us to find paths to minimize two or more objectives; the resulting problem is a bicriteria (or multi-criteria) shortest-path problem. A path is called efficient or Pareto optimal if no other path has a better value for one criterion without having a worse value for the other criterion.

For example, in mobile robotics applications, we may wish to find a path that simultaneously is short in (Euclidean) length and has few turns. Note that a minimum-link path may be far from optimal with respect to Euclidean length; similarly, a shortest Euclidean length path may have thousands of links, while there exists a path joining start and goal that has only 2 links.

Multi-criteria optimization problems tend to be difficult. Even the bicriteria path problem in a graph is NP-hard [165]: Does there exist a path from s to t whose length is less than L and whose weight is less than W? Pseudo-polynomial time algorithms are known, such as the algorithm of Hansen [191], who finds all Pareto-optimal paths in a graph, in time polynomial in the number of paths and n. Experimental studies suggest that the average number of Pareto-optimal paths remains very small in practice, although in theory this number may be exponential. Various heuristics have also been devised; e.g., see Handler and Zang [190] and Henig [197].

In geometric problems, various optimality criteria are of interest, including any pair from the following list: Euclidean (L_2) length, rectilinear (L_1) length, other L_p metrics, link distance, total turn, etc. NP-hardness lower bounds are known for several versions, including [30]: (1) Find a path in a polygonal domain whose L_2 length is at most L, and whose "total turn" is at most T; (2) Find a path in a polygonal domain whose L_p length is at most λ_p and whose L_q length is at most λ_q (p ≠ q); and (3) Given a subdivision of the plane into red and blue polygonal regions, find a path whose length within blue regions is at most B and whose length within red regions is at most R.

One problem of particular interest is to compute a Euclidean shortest path within a polygonal domain, constrained to have at most k links. No exact solution is currently known for this problem. Part of the difficulty is that a minimum-link path will not, in general, lie on the visibility graph (or any simple discrete graph). Furthermore, the computation of the turn points of such an optimal path appears to require the solution of high-degree polynomials.

OPEN PROBLEM 11. For a polygonal domain (with holes), what is the complexity of computing a shortest k-link path between two given points?
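
The label-correcting idea behind Hansen's pseudo-polynomial approach is easy to sketch (our Python illustration, not his exact procedure): each node stores the set of (length, weight) labels not dominated by any other label at that node, and only non-dominated labels are extended.

    import heapq

    def pareto_paths(adj, s, t):
        # Bicriteria search: adj[u] = [(v, length, weight), ...].  A label
        # (L, W) at a node is kept only if no other label there has both
        # coordinates at most as large.  Returns the Pareto-optimal pairs at t.
        labels = {s: {(0, 0)}}
        pq, result = [(0, 0, s)], set()
        while pq:
            L, W, u = heapq.heappop(pq)
            if (L, W) not in labels.get(u, ()):    # label was pruned meanwhile
                continue
            if u == t:
                result.add((L, W))
                continue
            for v, l, w in adj.get(u, []):
                cand = (L + l, W + w)
                old = labels.setdefault(v, set())
                if any(a <= cand[0] and b <= cand[1] for a, b in old):
                    continue                        # dominated: discard
                old.difference_update({x for x in old
                                       if cand[0] <= x[0] and cand[1] <= x[1]})
                old.add(cand)
                heapq.heappush(pq, (cand[0], cand[1], v))
        return sorted(result)

    g = {'s': [('a', 1, 4), ('b', 3, 1)], 'a': [('t', 1, 4)], 'b': [('t', 3, 1)]}
    print(pareto_paths(g, 's', 't'))  # [(2, 8), (6, 2)]: neither path dominates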

For a given k (k ≥ d_L, where d_L is the s-t link distance), one can compute a path in a simple polygon P whose length is guaranteed to be within a factor (1 + ε) of the length of a shortest k-link path, for any tolerance ε > 0. The algorithm runs in time O(n^2 k^2 log(Nk/ε^{1/k})), polynomial in n and k, and logarithmic in 1/ε and the largest integer coordinate N of any vertex of P [294]. Within the same time bound, one can compute an ε-optimal path under any (single) combined objective, f(L, G), where L and G denote link distance and Euclidean length, and f is an increasing function in G for each L.

Aside from the problem of computing a shortest k-link path, one may ask if there always exists an s-t path that is simultaneously close to Euclidean shortest and minimum-link. In a simple polygon, such a path always exists and can be computed efficiently (in time O(n)): There is an s-t path whose link length is within a factor of 2 of the link distance from s to t, while also having Euclidean length within a factor of √2 of the Euclidean shortest-path length [31]. A corresponding result is not possible for polygons with holes. However, in
O(kE_VG) time, one can compute a path in a polygonal domain having at most 2k links and length at most that of a shortest k-link path [294].

In a rectilinear polygonal domain, some of these bicriteria path problems become easier, since there is a path-preserving graph (grid). In particular, efficient algorithms are known for the bicriteria path problem that combines rectilinear link distance and L_1 length. Yang, Lee, and Wong [390] and Chen, Daescu, and Klenk [95] give efficient algorithms for computing a shortest k-link rectilinear path, a minimum-link shortest rectilinear path, or any combined objective that uses a monotonic function of rectilinear link length and L_1 length in a rectilinear polygonal domain. Single-source queries can be answered in time O(log n), after O(n log^{3/2} n) preprocessing time to construct a data structure of size O(n log n) [95]; two-point queries can be answered in time O(log^2 n), using O(n^2 log^2 n) preprocessing time and space [95]. (See also the survey article of Lee, Yang, and Wong [255] on the subject of rectilinear path problems.) A related problem is studied by de Berg et al. [133,134], who give efficient algorithms in two or more dimensions for computing optimal paths among a set of axis-parallel (possibly crossing) line segment obstacles according to a "combined metric," defined to be a linear combination of rectilinear link distance and L_1 path length: In the plane, using O(n^2) preprocessing time and O(n log n) space, a data structure for a fixed source point can be computed, so that path length queries to a goal point can be answered in time O(log n). (Note, however, that optimal paths in this metric are not equivalent to the Pareto-optimal solution paths.) It would be interesting to study the complexity of the problem in a more general setting:

OPEN PROBLEM 12. How efficiently can one compute a (general) polygonal path in a polygonal domain, under a combined metric cost function that takes into account Euclidean length, the number of turns, and possibly the amount of turning?

4.8. Other optimal path problems

We briefly mention various other optimal path problems:

(1) In the sailor's problem, the goal is to compute a minimum-cost path, where the cost of motion is direction-dependent, and there is a cost L per turn (in a polygonal path). For L = 0, Sellen [355] gives an algorithm for computing optimal paths in a polygonal domain, in time O(n^2) times a bit complexity term. Sellen also considers the case in which L > 0, obtaining a (1 + ε)-approximation algorithm that requires time polynomial in n and 1/ε. See also the study by Rowe [349] on anisotropic weighted regions.

(2) In the maximum concealment path problem, the goal is to determine a path within a polygonal domain P that minimizes the length during which the robot is exposed to a given set of v "enemy" observers. This problem is a special case of the weighted region problem, in which weights are 0 (for travel in concealed free space), 1 (for travel in exposed free space), or ∞ (for travel through obstacles). Gewali et al. [169] use visibility graph methods, based on the local optimality conditions, to obtain polynomial-time algorithms for this problem. In a simple polygon, their time bound is O(v^2 (v + n)^2); in a polygonal domain, the bound becomes O(v^2 n^2).

(3) In the minimum total turn problem, the goal is to compute a polygonal s-t path that minimizes the sum of the absolute values of the turn angles at its vertices. This problem is solved in polynomial time (O(E_VG log n) time, O(E_VG) space) by reducing it to a shortest path problem in an augmentation of a visibility graph [30]. (See also Section 7.4, on angular-metric traveling salesperson problems.)

(4) In the fuel-consuming problem, one is given a set of n point sites in the plane and the goal is to find a "cheap" polygonal path from one site to another, with the vertices of the path being restricted to the set of point sites. The cost of a path, though, is not measured in terms of its Euclidean length, but in terms of a more general cost function, l(p, q), which assigns a nonnegative cost to a flight from p to q. Naturally, one can compute a minimum-cost path in time O(n^2) simply by searching the complete graph for a shortest path. However, it turns out that more efficient algorithms that exploit geometry are possible, if we assume that l(·, ·) has some simple properties: its description is of size O(1) and l(p, q) can be evaluated in O(1) time, and l(p, q) < l(p, q') if and only if d_2(p, q) < d_2(p, q') (where d_2(·, ·) denotes Euclidean distance). Efrat and Har-Peled [147] show that a cheapest route can be computed in time O(n^{4/3+ε}), for any fixed ε > 0. Further, they show that if the cost function grows with at least a quadratic rate as a function of Euclidean distance (i.e., l(p, q) = (d_2(p, q))^2 · f(d_2(p, q)), where f(·) is a positive, nondecreasing function), then it suffices to search the Gabriel graph (a subgraph of the Delaunay triangulation) of the point sites; thus, cheapest routes can be found in time O(n log n) in this case.

(5) In the problem of shortest paths in an arrangement, one is given a set of n lines in the plane, and points s and t on the lines, and must compute a shortest s-t path that is contained within the union of the lines. Since the arrangement can be computed in time O(n^2) (see the chapter on arrangements by Agarwal and Sharir [2]), and shortest paths in planar graphs can be computed in linear time ([198]), the problem is trivially solved in time O(n^2). It is an intriguing open question whether there exists a subquadratic-time algorithm. There has been partial progress towards addressing this question: Bose et al. [79] give a 2-approximation algorithm that requires O(n log n) time, and Eppstein and Hart [152] give an algorithm for computing an exact shortest path in time O(n + k^2), where k is the number of different line orientations.

(6) In the asteroid avoidance problem, one is given a set of obstacles, each moving along a fixed (known) trajectory, and the problem is to find a minimum-time obstacle-avoiding path for a point robot that is subject to a velocity bound. This problem was first studied by Reif and Sharir [336], who show that the general problem is PSPACE-hard in three dimensions and that the two-dimensional problem can be solved in exponential time in the case of pure translational motion. Canny and Reif [83] prove that the two-dimensional problem is NP-hard, even for convex translating obstacles, moving with fixed velocity, that do not collide. (Effectively, the fact that the obstacles are moving lifts the dimension of the problem from two to three, making it substantially more difficult; see Section 6.) Canny [81] has given a PSPACE algorithm to solve the asteroid avoidance problem.
5. On-line algorithms and navigation without maps

In all of the optimal path problems we have discussed so far, we have assumed that we know in advance the exact layout of the environment in which the robot moves; i.e., we assume
we are given a perfect map. In many situations, the robot does not have prior information about the obstacles in the environment; e.g., the robot may be placed in a completely new environment, or it may roam on a factory floor or in an office building where there are frequent changes in the positions of obstacles. In such cases, we may have perfect information about the robot's current location, as well as the location of the goal, but we acquire information about the environment on-line, as the robot encounters or senses obstacles.

Common assumptions about the sensory capabilities of the robot include (1) a tactile robot, in which the robot learns of the boundary of an obstacle only as it encounters it, and moves along it; or (2) a vision-based robot, in which the robot learns of obstacles only as it is able to see them. (It is common to assume that the robot has 360-degree vision, allowing it to look in all directions; however, this assumption may be relaxed as well.) For a vision-based robot, there are also different assumptions that can be made about the nature of the sensor: (a) it may be that it knows only about that portion of the obstacle boundaries that it has seen; or (b) it may be that it has recognition capabilities, so that as soon as it sees any part of the boundary of an obstacle, it is able to determine the shape, size, and position of the obstacle, thereby learning the entire obstacle boundary.

Our goal is to obtain a navigation strategy that controls the motion of the robot, while utilizing sensory input, in order to minimize some notion of length (e.g., Euclidean length) of the path of the robot, which is to get from a start point, s, to a goal (target) location, t (which may be a point, a line, a region, etc.). The environment is assumed to be a polygonal domain, P, that is unknown to us. Often, there is very special structure assumed about the obstacles that constitute the holes of P.

Some of the first work that obtained worst-case bounds on the length of a path produced by a navigation strategy was that of Lumelsky and Stepanov [268,269]. They give navigation strategies for a tactile robot moving among a set of arbitrary obstacles. The robot is assumed to know, at any given time, its own position, the position of the goal, and whether or not it is in contact with an obstacle; it is assumed to have only a small constant-size memory for recording other information that is learned along the way. One simple strategy ("BUG1") attempts to head towards the goal until an obstacle is encountered; then, the robot follows the boundary of the obstacle, all the way around the perimeter, keeping track of the point p that is closest to the goal; finally, the robot returns to point p (by following the boundary) and heads again towards the goal. This strategy finds a path whose length is at most d_2(s, t) + (3/2)L, where L is the sum of the perimeters of the obstacles that intersect a disk of radius d_2(s, t) centered at t. Within their model, they also prove a lower bound, showing that no strategy can guarantee a path length better than d_2(s, t) + L - ε, for any ε > 0. A second strategy ("BUG2") attempts to stay on the straight segment st, at the cost of possibly visiting obstacles more than once. BUG2 is shown to produce a path of length at most d_2(s, t) + Σ_i n_i L_i / 2, where n_i is the number of times st crosses the i-th obstacle, and L_i is the perimeter of the i-th obstacle. For convex obstacles, BUG2 is essentially optimal in their model.
See also [130] for some further work on an extension of the Lumelsky-Stepanov model. Other papers on maze traversal strategies include [75,328], as well as the surveys of Lumelsky [265-267]. While the Lumelsky-Stepanov result gives a worst-case additive error bound on the robot's path length, it does not give a bound on the ratio between the robot's path length and the (true) shortest path length, d(s, t; P), in P. In order to evaluate the effectiveness
of a navigation strategy, α, in an on-line setting, it is now common to use the notion of a competitive ratio, ρ(n), where n = d_2(s, t) here denotes the Euclidean distance between s and t, and the ratio ρ(n) is defined by

    ρ(n) = sup_P  d_α(s, t; P) / d(s, t; P),

where d_α(s, t; P) is the length of the s-t path produced by strategy α in P, and we assume that a unit diameter circle can be inscribed in each obstacle. In other words, our goal is to minimize the ratio between the length of the path obtained using the strategy to the length of a shortest path (with perfect information); the competitive ratio ρ(n) is the maximum value of this ratio, over all environments having a given start-to-goal distance n.

The competitive ratio, in this context, has been studied first by Papadimitriou and Yannakakis [319], and independently by Eades, Lin, and Wormald [146]. In particular, [319] show that if the obstacles are all axis-aligned squares, and the robot is equipped with a vision sensor, then one can achieve a competitive ratio of ρ(n) = (√26)/3 + o(1), for all n. (The bound is 5/3 if s and t are points having the same x- or y-coordinate.) If the obstacles are in fact aligned unit squares, they prove that ρ(n) is at least 3/2, while supplying a strategy that achieves ρ(n) = 3/2 + o(1), for all n. (It is now known that a ratio of ρ(n) = 3/2 is possible for square obstacles, even if they have different sizes and are not axis-aligned; see the citation of Chan and Lam [87] below.) Further, by an adversary argument, they show that, for arbitrary (e.g., "thin") aligned rectangular obstacles,^ there is no strategy with a bounded competitive ratio, for a robot with line-of-sight vision. In fact, in [146,319], it is shown that if the goal region t is an infinite vertical line ("wall"), at distance n from s, and the obstacles are aligned rectangles, then ρ(n) = Ω(√n). Blum, Raghavan, and Schieber [74] provide a "sweep algorithm" for this wall problem that shows a matching upper bound of ρ(n) = O(√n), both for a vision-based robot and for a tactile robot (utilizing a "doubling" search procedure, suggested by Baeza-Yates, Culberson, and Rawlins [49]). If the obstacles are aligned rectangles having aspect ratio at most f and longest side at most g (and shortest side at least 1), then Mei and Igarashi [275] give an "adjusted bias heuristic" that achieves competitive ratio 1 + O(f), if f = o(√n) and fg = o(n), assuming s and t have a common x- or y-coordinate (the competitive ratio is slightly higher otherwise). (See also [276].)

Blum et al. [74] also study the "room problem", in which P consists of an n-by-n (aligned) square room, with aligned rectangular holes (obstacles). For the room problem, they give an algorithm achieving ρ(n) = 2^{O(√(log n · log log n))}. Bar-Eli et al. [54] have improved upon this result, establishing a tight bound of ρ(n) = Θ(log n), for deterministic algorithms. The wall and room problems can be combined, resulting in a competitive ratio of ρ(n) = O(√n) for point-to-point navigation among aligned rectangular obstacles. While, for deterministic algorithms, we have the tight bound ρ(n) = Θ(√n) for the competitive ratio in both the wall and point-to-point versions of the problem, it has been shown by Berman et al. [62] that randomized strategies are "powerful", in that one can obtain a competitive ratio of O(n^{4/7} log n) for the wall and point-to-point navigation problems among aligned rectangular obstacles.
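
The doubling procedure of [49] is simple enough to state in a few lines of Python (our sketch): probe right to distance 1, left to 2, right to 4, and so on, doubling the depth each time; on a line, the total distance walked is at most 9 times the distance to the target (the classic "lost cow" bound).

    def doubling_search(found, limit=60):
        # Search an unknown target on a line, starting at the origin.
        # `found(x)` reports whether the target is at integer coordinate x.
        walked, pos, depth, sign = 0, 0, 1, +1
        for _ in range(limit):
            goal = sign * depth
            while pos != goal:
                pos += 1 if goal > pos else -1
                walked += 1
                if found(pos):
                    return walked
            depth, sign = depth * 2, -sign
        raise RuntimeError("no target within probe limit")

    print(doubling_search(lambda x: x == -5))  # 19, within the bound 9 * 5 = 45
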
For randomized strategies, we define ρ(n) to be the
supremum of the ratio of the expected path length to d(s, t; P), assuming that P is selected by an oblivious adversary, with knowledge of the strategy, but not of the coin tosses made during a walk using the strategy. Berman and Karpinski [63] obtained a randomized strategy for general convex obstacles whose competitive ratio is sublinear in n. Blum and Chalasani [71] have shown that if the robot is to make multiple trips from s to t, it can make effective use of information gained on each trip, allowing it to improve its performance as it learns more about the environment. In particular, they show a strategy in which, for every i ≤ n, the i-th trip of the robot is a path of length O(√(n/i)) times d(s, t; P). Their results apply to the wall problem, as well as the point-to-point problem, in the presence of aligned rectangular obstacles. They also provide a lower bound, for deterministic strategies, of Ω(√(n/k)) on the cumulative k-trip competitive ratio (which measures the ratio of the total length of all k trips, over k times d(s, t; P)).

If the obstacles are arbitrary nonaligned rectangles, then the competitive ratio for the room problem goes up: Blum et al. [74] show that ρ(n) = Ω(√n); they also give (nontight) upper bounds that either assume an excluded range of orientations of the rectangles, or allow a randomized algorithm. If the nonaligned rectangles have aspect ratio at most r, then a strategy of Chan and Lam [87] obtains a competitive ratio of (r/2 + 1), which is shown to be tight. In particular, in the case of nonaligned squares (r = 1), Chan and Lam's result implies a competitive ratio of 3/2, improving the earlier bound of Papadimitriou and Yannakakis [319]. An asymptotic competitive ratio of 3/2 has been obtained by Bezdek [69] for the case of nonaligned cubes in three dimensions; this result in fact implies the two-dimensional result for squares. For even more general environments P, Blum et al. provide some results in special cases of convex obstacles, as well as general polygonal domains ("mazes"), where the competitive ratio is Θ(|V|), where V is the set of vertices of P. A simple "L-shaped" maze example shows that even a randomized algorithm cannot achieve a competitive ratio better than (|V| - 10)/6. Blum et al. also consider the three-dimensional version of the wall problem, obtaining a lower bound of Ω(n^{2/3}) for the competitive ratio, and matching upper bounds in special cases (obstacles that are generalized cylinders, in the wall problem, or aligned boxes, in the point-to-point problem). Berman et al. [62] show that randomization can, again, help, allowing a strategy with competitive ratio O(n^{2/3-ε}) for the point-to-point and wall problems.

The on-line version of the weighted region problem (Section 4.3) has been studied by Reif and Wang [341], who consider an environment in which the axis-aligned rectangular "obstacles" are penetrable, with each having a weight (cost per unit distance) greater than one (the background has weight one). Using a modified sweeping strategy of Blum et al. [74], they show that a competitive ratio of O(√n) is achievable in the wall problem with penetrable obstacles (and this is tight). See their paper for generalizations to "recursive" weighted environments, in which penetrable obstacles may include other penetrable obstacles of higher weight.

^ Note that if the obstacles (holes of P) are rectangles or squares, they are disjoint, but allowed to touch; however, the robot can "squeeze" between two touching obstacles. Thus, we cannot synthesize nonconvex obstacles by putting together rectangular obstacles. Also, unless otherwise stated (e.g., in the "room problem"), we are assuming that P is the infinite plane, with holes that are the obstacles; i.e., there is no outer boundary of P.
In the search version of the on-line problem, our objective is to search for an entity at some unknown target location in an unknown environment, minimizing the total distance traveled from the starting point, until the visually identifiable target is first seen; see [49, 234,243].

While for general simple polygons P, no constant competitive ratio is possible when searching for a target t, Klein [242] has shown that if one is navigating in a special type of simple polygon, called an s-t street (for s and t on the polygon boundary), then there is a strategy for searching for a path from s to t that achieves a competitive ratio of 1 + (3/2)π ≈ 5.71. (Points s and t split the boundary of P into two subchains; P is an s-t street if each point on one subchain is visible from some point on the opposite subchain.) Here, both P and the coordinates of t are unknown to the robot; the robot is equipped with a vision sensor, and we assume that the goal t is visually identifiable. Streets (as well as "star-shaped polygons"; see below) enjoy the special property that in the tree of shortest paths from s, left-turning paths and right-turning paths are grouped. Klein's strategy is based on the idea of minimizing the "local absolute detour," while moving from one point known to be on the shortest s-t path to another such point. Klein's analysis was improved by Icking [215], who proved a bound of approximately 4.44 on the competitive ratio. Kleinberg [243] gives a simpler strategy and analysis, achieving a competitive ratio of 2.61, and shows further that it achieves an optimal ratio of √2 in the case of rectilinear streets. Lopez-Ortiz and Schuierer [261] present a strategy, similar to Klein's, having a substantially simpler analysis, resulting in a ratio of π + 1 ≈ 4.14; they show that a hybrid strategy based on this one achieves a ratio of (1/2)√(π^2 + 4π + 8) ≈ 2.76. Lopez-Ortiz and Schuierer [260] have given a further improved strategy, using ideas similar to Kleinberg's (but with a substantially more complex analysis), achieving competitive ratio √(1 + (1 + π/4)^2) ≈ 2.05. Lopez-Ortiz and Schuierer [263] have given an extension of the original approach of Klein, to "continuous local absolute detour," that results in a competitive ratio of 2.03; further, by combining this approach with their earlier method ([260]), Lopez-Ortiz and Schuierer obtain a hybrid strategy achieving competitive ratio of 1.73. Most recently, Semrau [358] has developed a strategy that results in a competitive ratio of π/2 ≈ 1.57, which is getting very close to the theoretical lower bound of √2.

OPEN PROBLEM 13. Is there a strategy achieving a competitive ratio of √2 in streets?
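
The closed-form ratios quoted above are easily checked numerically (a throwaway Python snippet, ours):

    import math

    pi = math.pi
    for name, value in [
        ("Klein [242]:  1 + (3/2)pi",            1 + 1.5 * pi),                       # 5.71
        ("[261]:        pi + 1",                 pi + 1),                             # 4.14
        ("[261] hybrid: sqrt(pi^2+4pi+8)/2",     math.sqrt(pi**2 + 4*pi + 8) / 2),    # 2.76
        ("[260]:        sqrt(1+(1+pi/4)^2)",     math.sqrt(1 + (1 + pi/4) ** 2)),     # 2.05
        ("Semrau [358]: pi/2",                   pi / 2),                             # 1.57
        ("lower bound:  sqrt(2)",                math.sqrt(2)),                       # 1.41
    ]:
        print(f"{name:40s} {value:.2f}")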

The results on searching in streets discussed above have assumed that the robot does not know the location of the target t. For such problems, it is easy to show that √2 is a lower bound on the competitive ratio (see Kleinberg [243]). However, Lopez-Ortiz and Schuierer [260] have shown a √2 lower bound on the competitive ratio for deterministic strategies, even if the coordinates of the target are known to the robot and the street is rectilinear. Thus, for rectilinear streets, knowledge of the target location does not assist the robot. Lopez-Ortiz and Schuierer [264] give a strategy with a constant competitive ratio (12.72) that finds a path to a target point in an unknown star-shaped polygon,^ even if the coordinates of the target point are unknown, and it is not necessarily on the polygon's boundary. They also prove a lower bound of 9 on any strategy that must find a path in an unknown star-shaped polygon to a target point whose coordinates are not specified. Star-shaped polygons, like streets, enjoy the property that the left-turning and right-turning paths in the shortest path tree rooted at s are grouped. Note too that a star-shaped polygon can be made into
an s-t street, for any vertex s, by adding, if necessary, a vertex t such that the diagonal st intersects the kernel.*

Some results are also known for more general polygons than streets or star-shaped polygons. Datta and Icking [129] introduced the notion of a "generalized street;" for rectilinear generalized streets they give an algorithm with competitive ratio of √82 ≈ 9.06 (or 9, in the L_1 metric) and prove a lower bound of 9 on the competitive ratio of any strategy, assuming the target t is not known to the robot. (A simple polygon P is a generalized street (G-street) with respect to two points s and t on its boundary if for any point p on the boundary of P, there exists a horizontal chord, whose endpoints are on different subchains of the boundary, such that p is weakly visible to the chord. The class of generalized streets strictly contains the class of streets.) Lopez-Ortiz and Schuierer [260] show a lower bound of 9 even in the case that the coordinates of the target are known. Datta, Hipke, and Schuierer [128] define even more generalized notions of rectilinear streets ("HV-streets" and "θ-streets"), for which they prove bounds in the L_1 metric: 14.5 is an upper and lower bound on the competitive ratio for HV-streets, while 19.97 is an upper bound for θ-streets. (See also Schuierer [354] for lower bounds on the competitive ratio in θ-streets.) Lopez-Ortiz and Schuierer [262] give a competitive strategy (with ratio 80) in arbitrarily oriented (nonrectilinear) G-streets. Kleinberg [243] considers searching in general rectilinear simple polygons, obtaining a strategy with competitive ratio O(m), where m is the number of essential cuts, which may be much smaller than the number of vertices.

Streets have also been studied with respect to searching in link distance, instead of L_2 or L_1 length. Ghosh and Saluja [174] give a deterministic strategy for searching for an s-t path in a street, using at most 2m - 1 links, where m is the link distance from s to t. Further, they show that this bound is best possible for deterministic strategies in general streets. For rectilinear streets, a rectilinear link distance of m + 1 is achievable, and this is best possible; here, m is the rectilinear link distance from s to t. Ghosh and Saluja observe that in general (non-street) simple polygons, no competitive ratio better than n/4 is possible, where n is the number of vertices.

On-line path problems arise also for objectives other than that of finding a path from a start s to a target point t. Icking and Klein [216] have given a competitive strategy for the problem of searching for the kernel of an unknown star-shaped polygon; the goal is for the robot to get to some point of the kernel. (A vision-equipped robot can recognize when it reaches a point in the kernel.) The competitive ratio is based on the distance from s to the point of the kernel that is closest to s. Icking and Klein [216] obtain a ratio of 5.48 (which they have subsequently improved slightly); they also prove a lower bound of √2 (which was subsequently increased to 1.48 by [264]). The best current bound is that of Shin et al. [362], who obtain a competitive ratio of 1 + 2√2 ≈ 3.83. Lopez-Ortiz and Schuierer [264] give a constant competitive ratio (of 46.35) for the on-line recognition of a star-shaped polygon, in which the robot must execute a path until it can prove or disprove the star-shapedness of P. They also show a lower bound of 9 on the competitive ratio of any such strategy.

^ A polygon is star-shaped if there exists a point within it from which all other points of the polygon are visible.
* The kernel of a polygon P is the locus of points within P from which every point of P is visible. If the kernel is non-empty, then the polygon is star-shaped.
In the explore (or mapping) version of the problem, our objective is to execute a path such that the robot can map out the entire space, by seeing every point of free space; see [136,243]. (See also Section 7.4 on the "watchman route problem," which is the off-line version of the problem: to compute a shortest route that sees the entire space when the map is given.)

In particular, Deng, Kameda, and Papadimitriou [136] have shown that no competitive strategy exists, in general, if there is an unbounded number of obstacles; however, if the number of obstacles is bounded, they obtain a strategy with a constant competitive ratio (O(k), where k is the number of obstacles). In particular, if P is a simple rectilinear polygon, there is a 2-competitive deterministic algorithm [136,137] and a 5/4-competitive randomized algorithm [243] for the exploration problem. For general simple polygons, the competitive ratio of [136] is proved to be constant, but is only estimated to be in the thousands. A bound of 133 was later given by Hoffman et al. [211], and has recently been improved to (18√2 + 1) < 26.5 by the same set of authors [212].

Kalyanasundaram and Pruhs [231] study the search and explore problems for a vision-equipped robot among a set of k disjoint convex obstacles having average aspect ratio a. (Here, the aspect ratio of a convex body is the ratio of the radius of the smallest circumscribing circle to the radius of a largest inscribed circle.) They obtain tight bounds on the competitive ratio for both problems: Θ(min{k, √(ak)}). (In the mapping problem of [231], the robot is required to see all of the boundary of the work space, but not necessarily all of its interior; this is in contrast with the mapping problem of [136].) They also show that the natural greedy "nearest neighbor" heuristic for the search problem can be quite bad, showing an Ω(2^k) lower bound on the competitive ratio of that strategy. In the visual traveling salesperson problem (visual TSP), the robot's objective is to visit and traverse the boundary of every obstacle; this formulation is meant to model the fact that a robot (equipped with a vision sensor) may have to get close to an object in order to map it completely. For the visual TSP, Kalyanasundaram and Pruhs [232] give a 19-competitive algorithm, based on applying their 18-competitive algorithm for the "online TSP" in planar graphs to a type of "relative neighborhood graph" in the presence of obstacles. (See also [233].)

For other related work, without theoretical guarantees on the competitive ratio but useful in autonomous vehicle navigation, we refer the reader to the book edited by Iyengar and Elfes [225], as well as [226,281,293,329,330]. Mitchell [281] has considered a model, based on a special case of the weighted region problem (Section 4.3), in which the robot gathers information, which it accumulates in a map, and at each step applies the best possible local strategy, assuming travel within known free space has a cost-per-unit-distance of 1, while travel in unexplored terrain has cost α > 1 per unit distance. For a recent survey of on-line searching and navigation algorithms, see Berman [61].

6. Shortest paths in higher dimensions

We turn our attention now to the problem of computing shortest paths in higher-dimensional geometric spaces. Most of the discussion will focus on three-dimensional spaces, since most effort has been devoted to this case. We begin with a few definitions. A polyhedral domain is a connected subset, P, of ℜ^3 whose boundary consists of a union of a finite number of triangles. (The definition is readily extended to d dimensions, where the boundary must consist of a union of simplices.) The complement of P consists of connected (polyhedral) components, which are the obstacles.

A polyhedral domain is orthohedral if each boundary facet is orthogonal to one of the coordinate axes. A polyhedral domain P is a (convex) polytope if it is the convex hull of its vertices. A polyhedral surface is a connected union of a finite number of polygonal faces, with any two polygons intersecting in a common edge, a common vertex, or not at all, and each edge belonging to exactly two polygons. Throughout this section, n will denote the number of edges in a polyhedral domain or surface. Without loss of generality, we can assume that all faces of a polyhedral surface are triangles, since a polygon triangulation algorithm can be applied to decompose each polygonal face into triangles, introducing a total of O(n) additional edges and faces.
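
This reduction is elementary when the faces are convex; the one-liner below (our own sketch, with a hypothetical function name) fans each face from its first vertex. A non-convex face would need a genuine polygon-triangulation algorithm.

```python
def fan_triangulate(face):
    """Split a CONVEX polygonal face (a list of vertex tuples, in order) into
    triangles by fanning from vertex 0; over all faces this introduces only
    O(n) extra edges, as claimed above."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

# A square face becomes two triangles.
print(fan_triangulate([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]))
```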

6.1. Complexity

In three or more dimensions, most shortest-path problems are very difficult. This is true even for the most basic Euclidean shortest-path problem in a three-dimensional polyhedral domain P, and even if the obstacles are convex or the domain P is simply connected. There are two sources of complexity, as we now discuss.

One difficulty arises from algebraic considerations. In general, the bend points of a shortest path in a polyhedral domain need not lie on any kind of discrete graph. Shortest paths in a polyhedral domain will be polygonal, with bend points that generally lie interior to obstacle edges, obeying a simple "unfolding" property: the path must enter and leave an edge at the same angle to it. It follows that any locally optimal subpath joining two consecutive obstacle vertices can be "unfolded" at each edge along its edge sequence, thereby obtaining a straight segment. (The edge sequence of a path is the ordered list of obstacle edges that are intersected by it.) Given an edge sequence, this local optimality property uniquely identifies a shortest path through that edge sequence. However, comparing the lengths of two paths, each one shortest with respect to one of two different edge sequences, may require exponentially many bits, since the algebraic numbers that describe the optimal path lengths may have exponential degree [51,52].

A second difficulty arises from combinatorial considerations. The number of combinatorially distinct (i.e., having distinct edge sequences) shortest paths between two points may be exponential. Canny and Reif [83] have used this fact to prove that the shortest-path problem is NP-hard, even if the obstacles are simply a set of parallel triangles. While this result is strong evidence that we will not be able to solve the problem exactly in polynomial time, it does not rule out the possibility that we could construct a shortest path map in time proportional to its combinatorial size, which may be exponential in general, but far smaller in many practical cases.

OPEN PROBLEM 14. Can one compute a shortest path map for a polyhedral domain in output-sensitive time?
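
To make the unfolding property concrete, here is a toy sketch (our own construction, not a published algorithm): given an edge sequence as a chain of triangles, lay the triangles out isometrically in the plane and measure the straight segment between the images of s and t. For brevity, s and t are taken to be the apex vertices of the first and last triangles, and no check is made that the segment actually stays inside the unfolded strip (which a real algorithm must do).

```python
import math

def d3(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def place(p2, q2, dp, dq, away):
    """Place, in the plane, the point at distances dp from p2 and dq from q2,
    on the side of the line p2->q2 opposite the 2D point `away`."""
    d = math.dist(p2, q2)
    a = (dp * dp - dq * dq + d * d) / (2 * d)     # offset along p2->q2
    h = math.sqrt(max(dp * dp - a * a, 0.0))      # perpendicular offset
    ux, uy = (q2[0] - p2[0]) / d, (q2[1] - p2[1]) / d
    fx, fy = p2[0] + a * ux, p2[1] + a * uy
    cross = lambda r: ux * (r[1] - p2[1]) - uy * (r[0] - p2[0])
    c1, c2 = (fx - h * uy, fy + h * ux), (fx + h * uy, fy - h * ux)
    return c1 if cross(c1) * cross(away) < 0 else c2

def unfold_length(tris):
    """tris: >= 2 triangles in 3-space (tuples of vertex tuples), consecutive
    ones sharing an edge.  Returns the length of the straight segment between
    the apexes of the first and last triangles in the common unfolding."""
    A, B, C = tris[0]
    pos = {A: (0.0, 0.0), B: (d3(A, B), 0.0)}
    pos[C] = place(pos[A], pos[B], d3(A, C), d3(B, C), (0.0, -1.0))
    for prev, cur in zip(tris, tris[1:]):
        u, v = set(prev) & set(cur)               # the shared edge
        (apex_prev,) = set(prev) - {u, v}         # fold away from this vertex
        (w,) = set(cur) - {u, v}                  # the new vertex to place
        pos[w] = place(pos[u], pos[v], d3(u, w), d3(v, w), pos[apex_prev])
    (s,) = set(tris[0]) - set(tris[1])
    (t,) = set(tris[-1]) - set(tris[-2])
    return math.dist(pos[s], pos[t])

# Two faces of a unit cube, bent at a right angle: unrolled flat, the geodesic
# from corner (0,0,0) to corner (1,1,1) is a straight segment of length sqrt(5).
T0 = ((0, 0, 0), (1, 0, 0), (1, 1, 0))
T1 = ((1, 0, 0), (1, 1, 0), (1, 1, 1))
print(unfold_length([T0, T1]))                    # 2.2360679...
```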

Sharir and Schorr [361] gave a doubly exponential time exact algorithm, based on a reduction to an algebraic decision problem in the theory of real closed fields. This result was improved by Reif and Storer [339], who give a singly exponential time algorithm (requiring 2^{n^{O(1)}} time and n^{O(log n)} space), based on the same theory, but using a more efficient reduction.

Finally, Canny [81] has given a PSPACE algorithm, which applies not only to the shortest-path problem in three dimensions, but also to the two-dimensional asteroid avoidance problem (see Section 4.8).

Given the difficulty of solving the general problem exactly, it is natural to consider approximation algorithms for the general case, or to consider special cases in which we can obtain polynomial bounds.

6.2. Special cases

If the polyhedral domain P has only a small number, k, of convex obstacles, a shortest path can be found in n^{O(k)} time, as shown by Sharir [359]. If the obstacles are known to be "vertical buildings" having only k different heights, then shortest paths can be found in time polynomial in n for any fixed k [170]; however, it is not known if this version of the problem is NP-hard if k is allowed to be large. Both of these special cases have worst-case exponential algorithms; is there some nontrivial case of disjoint obstacles in three dimensions that is not hard to solve exactly? We have noted that Canny and Reif's hardness proof applies even to simple (convex) triangular "plates" that lie in parallel planes; however, their construction seems to rely on some edges of the triangles not being axis-parallel. This suggests an interesting question:

OPEN PROBLEM 15. What is the complexity of the Euclidean shortest-path problem in 3-space for obstacles that are disjoint axis-aligned boxes? What about for disjoint (unit) spheres?

If we require paths to stay on a polyhedral surface (i.e., the domain P is essentially 2-dimensional), then the unfolding property of optimal paths can be exploited to yield polynomial-time algorithms. This was first used by Sharir and Schorr [361] to obtain an O(n^3 log n)-time algorithm for convex surfaces. Mitchell, Mount, and Papadimitriou [291] obtained an O(n^2 log n)-time algorithm for general polyhedral surfaces, by developing a continuous Dijkstra method of propagating a shortest path map over the surface, taking advantage of the local optimality (unfolding) property. Chen and Han [100] have improved the time bound even further, obtaining an algorithm requiring O(n^2) time and O(n) space. (The algorithm of [100] relies on the nonoverlapping property of the "star unfolding", as shown by Aronov and O'Rourke [34]; see below.) These algorithms not only construct a shortest path map with respect to a single source, but can be used to construct a geodesic Voronoi diagram for multiple source points within the same time bound (where n now includes the number of source points). One of the most interesting open problems in this area is to break the quadratic time barrier, even for the case of convex polytopes:

OPEN PROBLEM 16. Can one compute shortest paths on the surface of a convex polytope in ℜ^3 in subquadratic time? In O(n log n) time?

Note added in proof: Kapoor [236] has announced a recent advance on this problem.

Several facts are known about the set of edge sequences corresponding to shortest paths on the surface of a convex polytope P in ℜ^3. In particular, Mount [300] has shown that the worst-case number of distinct edge sequences that correspond to a shortest path between some pair of points is O(n^4). Further, Agarwal et al. [3] have shown that the exact set of such sequences can be computed in time O(n^6 β(n) log n), where β(n) = o(log* n). (A simpler O(n^6) algorithm can compute a small superset of the sequences [3].) The number of maximal edge sequences for shortest paths is Θ(n^4), as shown by Schevon and O'Rourke [351]. Some of these results depend on a careful study of the "star unfolding" with respect to a point p on the boundary, ∂P, of P. The star unfolding is the (nonoverlapping [34]) cell complex obtained by subtracting from ∂P the shortest paths from p to the vertices of P, and then "flattening" the resulting boundary. Agarwal et al. [3] have also shown that two-point queries can be answered in time O((√n/m^{1/4}) log n), after spending O(n^6 m^{1+δ}) preprocessing time and storage, for any choice of 1 ≤ m ≤ n^2 and δ > 0. (If one query point always lies on an edge of the polytope, the algorithm can be improved to use O(n^5 m^{1+δ}) preprocessing time and storage and to guarantee O((n/m)^{1/2} log n) query time, for any choice of 1 ≤ m ≤ n.) Further, the geodesic diameter can be obtained in time O(n^8 log n), improving an earlier O(n^{14} log n) bound of O'Rourke and Schevon [311]. Chiang and Mitchell [108] show how two-point queries can be answered efficiently (even in optimal O(log n) time) on nonconvex polyhedral surfaces; however, the preprocessing and space complexities are even higher than in the convex case. Performing efficient two-point queries while using only a small polynomial amount of storage remains an open problem:

OPEN PROBLEM 17. How efficiently, and using what size data structure, can one preprocess a polyhedral surface for exact two-point queries? Can exact two-point queries be done in sublinear query time using subquadratic storage? What if the surface is convex?

In the special case of terrain surfaces (polyhedral surfaces having at most one intersection point with any line parallel to the z-axis), de Berg and van Kreveld [132] have studied various optimal path problems, including some bicriteria versions, with constraints imposed on the maximum allowed altitude. They build a "height-level map," in time O(n log n), stored implicitly using O(n) space, which enables O(log n)-time queries to compute a shortest s-t path that stays below a given elevation z, or an s-t path having a minimum total ascent.

6.3. Approximation algorithms

Papadimitriou [317] was the first to study the general problem from the point of view of approximations. He gave a fully polynomial approximation scheme that produces a path guaranteed to be no longer than (1 + ε) times the length of a shortest path. His algorithm requires time O(n^4 (L + log(n/ε))^2 / ε^2), where L is the number of bits necessary to represent the value of an integer coordinate of a vertex of P. Clarkson [120] gives an alternative method, requiring roughly O(n^2 log^{O(1)} n / ε^4) time (the exact expression also includes a precision parameter that depends on the geometry of P).

Choi, Sellen, and Yap [115,114] have re-examined closely the analysis of Papadimitriou and have addressed some inconsistencies found in the original algorithm. To this end, it is important to distinguish between the bit framework and the algebraic framework for studying the complexity of the problem. Almost all shortest path algorithms (and most computational geometry algorithms) assume an algebraic model of computation, in which the time complexity is measured in terms of the number of algebraic operations performed on real numbers. It is assumed that these operations are performed exactly. In the bit framework, though, time complexity is measured in terms of the number of Boolean operations on bits, assuming the input is encoded with binary strings. Given the nature of current computer hardware, it is likely that the bit framework more accurately models actual computation times. Choi, Sellen, and Yap [115] give upper bounds on the bit complexity of the approximate shortest-path problem. They have also introduced the important notion of "precision-sensitivity" in algorithms, where the goal is to write the complexity in terms of an implicit parameter, δ, that measures the implicit precision of the input instance [114]. For example, in the shortest-path problem, they define δ = (d_2 − d*)/d* to be the relative difference between the length d* of an optimal path and the length d_2 of the second-shortest locally optimal path; i.e., d_2 > d* is the length of a shortest path that uses an edge sequence distinct from any optimal edge sequence, but is closest in length to d* among all such locally optimal paths. Provided that the optimal edge sequence is in some sense nondegenerate, one obtains an approximation algorithm that is polynomial in 1/δ and the other parameters of the input, with only linear dependence on 1/ε.

Recently, Har-Peled [194] has shown how to compute an approximate shortest path map in polyhedral domains. In particular, he shows that, for a given source point s and real parameter 0 < ε < 1, a subdivision of ℜ^3 can be constructed, in time polynomial in n and 1/ε, so that for any point t a (1 + ε)-approximation of the length of a shortest s-t path can be reported in time O(log(n/ε)). His technique is to sprinkle a carefully selected set S of discrete points within P and to record with each point of S a "weight" that corresponds to the approximate shortest path distance from s to it; the approximate shortest path map is then given by the additive-weight Voronoi diagram of S.

In addition to approximation results for shortest paths in polyhedral domains, there have been a number of results on approximating shortest paths on polyhedral surfaces. Hershberger and Suri [208] obtain a 2-approximation for a shortest s-t path on a convex polytope in time O(n), using a relatively simple algorithm that considers the shortest path on the surface of the polytope's bounding box, between an appropriate pair of points. An extension of the algorithm allows one to compute a 2.38(1 + ε)-approximate shortest path tree, SPT(s), in O(n log n) time. The method also results in a 2k-approximation algorithm for shortest paths in a polyhedral domain consisting of k convex polytopes. Agarwal et al. [6] extend the method of [208] by surrounding the input (convex) polytope with a tighter-fitting constant-size (depending on ε) bounding polytope, which approximately preserves shortest path distances.
The result is that in time O(n log(1/ε) + 1/ε^3) one can compute a (1 + ε)-approximate shortest s-t path, for any 0 < ε ≤ 1. (The approximate length of a shortest path can be reported in time O(n + 1/ε^3).) Har-Peled [193,192] improves this result, obtaining results for the approximate two-point query version: he gives an O(n)-time algorithm to preprocess a convex polytope so that a two-point query can be answered in time O((log n)/ε^{3/2} + 1/ε^3), yielding the (1 + ε)-approximate shortest path distance, as well as a path having O(1/ε^{3/2}) segments that avoids the interior of the input polytope. He also gives an O(n + 1/ε^{O(1)})-time algorithm to compute an approximate diameter of the polytope's surface, obtaining a pair of points on the surface whose shortest path distance is at least (1 − ε) times the diameter.

Varadarajan and Agarwal [382] have considered the problem of approximating shortest paths on general (nonconvex) polyhedral surfaces. They have obtained the first subquadratic-time algorithms for provably good approximating paths, computing a 13-approximation in O(n^{5/3} log^{5/3} n) time, or a 15-approximation in O(n^{8/5} log^{8/5} n) time. Their method is based on a partitioning of the surface into O(n/r) patches, each having at most r faces, using a planar separator theorem. (The parameter r is chosen to be n^{1/3} log^{1/3} n or n^{2/5} log^{2/5} n, respectively.) Then, on the boundary of each patch, a carefully selected set of points ("portals") is chosen, and these are interconnected with a graph that approximates shortest paths within each patch. Finally, Dijkstra's algorithm is used to search for a shortest path in the resulting graph, which is proven to contain an approximately shortest path.

OPEN PROBLEM 18. Can one compute a (1 + ε)-approximate shortest path on a nonconvex polyhedral surface (or even on a terrain) in subquadratic time? Can one compute an O(1)-approximate shortest path in close to linear time?
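
Before continuing with surfaces, here is a schematic of the weighted-sample idea behind Har-Peled's approximate shortest path maps mentioned above (the careful choice of samples is the entire difficulty and is omitted here; the function name and toy data are ours):

```python
import math

def approx_dist(samples, t):
    """samples: pairs (p, w), where p is a sample point in the domain and w
    approximates the geodesic distance from the source s to p.  A query is an
    additive-weight Voronoi lookup, min_i (w_i + |t - p_i|), done here by
    brute force; any (1+eps) guarantee comes from how the samples were
    chosen, not from this query."""
    return min(w + math.dist(p, t) for p, w in samples)

# Toy usage with made-up sample points and weights.
samples = [((0.0, 0.0, 0.0), 0.0), ((1.0, 0.0, 0.0), 1.0), ((1.0, 1.0, 0.5), 2.1)]
print(approx_dist(samples, (1.2, 1.1, 0.4)))
```
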

Har-Peled [194] has shown how to compute an approximate shortest path map on polyhedral surfaces, using techniques mentioned above. Given a source point and a parameter 0 < ε < 1, he constructs a subdivision of the surface of size O((n/ε) log(1/ε)), so that a (1 + ε)-approximate shortest path query to any point t can be answered in time O(log(n/ε)), by locating t within the subdivision. The preprocessing time is O(n^2 log n + (n/ε) log(1/ε) log(n/ε)) for general surfaces, and O((n/ε^3) log(1/ε) + (n/ε^{3/2}) log(1/ε) log n) for convex polytopes.

Finally, we mention some investigations of practical methods for computing nearly shortest paths on surfaces. By using the same methods that have been applied to the weighted region problem (Section 4.3) in subdivisions, Lanthier, Maheshwari, and Sack [247] and Mata and Mitchell [273] have shown that very simple algorithms based on searching a discrete graph (an "edge subdivision graph", or a "pathnet") produce paths that are remarkably close to optimal, and approach optimal as a parameter (δ, or 1/k) approaches zero. The discrete graph can be constructed in advance, to assist in speeding two-point queries. Further, the path obtained can be postprocessed with a local optimality procedure that pulls the path "taut" within the sleeve of facets that it crosses, resulting in a solution even closer to optimal. Using a slightly different discrete graph than the edge subdivision graph of [247,273], Aleksandrov et al. [11] give alternative time bounds that depend on other parameters related to the "fatness" of the triangular facets of a polyhedral surface. They place Steiner points along edges in a geometric progression, as in Papadimitriou [317]. This allows one to compute a (1 + ε)-approximate shortest path from s to t in time O(Mn log Mn + nM^2) (and space O(nM^2)), where M = O((1/(εθ)) log(X/(hε))), X is the length of a longest edge, h is the minimum altitude of a triangular facet, θ is the smallest angle of any triangular facet, and 0 < ε < 2/3. By searching a sparser subgraph, they have recently ([12]) improved the time bound to O(Mn log Mn).
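
A bare-bones version of the edge-subdivision idea is easy to state in code (our own sketch: uniform Steiner-point spacing rather than the geometric progression of Aleksandrov et al., so no formal (1 + ε) guarantee is claimed):

```python
import heapq, itertools, math

def surface_graph(tris, k=4):
    """Discrete graph on a triangulated surface: nodes are the vertices plus
    k-1 evenly spaced Steiner points per edge; every pair of nodes on a
    common face is joined by an arc weighted with its 3D chord length.
    Coordinates are rounded so the two copies of a shared edge coincide."""
    adj = {}
    for tri in tris:
        nodes = set()
        for p, q in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            for j in range(k + 1):
                t = j / k
                nodes.add(tuple(round(a + t * (b - a), 9) for a, b in zip(p, q)))
        for u, v in itertools.combinations(nodes, 2):
            w = math.dist(u, v)
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    return adj

def dijkstra(adj, s, t):
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return math.inf

# The two bent unit squares again: the true geodesic has length sqrt(5).
tris = [((0, 0, 0), (1, 0, 0), (1, 1, 0)), ((1, 0, 0), (1, 1, 0), (1, 1, 1))]
adj = surface_graph(tris, k=8)
print(dijkstra(adj, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))   # 2.2360679...
```

Increasing k refines the graph, and the returned length converges toward the true geodesic distance, mirroring the convergence behavior reported in [247,273].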

6.4. Other metrics

Link distance in a polyhedral domain in ℜ^3 can be approximated (within factor 2) in polynomial time, by searching a weak visibility graph whose nodes correspond to simplices in a simplicial decomposition of the domain. The complexity of computing the exact link distance is open.

OPEN PROBLEM 19. How efficiently can link distance be computed in polyhedral domains in 3-space?

For the case of orthohedral domains and rectilinear (L_1) shortest paths, the shortest-path problem in ℜ^d becomes relatively easy to solve in polynomial time, since the "grid graph" induced by the facets of the domain serves as a path-preserving graph that we can search for an optimal path. In ℜ^3, we can do better than to use the O(n^3) grid graph induced by O(n) facets, as shown by Clarkson, Kapoor, and Vaidya [122]; an O(n^2 log n)-size subgraph suffices for the case of n (possibly overlapping) axis-parallel boxes, allowing shortest paths to be found using Dijkstra's algorithm in time O(n^2 log^2 n). More generally, for a set of obstacles given by n axis-aligned (not necessarily disjoint) boxes in ℜ^d, de Berg et al. [133,134] show that one can compute a data structure of size O((n log n)^{d−1}), in O(n^d log n) preprocessing time, that supports fixed-source link distance queries in O(log^{d−1} n) time. Further, this result applies, within the same complexities, to the case of a combined metric, in which path cost is measured as a linear combination of L_1 length and rectilinear link distance (see also Section 4.7). In the case of axis-parallel disjoint box obstacles in ℜ^3, Choi and Yap [116] have shown that rectilinear shortest paths can be computed in time O(n^2 log n). Also, for this same problem in higher dimensions, a recent structural result of Choi and Yap [118,117] may help in devising very efficient algorithms: there always exists a coordinate direction such that every shortest path from s to t is monotone in this direction.
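
The grid-graph idea admits a direct sketch (ours): Dijkstra over the full grid induced by the box facets, which is simple but uses the naive O(n^3) vertex set rather than the smaller Clarkson-Kapoor-Vaidya subgraph.

```python
import heapq, itertools

def l1_shortest_path(boxes, s, t):
    """L1 shortest s-t path among open axis-parallel box obstacles in 3-space,
    via Dijkstra on the grid graph induced by the box facets (plus s and t).
    boxes: list of ((xlo, ylo, zlo), (xhi, yhi, zhi))."""
    coords = [{s[i], t[i]} for i in range(3)]
    for lo, hi in boxes:
        for i in range(3):
            coords[i].add(lo[i]); coords[i].add(hi[i])
    grid = [sorted(c) for c in coords]

    def blocked(p, q):
        # Midpoint test is exact here: between adjacent grid planes, a
        # grid-aligned segment is either entirely inside a box or not at all.
        m = [(a + b) / 2 for a, b in zip(p, q)]
        return any(all(lo[i] < m[i] < hi[i] for i in range(3))
                   for lo, hi in boxes)

    def point(u):
        return tuple(grid[i][u[i]] for i in range(3))

    start = tuple(grid[i].index(s[i]) for i in range(3))
    goal = tuple(grid[i].index(t[i]) for i in range(3))
    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for axis, step in itertools.product(range(3), (-1, 1)):
            v = list(u); v[axis] += step; v = tuple(v)
            if not 0 <= v[axis] < len(grid[axis]):
                continue
            p, q = point(u), point(v)
            if blocked(p, q):
                continue
            nd = d + abs(q[axis] - p[axis])
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Detour around a unit cube sitting between s and t: length 3 + 1 = 4.
boxes = [((1.0, -0.5, -0.5), (2.0, 0.5, 0.5))]
print(l1_shortest_path(boxes, (0.0, 0.0, 0.0), (3.0, 0.0, 0.0)))   # 4.0
```
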

7. Other network optimization problems

Until now, we have been considering problems of computing a shortest path from one point to another (or from one point to all others). We consider now some other network optimization problems, in which the objective is to compute a shortest path, cycle, tree, or other graph, subject to various types of constraints. We focus primarily on two classes of problems: those of finding minimum-cost trees or tours that span some or all elements of a set S. We discuss the resulting "minimum spanning tree" and "traveling salesperson" problems in the next subsections, and then give more details of a general method of obtaining approximations to these problems. The subject of spanning trees and spanners is surveyed extensively in Chapter 9 of this Handbook, by Eppstein [151]. Other well-studied network optimization problems that we do not attempt to survey here include minimum-cost matching (which has polynomial-time exact and approximate solutions; see [36,379,380,389]) and minimum weight triangulation (MWT) (whose complexity status is still open, although constant-factor approximation algorithms exist for both the Steiner and non-Steiner versions; see Bern and Eppstein [65] and Levcopoulos and Krznaric [257]).
We also refer the reader to the article of Smith and Winter [365], which surveys a large class of topological network design problems. Kalyanasundaram and Pruhs [233] survey on-line versions of standard network optimization problems.

7.1. Optimal spanning trees

Minimum spanning trees. A minimum spanning tree (MST) of a set S of n points is a tree of minimum total length whose nodes are the points of S and whose edges are line segments joining pairs of points. The (Euclidean) minimum spanning tree problem can be solved to optimality in the plane in time O(n log n), by appealing to the fact that the MST is a subgraph of the (O(n)-size) Delaunay diagram; after computation of the Delaunay diagram, results of Cheriton and Tarjan [105] can be applied to find the MST in only O(n) additional time.

PROPOSITION 4. An edge in a Euclidean MST is Delaunay.
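
The proposition suggests an immediate sketch (assuming SciPy is available and the input is in general position; Kruskal's algorithm stands in for the faster Cheriton-Tarjan step):

```python
import numpy as np
from scipy.spatial import Delaunay

def euclidean_mst(points):
    """Planar Euclidean MST: by the proposition, every MST edge is Delaunay,
    so Kruskal's algorithm need only consider the O(n) Delaunay edges
    instead of all n(n-1)/2 point pairs."""
    pts = np.asarray(points, dtype=float)
    edges = {tuple(sorted((int(tri[i]), int(tri[(i + 1) % 3]))))
             for tri in Delaunay(pts).simplices for i in range(3)}
    parent = list(range(len(pts)))
    def find(x):                                  # union-find with halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for a, b in sorted(edges, key=lambda e: np.linalg.norm(pts[e[0]] - pts[e[1]])):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            mst.append((a, b))
    return mst                                    # n-1 index pairs

print(euclidean_mst([(0, 0), (2, 0), (1, 1.8), (1, 0.4)]))
```

Kruskal on O(n) edges costs O(n log n), matching the Delaunay computation itself; the Cheriton-Tarjan result brings the post-Delaunay step down to O(n).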

The above proposition remains valid in ℜ^d, for d ≥ 3; however, the result does not lead directly to a subquadratic-time algorithm for the MST in higher dimensions, since there can be Ω(n^2) Delaunay edges, even in ℜ^3. However, geometry can be exploited to avoid examining the full set of n(n−1)/2 = Θ(n^2) weighted edges in the complete graph. Yao [391] was the first to compute an MST in ℜ^d in subquadratic time. His general method yields a time bound of O(n^{2−α_d} (log n)^{1−α_d}), where α_d is a constant depending on the dimension d. His algorithm is based on partitioning the space around each point p into sufficiently small cones so that there is at most one MST edge incident on p per cone, with this edge linking p to its nearest neighbor within that cone. In Yao [391], α_d = 2^{−(d+1)}, but this has improved as better data structures for nearest neighbors have been developed. Agarwal et al. [5] give a randomized algorithm whose expected running time has α_d = 2/(⌈d/2⌉ + 1) − γ, for any fixed γ > 0. In three dimensions, their algorithm requires O(n^{4/3} log^{4/3} n) expected time. (See also Agarwal, Matousek, and Suri [7], who study maximum spanning trees (Section 7.1); a variant of their somewhat simpler randomized algorithm applies also to minimum spanning trees.) These algorithms exploit the close relationship between the problem of computing an MST and that of computing a bichromatic closest pair of points between n red points and m blue points. Letting T_d(n, m) denote the complexity of solving the bichromatic closest pair problem, Agarwal et al. [5] show that the Euclidean MST can be computed in time O(T_d(n, n) log^d n), or in time O(T_d(n, n)) if T_d(n, n) is superlinear (i.e., Ω(n^{1+ε}) for some ε > 0). Since they give a randomized algorithm for the bichromatic closest pair with expected time O((nm)^{1−1/(⌈d/2⌉+1)+ε}), their result implies that the MST can be computed in expected time O(n^{2−2/(⌈d/2⌉+1)+ε}). Callahan and Kosaraju [80] show a bound of O(T_d(n, n) log n), while Krznaric, Levcopoulos, and Nilsson [245], as well as Kapoor [235], obtain O(T_d(n, n)). These bounds hold for any L_p metric (p ≥ 1); O(T_d(n, n)) is optimal in the algebraic computation tree model. For some L_p metrics, more efficient algorithms are known. Agarwal et al. [5] give a deterministic algorithm requiring O(n log^d n) time for any metric having a polyhedral unit ball (e.g., L_1 and L_∞); see also Gabow, Bentley, and Tarjan [162].

In three dimensions, there is now an optimal O(n log n)-time algorithm for the MST in the L_1 or L_∞ metric, due to Krznaric, Levcopoulos, and Nilsson [245] (improving an earlier O(n log n log log n) bound of [162]). Clarkson [121] and Vaidya [378] have given algorithms that are particularly efficient for points that are independently and uniformly distributed in a unit cube in ℜ^d. Their algorithms have expected time O(n α(cn, n)), where c is a constant depending on the dimension, and α is the very slowly growing inverse Ackermann function.

Several results are also known about approximation algorithms for the MST. Clarkson [119] gives an O(n(log n + (1/ε) log δ))-time (O(n log δ)-space) algorithm for a (1 + ε)-approximate Euclidean MST in ℜ^3; he also gives results in higher dimensions for the L_1 metric (O(n(α(m, n) + log^{d−1}(1/ε) log δ)) time, O(n(log(1/ε) + log δ)) space). Here, m = O(n), and δ is a parameter that depends on the data: it is the ratio between the maximum and the minimum distance between pairs of points. Vaidya [377] gives a (1 + ε)-approximation for any L_p metric, requiring time O(n (log n)^{d−1} ε^{−(d−1)}), which he later improves to O(ε^{−d} n log n) time [378]. Callahan and Kosaraju give an approximation, based on their "well-separated pair decomposition," for the Euclidean MST that requires time O(n log n + (ε^{−d/2} log(1/ε)) n). Das, Kapoor, and Smid [127] have studied the problem of r-approximating the Euclidean MST, for large values of r: for 4 < r < n, and for any d ≥ 1, they prove a lower bound of Ω(n log(n/r)), in the algebraic tree model of computation, and prove that this is tight by exhibiting an algorithm having the same asymptotic time complexity. If the (non-algebraic) floor function and random access operations are permitted, then they obtain a constant-factor approximation algorithm (with ratio depending only on d) requiring O(n) time. For this more powerful model, Bern et al. [68] compute a (1 + ε)-approximate MST in the plane in time O((1/ε) n log log n) (a related bound holds in ℜ^d).

The best lower bound for the (exact) MST problem is currently Ω(n log n), in any fixed dimension d ≥ 1, in the algebraic tree model of computation for a general input of unordered points. In contrast, the seemingly related all-nearest-neighbors problem can be solved in time O(2^d n log n), using the algorithm of Vaidya [381]. The all-nearest-neighbors problem is to compute the nearest neighbor for each of the n input points; given the MST, it is readily solved in O(n) time, since the MST must include an edge linking each point to one of its nearest neighbors.

OPEN PROBLEM 20. Does there exist a near-linear time algorithm for Euclidean MST (or bichromatic nearest neighbors) in ℜ^d, for d ≥ 3?
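
The final observation, that the MST hands us all nearest neighbors, is worth spelling out (a trivial sketch of ours, usable with the Delaunay/Kruskal sketch above):

```python
import math

def all_nearest_from_mst(points, mst_edges):
    """All-nearest-neighbors in O(n) time from the MST: each point's nearest
    neighbor is its closest neighbor among its MST edges, because the MST
    contains, for every point, an edge to one of its nearest neighbors."""
    best = {}
    for a, b in mst_edges:
        d = math.dist(points[a], points[b])
        for u, v in ((a, b), (b, a)):
            if u not in best or d < best[u][1]:
                best[u] = (v, d)
    return {u: v for u, (v, _) in best.items()}

pts = [(0, 0), (2, 0), (1, 1.8), (1, 0.4)]
print(all_nearest_from_mst(pts, [(0, 3), (1, 3), (2, 3)]))
```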

Maximum spanning trees. If instead of finding a minimum spanning tree the objective is changed to ask for a maximum-length spanning tree on a set of points, the problem changes its nature. (Applications are given in [39].) While in graphs the same algorithms that find minimum spanning trees can also be used for computing maximum spanning trees, by negating edge lengths, the geometric version of the problem changes because it is not obvious how to find a small subgraph of the complete graph that is guaranteed to contain the maximum spanning tree. The natural generalization of the MST result might be to expect that the maximum spanning tree must appear as a subgraph of the (linear-size) furthest-point Delaunay diagram (see the chapter on Voronoi diagrams, by Aurenhammer and Klein [46]).

However, this is not true in general, since the only points that are vertices of the furthest-point Delaunay diagram are the points on the convex hull; further, even if the input point set is in convex position, the maximum spanning tree need not lie on the furthest-point Delaunay diagram (see [296]). A different approach is taken by Monma et al. [299], who provide an optimal O(n log n)-time algorithm for computing a maximum spanning tree of n points in the plane. They start by computing, in O(n log n) time, the furthest neighbor graph, joining each point to its furthest neighbor; the resulting graph is a forest, whose connected components are called clusters. They then show that the clusters can be cyclically ordered around their convex hull, allowing a maximum spanning tree to be computed by adding a longest edge between adjacent clusters. (If the input points are already in convex position, the algorithm requires only O(n) time.) Subquadratic-time algorithms for higher dimensions are also known, based on efficient methods to compute bichromatic farthest neighbors: Agarwal, Matousek, and Suri [7] give randomized algorithms with expected time O(n^{4/3} log^{7/3} n) in ℜ^3 and O(n^{2−α_d}) in ℜ^d (d ≥ 4), where α_d = 2/(⌈d/2⌉ + 1 + γ), for any fixed γ > 0. They also give a simpler (deterministic) approximation algorithm, giving a tree at least (1 − ε) times optimal, that requires O(ε^{(1−d)/2} n log^2 n) time.

Minimum Steiner spanning trees. A minimum Steiner spanning tree (or simply Steiner tree) of S is a tree of minimum total length whose nodes are a superset of the given set S. Those nodes that are not points of S are generally called Steiner points. It turns out that allowing the flexibility of adding Steiner points in order to obtain a potentially shorter spanning tree makes the problem much more difficult. In fact, the Steiner tree problem is known to be NP-hard [164], even for points in the Euclidean plane. The Steiner tree problem is in sharp contrast with the MST problem, which can be solved exactly in low-degree polynomial time. It is natural, therefore, to study how closely the MST solution approximates the Steiner tree. The supremum, over all point sets, of the ratio between the length of the MST and the length of the Steiner tree is known as the Steiner ratio (many authors define the Steiner ratio to be the reciprocal of this quantity; we follow the notation of Bern and Eppstein [65]); it has been studied extensively in the last several years. A simple example (the three corners of an equilateral triangle) shows that the Euclidean Steiner ratio in the plane can be as high as 2/√3. Gilbert and Pollak [175] conjectured that this ratio can in fact never be greater than 2/√3. This conjecture was finally confirmed by a proof due to Du and Hwang [143]. (For the L_1 metric, the Steiner ratio in the plane is 3/2, and this is tight [214].) Approximation algorithms have also been obtained for the Steiner tree problem. First, because of the Steiner ratio, the MST algorithms already give a 2/√3-approximation for the Euclidean Steiner tree problem in the plane. However, in a series of results, starting with important work by Zelikovsky [392], improved approximation algorithms were obtained, for both graph versions and geometric versions of the problem. In the Euclidean plane, the approximation factor has been improved to just over 1.1 by Zelikovsky's "relative greedy" algorithm [393]. We refer the reader to Bern and Eppstein [65] and Du and Hwang [144] for excellent surveys of these problems and the recent results.
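
The extremal example just mentioned, worked through numerically:

```python
import math

# Three corners of a unit equilateral triangle.  The MST uses two sides
# (total length 2).  The Steiner tree joins all three corners to the Fermat
# point (here the centroid), three segments of length 1/sqrt(3) each.
mst_len = 2.0
steiner_len = 3 / math.sqrt(3)                   # = sqrt(3) ~ 1.732
print(mst_len / steiner_len, 2 / math.sqrt(3))   # both 1.1547...: the ratio 2/sqrt(3)
```
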
Finally, though, a PTAS was discovered by Arora [35] and Mitchell [289].

This result serves to separate the geometric versions of the problem from the "metric" version (in an arbitrary graph whose edge lengths satisfy the triangle inequality), since the metric version is known to be MAX-SNP-hard (meaning that no PTAS exists, unless P = NP), even if all edge lengths are 1 or 2 [66].

A problem that arises in some applications in VLSI is that of computing a minimum-length rectilinear Steiner tree within a rectilinear polygon P, for a set of n sites on the boundary of the polygon. If P is rectilinear convex (any horizontal/vertical line intersects it in a connected set), having k vertices, Richards and Salowe [344] solve this problem exactly in time polynomial in k and n. For the same problem, Cheng, Lim, and Wu [103] give a polynomial-time algorithm, and Cheng and Tang [104] give an 0(J