Diagonalization and Self-Reference (Oxford Logic Guides)


Diagonalization and Self-Reference

RAYMOND M. SMULLYAN
Department of Philosophy
Indiana University

CLARENDON PRESS · OXFORD
1994

Oxford University Press, Walton Street, Oxford OX2 6DP
Oxford New York
Athens Auckland Bangkok Bombay Calcutta Cape Town Dar es Salaam Delhi Florence Hong Kong Istanbul Karachi Kuala Lumpur Madras Madrid Melbourne Mexico City Nairobi Paris Singapore Taipei Tokyo Toronto

and associated companies in

Berlin Ibadan

Oxford is a trade mark of Oxford University Press Published in the United States by Oxford University Press Inc., New York © Raymond M. Smullyan, 1994


All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press. Within the UK, exceptions are allowed in respect of any fair dealing for the purpose of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms and in other countries should be sent to the Rights Department, Oxford University Press, at the address above.

This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, re-sold, hired out, or otherwise circulated without the publisher's prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser.

A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data (Data available)

ISBN 0 19 853450 7
Typeset by J. M. Ashley


Printed in Great Britain by Bookcraft (Bath) Ltd Midsomer Norton, Avon


Preface


This volume is written for the beginner and expert alike, since it is both an introduction to self-reference, diagonalization, and fixed points, as they occur in Godel's incompleteness proofs, recursion theory, combinatory logic, semantics, and metamathematics, and a presentation of new results partly in these areas, but mostly in their synthesis, as developed in the last nine chapters. No prior knowledge of these fields is presupposed, except that for the applications to mathematical logic in Chapters 5 and 9, a little acquaintance with the logical connectives and quantifiers, though not strictly necessary, is desirable. The rest of the book, however, makes no use of the symbolism of mathematical logic, since I wish the results to be as general and independent of syntax as possible. The main purpose of this study is to present a unified treatment of fixed points in the aforementioned areas. Originally, this book was planned as a companion volume to my previous books: G.I.T. (Godel's Incompleteness Theorems)[32] and R.M. (Recursion Theory for Metamathematics)[33], but it was finally decided to make it independent of the other two volumes, and thus available to a much wider class of readers.

The first three chapters are of an introductory nature and consist mainly of exercises (sometimes called 'problems') with solutions given to most of them. Despite their elementary nature, there are things there that should interest the expert as well. Chapter 1 introduces the subject of self-reference via quotation, and here, even the expert not familiar with my paper 'Quotation and self-reference' should find it of interest that cross-reference (a pair of sentences, each one referring to the other) is easier to achieve using one-sided quotation instead of the more usual two-sided quotation. Chapter 2 is an updated version of my paper 'Some unifying fixed point principles' and contains the germ of an idea developed in later chapters. Chapter 3 contains abstract versions of halting problems, and should be of interest to the expert mainly for pedagogical reasons.

Chapter 4 is essential to several subsequent chapters and is a greatly expanded treatment of representation systems as defined in T.F.S. (Theory of Formal Systems). It consists of abstract versions of classical results of Tarski, Godel, and Rosser, as well as an account of Rohit Parikh's demonstration that there are sentences in systems such as Peano Arithmetic which can be proved to be provable in far fewer steps than they can be proved!


Chapter 5, in which mathematical logic first appears, can be read any time after Chapter 1, but its first part is a prerequisite to Chapter 9. Actually, Chapters 5 and 9 can be thought of as supplementary to the rest of the volume (and, if desired, could be read after all the other chapters). In Part I of Chapter 5, it is first shown how the method of normalization explained in Chapter 1 can be carried out for first-order arithmetic with class abstracts, as in my paper 'Languages in which self-reference is possible'. This is followed by an account of Tarski's method of achieving self-reference in the more standard formulation of first-order arithmetic in which we don't have class abstracts, and this is also compared to Godel's method. Part II of this chapter contains some particularly curious material which, though easily understood by the beginner who has at least a nodding acquaintance with the logical connectives and quantifiers, should both surprise and amuse the expert. It begins by showing how a minor modification of normalization works for the more usual first-order formulations of arithmetic in which there are not any class abstracts, and furthermore, that normalization itself works if the system is in Polish notation! It then goes on to simultaneously generalize normalization, diagonalization and Cantor's famous construction.

Chapters 6, 7, and 8 constitute an introduction to recursion theory from the viewpoint of elementary formal systems, as originally presented in T.F.S. I have devoted considerable space to their treatment for several reasons. First, these systems appear to be coming into vogue these days, partly due to computer science, and partly due to their interesting generalization by Melvin Fitting in his study 'Elements of Recursion Theory'. Secondly, the original T.F.S. is out of print (though the Russian translation might still be available). Thirdly, I make it quite clear here (which I apparently failed to do to some readers of T.F.S.) that in application to incompleteness proofs for systems such as Peano Arithmetic, elementary formal systems not only show the existence of undecidable sentences, but can conveniently be used to find them. In Chapter 9 (using Part I of Chapter 8) it is shown quite explicitly how, from an elementary formal system for an incomplete axiomatizable theory, an undecidable sentence can be found. We carry this out in particular for Peano Arithmetic. The approach is quite neat, in that Godel numbering comes in only at the last moment, so to speak; most of the work deals directly with the expressions themselves.


Chapters 9 and 10 constitute an upgraded account of many of the results established in T.F.S. and R.M. The present account is improved in several respects. For one thing, the theorems given in R.M. for first-order systems are shown here to generalize to representation systems. Secondly, the double indexing of r.e. relations introduced in Part II of Chapter 8, and carried out in the more general context of Chapter 10, makes the proofs simpler and more elegant. Also, many of the results for recursion theory proved in R.M. are established here in the more general context of generalized recursion theory, in the sense of [5]. Also, as promised in R.M., Shepherdson's exact separation theorem and some results in recursion theory are derived from a common construction. Furthermore, the strengthened form of Shepherdson's theorem that was stated and proved in the last chapter of R.M. is shown here to be a corollary of a result in pure recursion theory (Theorem 8.1, Chapter 11) which we believe to be of interest in its own right.

Chapters 12-20 fulfil the main purpose of this study, which is the unification of fixed point theorems in recursion theory (in fact in generalized recursion theory), combinatory logic, and metamathematics. These chapters are logically independent of the preceding chapters, but their motivation would be lost to those readers without a background comparable to that given in Chapters 5-8. [The expert could, of course, read Chapters 12-20 from scratch.] Chapter 12 introduces and begins the study of abstract structures called sequential systems, each fixed point theorem of which simultaneously yields a fixed point theorem for recursion theory, one for combinatory logic, and one for metamathematics. The results on sequential systems are also applicable to the uniform reflexive structures of Erik Wagner [37], but these applications are not given here. I have devoted considerable space to combinatory logic since some of its fixed point properties can be established as corollaries of fixed point theorems for sequential systems, and others, being of a highly specialized nature, are not even statable in terms of sequential systems. Chapters 17 and 18 constitute an introduction to combinatory logic, with special emphasis on fixed point properties and various ways of constructing fixed point combinators and related entities. Chapter 19 is devoted to a generalization and extensions of the result known as the second fixed point theorem of combinatory logic. This is usually stated in terms of numeral systems together with a Godel numbering, but we take a more general approach, using what we call an admissible naming relation between combinators. A specific such relation is given, which does not employ either a numeral system or a Godel numbering, but our more general results are applicable to the more usual naming relation as well, as is also explained. The fixed point results of Chapter 19 all have further generalizations to sequential systems with some added structure, which constitutes the closing chapter of this study.

Fig. 1. Interdependence of chapters

Contents

I - Introduction to Self-Reference and Recursion

1 Introduction to self-reference  1
  Part I  Quotation and self-reference  2
    §1 Use and mention  2
    §2 Self-reference using diagonalization  3
    §3 Normalization  4
    §4 One-sided quotation  7
    §5 Cross-reference using one-sided quotation  9
    §6 Some general principles concerning designational systems  12
    §7 A pseudo-quotational system  14
    §8 Some other methods of self-reference  15
  Part II  Self-reference in a more general setting  16
    §9 A more general setting  16
    §10 Applications to quotational systems  17
    §11 Near Diagonalization  18
    §12 Cross-reference  19
    §13 Application to one-sided quotational systems  20

2 Some classical fixed point arguments compared  25
  Part I  Five fixed point arguments  25
    §1 An argument from combinatory logic  25
    §2 Godel sentences  26
    §3 An argument from recursion theory  27
    §4 Another argument from recursion theory  28
    §5 Self-reproducing machines  29
  Part II  A unification  31
    §6 A relational fixed point theorem  31
  Part III  Quasi-diagonalization  34
    §7 Some stronger results  34

3 How to silence a universal machine  39
  Part I  Silencing a universal machine  39
    §1 A universal machine  39
  Part II  Some related problems  43
  Part III  Solutions to problems  44

4 Some general incompleteness theorems  48
  Part I  Incompleteness  50
    §1 Diagonalization  50
    §2 An incompleteness argument using a truth set  52
    §3 Godel's argument and some variations  56
    §4 Rosser's argument  59
    §5 Complete representability and definability  63
    §6 Admissible functions  67
    §7 Kleene's symmetric incompleteness theorem generalized  68
  Part II  Provability by stages  69
    §8 Provability by stages  69

5 Self-reference in arithmetic  75
  Part I  Normalization and Tarskification  75
    §1 Normalization in arithmetic  75
    §2 Tarski's method of achieving self-reference  81
    §3 The more general situation  84
  Part II  Some special devices  85
    §4 Near normalization  85
    §5 Situation in Polish notation  87
    §6 More on quotational systems  88
    §7 Some variants  90
  Part III  Generalizations  91
    §8 Godelizing operations  91
    §9 Generalized diagonalization  93

6 Introduction to formal systems and recursion  97
  Part I  Formal representability  97
    §1 Elementary formal systems  97
    §2 Some basic closure properties  105
    §3 Solvability over K  110
  Part II  Recursive enumerability  113
    §4 Recursively enumerable relations  113
    §5 Recursive functions  118
    §6 Finite quantifications and constructive definability  122
    §7 Recursive pairing functions  126
    §8 Dyadic Godel numbering  129
    §9 Lexicographical Godel numbering and n-adic notation  132

7 A universal system and its applications  135
  Part I  A universal system  135
    §1 The universal system (U)  135
    §2 The recursive unsolvability of (U)  139
  Part II  Enumeration and iteration theorems  141
    §3 The enumeration theorem  141
    §4 Iteration theorem  144

II - Systems with Effective Properties

8 Arithmetization of formal systems  149
  Part I  Arithmetization of elementary formal systems  149
    §1 Some facts about Σ0-relations  150
    §2 Arithmetization of elementary formal systems  154
  Part II  Double enumeration  156
    §3 Some applications  156
  Part III  More on arithmetization  159
    §4 All r.e. relations are Σ1  159
    §5 More on arithmetization (optional)  161

9 Elementary formal systems and incompleteness proofs  165
  Part I  Truth in CO cannot be formalized  166
    §1 The non-formalizability of arithmetic truth  166
  Part II  A concrete incompleteness proof  169
    §2 Peano arithmetic  169
    §3 Peano arithmetic is a formal system  170
    §4 The Godel and Rosser proofs  174

10 Doubly indexed relational systems  176
  Part I  Indexed relational systems  176
    §0 Definitions  176
    §1 Universal sets  178
    §2 Generative sets  179
  Part II  Double indexing  182
    §3 Double enumeration and double universality  182
    §4 Doubly generative pairs  187
    §5 Semi-D.G. pairs  190
    §6 Closing the circle  195
    §7 Summary of main results  198

11 Effective representation systems  200
  Part I  Effective representation  200
    §1 Undecidability  201
    §2 Generative systems  204
    §3 Doubly generative systems  205
    §4 Rosser systems  207
    §5 Extensions  210
  Part II  Exact and effective Rosser systems  212
    §6 Exact Rosser systems  212
    §7 Variations on a theme by Shepherdson  214
    §8 Effective Rosser systems  219
    §9 Some simpler related results  223
    §10 Unconquerable systems and omniscient functions  223

III - Fixed Point Theorems in a General Setting

12 Sequential systems  228
  Part I  Definitions and purpose  228
  Part II  Preliminary definitions  233
    §1 Types 0, 1, 2  233
  Part III  Some fixed point properties  236
    §2 The weak fixed point property  236
    §3 Universal systems and the weak Kleene property  239
    §4 Integrated systems  241
  Part IV  Summary and applications  242

13 Strong fixed point properties  245
  Part I  The recursion and extended recursion properties  245
    §1 The recursion property  245
    §2 A type 2 approach  249
    §3 The extended recursion property  251
  Part II  The Myhill property and related properties  252
    §4 Strongly connected systems  253
    §5 The Myhill and recursion properties compared  255
    §6 Very strong connectivity  256
    §7 Universal integrated systems  257
    §8 The strong Kleene property  258
  Part III  Summary and applications  259
    I Indexed relational systems  260
    II Sentential systems  261
    III Applicative systems  262

14 Multiple fixed point properties  263
  Part I  Double fixed points  263
    §1 The weak double fixed point property  263
  Part II  Double recursion properties  267
    §2 Double recursion  267
    §3 A type 2 approach  269
  Part III  Double Myhill properties and related results  273
    §4 The double Myhill property  273
    §5 The properties DMk  277
    §6 Very strongly connected systems  278
    §7 Integrated systems of type 1  278
  Part IV  Symmetric functions and nice functions  279
    §8 Symmetric functions  279
    §9 Nice functions  281
    §10 Universal systems of type 2  283
    §11 Multiple fixed point properties  284

15 Synchronization and pairing functions  286
  Part I  Synchronized fixed point properties  286
    §0 Synchronized sequences of functions  286
    §1 Synchronized fixed point theorems  288
    §2 Relation to multiple fixed point properties  290
  Part II  Pairing functions  292
    §3 Exact and quasi-exact functions  293
    §4 Some consequences  296
    §5 Strongly connected systems of type 1*  297
  Part III  Synchronized recursion resumed  299
    §6 The property DR*  299
    §7 The property DM*  301
    §8 Integrated systems of type 1*  302

16 Some further relations between fixed point properties  304
  Part I  Single and double fixed point properties compared  304
    §1 Recursion and double fixed points  304
    §2 Recursion and double recursion  306
    §3 Systems of type 1*  307
    §4 Myhill and double Myhill properties  308
  Part II  More on systems of type 1*  309
    §5 Bar fixed points  309
    §6 Further topics  311

IV - Combinators and Sequential Systems

17 Fixed point properties of combinatory logic  315
  Part I  Interdefinability of some combinators  315
    §1 The combinator B  316
    §2 We add T  317
    §3 We add M  320
    §4 Two special combinators  324
    §5 The combinator I  325
    §6 A basis equivalent to {B, T, I}  326
    §7 Some curiosities  326
    §8 We now consider K  327
    §9 Idempotent and voracious elements  329
  Part II  Fixed points  329
    §10 Some sufficient conditions  329
    §11 Fixed points in relation to idempotent and voracious elements  330
    §12 A fixed point principle  330
    §13 Some fixed point combinators  331
    §14 Turing sages  333
    §15 Some special properties of Θ  333
    §16 Sages from B and W  334
    §17 n-sages  335
  Part III  Multiple fixed points  336
    §18 Some more useful combinators  336
    §19 The cross-point property  336
    §20 The weak double fixed point property  337
    §21 The property DM  338
    §22 Nice combinators and symmetric combinators  339

18 Formal combinatory logic  357
  Part I  The axiom systems CL and CL0  357
    §1 Formal combinatory logic  357
    §2 Consistency and models  360
    §3 Extensional combinatory logic  361

19 A second variety of fixed point theorems  366
  Part I  Self-reference in CL0  366
    §1 Admissible name functions  366
    §2 Admissible maps via numeral systems  371
  Part II  Fixed point theorems of the second type  373
    §3 The second fixed point theorem  373
    §4 Applications  374
    §5 Other fixed point theorems of the second variety  378

20 Extended sequential systems  380
    §0 M-diagonalizers  381
    §1 M-fixed points  382
    §2 The double M-fixed point theorem  383
    §3 Strong M-fixed point properties  384
    §4 Uniform double M-fixed point properties  387
    §5 M-nice and M-symmetric functions  388

References  389

Index  392


To Blanche

I - Introduction to Self-Reference and Recursion

Chapter 1

Introduction to self-reference

Self-reference plays a crucial role in the famous incompleteness theorem of Godel[6]. What he did can be roughly described as follows. He showed that for a large class of mathematical systems, one can assign to each sentence a number called the Godel number of the sentence, and then construct a sentence X asserting that a certain number n has the property of being the Godel number of a sentence that is not provable in the system, but this number n is the Godel number of the very sentence X itself! And so X is true if and only if its Godel number is not the Godel number of a sentence provable in the system - in other words, X is true if and only if it is not provable in the system. This means that either X is true but not provable in the system, or X is false (not true) but provable in the system. Under the assumption that the system is correct in that only true sentences are provable, the second alternative is out, hence the sentence is true but not provable in the system. This is an extremely rough sketch of what Godel did.

Now, how did Godel manage to construct a sentence that says something about its own Godel number? Such a sentence is, in a sense, self-referential. There are many methods now known of achieving self-reference, as we will see in the course of this book.

Indexicals

Suppose we wish to construct a sentence that ascribes to itself a certain property - say the property of being read by an individual named John. The simplest way to do so is to construct the following sentence.

John is reading this sentence.


Obviously, the above sentence is true if and only if John is reading it. Now, that sentence contains the indexical word this. (An indexical is a word or phrase whose denotation depends on its context; for example, the denotation of the word "I" depends on who is uttering it, and the denotation of "now" depends on the time it is uttered.) Now, although it is possible to formalize the use of indexicals (as in [29]), indexicals do not occur in the type of systems studied by Godel, and so we must turn to other methods. In Part I of this chapter, we study self-reference by quotation. This strikes us as a nice introductory approach, since quotation is a well known device for naming expressions, and its formalization leads to some intriguing problems that should interest the beginner and expert alike. We conclude Part I with a brief mention of other methods of achieving self-reference - including that of Godel numbering. A more general approach to self-reference is considered in Part II. Not all sections of this chapter are needed for subsequent chapters; most of them are of purely independent interest. The sections that are most relevant for this volume as a whole are §§1-3, 8, 9, and also exercises 33 and 34.

Part I  Quotation and self-reference

§1 Use and mention

The following sentence is true.

(1) Ice is frozen water.

What about the following sentence?

(2) Ice has three letters.

Strictly speaking, sentence (2) is false. The substance ice has no letters at all; it is the word "ice" that has three letters. And so the correct rendition of (2) is

(2)' "Ice" has three letters.

When one talks about a word, rather than that which is denoted by the word, one encloses the word in quotation marks. There is a difference between using a word and mentioning the word (which is talking about the word, instead of the denotation of the word). It can happen that a word is both used and mentioned in the same sentence - witness

(3) "Ice" is the name of ice.


Or, a more startling illustration of how a phrase can be both used and mentioned in the same sentence is

(4) It takes longer to read the bible than to read "the bible."

Yes, it takes much longer! Here is another illustration.

(5) This sentence is longer than "this sentence."

On first encountering the distinction between use and mention, confusions often occur. For example, is the following sentence true or false?

(6) " "Ice" " has two pairs of quotation marks.

Many would say that (6) is true; actually it is not! What is written has two pairs of quotation marks, but what is talked about has only one pair. And so it is rather the following sentence that is correct.

(6)' " "Ice" " has one pair of quotation marks.

When no ambiguity can arise, words and phrases are sometimes used autonomously, i.e., as names of themselves. For example, in a treatise on language, sentence (2) (ice has three letters) would be acceptable. Also, under autonomous use, sentence (6) would be acceptable. In textbooks and articles on symbolic logic, formulas are used as names of themselves. [For example, one would write "P ⊃ (P ∨ Q) is a tautology" instead of " "P ⊃ (P ∨ Q)" is a tautology."] The context will usually make it clear when autonomous use is permissible.

§2 Self-reference using diagonalization

Godel achieved self-reference by the method known as diagonalization, which we now illustrate using quotation marks instead of Godel numbering. We use the symbol "x" as a variable ranging over expressions of the English language. By the diagonalization of an expression, we mean the result of substituting the quotation of the expression for every occurrence of the variable "x" in the expression. For example, consider the following expression.

(1) John is reading x

The expression (1) is not a sentence, true or false, but becomes a sentence (true or false) upon substituting the quotation of any expression for "x". If we substitute the quotation of (1) itself for "x", we obtain the diagonalization of (1), which is

(2) John is reading "John is reading x"


Now, (2) is a sentence, and it asserts that John is reading (1). However, (2) is not self-referential; it does not assert that John is reading (2); it asserts that John is reading (1). Let us consider the following expression.

(3) John is reading the diagonalization of x.

The diagonalization of (3) is the following.

(4) John is reading the diagonalization of "John is reading the diagonalization of x."


Sentence (4) asserts that John is reading the diagonalization of (3), but the diagonalization of (3) is (4) itself. Thus (4) asserts that John is reading the very sentence (4)! Thus sentence (4) is self-referential. This, in essence, is how Godel achieves self-reference.

It might be easier to understand this if we use the following abbreviations. Let us use "J" to abbreviate "John is reading," and "D" to abbreviate "the diagonalization of." Then (3) and (4) assume the following abbreviated forms:

(3)' JDx
(4)' JD"JDx"

The sentence (4)' asserts that John is reading the diagonalization of (3)', but the diagonalization of (3)' is (4)' itself.
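The mechanics of diagonalization can be made completely concrete by treating expressions as strings. The following small Python sketch (an illustration of mine, not part of the text; it assumes the variable x occurs nowhere else in the template) builds sentence (4) from expression (3) and checks that (4) is about itself.

```python
def quote(e: str) -> str:
    """Two-sided quotation of an expression."""
    return '"' + e + '"'

def diagonalize(e: str) -> str:
    """Substitute the quotation of e for every occurrence of the variable x in e."""
    return e.replace('x', quote(e))

expr3 = 'John is reading the diagonalization of x'   # expression (3)
expr4 = diagonalize(expr3)                            # sentence (4)

# (4) asserts that John is reading the diagonalization of the quoted expression;
# that quoted expression is (3), and the diagonalization of (3) is (4) itself.
quoted = expr4[len('John is reading the diagonalization of '):]
assert quoted == quote(expr3)
assert diagonalize(quoted[1:-1]) == expr4
```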

§3 Normalization

To carry through the idea of diagonalization for arithmetical systems is relatively complicated, since it involves the operation of substitution, which is difficult to arithmetize. In my paper Languages in which self-reference is possible[27], I showed another method of self-reference that does not involve substitution, nor does it use variables. To this we now turn. By the norm of an expression we shall mean the expression followed by its own quotation. For example, consider the following expression.

(1) John is reading

The norm of (1) is the following.

(2) John is reading "John is reading"

The sentence (2) is not self-referential; it doesn't assert that John is reading (2); it asserts that John is reading (1). But now consider the following.

(3) John is reading the norm of

Its norm is the following sentence.

(4) John is reading the norm of "John is reading the norm of"

Sentence (4) asserts that John is reading the norm of (3), but the norm of (3) is (4) itself. And so (4) asserts that John is reading (4). Thus (4) is self-referential.

Let us look at an abbreviated version. Again, we abbreviate "John is reading" by the letter "J." And we shall use the letter "N" to abbreviate "the norm of." Then (3) and (4) assume the following symbolic forms.

(3)' JN
(4)' JN"JN"

(4)' asserts that John is reading the norm of (3)', but the norm of (3)' is (4)' itself. Thus (4)' asserts that John is reading (4)'. This construction is like one due to Quine, whose version[18] of the semantical paradox of the liar is the sentence:

"Yields falsehood when appended to its own quotation" yields falsehood when appended to its own quotation.

This sentence asserts its own falsehood. Alternatively, using our notion of norm, the following is a version of the paradox.

The set of false sentences contains the norm of "the set of false sentences contains the norm of"

Using diagonalization instead of normalization, the paradox would be stated thus.

The set of false sentences contains the diagonalization of "the set of false sentences contains the diagonalization of x"

Let us note in passing that, using diagonalization, the following is an expression that designates itself: the diagonalization of "the diagonalization of x." Or symbolically: D"Dx". Using normalization, the following expression designates itself: the norm of "the norm of." Or symbolically: N"N".
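Normalization is even easier to mechanize, since it needs no substitution. Here is a small Python sketch of mine (not Smullyan's; the space between an expression and its quotation is only for readability in this toy model) checking that sentence (4) of this section is the norm of (3) and hence talks about itself.

```python
def quote(e: str) -> str:
    return '"' + e + '"'

def norm(e: str) -> str:
    """The norm of an expression: the expression followed by its own quotation."""
    return e + ' ' + quote(e)

expr3 = 'John is reading the norm of'   # expression (3)
expr4 = norm(expr3)                     # sentence (4), i.e. JN"JN" in abbreviated form

# (4) asserts that John is reading the norm of the quoted expression;
# that quoted expression is (3), and the norm of (3) is (4) itself.
quoted_part = expr4[len(expr3) + 1:]
assert quoted_part == quote(expr3)
assert norm(quoted_part[1:-1]) == expr4
```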

Exercise 1 Using the symbols J and N as well as opening quotes and closing quotes, construct a sentence X that asserts, not that John is reading X, but that John is reading the norm of X.

Exercise 2 Using the same symbolism, find an expression that names its own norm.


Exercise 3 Now let us add the symbol "Q" to abbreviate "the quotation of." Construct an expression that names its own quotation.

Exercise 4 In this and the remaining exercises of this section (through Exercise 7) we continue to use "Q" for "the quotation of."

Find an X that names the norm of its own quotation and an X that names the quotation of its norm.

Exercise 5 Find a sentence X that asserts that John is reading the quotation of X.

Exercise 6 Find a sentence X that asserts that John is reading the quotation of the norm of X, and another sentence X that asserts that John is reading the norm of its quotation.

Exercise 7 Find two distinct expressions X and Y such that each names the other.

Solutions


1 Take X = JNN"JNN". The sentence X asserts that John is reading the norm of the norm of JNN. Now, the norm of JNN is X, hence the norm of the norm of JNN is the norm of X. Thus X asserts that John is reading the norm of X.

2 NN"NN".

3 QN"QN". It names the quotation of the norm of QN, which is the quotation of QN"QN" (since the norm of QN is QN"QN").

4 An expression that names the norm of its quotation is NQN"NQN". An expression that names the quotation of its norm is QNN"QNN".

5 Take X = JQN"JQN". This sentence X asserts that John is reading the quotation of the norm of JQN, which is the quotation of X (since the norm of JQN is X).

6 A sentence that asserts that John is reading the quotation of its norm is JQNN"JQNN". A sentence that asserts that John is reading the norm of its quotation is JNQN"JNQN". (In unabbreviated English, this would be: John is reading the norm of the quotation of the norm of "John is reading the norm of the quotation of the norm of".)

7 In Exercise 3, we found an X that names its own quotation, which in turn names X. So we take X = QN"QN", and Y = "QN"QN" ".

§4 One-sided quotation

To explore an interesting side stream, I would like to say something about one-sided quotation. Here we use only one quotation symbol - which we will take to be the symbol "*". For any expression X, we call *X the (one-sided) quotation of X. One-sided quotation is used in certain programming languages such as LISP. For our purposes, one-sided quotation has certain advantages over two-sided quotation - in particular it allows not only self-reference, but also easily allows cross-reference (pairs of sentences, each of which ascribes a certain property to the other) in a manner that we will shortly explain.

We now define the associate of an expression X to be the expression X*X - in other words, X followed by its own one-sided quotation. We shall use "A" to abbreviate "the associate of" and we shall continue to use "J" to abbreviate "John is reading." Then a sentence that asserts that John is reading it is JA*JA. It asserts that John is reading the associate of JA, which is JA*JA.

A conceptually simpler operation than association, which also accomplishes self-reference, is that of repetition, where by the repeat of an expression X is meant the expression XX, i.e., X followed by itself. Let us use "R" to abbreviate "the repeat of." Then a sentence that asserts that John is reading it is JR*JR*. It asserts that John is reading the repeat of JR*, which is the very sentence JR*JR*. (The sentence JR*JR won't work; it asserts that John is reading JRJR, not that John is reading JR*JR.) These two schemes of self-reference - the one using association, the other using repetition - were discussed in further depth in my paper Quotation and self-reference [31]. For now, let me remark that an expression that is its own name is A*A (it names the associate of A, which is A*A), and another is R*R* (it designates the repeat of R*, which is R*R*). Before proceeding further, let us compare the four ways we have so far of constructing a sentence that asserts that John is reading it.

(1) JD"JDx"
(2) JN"JN"
(3) JA*JA
(4) JR*JR*

Let us also compare the four ways we have of constructing an expression that is its own name.

(1) D"Dx"
(2) N"N"
(3) A*A
(4) R*R*

In what follows, we consider a language containing at least the symbols J, A, R, *, and we assume that if X is any expression built from the symbols of the language, the expression *X denotes (or is a name of) X, and that if X denotes Y, then AX denotes the associate of Y and RX denotes the repeat of Y. (For example, A*Y denotes the associate of Y; R*Y denotes the repeat of Y; AA*Y denotes the associate of the associate of Y; RA*Y denotes the repeat of the associate of Y.) Also, for any expressions X, Y of the language, if X denotes Y, then JX asserts that John is reading Y. (For example, JA*Y asserts that John is reading the associate of Y, since A*Y denotes the associate of Y.)
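These conventions amount to a tiny evaluator, which can be sketched in a few lines of Python (an illustration of mine, not part of the text; it parses a designator greedily, with * quoting everything to its right). It confirms, for instance, that A*A and R*R* name themselves and that JA*JA and JR*JR* assert that John is reading those very sentences.

```python
def associate(e: str) -> str:
    """The associate of e: e followed by its one-sided quotation."""
    return e + '*' + e

def repeat(e: str) -> str:
    """The repeat of e: e followed by itself."""
    return e + e

def denotation(d: str) -> str:
    """What a designator built from A, R and * denotes: *X denotes X; if X denotes Y,
    then AX denotes the associate of Y and RX denotes the repeat of Y."""
    if d.startswith('*'):
        return d[1:]
    if d.startswith('A'):
        return associate(denotation(d[1:]))
    if d.startswith('R'):
        return repeat(denotation(d[1:]))
    raise ValueError('not a designator: ' + d)

def reading(sentence: str) -> str:
    """For a sentence J<designator>, the expression John is asserted to be reading."""
    assert sentence.startswith('J')
    return denotation(sentence[1:])

assert denotation('A*A') == 'A*A'        # A*A is its own name
assert denotation('R*R*') == 'R*R*'      # so is R*R*
assert reading('JA*JA') == 'JA*JA'       # JA*JA asserts that John is reading JA*JA
assert reading('JR*JR*') == 'JR*JR*'     # JR*JR* asserts that John is reading JR*JR*
```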

Exercise 8 Using just the two symbols A and *, find an expression that names (denotes) its own (one-sided) quotation. (Note that unlike the case of two-sided quotation, we get this result "free," as it were, that is, we do not have to add a symbol such as "Q" to abbreviate "the quotation of.") Do the same using R in place of A.

Exercise 9 Using any or all of the three symbols A, R, *, find

(1) An X that names its own repeat. (There are two solutions.)

(2) An X that names its own associate. (There are two solutions.)

(3) An X that names the repeat of its associate. (There are two solutions.)

(4) An X that names the associate of its repeat. (There are two solutions.)

Exercise 10 Using any or all of the same three symbols, together with "J" for "John is reading," find a sentence X that asserts that John is reading the repeat of X. (There are two solutions.)


Solutions

8 Using "A", a solution is A**A. It denotes the associate of *A, which is *A**A. Using "R", a solution is R**R*, which denotes the repeat of *R*, which is *R**R*.

9 (1) RR*RR* is one solution. Another is RA*RA.
  (2) AA*AA is one solution. Another is AR*AR*.
  (3) RAA*RAA is one solution. Another is RAR*RAR*.
  (4) ARA*ARA is one solution. Another is ARR*ARR*.

10 One such sentence is JRA*JRA. Another is JRR*JRR*.

§5 Cross-reference using one-sided quotation


Suppose now we consider a second individual named Paul, and we wish to construct two sentences X and Y such that X asserts that John is reading

Y and Y asserts that Paul is reading X. Using one-sided quotation, this is relatively easy, using either association or repetition. There are two solutions in each case. We add "P" to our language to abbreviate "Paul is reading."

Exercise 11 Using the four symbols J, P, A, and *, construct such sentences X and Y. There are two solutions. Can you find them both?

Exercise 12 Do the same, using R instead of A.

Exercise 13 Using the two symbols A and *, find two distinct expressions X and Y such that each is the name of the other. Do the same with R and *.

Exercise 14 [A fixed point principle] Show that for any expression E of the language, an expression X can be found that names EX (that is, it names the expression consisting of E followed by X. For example, taking E to be ARA, there is some X that names ARAX.) There are two ways of finding X, given E; one way uses A and the other uses R. Find both ways.


Exercise 15 Show that for any expression E there is an X that denotes

the associate of EX (there are two ways of getting such an X) and there is an X that denotes the repeat of EX (again there are two solutions).

Exercise 16 Find expressions X and Y such that X names the repeat of Y and Y names the associate of X (there are four solutions).

G".

Exercise 17 Given an expression E, find a sentence X that asserts that John is reading the repeat of EX (there are two solutions). Then show

that there is a sentence X that asserts that John is reading the associate of EX (there are two solutions).

Exercise 18 Find sentences X and Y such that X asserts that John is

reading the repeat of Y and Y asserts that Paul is reading the associate of X (there are four solutions).

Exercise 19 Show that for any two expressions El and E2 there are expressions X and Y such that X names E1Y and Y names E2X (there are four solutions).

Solutions

11 One solution is X = J*PA*J*PA; Y = PA*J*PA. Another is X = JA*P*JA; Y = P*JA*P*JA.

12 One solution: X = J*PR*J*PR*; Y = PR*J*PR*. Another solution: X = JR*P*JR*; Y = P*JR*P*JR*.

13 Using A, take X = A**A and Y = *A**A. Using R, take X = R**R* and Y = *R**R*.

14 Using A, take X = A*EA. It denotes the associate of EA, which is EA*EA, which is EX. Using R, take X = R*ER*. It denotes the repeat of ER*, which is ER*ER*, which is EX.

15 An X that denotes the associate of EX is AA*EAA. Another is AR*EAR*.

An X that denotes the repeat of EX is RR*ERR*. Another is RA*ERA.


16 Two solutions are obtained by finding an X that denotes the repeat of A*X and then taking Y = A*X. By the last exercise, there are two ways of finding such an X (taking A* for E), and so we have the following two solutions:

(1) X = RR*A*RR*, Y = A*RR*A*RR*
(2) X = RA*A*RA, Y = A*RA*A*RA

Two other solutions can be obtained by taking some Y that denotes the associate of R*Y and then taking X = R*Y. We thus get the following two solutions:

(3) X = R*AA*R*AA, Y = AA*R*AA
(4) X = R*AR*R*AR*, Y = AR*R*AR*

17 An X that asserts that John is reading the repeat of EX is JRA*EJRA. Another is JRR*EJRR*.

An X that asserts that John is reading the associate of EX is JAA*EJAA. Another is JAR*EJAR*.

18 Two solutions are obtained by finding an X that asserts that John is reading the repeat of PA*X and then taking Y = PA*X (which asserts that Paul is reading the associate of X). By the last exercise (taking PA* for E), we have the two solutions X = JRA*PA*JRA, or X = JRR*PA*JRR*. In either solution, we take Y = PA*X. Two other solutions are obtained by taking some Y that asserts that Paul is reading the associate of JR*Y, and then taking X = JR*Y. One such Y is PAA*JR*PAA. Another is PAR*JR*PAR*.

19 Two solutions are obtained by finding an X that denotes E1*E2X and taking Y = *E2X. These two solutions are:

(1) X = A*E1*E2A, Y = *E2A*E1*E2A
(2) X = R*E1*E2R*, Y = *E2R*E1*E2R*

Two other solutions are obtained by finding some Y that denotes E2*E1Y and taking X = *E1Y. We thus get:

(3) X = *E1A*E2*E1A, Y = A*E2*E1A
(4) X = *E1R*E2*E1R*, Y = R*E2*E1R*

Let us now add a third person named William and add to our symbolic language the symbol "W" to abbreviate "William is reading."


Exercise 20 Find sentences X, Y, and Z such that X asserts that John is reading Y, Y asserts that Paul is reading Z, and Z asserts that William is reading X.

Exercise 21 Find sentences X, Y, and Z such that X asserts that John is reading Y, Y asserts that Paul is reading the associate of Z, and Z asserts that William is reading the repeat of the associate of X.

§6 Some general principles concerning designational systems


Suppose now that we are given a language L in which there are rules determining that certain expressions of the language denote or designate others. Consider now a function f that assigns to every expression X of the language an expression f (X) of the language. Let us say that an expression F defines the function f in the language L if for any expressions X and Y such that X designates Y, the expression FX designates f (Y). (For example, in the one-sided quotation system that we have been considering, the symbol A defines the association operation and R defines the repetition operation. Also RA defines the function that assigns to each X the repeat of the associate of X. Any string of R's and A's defines some function or other.) And we say that f is definable in L if there is some expression F that defines it. Now, let us say that L has the designational fixed point property if for any expression E (of the language L) there is some X that designates EX. Let us say that L has the functional fixed point property if for any function f definable in L, there is some expression X that designates f (X).
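To make these definitions concrete, here is a small Python check of mine (using the one-sided quotation language of §4; the greedy parsing of * is an assumption of this toy model). The expression RA defines the function f(X) = the repeat of the associate of X, and an expression designating f of itself can be produced exactly as the definitions suggest (the expression Y below follows the pattern of Exercise 14).

```python
def associate(e): return e + '*' + e
def repeat(e):    return e + e

def denotation(d):
    """*X denotes X; AX denotes the associate of what X denotes; RX its repeat."""
    if d.startswith('*'): return d[1:]
    if d.startswith('A'): return associate(denotation(d[1:]))
    if d.startswith('R'): return repeat(denotation(d[1:]))
    raise ValueError(d)

def f(x):                        # the function defined by the expression F = 'RA'
    return repeat(associate(x))

F = 'RA'
Y = 'A*' + F + 'A'               # an expression designating FY
assert denotation(Y) == F + Y    # Y designates FY ...
X = F + Y
assert denotation(X) == f(X)     # ... so X = FY designates f(X): a functional fixed point
```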

Exercise 22 Prove that if L has the designational fixed point property then it also has the functional fixed point property.

Exercise 23 Let us say that L has the designational cross-point property if for any expressions El and E2 there are expressions X and Y such that X denotes E1Y and Y denotes E2X. Let us say that L has the functional cross-point property if for any functions f and g definable in L, there are expressions X and Y such that X denotes f (Y) and Y denotes g(X). Prove that the designational cross-point property implies the functional cross-point property.


Solutions

22 Suppose F defines the function f. By the hypothesis, there is some Y that designates FY. Then FY designates f(FY). We then take X = FY, and so X designates f(X).

23 Suppose F defines f and G defines g. Then by hypothesis, there are expressions Z1 and Z2 such that Z1 denotes GZ2 and Z2 denotes FZ1. Then FZ1 denotes f(GZ2) and GZ2 denotes g(FZ1). And so we take X = FZ1 and Y = GZ2. Then X denotes f(Y) and Y denotes g(X).

Self-referential and cross-referential languages


Suppose now that certain expressions of L are called sentences and certain of the sentences are called true sentences. We shall say that a property P of expressions of L is defined by an expression H if for any expressions X and Y such that X designates Y, HX is a sentence and is a true sentence if and only if Y has the property P. (For example, in the language we have been studying, J defines the property of being read by John, since if X designates Y, then JX asserts that John is reading Y, and accordingly is true if and only if Y has the property of being read by John.) Now, let us say that L is self-referential if for any property P definable in L, there is a sentence X such that X is true if and only if it has the property P. (In a purely extensional sense, such a sentence X can be thought of as asserting that it has property P.)

"ms's

CAD

.U)

y-,

Exercise 24 Prove that if L has the designational fixed point property then L is self-referential.

Let us now say that L is cross-referential if for any two properties P1 and P2 definable in the language, there are sentences X and Y such that X is true if and only if Y has property P1 and Y is true if and only if X has property P2.

Exercise 25 Prove that if L has the designational cross-point property, then L is cross-referential.

A fact about one-sided quotational systems

Call L a one-sided quotational system if there is an expression - which we will take to be the symbol * - such that for any expression X, the expression *X designates X.


Exercise 26 Suppose L is a one-sided quotational system having the designational fixed point property. Prove that L has the designational cross point property.

Solutions

24 Suppose H defines property P in L. By hypothesis, there is some Y that designates HY. Then HY is true if and only if HY has property P. We thus take X to be HY.

25 Suppose H1 defines P1 and H2 defines P2 in L. By hypothesis there are expressions Z1 and Z2 such that Z1 designates H2Z2 and Z2 designates H1Z1. Then H1Z1 is true if and only if H2Z2 has property P1 and H2Z2 is true if and only if H1Z1 has property P2. And so we take X = H1Z1 and Y = H2Z2.

26 Assume the hypothesis. Take any expressions E1 and E2. By hypothesis there is some X that designates E1*E2X. Then take Y = *E2X. Alternatively, there is some Y that designates E2*E1Y. Then we can take X = *E1Y.

§7 A pseudo-quotational system

Before leaving the subject of quotational systems, I would like to briefly mention a streamlined method of achieving self-reference and cross-reference which does so in a particularly simple manner. The essential feature of a quotational system is that a named expression actually appears as a part of its name. The system to which I now turn also has this feature, but neither two-sided nor one-sided quotation is used. Accordingly, the system might aptly be called pseudo-quotational.

Let us now take the six symbols J, P, W, J, P, and W. For any expression X, we now boldly write JX to express the proposition that John is reading X. (We don't use any name of X, unless we regard X as its own name.) And we write JX to mean that John is reading XX (the repeat of X). And, of course, we use P and W similarly (for Paul and William).

Self-reference now is completely trivial: an expression X that asserts that John is reading X is obviously JJ. Cross-reference, though not quite so trivial, is simple, compared to what we have done before. We want sentences X and Y such that X asserts that John is reading Y and Y asserts that Paul is reading X. One solution is X = JPJ; Y = PJPJ. Another is X = JPJP; Y = PJP.


Exercise 27 In this system, find sentences X, Y, and Z such that X asserts that John is reading Y, Y asserts that Paul is reading Z, and Z asserts that William is reading X.

§8 Some other methods of self-reference


Let us now briefly consider some other methods of achieving self-reference. In both quotational and pseudo-quotational systems, the expression named actually appears as part of its name. We now consider two other setups in which this is not the case. One method is what Quine [16] calls designation by spelling. In this method, there is a finite or denumerable, ordered alphabet of symbols including two special symbols, which we will currently assume to be S and the subscript. We then use the expressions S1, S11, S111, ... as respective names of the first, second, third, ... symbols respectively. A compound expression is "spelled out" (named) by replacing each symbol by its name. For example, let us consider the following four symbols:

J   N   S   1

The names of these four symbols are S1, S11, S111, and S1111. (Note that S itself has a name, namely S111, and the name of the subscript is S1111.) Then the name of the compound expression SNJ is S111S11S1; the name of S11N is S111S1111S1111S11.

We now redefine the norm of an expression X as X followed by its own name, and we let N abbreviate "the norm of." That is, N followed by the name of an expression X (not by X itself!) designates the norm of X (e.g., since S1 is the name of the symbol J, then NS1 designates the norm of J, which is JS1). Again, we let J abbreviate "John is reading," and if we want to write a sentence that asserts that John is reading a given expression X, we write down J followed, not by X itself, but by an expression that designates X. [A sentence that asserts that John is reading JJ is not JJJ, but JS1S1.] What sentence X asserts that John is reading the very sentence X? JNS1S11 is such a sentence. [Compare with JN"JN".]

Now let's consider self-reference via Godel numbering. There are many useful ways to assign Godel numbers to expressions, and we now consider a way that I have termed lexicographical Godel numbering. Given a finite ordered alphabet a1, a2, ..., an of symbols, we order all expressions in these symbols lexicographically, i.e., according to length, and then alphabetically within each group of the same length. For example, if n = 3, our ordering of all expressions in the symbols a1, a2, a3 begins as follows: a1, a2, a3, a1a1, a1a2, a1a3, a2a1, a2a2, a2a3, a3a1, a3a2, a3a3, a1a1a1, a1a1a2, a1a1a3, a1a2a1, ...


We then take the Godel number of an expression to be the position it occupies in the sequence; for example, for n = 3, the (lexicographical) Godel number of a1a1 is 4, and the Godel number of a1a1a2 is 14. Let us now take the following three symbols.

J   N   1

We consider the lexicographical Godel numbering of all words (expressions) in these three symbols (the symbols being ordered in the given ordering J, N, 1), so that, e.g., the Godel number of NN is 8. For any positive integer n, we let n̄ consist of a string of 1s of length n (e.g., 3̄ = 111). By the Godel numeral of an expression X, we mean n̄ where n is the Godel number of X (e.g., the Godel numeral of NN is 11111111). And by the norm of an expression we now mean the expression followed by its Godel numeral (e.g., the norm of 1J is 1J1111111111). By a sentence we mean an expression of one of the two forms Jn̄ (n is any positive integer) or JNn̄. We interpret Jn̄ to mean that John is reading that expression whose Godel number is n, and we interpret JNn̄ to mean that John is reading the norm of the expression whose Godel number is n. Then a sentence that asserts that John is reading it is JN11111 (since 5 is the Godel number of JN). Self-reference via Godel numbering will play a crucial role in later chapters.
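Lexicographical Godel numbering is just bijective base-n numeration, so it is easy to compute. The sketch below (mine, not part of the text) reproduces the numbers quoted above.

```python
def lex_godel_number(expr: str, alphabet: str) -> int:
    """Position of expr when all strings over the alphabet are listed first by length,
    then alphabetically within each length (bijective base-n numeration)."""
    n = len(alphabet)
    value = 0
    for symbol in expr:
        value = value * n + alphabet.index(symbol) + 1
    return value

# With the alphabet a1, a2, a3 (written here as '1', '2', '3'):
assert lex_godel_number('11', '123') == 4     # a1a1 is the 4th expression
assert lex_godel_number('112', '123') == 14   # a1a1a2 is the 14th

# With the alphabet J, N, 1 in that order:
assert lex_godel_number('NN', 'JN1') == 8     # the Godel number of NN is 8
assert lex_godel_number('JN', 'JN1') == 5     # so JN11111 asserts that John is reading
                                              # the norm of JN, i.e. JN11111 itself
```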

Part II  Self-reference in a more general setting

§9 A more general setting

We now consider self-reference in a far more general setting than that of quotational systems or Godel numbering, and show how most of the results of Part I are but special cases of some very simple but general principles, to which we now turn. We consider a language L in which certain expressions are called predicates and certain expressions are called sentences, and we are given a rule that assigns to each predicate H and any expression X a sentence denoted H(X), which we call the result of applying H to X. (Informally, predicates are names of properties, and the sentence H(X) is thought of as asserting that X has the property named by H.) How a predicate H is applied to an expression X to yield a sentence H(X) varies from system to system. For two-sided quotational systems, we take H(X) to be H"X". Thus, for a predicate H (such as "J", for "John is reading"), H"X" can be thought of as asserting that X has the property named by H. For one-sided quotational systems, we take H(X) to be H*X. Thus, for a predicate H, the sentence H*X asserts that X has the property named by H. For systems in which Godel numbering takes the place of quotation, H(X) will be the result of performing a certain operation on H and the Godel number of X - as we will see in a later chapter.

Next, we are given some equivalence relation¹ ≡ between sentences. What this relation is varies from system to system. (For example, we might have some sentences determined as true, by some definition or other of truth, and we might then define two sentences to be equivalent if they are either both true or both not true. Or we might have an axiom system in which certain sentences are provable in the system, in which case we might call two sentences equivalent if they are either both provable or both non-provable.) We write X ≡ Y to mean that X is equivalent to Y.

¹That is, a relation ≡ such that for any sentences X, Y, and Z: (1) X ≡ X; (2) if X ≡ Y then Y ≡ X; (3) if X ≡ Y and Y ≡ Z then X ≡ Z.

And now we shall use the term diagonalization in a far more general sense than was used in Part I. We shall say that a predicate H# diagonalizes a predicate H, or that H# is a diagonalizer of H, if for every predicate K, H#(K) is equivalent to H(K(K)). We call a sentence X a fixed point of a predicate H if X is equivalent to H(X). The following theorem, despite its simplicity, generalizes many fixed point theorems that occur in metamathematics.

Theorem 1 If H has a diagonalizer, then there is a fixed point for H.

Proof Suppose H# diagonalizes H. Then for every predicate K, the sentence H#(K) is equivalent to H(K(K)). We take H# for K, and so H#(H#) is equivalent to H(H#(H#)). Thus H#(H#) is a fixed point of H.

Remark If we think of H as the name of a property of expressions, then a fixed point X of H can be thought of as asserting that X itself has the property named by H, and is in that sense self-referential.
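Theorem 1 can be watched in action on a miniature two-sided quotational system. In the Python sketch below (an illustration of mine; it takes two sentences to be equivalent when they say John is reading the same expression, a special case of the truth-based equivalence mentioned above), H = J, H# = JN, and the fixed point produced by the proof is JN"JN".

```python
def quote(e): return '"' + e + '"'
def norm(e):  return e + quote(e)

def apply_pred(H, X):
    """The result H(X) of applying predicate H to expression X: H followed by "X"."""
    return H + quote(X)

def reading(sentence):
    """Interpret a sentence J N...N "X": John is reading the k-fold norm of X."""
    body = sentence[1:]
    k = 0
    while body.startswith('N'):
        k += 1
        body = body[1:]
    target = body[1:-1]          # strip the outermost quotation marks
    for _ in range(k):
        target = norm(target)
    return target

def equivalent(s1, s2):
    return reading(s1) == reading(s2)

H, H_sharp = 'J', 'JN'
# H# diagonalizes H: H#(K) is equivalent to H(K(K)) for every predicate K tried here.
for K in ['J', 'JN', 'JNN']:
    assert equivalent(apply_pred(H_sharp, K), apply_pred(H, apply_pred(K, K)))
# Hence, as in the proof of Theorem 1, H#(H#) is a fixed point of H.
fixed = apply_pred(H_sharp, H_sharp)          # this is JN"JN"
assert equivalent(fixed, apply_pred(H, fixed))
```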

§10 Applications to quotational systems

Let us see how Theorem 1 generalizes some of the results of Part I. We now turn back to the study of quotational systems. In the sentences considered in Part I, each sentence was given an interpretation (e.g., for two-sided quotational systems, J"X" was interpreted to mean that John is reading X, and in one-sided quotational systems, J*X was also interpreted to mean that John was reading X), and we will call two sentences equivalent if the truth of either one implies the truth of the other (e.g., the sentences JN"X" and J"X "X" " are equivalent).

By an N-system let us mean a two-sided quotational system in which there is an expression - which we will take to be the letter "N" - such that for any predicate H, HN is a predicate, and for any expression X, the sentence HN"X" is equivalent to H"X "X" ".

By an A-system let us mean a one-sided quotational system in which there is an expression - which we will take to be "A" - such that for any predicate H, the expression HA is a predicate, and for every expression X, the sentence HA*X is equivalent to H*X*X. By an R-system let us mean a one-sided quotational system in which there is an expression - which we will take to be "R" - such that for any predicate H, the expression HR is a predicate and for any expression X, the sentence HR*X is equivalent to H*XX. (Miniature A-systems and miniature R-systems were informally presented in §§4 and 5.)

In an N-system, for any predicate H and any predicate K, the sentence HN"K" is equivalent to H"K "K" ", which means that HN diagonalizes H. Therefore, by Theorem 1, H has a fixed point, which by the proof of Theorem 1 is seen to be HN"HN". In an A-system, for any predicates H and K, HA*K is equivalent to H*K*K, hence HA diagonalizes H, and so by Theorem 1 and its proof, HA*HA is a fixed point of H. In an R-system, the situation is different. Although HR does not diagonalize H (HR*K is equivalent to H*KK, not to H*K*K), H nevertheless does have a fixed point, namely HR*HR*. This, however, is not a consequence of Theorem 1, but is a consequence of the stronger Theorem 1# below.
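The A-system and R-system fixed points can likewise be checked mechanically. The sketch below (mine, not part of the text; it reuses the toy one-sided reading in which two sentences are equivalent when they say John is reading the same expression) verifies that HA*HA and HR*HR* are fixed points of H, both for H = J and for compound predicates.

```python
def associate(e): return e + '*' + e
def repeat(e):    return e + e

def denotation(d):
    """*X denotes X; AX denotes the associate of what X denotes; RX its repeat."""
    if d.startswith('*'): return d[1:]
    if d.startswith('A'): return associate(denotation(d[1:]))
    if d.startswith('R'): return repeat(denotation(d[1:]))
    raise ValueError(d)

def equivalent(s1, s2):
    """Sentences of the form J<designator> are equivalent when they designate the same expression."""
    return denotation(s1[1:]) == denotation(s2[1:])

def apply_pred(H, X):
    return H + '*' + X           # H(X) in a one-sided quotational system is H*X

for H in ['J', 'JA', 'JR', 'JRA']:
    fixed_A = apply_pred(H + 'A', H + 'A')    # HA*HA
    fixed_R = H + 'R*' + H + 'R*'             # HR*HR*
    assert equivalent(fixed_A, apply_pred(H, fixed_A))
    assert equivalent(fixed_R, apply_pred(H, fixed_R))
```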

§11 Near Diagonalization

We shall say that a predicate H° is a near diagonalizer of a predicate H if for every predicate K there is at least one expression X (but not necessarily a predicate) such that H°(X) is equivalent to H(K(X)). Of course, a diagonalizer H# of H is also a near diagonalizer of H, since for any predicate K there is an expression X - namely K itself - such that H#(X) ≡ H(K(X)). Therefore the following theorem is stronger than Theorem 1.

Theorem 1# If H has a near diagonalizer H°, then H has a fixed point.

Proof Suppose that H° is a near diagonalizer of H. Since for every predicate K there is some expression X such that H°(X) ≡ H(K(X)), we can take H° for K, and hence there is some expression X (not necessarily a predicate) such that H°(X) ≡ H(H°(X)). Thus H°(X) is a fixed point of H.

An application

Let us see how this applies to R-systems. Although HR is not a diagonalizer of H, it is a near diagonalizer of H, since for any predicate K, HR*K* ≡ H*K*K*, and so if we take K* for X, then HR*X ≡ H*K*X, and so H°(X) ≡ H(K(X)) where H° is HR. And so HR is a near diagonalizer of H. Then by Theorem 1# and its proof, HR*HR* is a fixed point of H, which is as it should be.

§12 Cross-reference Let us return to the study of systems not necessarily quotational. We will call a pair (X1, X2) of sentences a cross-point for a pair (H1, H2) of predicates if X1 - H, (X2) and X2 - H2(X1). (If we interpret H1, H2 as names of properties, X1 asserts that X2 has the property named by H1 and X2 asserts that X1 has the property named by H2. Thus each of the two sentences ascribe a property to the other.)

Theorem 2 A sufficient condition for (H1, H2) to have a cross-point is that there exists a predicate L such that for every predicate K, the sentence L(H2(K)) is equivalent to H1 (H2(K(H2(K)))).

We will prove a strengthening of Theorem 2, but we first remark that although Theorem 2 has an application to A-systems, it does not have any application to R-systems, whereas the following strengthening of Theorem 2 does.

III

Theorem 2# A sufficient condition for (H1, H2) to have a cross-point is that there exists a predicate L such that for every predicate K there is an expression X such that L(H2(X)) - Hl(H2(K(H2(X)))). car

Proof For then, taking L for K, there is an expression X such that

L(H2(X)) - H1(H2(L(H2(X)))). We then take X1 to be L(H2(X)) and H1(X2), and of course X2

III

X2 to be H2(X1), and so X1 X2 is H2(X1).

H2(X1), since

20

Chapter 1. Introduction to self-reference

Of course the hypothesis of Theorem 2 implies the hypothesis of Theorem 2# (taking X = L), and so Theorem 2 follows from Theorem 2#. Also, if the stronger hypothesis of Theorem 2 holds, then L(H2(L)) - H1(H2(L(H2(L)))), and so if we take X1 = L(H2(L)) and X2 = H2(L(H2(L))), then (X1, X2) is a cross-point of (Hi, 112).

§13 Application to one-sided quotational systems Let us first consider A-systems. Given two predicates H1 and H2, take L to be H1A. Now, for any predicate K, H1A*H2*K is equivalent to H1*H2*K*H2*K, which means that L(H2(K)) - HI(H2(K(H2(K)))), and so the hypothesis of Theorem 2 holds. Therefore by Theorem 2 and its proof, if we take X1 = H1A*H2*H1A and X2 = H2*H1A*H2*H1A, then (XI, X2) is a cross-point of (H1i H2).

For R-systems, given two predicates H1 and H2, we take L to

be H1R, and for any predicate K, we take X to be K*.

Then

L(H2(X)) - H1(H2(K(H2(X)))) (since H1R*H2*K* is equivalent to HI*H2*K*H2*K*), and so the hypothesis of Theorem 2# is satisfied. gin'

Then by Theorem 2# and its proof, a cross-point (X1, X2) of (H1, H2) is obtained by taking X1 = H1R*H2*H1R* and X2 = H2*H1R*H2*H1R*.

Exercise 28 In an A-system, a pair (H1, H2) of distinct predicates has two distinct cross-points. Why is this a consequence of Theorem 2? Also, what are the two cross-points? The same is true for R-systems, using Theorem 2# in place of Theorem 2. What are the two cross-points?

Exercise 29 In the pseudo-quotational system of §7, JJ is a fixed point of J. Why is this a consequence of Theorem 1? Also, (JPJ, PJPJ) is a cross-point of (J, P). Why is this a consequence of Theorem 2?

Exercise 30 Prove that a sufficient condition for (HI, H2) to have a cross-point is that there is a predicate L such that for every predicate K, L(K) - HI(H2(K(K))). Exercise 31 Suppose that there is a predicate L such that for every predicate K there is an expression X such that L(X) - HI(H2(K(X))). Does (HI, H2) necessarily have a cross-point?

Exercise 32 Suppose that every predicate has a fixed point and that for any predicate H1 and H2 there is a predicate H such that for every

§13 Application to one-sided quotational systems

21

expression X, H(X) - Hl(H2(X)). Prove that every pair (Hl, H2) of predicates has a cross-point.

.,.

Exercise 33 [ A Miniature Incompleteness Theorem] Suppose that we now have an axiom system A in which certain sentences of the language L are proved. To each sentence X is assigned a sentence -X called the negation of X, and a sentence X is called refutable (in A) if its negation -X is provable (in A). The system A is called consistent if no sentence is both provable and refutable in A. The system A is called complete if every sentence is either provable or refutable in A; otherwise A is called incomplete. A sentence X is called decidable in A if it is either provable or refutable in A; otherwise undecidable in A. Suppose now we are given the following three conditions.

Gl To each predicate H is assigned a diagonalizer H#, i.e., a predicate such that for every predicate K, the sentence H# (K) is provable if and only if H(K(K)) is provable.

G2 To each predicate H is assigned a predicate H' such that for every expression X, H'(X) is provable in A if and only if H(X) is refutable in A.

G3 There is a predicate P such that for every sentence X, P(X) is provable in A if and only if X is provable in A. Prove that if the system is consistent, then it must be incomplete!

Exercise 34 Suppose that in the preceding exercise we weaken condition Gl to the following condition Gi: every predicate H has a fixed point (a sentence X that is provable if and only if H(X) is provable). (This is a weakening of condition Gl by virtue of Theorem 1.) Now, suppose that A is consistent and satisfies conditions Gi, G2, and G3. Is A necessarily incomplete?

Exercise 35 (For those who have read the sections on one-sided quotation.) Let us consider a computing machine that prints out various expressions built from the following four symbols.

,.,PR* Let us call an expression printable if the computer can print it. We assume the computer is programmed such that every expression that the machine can print will be printed sooner or later (allowing an infinite future).

Chapter 1. Introduction to self-reference

22

By a sentence we shall mean any expression of one of the following four forms (where X is any expression build from the four symbols).

(1) P*X (2) PR*X

(3) -P*X (4) PR*X This language is build on one-sided quotation with * as the quotation symbol, and is an R-system, with the symbol R standing for "the repeat of." The symbol P stands for "printable." Thus truth of sentences is defined as follows.

(1) P*X is true if X is printable. (2) PR*X is true if XX (the repeat of X) is printable.

(3) -P*X is true if X is not printable. (4) -PR*X is true if XX is not printable. We have given a perfectly precise definition of what it means for a sentence to be true, and we have here an interesting case of self-reference: G1.

the computer is printing out various sentences that assert what it can or cannot print, and so it is describing its own behavior! (It somewhat resembles a self-conscious organism, and such computers are accordingly of interest to those working in artificial intelligence.) (1)

We are given that the machine is totally accurate in that all sentences printed by the machine are true. And so, for example, if the machine ever prints P*X then X really is printable (X will be printed by the machine sooner or later). Also if PR*X is printable, it is true, which means that XX must be printable. Now, suppose that X is printable; does it necessarily follow that P*X is printable? No; if X is printable then P*X is certainly true, but we are not given that the machine is capable of printing all true sentences, but only that the machine never prints any false ones. Is it possible that the machine can print all false sentences? The answer ..O

is no, and the problem for the reader is to find a true sentence that the machine cannot print. G].

Exercise 36 There also exists a curious pair (X, Y) of sentences such that it can be shown from the given conditions of Exercise 35 that one of the two must be true but unprintable, but there is no way to determine

§13 Application to one-sided quotational systems

23

which one it is! Can the reader find such a pair? (Hint: construct sentences

X and Y such that X asserts that Y is not printable and Y asserts that X is printable. There are two different ways of doing this!)

Solutions 33 Assuming the given conditions, the sentence P# (P#') is undecidable

in A. Reason: P#(P#') is provable if P(P#'(P#')) is provable (by G1, taking P for H), iff P#' (P#') is provable (by G3, taking P#' (P#') for X) if and only if P#(P#') is refutable (by G2, taking P# for H and P#' for X). Thus P#(P#') is provable iff it is refutable. This means that it is

.'3

either both provable and refutable or neither provable nor refutable. By the assumption of consistency, it is not both provable and refutable, hence it is neither - it is undecidable in the system A. Another undecidable sentence in A is P(P #(P'#)). Can the reader see why? (If not, see the solution to Exercise 34.)

ova

34 Yes; if we replace G1 by Gi, it still follows that A is incomplete, for suppose X is a fixed point of the predicate P'. Thus X is provable iff P' (X ) is provable, if P(X) is refutable. Thus X is provable iff P(X) is refutable. MSS

0.1

But also, by G3i X is provable if P(X) is provable. Therefore P(X) is provable if P(X) is refutable, and so by the assumption of consistency,

P(X) is neither provable nor refutable. Another point of interest: let us go back to the last exercise, in which we are given G1 instead of the weaker condition G. We have just seen Biz

(using just G2 and G3 and the assumption of consistency) that if X is any fixed point of P', then P(X) is an undecidable sentence of A. But with G1, a fixed point of P' is P #(P'#) (as we saw in the proof of Theorem 1), and so P(P'#(P'#)) is therefore undecidable in A (as stated at the end of the solution of the last exercise). Of course we can see this directly: P(P'#(P #)) is provable iff P'# (P'#) is provable if P'(P'#(P'#)) is provable if P(P'#(P'#)) is refutable. Thus the sentence P(P'#(Pt#))

°-o

is provable if it is refutable, and so by consistency it is neither.

air.

The sentence -PR*-PR* is true if the repeat of -PR* is not printable, but this repeat is the very sentence -PR*-PR*. Thus -PR*-PR* 35

is true if it is not printable, hence by the assumption that no false sentence is printable, the sentence must be true, but the machine cannot print it. 36

Let X be the sentence -P*PR*-P*PR* and Y the sentence

PR*-P*PR*. Thus X is the sentence -P*Y, hence is true iff Y is not

24

Chapter 1. Introduction to self-reference

printable. And Y is true if the repeat of -P*PR* - which is X - is printable. Thus X is true if Y is not printable, and Y is true if X is printable. Now, if X is not true, then Y is printable, hence true, which means that X is printable, and this is contrary to the given condition that only true sentences are printable. Therefore X must be true, and therefore Y is not printable. Next, if Y is true, then Y is a sentence that is true but not printable. If Y is not true, then X is not printable (since Y is true if X is printable), in which case X is true but not printable. ova

In summary, if Y is true, then Y is true but not printable, and if Y is not true, then X is true but not printable. Thus at least one of the sentences X, Y is true but not printable, but there is no way to tell which one it is. Another pair of sentences that works is X = -PR*P*-PR* and Y =

P*-PR*P*-PR*.

Chapter 2

Some classical fixed point arguments compared

,.y

In the last chapter we saw how fixed points are related to self-reference. We shall now look at fixed points in a larger perspective. Although no prior knowledge of mathematical logic, recursion theory, or combinatory logic is presupposed, Part I of this chapter consists of abstract versions of classical results in these three areas. We present them as problems, which the reader might have fun trying to solve. Solutions are given at the end of Part I. In Part II, we show how all five results of Part I are but special cases of one very general fixed point theorem, and then in Part III we show that all the preceding results go through under a curious weakening of the

xe.

hypotheses.

Part I Five fixed point arguments §1 An argument from combinatory logic By an applicative system is meant a set N together with an operation that assigns to each ordered pair (x, y) of elements of N an element of N that we denote xy. We call xy the result of applying x to y. The applicative system that will interest us most in this study is the system of combinatory logic that we will study in later chapters. It is closely related to the A-calculus, which can serve as a basis for the field known as recursion theory - some of which we will study later on. Combinatory logic and the A-calculus are playing an increasingly important role these days in computer science.

Chapter 2. Some classical fixed point arguments compared

26

For now, we will not consider anything as specific as combinatory logic,

but will consider a fixed point result in applicative systems of a much broader sort. In an applicative system, an element b (of N) is called a y,,

fixed point of an element a if ab = b. A solution of the following problem generalizes a famous result in combinatory logic. u-,

Problem 1 Suppose we are given the following two conditions. (1) There is an element m such that for every element x, mx = xx. [Such an element m might be called a duplicator.]

(2) For any elements a and b there is an element c such that for every element x, cx = a(bx). [This condition might be called compositional closure.]

The problem is to prove that every element a has a fixed point.

§2 Godel sentences

Dar

We now turn to a generalization of a famous problem from metamathematics (the theory of mathematical systems). Throughout this chapter, the word "number" shall mean a positive integer. We consider a denumerable collection C of sets of positive integers arranged in a denumerable sequence A1, A2, ...,An ..... We consider all sentences i c A; . We call the sentences i E A; true if i is a member of the set Aj. There are only denumerably many of these sentences (since there are only denumerably many ordered pairs (i, j) of positive integers). We arrange all these sentences in some 1-1 sequence S1, S2, ... , Sn..... Thus for each n, there is some i and j such that Sn is the sentence i E A2. We call n the Godel number of Sn. And for any set A of numbers, we call a sentence Sn a Godel sentence for A if the following condition holds.

Snistrue f-+nEA car

[In other words, either Sn is true and its Godel number is in A, or Sn is not true and its Godel number is not in A. In a purely extensional sense, a Godel sentence for A can be thought of as asserting that its own Godel number is in A.] For any number i, let d(i) be the Godel number of the sentence i c A. [Thus Sd(i) is the sentence i c Ai.] For any set A, by d-1(A) is meant the d(i) E A. set of all numbers i such that d(i) E A. Thus i E d-1(A)

Problem 2 Suppose that we are given that for every set A in C, the set d-1(A) is also in C. Prove that for every A E C there is a Godel sentence for A.

--1

§3 An argument from recursion theory

27

Problem 2a To highlight the significance of the above problem, suppose we have a mathematical system (M) in which certain sentences of the ."j

(D(

above problem are provable. And let us assume that the system is correct in that only true sentences are provable - in other words, for any numbers i and j, if the sentence i E Aj is provable in the system, then i really is a member of A;. We continue to assume, as in Problem 2, that for every set A in C, the set d-' (A) is in C. In addition, suppose that the following two conditions hold. a-,

..O

(1) For every set A in C, its complement A is in C. [By the complement A of A we mean the set of all numbers not in A.]

(2) The set P of Godel numbers of the provable sentences is one of the sets in C.

The problem now is to show that some true sentence is not provable in 'nom

the system (M) - in other words that for some numbers i and j, i is a member of A;, but the sentence i c Aj asserting this is not provable in the system.

§3 An argument from recursion theory We let N be the set of numbers (positive integers) and w, W2, ... , Wn,.. . be a denumerable sequence of subsets of N. We let E be a set of functions from N into N.

Problem 3 Suppose that there is a function F(x, y) from ordered pairs S."

of numbers to numbers and a function d(x) from numbers to numbers such that the following three conditions hold. (1) For every number n, Wd(n) = WF(n,n)

(2) For every number a there is a number b such that for all numbers x, WF(b,x) = WF(a,d(x))

(3) For every function f (x) in E there is a number a such that for all numbers x, WF(a,x) = Wf(x).

The problem is to prove that for every function f in E, there is at least one number n such that W f(n) = W. Remark

This is an abstract version of a famous result in recursion theory known as the recursion theorem (in one of several forms). We will later be studying

28

Chapter 2. Some classical fixed point arguments compared

sets of numbers known as recursively enumerable sets. [Informally, a recursively enumerable set is one that can be generated by a purely mechanical process - as, for example, the set of even numbers, which can be generated RR.

by starting with 2 and adding 2 to any number already obtained - thus generating the sequence 2,4,6,... , 2n, 2n+2,....] We will also be studying a class of functions known as recursive functions which, informally speaking, are functions that can be computed by a purely mechanical process. A precise definition of recursively enumerable sets and recursive functions will be given in a later chapter. Meanwhile, I wish to point out a fact that we will later verify in detail - namely, that it is possible to arrange all the recursively enumerable sets in a sequence W1, W2, ... ,Wn, ... and to find recursive functions F(x, y) and d(x) such that the given conditions of the previous problem do hold (taking E to be the class of recursive functions of one argument). The surprising conclusion, then, is that for any recursive function f (x) there must be at least a number n such that the sets Wn and W f(n) are identical!

As I said, this result is one form of what is known as the recursion theorem, which has numerous application. Many other forms and variants of this form will be studied in this volume - some of them being of a more complex nature. We now turn to another relatively simple form.

§4 Another argument from recursion theory CAD

In addition to the sequence w1, W2, ... , Wn, of sets, we have an infinite sequence R1 (x, y), R2 (x, y), ... , Rn (x, y), ... of binary relations of positive integers. We let C be the collection of all the relations Rn(x,y). con

At this point it will be useful to use the notation of class abstracts: Given any property P(x) of positive integers, by {x : P(x)} is meant the set of all numbers x such that x has the property P.

Problem 4 Suppose there is a function d(x) from numbers to numbers that satisfies the following two conditions.

(1) For every number n, Wd(n) = {x : Rn(x, n)} (in other words, Wd(n) is the set of all numbers x such that Rn (x, n) - or equivalently, for any numbers n and x, x E Wd(n) Rn(x, n). (CD

(2) For every relation R(x, y), in C, the relation R(x, d(y)) is in C (in other words, for every relation R in C there is a relation R' in C such that for all numbers x and y, R'(x, y) R(x, d(y)).

The problem is to prove that there exists at least one number n such that wn = {x : R(x, n)}.

§5 Self-reproducing machines

29

(gyp

CJ'

car

s."

(z7

009

.°.

This problem is again taken from recursion theory. In addition to the recursively enumerable sets, we have recursively enumerable binary relations of positive integers. [A binary relation is a set of ordered pairs, and a recursively enumerable relation R(x, y) can be informally described as one whose members (the ordered pairs) can be generated by a purely mechanical process. A simple example is the set of all ordered pairs (x, y) such that 5x = y. We start with the pair (1, 5) and given any pair (x, y) already obtained, we can form the new pair (x + 1, y + 5). We thus generate the sequence (1, 5), (2, 10),... , (n, 5n), ....] Now the recursively enumerable sets can be arranged in a sequence W1, W2, , Wn) ... and the recursively enumerable relations can be arranged in a sequence Ri(x, y), R2 (X, y), ..., Rn(x, y), ... and a recursive function d(x) exists such that conditions (1) and (2) of the above problem hold. A number n is called an index of a recursively enumerable set A if A = Wn. The amazing thing, then, is that for any recursively enumerable relation R(x, y) there exists a number n such that the set of x's for which R(x, n) holds - this set is a recursively enumerable set having the very number n as an index! This is another form of the result known as the recursion theorem. In later chapters, after giving a precise definition of recursive enumerability, v..

(Dc

O,!

(0D

...

cam

we will see that there is a recursive function d(x) such that (1) and (2) do hold. Meanwhile, it should be of interest to note that without knowing what recursively enumerable sets and relations are, knowing just (1) and (2), one can know that the conclusion follows. coy

r+.

§5 Self-reproducing machines '.3

Can a machine reproduce itself? Early results along these line were obtained by von Neumann. Hartley Rogers[20] uses the recursion theo-

car

car

'.7

rem (that we will study later) to obtain such a result. I will now show you a construction along these lines that does not depend on the recursion theorem, but uses a principle similar to that used in the proof of the recursion theorem. What now follows is completely self-contained. We are given a denumerable sequence P1, P2, ... , Pn,... for programs for constructing robots. The robots construct other robots and each robot bears a program which instructs it as to what robot it should create and what program to give it. A robot X will be called self-reproducing if the robot that it creates has the same program as X. For any two numbers x car

'Z3

7-+

cod

and y we will say that Robot x creates Robot y to mean that any robot .,,

0.H

with program PP will create a robot with program Py. We are given that there is a function d(x) from numbers to numbers and a function F(x, y) from ordered pairs of numbers to numbers such that

30

Chapter 2. Some classical fixed point arguments compared

the following two conditions hold.

(1) For any number x, Robot d(x) creates Robot F(x, x).

(2) There is a number b such that for all numbers x and y, if Robot y creates Robot F(b, x) then Robot d(x) creates Robot y.

Problem 5 The problem is to show that under these conditions there must exist at least one number n such that Robot n creates Robot n (and thus any robot with program Pn is self-reproducing).

Solutions to problems 1-5 1

By (1) there is an element m such that for all elements x, mx = xx.

Given a, it follows from (2) (taking m for b) that there is an element c such

that for all x, cx = a(mx), hence cx = a(xx). Taking c for x, we have cc = a(cc), and so cc is a fixed point of a. 2 Take any set A in C. Then d-1(A) is in C, hence for some number i, Ai = d-1(A). Then for all x, x E Ai +-4 d(x) E A. Taking i for x, i c Ai H d(i) E A. But d(i) is the Godel number of the sentence i E A.

And so the sentence i E Ai is a Godel sentence for A (it is true if and only if its Godel number d(i) is in A).

2a By (1) and (2) it follows that the complement P of the set P is in car

C, hence for some number i, Ai = P. Then, as_ we saw in the solution of Problem 2, the sentence i E Ai is true if d(i) E P, but d(i) E P if d(i) ¢ P, if d(i) is not the Godel number of a provable sentence of (M). However, d(i) is the Godel number of the sentence i E Ai, and so i E Ai is true if it is not provable in (M). This means that either the sentence is true but not provable in (M), or not true but provable in (M). The latter alternative is contrary to the assumption that only true sentences are provable in (M). And so the sentence is true but not provable in (M). Take any function f (x) in E. By (3) there is a number a such that for all x, WF(a,x) = Wf(x). Then by (2) there is a number b such that for all 3

x, WF(b,x) = WF(a,d(x)) Hence WF(b,x) = W f(d(x)) Taking b for x, WF(b,b) = W f (d(b)). But also by (1), Wd(b) = WF(b,b). Therefore Wd(b) = W f (d(b)), and so

Wn = Wf(n) for n = d(b).

§6 A relational fixed point theorem

31

cry

°:v

4 Given R(x, y) in C, by (2) the relation R(x, d(y)) is Rb(x, y) for some b. Thus for all numbers x and y, Rb(x, Y) R(x, d(y)). Therefore Rb(x, b) H R(x, d(b)). But also by (1), x E wd(b) H Rb(x, b). Therefore x E wd(b) H R(x, d(b)). And so for n = d(b), wn, = {x : R(x, n)}.

Take a number b satisfying (2). Then take b for x and d(b) for y, and then by (2), if Robot d(b) creates Robot F(b, b), then Robot d(b) creates Robot d(b). But Robot d(b) does create Robot F(b, b) by (1), and so Robot d(b) does create Robot d(b). Thus Pd(b) is a program for a self-duplicating Q..

5

robot.

Part II A unification §6 A relational fixed point theorem What do the five preceding problems have in common? Theorem 6.1 below provides one such answer (and so does Theorem 6.2 below). But first let

us pause and consider why our previous results are called "fixed point" theorems. Well, this term is usually used for functions: given a function f (x) from a set N into N, an element a of N is called a fixed point of the function f if f (a) = a. We shall generalize this notion as follows: Given a binary relation R(x, y) on a set N, we shall call an element a of N a fixed point of the relation if R(a, a) holds. [Thus a is a fixed point of the function f (x) if a is a fixed point of the relation f (x) = y.] Under this slightly extended notion of "fixed point" the results of the problems of Part I are indeed fixed point theorems. In Problem 1 we were required to show that for any element a, the relation ax = y (as a relation between x and y) has a fixed point. For Problem 2, we were to show that for each expressible set A, the relation "Sx is true if y c A" has a fixed point. For Problem 3, it was to be shown that for each function f (x) in E, the relation "wx = w f(y)" has a fixed point. In Problem 4, we were to show that for any cad

relation R E C there is a fixed point of the relation "wx = {z : R(z, y)}." For Problem 5 we were to show that there is a fixed point for the relation "Robot x creates Robot y." Now for a general relational fixed point theorem, of which all our preceding results can be obtained as corollaries.

Theorem 6.1 A sufficient condition for a relation R(x, y) on a set N to have a fixed point is that there be a relation S(x, y, z) on N and a function d(x) from N to N such that (1) For every x in N, S(d(x), x, x).

Chapter 2. Some classical fixed point arguments compared

32

(2) There is at least one element b of N such that for all x and y in N, S(x, b, y) implies R(x, d(y)).

car

Proof Assume (1) and (2). Take b satisfying (2), then take d(b) for x and b for y and we have: S(d(b), b, b) = R(d(b), d(b)). But S(d(b), b, b) holds by (1), hence R(d(b), d(b)). Thus d(b) is a fixed point of R.

ado

:s'

The solution of Problem 5 is obviously that special case of Theorem 6.1 in which R(x, y) is the relation "Robot y creates Robot x" and "S(x, y, z)" is the relation "Robot x creates Robot F(y, z)." Theorem 6.1 has the following corollary.

Theorem 6.2 Suppose that N and W are sets and z('(x, y) is a mapping from ordered pairs (x, y) of elements of N to elements of W and that O(x) is a function from elements of N to elements of W. Suppose that there is a function d(x) from N into N such that the following conditions hold.

(1) For all x in N, 0(d(x)) = O(x, x). (2) For every element a of N there is an element b of N such that for all

y E N: '(b, y) = 0(a, d(y)) Then for every a c N there is some c c N such that O(c) = 0(a, c).

Proof Given a, let R(x, y) be the relation O(x) = '(a, y). Let S(x, y, z) be the relation O(x) = z'(y, z). Then by (1), S(d(x), x, x) holds for every x. Also, since for every y, 0(b, y) = 0(a, d(y)), then for every x, O(x) = 0(b, y) implies O(x) = t(a, d(y)), which means S(x, b, y) implies R(x, d(y)). Then

by Theorem 6.1, there is some c such that R(c, c), which means O(c) _ 0(a, c). Discussion

It is perhaps easier to prove Theorem 6.2 directly as follows. Given a, take b satisfying (2). Then, taking b for y, 0(b, b) ='(a, d(b)) But also by (1), O(d(b)) = 0(b, b). Therefore O(d(b)) = 0(a, d(b)), so O(c) = 0(a, c) for c = d(b). Though this proof is more direct, we find it of interest that Theorem 6.2 can be derived from Theorem 6.1 without any further diagonalization.

a..

The solution of Problem 4 is that special case of the above theorem in which N is the set of positive integers, W is the set of subsets of N, q(x) is w, and 0(x, y) is the set of all z such that Rx(z, y).

§6 A relational fixed point theorem

33

Theorem 6.2 in turn has the following corollary.

Theorem 6.3 Let - be an equivalence relation on a set N and F(x, y) be a function from ordered pairs of elements of N to elements of N. Suppose

that there is a function d(x) from N into N such that the following two conditions hold.

(1) F(x, x) - d(x) for all x E N.

(2) For every a c N there is some b c N such that for all x c N, F(b, x) - F(a, d(x)). Then for every a E N there is some c c N such that c - F(a, c).

III

Proof We let W be the set of all subsets of N. For every x E N, let Ixj be the set of all y E N such that x - y. Thus, for any x and y in N, x - y iff 1xI = M.

Now take O(x) = xj and zb(x, y) = IF(x, y)l. Then by (1),'(x, y) _ q5(d(x)) and by (2), for each a there is some b such that 4(b, x) = '(a, d(x)). Hence by Theorem 6.2, there is some c (viz d(b)) such that O(c) = '(a, c), III

which means that c - F(a, c). Remark S."

Again, a direct proof is simpler. In (2) take b for x and we have F(b, b) F(a, d(b)). Then by (1), F(b, b) - d(b). Hence d(b) - F(a, d(b)), so d(b) is a fixed point of the relation x - F(a, y). But again, it is of interest to see that Theorem 6.3 is derivable from Theorem 6.1 via Theorem 6.2. Now the solutions of Problems 1, 2, and 3 are all special cases of The-

orem 6.3. For Problem 1, take - to be identity and F(x, y) to be xy and d(x) = mx. We are given that mx = xx, and so d(x) = F(x, x). Then, given a, by the second hypothesis of the problem, there is an element b such

that for all x: bx = a(mx), hence F(b, x) = F(a, d(x)). Thus the hypotheses of Theorem 6.3 hold, hence there is an element c such that c = F(a, c), which means that c = ac. As to Problem 2, define two sentences to be equivalent if they are either

both true (both in T) or both not true. Then define x - y to mean that the sentences SS and Sy are equivalent. Take F(x, y) to be the Godel number of the sentence y c Ax, and take d(x) = F(x, x). Then of course d(x) - F(x, x). Now, the given condition of the problem is that for any element a there is some element b such that for all elements x, x E Ab if and

only if d(x) E Aa, which means that for all x, F(b, x) - F(a, d(x)). Then

34

Chapter 2. Some classical fixed point arguments compared

by Theorem 6.3, for any element a there is some c such that c - F(a, c), which means that S, is equivalent to the sentence c c Aa, and thus S, is a Godel sentence for the set Aa. As to Problem 3, define x - y to mean that wx = wy. Then the three given conditions can be rewritten:

(1) For all numbers x, d(x) - F(x, x). (2) For every a there is some b such that for all x, F(b, x) - F(a, d(x)).

(3) For every f c E, there is some a such that for all x, F(a, x) - f (x).

41v

Given f c E, take a such that for all x, F(a, x) - f (x). But by (1), (2), and Theorem 6.3, there is some c such that c - F(a, c), but F(a, c) f (c), so c - f (c), which means that w, = wf(c). We now see how the solutions of Problems 1-5 are ultimately derivable from Theorem 6.1. 4-'

Part III Quasi-diagonalizat ion §7 Some stronger results We will now show that all the previous results go through under a curious weakening of the hypotheses.

Let us reconsider Problem 1 on applicative systems. An element m

LTA

such that mx = xx for all x is sometimes called a duplicator. Let us call an element m a quasi-duplicator if for all x there is at least one element x' (not necessarily x itself) such that mx' = xx'. Now, the interesting thing is that in Problem 1, if we replace the hypothesis that there is a duplicator (the first hypothesis) by the weaker hypothesis that there is a quasi-duplicator, it will still follow that every element has a fixed point! For suppose m is a quasi-duplicator. Given any element a, there is an element b such that ,i.

for every element x, bx = a(mx) (this by the second hypothesis). Also, since m is a quasi-duplicator, there is an element b' such that mb' = W. Also, since bx = a(mx) for all x, then bb' = a(mb'). But since mb' = bb', it follows that bb' = a(bb'). And so bb' is now a fixed point of a. This curious phenomenon spreads to all the results we have proved so far. Let us begin by considering the following strengthening of Theorem 6.1.

Theorem 7.1 A sufficient condition for a relation R(x, y) on a set N to have a fixed point is that there be a relation S(x, y, z) on N and a function d(x) from N into N such that

§7 Some stronger results

35

(1)* For every element x there is an element x' such that S(d(x'), x, x').

(2)* There is an element b such that for all x and y: S(x, b, y) = [Z,

R(x, d(y)).

Note ..T

*

Hypothesis (1)* is a weakening of hypothesis (1) of Theorem 6.1. Hypothesis (2)* is the same as (2) of Theorem 6.1.

Proof Choose b so that (2)* holds. Then by (1)* there is an element b' such that S(d(b'), b, b'). But also by (2)*, taking d(b') for x and b' for y, we see that S(d(b'), b, b') implies R(d(b'), d(b')). Hence R(d(b'), d(b')).

Of course, both Theorems 6.2 and 6.3 have similar strengthenings.

Theorem 7.2 Same as Theorem 6.2, replacing hypothesis (1) by the weaker hypothesis: (1)* For all x in N there is some x' in N such that 0(d(x')) = O(x, x).

Theorem 7.3 Same as Theorem 6.3, replacing hypothesis (1) by the weaker hypothesis: (1)* For every x E N there is some x' in N such that F(x, x') - d(x'). Exercise 1 Prove Theorems 7.2 and 7.3. S."

We have already shown a strengthening of Problem 1. It can be derived as a consequence of Theorem 7.1 (or more easily from Theorem 7.3). The other four problems have similar strengthenings, as indicated in the next four exercises.

N-4

Exercise 2 Problem 2 can be strengthened in the following interesting

-N-0

manner. Again we are given a denumerable collection C of sets of positive integers, but we are not given an enumeration of C. To each number i and each set A in C is associated the sentence i c A, which is called true if and

only if i is a member of A. And these denumerably many sentences are arranged in some 1-1 sequence S1, S2, ..., Sn,, .... [Thus the sentences are cad

arranged in a given sequence, but the sets - the members of C - are not.] We are now given a function d(x) from numbers to numbers such that the following two conditions hold.

Chapter 2. Some classical fixed point arguments compared

36

(1) For every element A in C there is at least one number n such that Sd(,,,) is true if and only if n E A.

(2) [As before] For any set A in C, the set d-1(A) is in C. Prove that for every set A in C, there is a Godel sentence for A.

Exercise 3 Suppose that in Problem 3 we replace the first hypothesis by the weaker hypothesis: (1)* For every number n there is a number n' such that Wd(n') = WF(n,n') Show that the conclusion still follows.

Exercise 4 The result of Problem 4 can be strengthened as follows. We again consider a denumerable sequence wo, w1, ... , wn, ... Of subsets of the set N of natural numbers. But now we are given a collection C of binary relations on N (but not arranged in any given denumerable sequence). We are given a function d(x) from N into N satisfying the following two conditions.

(1) For every relation R E C there is at least one number n such that Wd(n) = {x : R(x, n)}.

(2) For every relation R E C there is a relation R' E C such that for all x, y in N: R'(x, y) = R(x, d(y)) Prove that for every R E C there is a number c such that w _

{x : R(x, c)}. How does this result strengthen the result of Problem 4?

Exercise 5 State and prove an analogous strengthening of the result of Problem 5.

Exercise 6 Here is a variant of Problem 5 that is apparently of incomparable strength to Problem 5. We dispense with the function d(x). We are now given that there are numbers a and b such that the following two conditions hold. (1) For every x, Robot F(a, x) creates Robot x.

(2) For every x and y, if Robot x creates Robot y, then Robot F(b, x) creates Robot F(y, F(a, y)). Prove that there is some x such that Robot x creates Robot x.

.U"

The following relational fixed point theorem, though its proof is laughably simple, is nevertheless strong enough to yield Theorem 7.1 (and hence all previous results) as a corollary.

§7 Some stronger results

37

Theorem R A sufficient condition for a relation R(x, y) on a set N to have a fixed point is that there be a relation R'(x, y) on N and a function d(x) from N into N such that (1) There is at least one element a of N such that R'(d(a), a). (2) For all elements x and y of N, R'(x, y) implies R(x, d(y)).

Exercise 7 Prove Theorem R and show how it yields Theorem 7.1 as a corollary.

The following exercise is based on a somewhat different fixed point principle.

Exercise 8 [Axioms that assert their own consistency] Let us consider

(D.

car

a setup in which we have a denumerable 1-1 sequence S1, S2,.. ., Sn,.. . of sentences and a denumerable sequence H1i H27 ... , Hn, ... of predicates and an operation assigning to each predicate HZ and each natural number n a sentence H2(n). We now imagine that we have some axiom system A in which certain sentences are provable. Let us call a system consistent if not every sentence is provable in it. And now let us say that a sentence X is consistent with A if the result of adding X as a new axiom to the axioms of A is a consistent system. The system is given an interpretation, according to which each sentence asserts a definite proposition (whether true or false). We are given a predicate C and functions d(x), e(x) from numbers to numbers such that for any numbers n and m, the following three conditions hold.

(1) C(n) asserts that Sn is consistent with A. (2) The proposition asserted by Hn(n) is also asserted by Sd(n).

(3) The proposition asserted by Hn(d(m)) is also asserted by He(n)(m). The problem is to prove that there is a sentence X that asserts its own consistency with A. Remark

In the systems studied by Godel, there are recursive functions d(x), e(x)

and a predicate C such that the conditions (1), (2), and (3) above are satisfied. Thus there is a sentence X that asserts that the adjunction of X to the axioms of the system produces a consistent system. It so happens

Chapter 2. Some classical fixed point arguments compared

38

that the sentence X is false! If it is adjoined to the axioms, the resulting system is inconsistent. This follows from Godel's second incompleteness theorem, which, roughly speaking, is to the effect that consistent systems .`3

of sufficient strength cannot prove their own consistency.

Solution to Exercise 8 We are to find a fixed point for the relation: "S., asserts that Sy is consistent with A." III

Call two sentences X and Y equivalent - in symbols, X - Y - if (CD

X and Y assert the same proposition. The predicate C is H,, for some number c. Now, III

Sd(e(c))

=

He(c)[e(c)]

III

Hc[d(e(c))]

(by (2))

(by (3), taking c for n and e(c) for m)

But Hc[d(e(c))] is the sentence C[d(e(c))], which asserts that Sd(e(c)) is consistent with A (by (1)), hence so is Sd(e(c)).

Exercise 9 The solution of Exercise 8 can also be obtained as a corollary of Theorem 6.3. How?

Chapter 33

How silence a universal machine How to to silence

Part Part I

i..

COO

"s.

As we now now turn turn to aa fascinating As further applications applications of diagonalization, diagonalization, we fascinating problems that generalize variety of problems generalize some results in recursion recursion theory theory known known as halting halting problems. problems. That they they really really do do generalize generalize theorems theorems in in recursion recursion theory will also will be be seen seen in in aa later later chapter. chapter. The results of this chapter are also incompleteness theorems, related to incompleteness theorems, as as will will be be seen seen in in the the next chapter.

Silencing Silencing aa universal machine

§§1 1 A universal machine

cam

cam/]

We consider consider aa denumerable denumerablesequence sequenceMMl, M2, ...,, M,,,, ... (allowing We Mn, ... (allowing rep1, M 2 , ... etitions) of mathematical machines, or computers, computers, that behave etitions) machines, or behave in in the the folfoloperate on on positive positive integers. integers. The The word word lowing lowing manner. manner. First of all they operate number in this this chapter will mean positive integer. Each machine number in positive integer. machine is, so to speak, in in charge of aa certain property of numbers, and its function speak, charge of numbers, and function is to try and determine determine which which numbers have have the property and which which ones don't. One feeds feeds in in aa number number (positive (positive integer) integer.)x xasasinput inputtoto aa machine machineMM and and One goes into into operation operation and one of three things happens. the machine machine goes (1) The machine machine eventually eventually halts and and flashes flashes a signal signal (say (say a green green light) (1) The signifying that the number have the the property property in in question. signifying that number xx does does have question. If this happens, happens, we say that the the machine machine affirms affirms x. (2) The machine machine eventually eventually halts and and flashes flashes a different different signal (say a red (2) The light) signifying signifying that light) that xx does does not have have the the property. property. If this happens, happens, we say say that the we the machine machine denies denies x. x. ..o

(3) The The machine machine can never determine (3) determine whether whether or or not not xx has the property, so runs runs on onforever. forever. In that case case we we say say that the the machine machine is is and so

Chapter 3. How to silence a universal machine

40 P..

silenced by x, or that x silences the machine. We will call a machine total if it is not silenced by any number.

Given two machines M and N and numbers x and y, we say that M behaves the same way towards x as N behaves towards y if either M affirms x and N affirms y, or M denies x and N denies y, or M is silenced by x and

coo

G2'

(CD

N is silenced by y. We say that M and N are similar if for every number x, they behave the same way towards x. We now need a 1-1 function that maps each ordered pair (x, y) of numbers to a number x*y. We could take x*y to be the number which when written in ordinary base 10 notation consists of a string of 1's of length x followed by a string of 0's of length y, e.g., 3*4 = 1110000. Actually our choice of functions doesn't really matter; we could alternatively take x*y to be 2x3y, or we could take the recursive pairing function J(x, y) that we define in Chapter 6. To each machine M is associated a machine M# called the diagonalizer of M, which behaves towards any number x as M behaves towards x*x. Next, to each machine M is associated a machine M' called the opposer of M, which affirms those numbers that M denies, and denies those numbers that M affirms, and is silenced by those numbers that silence M. Finally, one of the machines U is called a universal machine and behaves towards any number x*y as Mx behaves towards y. We might remark that the universal machine U is as valuable as all the S."

machines M1, M2, ..., M, ... put together because to find out how Mx would react to y, instead of feeding in y to M, we could just as well feed in x*y to U. Let us now review and label the three key facts.

P1: M# behaves towards x as M behaves towards x*x.

P2: M' affirms just those numbers that M denies and denies just those numbers that M affirms. P3: U behaves towards x*y as Mx behaves towards y. The stage is now set and the show can begin.

Problem 1 The universal machine U, despite all the clever things it can do, is not omniscient (as the philosopher Leibniz might have hoped). There is a number x that silences U. How is this proved?

Problem 2 To make the problem more concrete, suppose that we are given conditions P1, P2, P3 in the following more specific forms.

§1 A universal machine

41

P1' For any number n, M# = P2 For any odd number n, Mn = Mn+1. P3 M1 is universal.

The problem now is to find a number x that silences M1. `ti

Problem 3 Still assuming P1', P2, P3, how many distinct numbers can you find that will silence M1?

Problem 4 Suppose we change Pi, P2, P3 as follows. P1 For any number n, Mn = M5n.

P2 For any odd number n, M# = Mn+1. P3' Same as P3 (M1 is universal).

Now how many numbers are there that silence M?

Problem 5 We now retract the special conditions P,", P2", P3 as well as P1, P2, P3, and go back to the more general conditions P1, P2, and P3 For any machine M, M' O is the diagonalizer of the opposer M' of M, whereas M#' is the opposer of the diagonalizer M# of M. Are the machines M#' and M# necessarily similar?

Problem 6 (1) Prove that there exists a machine C that behaves towards any number x as Mx behaves towards x. (2) Prove there exists a machine D such that for any number x, D affirms x if and only if Mx denies x, and D denies x if and only if Mx affirms X.

'C,

Problem 7 [A key problem] Prove that for any machine M, there is at least one number n such that U and M behave the same way towards n. The solution of Problem 1 is an easy consequence of this fact. Why?

Chapter 3. How to silence a universal machine

42

Problem 8 [A question of omniscience] Given two machines M and N, let us say that M knows at least as much as N if M affirms all numbers Q..

affirmed by N, and denies all numbers denied by N (but it might also affirm or deny some numbers that silence M - indeed it might affirm some numbers that silence M and deny other numbers that silence M). Let us call M omniscient if it knows at least as much as the universal machine U and is furthermore total (not silenced by any number). Prove one of the following two statements. (1) One of the machines must be omniscient. (2) None of the machines can be omniscient.

Problem 9 [A halting problem] Let us say that a machine halts at x if it either affirms or denies x. Prove that there is no machine that halts at those and only those numbers that silence U. 'ti

Problem 10 [A fixed point problem] For any numbers x and y, let x(y) be one of three integers +1, 0, -1. °a.

Suppose we are given the following three facts.

Fl For any number a there is a number b such that b(x) = a(x*x).

F2 For any number a there is a number b such that for all x, b(x) = -a(x). F3 There is a number h such that for all x and y, h(x*y) = x(y). We let h be a number satisfying F3. (1) Prove that for any number a there is a number n such that h(n) = a(n).

(2) Show that for any number a there is a number n such that h(n) =

-a(n). .°a

(3) Show that there is a number n such that h(n) = 0.

Problem 11 (1) Why is the solution of Problem 1 an immediate consequence of (3) above?

(2) Why is the solution of the key Problem 7 an immediate consequence of (1) above?

§1 A universal machine

43

Problem 12 Part (1) of Problem 10 can be shown to be a corollary of (CD

one of the fixed point theorems of Chapter 2. How?

moo

Problem 13 Let ai be the set of all numbers affirmed by Mi and let E be the collection of all the sets ai. Give two examples of an element A of E whose complement A (the set of all numbers not in A) is not in E.

Problem 14 Let D be the machine U#'. Describe a function f (x) such that if Mi is any machine that knows at least as much as D, the number f (i) silences D.

Part II Some related problems We now turn to some related problems that will be seen in the next chapter to be more closely related to incompleteness theorems. We shall be considering a denumerable collection E of sets of positive integer and a 1-1 function x*y such that the following two conditions hold.

Q1 For any set A in E the set A# of all n such that n*n E A is also in E.

Q2 There exists an enumeration (al, 8i),

(al, 8i), ... of all

disjoint pairs of elements of E such that the set Ul of all numbers x*y such that y c ax is in E, and so is the set U2 of all numbers x*y such that y c ,3x. Note

An important example of a collection E satisfying the above two conditions is the collection of all recursively enumerable sets, as we will see in a later chapter. In what follows, it will be assumed that E satisfies conditions Q1 and Q2 OED

Problem 15 Show that E must contain a set whose complement is not in E. [This will be seen to generalize a basic theorem in recursion theory: there exists a recursively enumerable set whose complement is not recursively enumerable.]

Problem 16 Given two disjoint elements A and B of E, call the pair (A, B) inseparable in £ - or £-inseparable - if there exists no pair (ai,,3i) such that A C ai and B C 3i and 3i is the complement of ai.

Chapter 3. How to silence a universal machine

44

Prove that there are disjoint sets A, B in E such that (A, B) is inseparable in E. [This will be seen to generalize the important result in recursion theory that there exists a pair of recursively enumerable sets that is recursively inseparable.]

Problem 17 The solutions to Problems 15 and 16 are really corollaries of results proved in Part I of this chapter. How? Indeed, collections E satisfying conditions Q1 and Q2 are intimately connected with machines satisfying P1, P2, and P3. How?

rd)

Part III Solutions to problems 1 - 6 It will be simpler and more instructive to solve the first six problems in a different order than that in which they were given. We will begin with Problem 6. We let C be the machine U#, where U is any universal machine, and we will see that for any number x, U# behaves towards x as Mx behaves

4.'

towards x. Well, U# behaves towards x as U behaves towards x*x, but U behaves towards x*x as Mx behaves towards x. Therefore C behaves towards x as Mx behaves towards x. We then take D to be the opposer C' of C, and this machine - which is U#' - obviously works. The solution of Problem 1 now follows. Let M be either D itself, or any machine similar to D. Then let h be any index of M, i.e., any number such that Mh = M. Then for any number x, Mh affirms x if and only if Mx denies x. We take h for x and so Mh affirms h if and only if Mh denies h. Hence Mh must be silenced by h, and therefore U is silenced by h*h. In summary, if h is an index of U#', or of any machine similar to U# then h*h is a number that silences U. Now for Problem 2. We take U to be M1, in which case U# is M5 and U#' is M6. And so 6*6 silences U. We next do Problem 5. The answer is yes; the machines M' O and M#' ...

are similar, for M'# affirms (denies) x if M' affirms (denies) x*x, if M denies (affirms) x*x, iff M# denies (affirms) x, if M#' affirms (denies) x. Now for Problem 3. It follows from what we have just shown that the machine U'# is similar to U#'. Now, U#' is M6 (as we have seen in the solution of Problem 2), whereas U'# is the machine M10 (since U' is M2). Thus M10 is similar to M6. We have see in the solution to Problem 1 that if Mh is any machine similar to U#', then Mh is silenced by h and U is silenced by h*h. Well, M10 is similar to M6, and so M10 is silenced by

r-+

§ 1 A universal machine

45

G2'

s0"

4.(D

10, just as M6 is silenced by 6. But also, since M10 and M6 are similar, any number that silences one also silences the other, and hence M10 is also silenced by 6 and M6 is silenced by 10. Therefore U is silenced by the numbers 10*10,6*6, 10*6, and 6*10. And so we now have four numbers that silence U. Four more are obtained by noting that M10 is the opposer of M9 and M6 is the opposer of M5, and a number silences a machine if and only if it silences its opposer. Therefore the numbers 6 and 10 also silence both M9 and M5, and so we have four more numbers that silence U, namely, 9*6, 9*10, 5*6, and 5*10. And so there are at least eight numbers that ..a

silence U.

Q..

(79

Now for Problem 4. Obviously, for any machine M, the machine M" is similar to M. Also, in our present indexing, the machines are numbered in such a way that for every number n there is a greater number m (25n, in fact) such that M,,,, is similar to M, and therefore there are infinitely many numbers h such that Mh is similar to M6, and U is then silenced by h*h for every such h. Therefore infinitely many numbers silence U.

Given a machine M, let Mh be any machine similar to the diagonalizer M# of M. Then for any x, Mh behaves towards x as M behaves towards x*x. Hence Mh behaves towards h as M behaves towards h*h. But also U behaves towards h*h as Mh behaves towards h, and so U behaves towards h*h as M behaves towards h*h. We then take n to be h*h, and U and M behave alike towards n. The solution of Problem 1 is immediate from this by simply taking U' for M. Then U and U' behave alike towards some number n, hence they are both silenced by n. (CD

pr,

7

cad

car

i-1

car

.71

More generally, let M be any machine that knows at least as much as U. Then by the last problem there is a number n towards which U and M' behave alike. If M affirms n then M' denies n, hence U denies n (since U and M' behave alike towards n), hence M denies n (since M knows at least as much as U), and we have a contradiction. If M denies n, M' affirms n, U affirms n, M affirms n and we again have a contradiction. Hence n silences M. Thus M is not omniscient. Of course, this again solves Problem 1, since U obviously knows at least as much as U. 8

9 We could prove this from scratch, but it is immediate from the result of Problem 7. Given a machine M, take a number n towards which M and U behave alike. Then they either both halt at n or are both silenced by n.

Chapter 3. How to silence a universal machine

46

10 (1) Take any number a. Let b be such that for all x, b(x) = a(x*x). Then b(b) = a(b*b). But also h(b*b) = b(b). Therefore h(b*b) _ a(b*b), and so we take b*b for n, and we have h(n) = a(n). Fly

(2) Given a, take b such that for all n, b(n) = -a(n). Then by (1) there is some n such that h(n) = b(n). Hence h(n) = -a(n). in'

(3) By (2), taking h for a, there is some n such that h(n) = -h(n). Thus h(n) = 0. 11

The last problem ties up with all the earlier problems as follows. For cm.

any numbers x and y define x(y) to be +1 if Mx affirms y, -1 if Mx

Chi

OAF

Wpm

denies y, and 0 if Mx is silenced by y. Then F1, F2, and F3 are immediate consequences of conditions P1, P2, and P3 (for F1, take b to be an index of Ma ; for F2, take b to be an index of Ma; and for F3, take h to be any index of a universal machine). (CD

(1) By (3) of the last problem, there is some n such that h(n) = 0. Then U is silenced by n. (2) Given a machine Ma, by (1) of the last problem there is some n such

that h(n) = a(n), which means that U behaves towards n as Ma behaves towards n. 12 Part (1) of Problem 10 follows from just F1 and F3. By F3, there is a number h such that for all x, h(x*x) = x(x). To obtain the conclusion from Theorem 6.2 of Chapter 2, we take W = {-1, 0, +1}, d(x) = x*x, O(x, y) = x(y), and O(x) = h(x).

I--

,O++

13 The set of all n affirmed by U is one such set, for given any machine M, there is some x such that U and M behave alike towards x (Problem 7). For such an x, it is not possible that M affirms x if and only if U doesn't affirm x. Therefore the set of all numbers affirmed by M cannot be the complement of the set of all numbers affirmed by U. [Another such set is the set of all numbers denied by U. Another is the set of all numbers affirmed by C; another is the set of all numbers denied by C.] 14 Take f to be the identity function. Suppose Mi knows as much as D. Then if D affirms i, so does Mi, and if D denies i, so does Mi. But

also (Problem 6) if Mi affirms i then D denies i, and if Mi denies i then D affirms i, from which it follows that Mi can neither affirm nor deny i.

§1 A universal machine

47 coo

15 through 17 We will solve these in a different order than that in which ((DD

they were given. Let us go back to the machines of Part I. The mathematically significant

coo

""s

car

aspects of what we called a "machine" is not its physical behavior (such as flashing a green or red light), but the set of numbers it affirms and the set of numbers if denies. And so we can take a "machine" M to be simply a disjoint ordered pair (a,,3) of number sets and we can say that (a, Q) affirms a number x if x c a, and denies x if x c Q, and is silenced by x if x ¢ (a U )3). Well, by conditions Q1 and Q2, the set of all disjoint ordered pairs (a,)3) of elements of £ can be arranged in a sequence (a,, 01), (an, /3n), ... that has the properties P1, P2, and P3 because we can take (A, B)' to be (B, A), and (A, B)# to be (A#, B#), where A# is the set of all x such that x*x E A and B# is the set of all x such that x*x E B. Finally, our "universal" pair is the pair (U1, U2), where U1 is the set of all x*y such that y E ax and U2 is the set of all x*y such that y E 3x. It then follows from the solution of Problem 8 that the pair (U1, U2) is inseparable in £, for consider any disjoint pair (A, B) of members of £ such that U1 C A and U2 C_ B. Then the "machine" (A, B) "knows as much" as (U1, U2), hence is not "omniscient," which means that there is some number n that is neither in A nor in B, so A is not the complement of B. Thus (U1, U2) is inseparable in £, which solves Problem 16. It of course then follows that neither U1 nor U2 can have its complement in £, which solves Problem 15.

Chapter 4

Some general incompleteness theorems car

This chapter is basic to several later chapters of this study, and is devoted to a reconstruction, generalization, and extension of several classical results in mathematical logic concerning incompleteness, provability, and truth. S."

ICJ"

car

ran

We give abstract versions of the incompleteness proofs of Godel[6] and Rosser[22] as well as the lesser known incompleteness proof using Tarski's notion of truth[35]. We conclude the chapter with a fascinating result of Rowitt Parikh[11]. Our results are presented in a purely abstract form which does not employ any apparatus of mathematical logic; the more concrete applications are given in Chapters 5 and 9 of this study. The central definition of this chapter is that of a representation system, which allows us to study the metamathematical properties of mathematical systems of highly diverse structures. The type of systems studied by Godel and Rosser have at least the following features. First a denumerable set E of elements called expressions is given, each of which is assigned a Godel number. Then there is a subset S of E whose elements are called sentences; next a set P of sentences called provable sentences and a set R of sentences obi

No's

car

-.',

car

.""

(DS

chi

-t-

(CD

car

called refutable (or disprovable) sentences. The system is called consistent if no sentence is both provable and refutable in the system, and inconsistent otherwise. The system is called complete if every sentence is either provable or refutable, and incomplete otherwise. A sentence X is called decidable in the system if it is either provable or refutable, and undecidable otherwise. Thus an incomplete system is one in which there is at least one undecidable sentence. Next, there are certain expressions called (unary) predicates and there is an operation that assigns to each predicate H and each numbers n, a sen'In some applications, "number" will mean natural number, and in others, positive

Chapter 4. Some general incompleteness theorems

49

.`3

tence denoted H(n), and which may be referred to as the result of applying H to n. [Informally, predicates can be thought of as names of properties of numbers, and the sentence H(n) intuitively expresses the proposition

that n has the property named by H.] A predicate H is said to represent the set of all n such that H(n) is provable in the system. How a predicate H is applied to a number n to yield a sentence H(n) varies considerably from system to system, and our whole purpose of working in representation systems is to be independent of the formal peculiarities of individual systems, and thus to study representability in systems of highly diverse syntactical structures. In later chapters we will see how representation systems are applicable to first-order systems of arithmetic. In later chapters we will also need the notion of representing relations of numbers, as well as sets, and so for each positive n we then have a set of expressions called predicate of degree n, and there is an operation that assigns to every predicate L of degree n and every n-tuple (a1, . . . , an) of numbers, a sentence L(a1,... , an), called the result of applying L to (a1, ... , an). And we say that L represents the set of all n-tuples (al,.. . , an) such that L(a1,... , an) is provable in the system. For technical reasons it will be helpful to consider an operation that assigns to every expression E, whether a predicate or not, and every ntuple (al,.. . , an) of numbers, an expression denoted E(a1,... , an) such that if E is a predicate of degree n, then E(a1,... , an) is a sentence. Again, for technical reasons, it will be convenient to presently assume that our Godel numbering is onto the set N of positive integers, i.e., we will assume that every positive integer n is the Godel number of some expression that we denote En. We remark that if we start with the more S."

usual type of Godel numbering in which not every number is a Godel number, we can always arrange all the expressions in the infinite sequence E1i E2) ..., En, ... according to the magnitude of their Godel numbers, and then forget about the old Godel numbering and define the new Godel number of EZ to be i. The collection of all the items just described will be called a representation system. To review the definition, we now formally define a representation system Z to be a collection of the following items. O..

(1) A 1-1 denumerable sequence E1i E2, ..., En, ... of elements called expressions. We call i the Godel number of E. (2) A set of expressions called sentences. (3) For each positive n, a set of expressions called predicates of degree n, or n-ary predicates, or n-place predicates, or predicate of n arguments. natural number. The results of this chapter hold under either application.

Chapter 4. Some general incompleteness theorems

50

(4) A mapping that assigns to each expression E and each n-tuple (a1i ... , an,) of numbers an expression E(al,... an), subject to the ,

condition that if E is a predicate of degree n, then E(al,... , an) is a sentence.

(5) An ordered pair (P, R) of sets of sentences. The elements of P are called the provable sentences of the system, and the elements of R are called the refutable sentences of the system.

This concludes our definition of a representation system. The only predicates that will be considered in this chapter are predicates of degree one, and so for the rest of this chapter, predicate will mean predicate of degree 1.

Part I Incompleteness §1 Diagonalization

(0D

BCD

(AD

rye

For any expression Ei (i being the Godel number of Ei) by its diagonalization we mean the expression Ei(i). If Ei is a predicate (of degree 1) then its diagonalization Ei(i) is a sentence. We shall always use the letter "H" to stand for a predicate (of degree 1) and "h" to stand for its Godel number. Thus H(h) is the diagonalization of H and H(h) is a sentence. [Intuitively, H(h) is supposed to express the proposition that the Godel number of H lies in the set represented by H.] For any number i, we let d(i) be the Godel number of Ei(i), and we call d(x) the diagonal function of Z. For any set W of expressions we let WO be the set of its Godel numbers

- the set of all i such that Ei E W - and we let W* be the set of all i such that Ei(i) E W. Thus W* = d-'(Wo). For any number set A, we let A# = d-1(A) (the set of all i such that d(i) E A). Thus W* = WO W. For any number set A we let A be the complement of A. It is easily verified that Wo = (W)o (W being the complement of W with respect to the set of expressions) and that W* = W.

Exercise 1 Verify that WO = (W) o and W * = W * . First diagonal lemma

We recall that for any predicate H of degree 1 and any set A of numbers,

H is said to represent A if for all numbers n, n E A H H(n) E P. We shall have need to generalize this notion as follows: Given an arbitrary

§ 1 Diagonalization

51

set K of sentences, we will say that H represents A with respect to K if

for any number n, n E A H H(n) E K. And we will say that A is Krepresentable if there is a predicate H that represents A with respect to K. [Thus P-representable is what we have called "representable."]

L."

Lemma 1.0 (First diagonal lemma) If A# is K-representable then there is a sentence X such that X E K

Pxl.... ,xn

P2x1i...,xn -> Px1,...,xn SAC

In this extended system, P represents W1 U W2. 'This definition is narrower than the one I gave in Theory of Formal Systems, in that I do not allow the introduction of constants, which I treat separately in (d).

Chapter 6. Introduction to formal systems and recursion

108

As for the intersection, instead of the above two axioms we add the following single axiom. Plx1i...,xn

->

P2xl,...,xn - Pxl,...,xn

(b) Existential quantification Let P represent the relation R(xi, ... , xn, y) in (E). Take a new predicate Q (of degree n) and add the axiom.

Pxl,...,xn,y

--+

Qxl,...,xn

Then Q represents the relation zlyR(xl, ... ) xn, y).

(c) Explicit transformations Suppose P represents R(xl,... , xk) in (E). Take a new predicate Q and add the following axiom. Pxi1,...,xik

-f Qx1,...,xn

Then Q represents Axl,... , xnR(xi, , .... xik ).

(d) Introduction of constants Suppose P represents R(x, yl,... , yn) in (E) and c is any fixed word in K. We take a new predicate Q and add the axiom.

PC,x1,...,xn

Qx1,...,xn

Then Q represents the relation R(c, yi, ... , yn) (as a relation among yl) ... , yn), i.e., Q represents the relation Ay1, ... , yn . R(c, y1,

...)yn).

We note that it is obvious from (d) that if a relation R(xl,... , x.,,,,, , yn) is f.r. over K, then for any words cl, ... , c, in K, the relation R(cl,... , C,n, yl, ... , yn) (as a relation among yl, ... , yn) is f.r. over K. y1,

We say that a function f (x1, ... , xn) is explicitly definable from a func-

tion g(x1i ... , xk) if there are numbers il, ... , ik, all < n, such that for all elements X1, ... , xn, f (xl, ... , xn) = g(xi1, ... xik) In A-notation, by ... xn g(xi17 ... , xik) is meant the function f such that for all x1, ... , xn, f (x1, ... , xn) = g(xi...... xin ) . Obviously if f (x1, ... , xn) is explicitly definable from 9(X1, ... , xk), then the relation f (x1, ... , xn) = y is explicitly definable from the relation g(xl, ... , xk) = y. )

Ax1

We say that a function f (xl, ... , xn) is f.r. over K if the relation f (x1, ... , xn) = y is f.r. over K.

§2 Some basic closure properties

109

Theorem 2.3 (a) The class of functions f.r. over K is closed under composition, i.e., for any functions f1(x1, , xn), ... , fk(xl, .. , xn), g(xl, , xk) that are f.r. over K, the function g(f 1(x1, ... - , X . ) ,- . , fk (xl, ... , xn)) is f.r. over K.

(b) For any functions fl (xl,... , xn), ... , fk (xl, ... , xn) f.r. over K and any relation R(xl, ... , xk) f.r. over K, the relation R(fl (xl, ... , xn), ..., fk(x1i...,xn)) is f.r over K. (c) The class of functions f.r. over K is closed under explicit definability.

(d) The class of functions f.r. over K is closed under introduction of constants, i.e., if f (y, x1i ... , xn) is f.r. over K, then for any word c in K, the function f (c, xl,... , xn) (as a function of xl,... , xn) is f.r. over K.

0

Proof

G".

(a) g(fl (xl, ... , xn), ... , fk (xl, ... , xn)) = y if and only if 2yl ... ]yk (fl (x1, ... , xn) = yl A ... A f k (xl,... , xn) = yk A g(yl, ... , yk) = y). The result then follows since the class of relations f.r. over K is closed under existential quantifications and conjunctions.

(b) R(fl (xl, ... , xn), ... , fk (xl, ... , xn)) if and only if 3yl ... 3yk (f l (xl, ... , xn) = yl A ... A fk (xl, ... , xn) = yk A R(yl, ... , yk)). Again, the result follows by closure under existential quantifications and conjunctions. (c) Suppose that f (X 1, ... , xn) is explicitly definable from g(xl, ... , xk) and that the latter is f.r. over K. Then the relation f (xl, ... , xn) = y is explicitly definable from the relation g(xl, ... , xk) = y, and since the latter is f.r. over K, so is the former (by (c) of Theorem 2.2.), which means that function f (xl, ... , xn) is f.r. over K. (d) This follows easily from (d) of Theorem 2.2.

Let us note that (b) of the above theorem also holds when R is a set A, i.e., if A is a set of words in K that is f.r. over K, then for any function f (xl, ... , xn) f.r. over K, so is the set of all n-tuples (x1, ... , xn) such that f (xl, ... , xn) E A, i.e., so is the set ) x1 ... xn(f (xl, ... , xn) (E A).

In particular, this also holds when f is a function of one argument - in

110

Chapter 6. Introduction to formal systems and recursion

other words, if f (x) and A are f.r. over K, so is the set f -1(A) (the set of all x such that f (x) E A). More generally, for any relation R(xl,... , xn) and any function f (x), by f -1(R) we shall mean the set of all n-tuples (x1, ... , xn) such that R(f (xl),... , f (xn)). And so we have

Theorem 2.4 If R and f are formally representable over K, so is f -1(R). Exercise 10 (a) For any set W and function f (x), by f"(W) is meant the set of all elements f (x) such that x E W. Show that if f and W are f.r. over K, so is f"(W). (b) For any sets W1 and W2, by W1 x W2 (the Cartesian Product of W1 and W2) is meant the set of all ordered pairs (x, y) such that x E W1 and y E W2. Show that if W1 and W2 are f.r. over K, so is W1 x W2.

§3 Solvability over K For any relation W(xl, ... , xn) on a set S, by W (the complement of W with respect to the set of all n-tuple elements of S) is meant the set of all n-tuples.. (x1, . , xn) of elements of S that are not elements of W. [Thus W (X 1, ... , xn) if -W (X 1, ... , xn).] We presume the reader to be familiar with the fact that if W = W1 U W2, then W = W1 fl W2. A relation W of words in K is said to be solvable over K if W and its complement W are both f.r. over K. Given any collection R of relations on a set S, we let RX be the colBCD

lection of all relations W such that W and W are both in S. We write RX (K) for (R(K))x - thus RX (K) is the collection of all relations that are solvable over K.

Theorem 3.1 The collection RX (K) of all relations solvable over K is closed under union, intersection, explicit transformations, and introduction of constants.

Proof More generally, we will show that for any collection R of relations on a set S, if R is closed under union, intersection, explicit transformations, and introduction of constants, then RX has these same closure properties and is also closed under complementation. And so suppose R has these closure properties.

§3 Solvability over K

111

Complementation is trivial - If W E RX, then W and W are both in R. Hence Wis in R (since W = W), so W and W are both in R, which means that W E RX. As for union, suppose W1 and W2 are elements of RX of the same degree. Then W1, W2, W1, W2 are all in R. Let W = W1 U W2. Then of course W is in R (by hypothesis). Now, W = W1 n W2, and since W1 and W2 are both in R, so is W1 nW2, thus W is in R. Since W and W are both in R, then W E RX . The proof for intersection is obviously dual (or indeed follows from the fact that RX is closed under union and complementation).

As for explicit transformations, we note that if S is explicitly definable from R, then S is explicitly definable from R, because if S(xl, .... xn) is equivalent to R(x2,, ... , xi. ), then S(xl, ... , x,,,) is equivalent to R(x2,, ... Xik), and so if R E RX and S is explicitly definable from R, then S is explicitly definable from R, and since R and R are both in R then S and S are both in R, so S E RX. The proof for introduction of constants is similar and is left to the )

reader.

Theorem 3.2 If R(xl,... , x,,) is solvable over K and if fl (xl, ... , xn), .... f k (x1, ... , xn) are functions that are solvable over K then the relation R(f l (xl, ... , xn), ... , fk (xl, ... , xn)) is solvable over K.

Proof Let S(xl,... , xn) be the relation R(fl (xi, ... , xn), ... , fk (xl, ,xn)). Then S(xl,...,xn) is the relation R(f1(_x1,...,xn),...,fk(xl) xn)). If now R is solvable over K, then R and R are both f.r. over K, hence S and S are both f.r. over K (by (b) of Theorem 2.3), hence S is ,

solvable over K.

Exercise 11 Show that the relation x = y is solvable over K. [It is easy to represent the relation x = y, but it is trickier to represent the relation x

y!]

Extension of alphabets

We call an alphabet L an extension of an alphabet K if every symbol of K is also a symbol of L. We wish to show that any relation of strings in K which is formally representable over K is formally representable over any extension L of K. Suppose now that (E) is an elementary formal system over K. For any extension L of K, let (E)L be that elementary formal system over L whose

Chapter 6. Introduction to formal systems and recursion

112

.j'

W'S

axioms are the same as those of (E). [In particular, (E)K = (E).] In general, a predicate P of (E) does not represent the same relation in (E)L as it does in (E). [As a trivial example, suppose (E) has the sole axiom Px. Then in (E)K, P represents the set of all strings in K, whereas in (E)L, P represents the set of all strings in L.] However, given any elementary formal system (E) over K, we can construct an elementary formal system (E*) over K whose predicates include all predicates of K and which is such that for any extension L of K, each predicate of (E) represents the same relation in (E*)L as it represents in (E). We do this as follows. Let al, ... , a,, be the symbols of K and let Xl,... , X,,, be the axioms of (E). We take a new predicate Q (which is to represent the set of strings in K) and for each axiom X of (E), let X* be the string Qx1 -> ... -> Qxt -> X, where x1, ... , xt are the variables that occur in X. We then define (E*) to be that elementary formal system whose axioms are: Qal

QaT,

Qx -*Qy -*Qxy

Xi X*

Now let L be any extension of K. It is easily seen that in (E*)L, the predicate Q represents the set of all strings in K, and that each of the other predicates represents the same relation as it does in the original system (E). We have thus given a solution to Exercise 7 and have proved

Theorem 3.3 Any relation formally representable over K is formally representable over any extension L of K. The converse of Theorem 3.3 is also true, as we shall see later on. As an obvious corollary of Theorem 3.3 we have:

Theorem 3.5 Any relation solvable over K is solvable over any extension L of K.

§4 Recursively enumerable relations

113

Part II Recursive enumerability §4 Recursively enumerable relations Just as any non-negative integer is uniquely expressible as a polynomial in powers of 2 with coefficients 0 and 1, so is any positive integer uniquely expressible as a polynomial in powers of 2 with coefficients 1 and 2. We let D2 be the two-sign alphabet {1, 2}; these two symbols we call dyadic digits; any string dndn_1 ... dldo of dyadic digits is called a dyadic numeral and denotes the positive integer do + 2d1 + 4d2 + ... + 2ndn. This way of naming

?.'

positive integers, which we call dyadic, has certain technical advantages over the better known binary notation (using 0 and 1). Until further notice we shall identify the positive integers with the dyadic numerals that denote them. We note that if we order the dyadic numerals lexicographically, i.e., according to length and alphabetically within each group of the same length, then the position that any numeral has, in this sequence, is the same as the number which it denotes. Any elementary formal system over the two-sign alphabet D2 will be called an elementary dyadic arithmetic - or a dyadic arithmetic for short. We now define a relation W of positive integer to be recursively enumerable

- abbreviated "r.e." - if it is representable (in dyadic notation) in some dyadic arithmetic, i.e., formally representable over {1, 2}, and recursive if W and W are both recursively enumerable. Having defined recursive enumerability for relations of positive integers, we can now define recursive enumerability for relations of natural numbers as follows. For any relation W (x1, ... , xn) of natural numbers, let W+ be

cam.

the set of all n-tuples (x1, ... , xn) of positive integers such that W(xi 1, ... , xn - 1). Then we define W to be a recursively enumerable relation of natural numbers if and only if W+ is a recursively enumerable relation of positive integers.

Until further notice we will be concerned only with positive integers, and the word "number" shall mean positive integer, unless specified to the contrary. Remark

In our definition of recursive enumerability, our choice of 2 as a base was arbitrary, but will prove to be convenient. Whatever base b we chose, the definition of "recursively enumerable" would be equivalent, as we will see.

By a numerical notation - more briefly, a notation - we shall mean a mapping A that assigns to each positive integer n an expression An -

Chapter 6. Introduction to formal systems and recursion

114

,.o

also written n when A is fixed for the discussion - called the A-numeral for n. By the alphabet of A we mean the set of symbols from which the A-numerals are formed. We shall say that A is an admissible notation if the alphabet K of A is finite and the set of all ordered pairs (On, An+1) is formally representable over K. Given an admissible A and a relation R(x1, ... , xn) of positive integers, by RA we shall mean the set of all nsue.

-a)

tuples. (Dy...... Ax.) such that R(xl,... , xn) holds. And now we shall Eau

say that R is formally representable in A-notation if R. is formally representable over the alphabet K of A. By the dyadic notation we of course mean the mapping that assigns

to each number n its dyadic numeral. And we are defining a relation of numbers to be recursively enumerable if it is f.r. in dyadic notation.

,1'

[Actually the choice of notation doesn't matter, so long as the notation is admissible, but the dyadic notation has certain particular advantages.] The dyadic notation is admissible, because the successor relation is represented (in dyadic notation) by "S" in the dyadic arithmetic whose axioms are

`pa

Si, 2 S2,11 Sxl, x2 Sx, y -* Sx2, yl Eau

More generally, let us consider n-adic notation for n > 2. In this notation, we take n symbols 61, 62, ... , 6n called, n-adic digits, as names of the positive integers 1, 2, ... , n respectively, and any string 6i,.Si,._1 ... 6o of these n-adic digits denotes the positive integer io + iln + i2n2 + ... + irn''. Again, if we order all the n-adic numerals lexicographically, the position a numeral occupies in the lexicographical sequence is the number it designates. We let Dn be the alphabet {61i ... , 6n}. For any n > 2, n-adic notation is admissible, since in n-adic notation the successor relation is represented by "P" in that elementary formal systems over Dn whose axioms are the following :.a

61,62

6n-1, 6n, P6n, 6161

Px61i x62

§4 Recursively enumerable relations

115

Px8,,,_1, x6",

Px, y --* Px&",, y51

We thus have

Theorem 4.1 For any n > 2, n-adic notation is admissible. Unary and Peano notations

In unary notation we take the numeral n to be a string of is of length '.3

.'3

cal

n. Obviously unary notation is admissible, since the successor relation is represented by S in that E.F.S over the alphabet {i} whose sole axiom is Sx, xi. In Chapter 5 we used what might be called Peano notation, in which An (usually written n) is taken to be the symbol "0" followed by n accents. [Thus the Peano positive numerals are 0', 0", 0"', ....]

Theorem 4.2 Peano notation is admissible. Proof The alphabet of the Peano notation is {0,' }. We construct the following elementary formal system over {0,' }.

We first take a unary predicate N to represent the set of (positive) Peano numerals and we take the following axioms. NO'

Nx -fNx' BCD

Then we take a predicate S to represent (in Peano notation) the successor relation and we add the axiom

Nx-->Sx,x' Theorem 4.3 For any admissible notation 0, the following relations are f.r. in A-notation.

(a) The set of 0-numerals. (b) The relations x < y, x < y, x = y, x

(c) x+y=z,xxy=z,xy=z.

Y.

116

Chapter 6. Introduction to formal systems and recursion

Proof We let K be the alphabet of A and we let (E) be an elementary formal system over K in which S is a predicate that represents the successor relation in A-notation.

(a) To represent the set of A-numerals, we take a new predicate N and add the axiom Sx, y --> Nx

(b) To represent the relation x < y (in A-notation), we take a new predicate L and add the axioms Sx, y --> Lx, y

Lx, y -> Ly, z -* Lx, z

(c) To represent the relation x < y we take a new predicate L' and add the axioms Lx, y --> L'x, y Nx --> L'x, x

To represent the relation x = y we take a new predicate I and add the axiom

Nx -->Ix,x To represent the relation x # y we take a new predicate I' and add the axioms

Lx, y -' I'x, y Ly, x --> I'x, y

(d) To represent the relation x+y = z (in A-notation) we add a predicate A and the axioms

Sx,y-*Ax,A,,y Ax, y, z --> SY, yi --> Sz, zi -4 Ax, yi, zl

Next we take a new predicate M to represent the multiplication relation x x y = z and we add the following axioms

Nx-> Mx,A,,x Mx,y,z-+Sy,yl

Az,y,w->Mx,yl,w

§4 Recursively enumerable relations

117

Next we take a predicate E to represent the exponential relation xy = z and add the following axioms

Nx --fEx,Al,x Ex, y, z --> Sy, yl --> Mz, x, w --> Ex, yl, w

Dyadic concatentation

For any numbers x and y, by x * y we mean that number whose dyadic numeral is the dyadic numeral of x followed by the dyadic numeral for y (e.g., in dyadic notation, 12112 * 212 = 12112212). We call the relation x * y = z the relation of dyadic concatenation. This relation satisfies the following three conditions and is uniquely determined by them.

(1) x*1=2x+1 (2) x*2=2x+2 (3) x*(y*z)=(x*y)*z From this we have

Theorem 4.4 For any admissible notation A, the relation of dyadic concatenation is f.r. in A-notation.

Proof Suppose A is admissible and K is its alphabet. Let (E) be an elementary formal system over K in which "A" represents the addition relation and "M" the multiplication relation (in A-notation). To represent the dyadic concatenation relation we take a new predicate "C" and add the following axioms.

MA2,x,y -* Ay,A,,z -* Cx,A,,z M1 2,x,y--'Ay,J2,z--->Cx,02,z Cx,y,v --> Cv,z,t -> Cy,z,w --> Cx,w,t From Theorems 4.1, 4.3, and 4.4 we have

Theorem 4.5 The following relations are recursively enumerable (a) x+ = y [By x+ is meant the successor x + 1 of x.]

(b) x Fx', z

(b) Let (E) be a dyadic arithmetic in which S represents the successor relation x + 1 = y, G represents the relation g(x1i ... , xn) = y, and H represents the relation h(xl, ... , xn, y, z) = w. Then take a new predicate F and add the axioms

Gx1,...,xn,y Fx1,...,xn,y,z

Fx1,...,xn,1,y Hx1i...,xn,y,z,w Sy,y' --> Fx1i...,xn,y',w

Then F represents the relation f (x1, ... , xn, y) = Z. And now for a definition of primitive recursive functions. First, the projection function Ui (x1, ... , xn) (i < n) is defined by the condition: Ui (x1, ... , xn) = xi. Of course projection functions are recursive (the relation Ui (x1, ... , xn) = xi is represented by P in that dyadic arithmetic whose only axiom is Px1,... , xn, xi). Well, the class of primitive recursive functions is defined as the smallest class containing the successor function S(x) = x + 1, all projection functions, and closed under composition and definability by recursion. And so by Theorem 5.5, and the facts that the successor function and the projection functions are recursive, and that the recursive functions are closed under composition, we have:

Theorem 5.6 Every primitive recursive function is recursive.

§5 Recursive functions

121

Minimization

Let us call a numerical function f (xl, ... , xn, y) regular if for every X1, ... , xn there is at least one number y such that f (x1 .... ) xn, y) = 1. If now f is regular, then we define µy f (xl, ... , xn, y) = 1 to be the smallest

number y such that f (xl, ... , xn, y) = 1. We then say that the function ,ay f (xl, ... , xn, y) = 1 (which is a function of X1.... , xn) is the minimization of the function f.

Theorem 5.7 The minimization of any regular recursive function is recursive.

Proof Exercise. Another characterization of recursive functions We now see that any class of functions that contains all primitive recursive functions and is closed under minimization of regular functions contains all recursive functions. Actually, the class of recursive functions happens to be the smallest class of functions that contains all primitive recursive functions and is closed under minimization of regular functions. Indeed, some treatments of recursion theory define the recursive functions to be the members of this class. *Partial recursive functions

."5

.-y

CDR

Let q(xl,... , xn) be a function defined on some but not necessarily all n-tuples of numbers. We write q(x17 ... , xn) -- y to mean that q is defined on (x1, ... , xn) and its value on x1i ... , xn) is y. The function q is called a partial recursive function if the relation q(xl,... , xn) ^_ y is recursively enumerable. In general, this does not imply that the relation q(x1i ... , xn) ^ y is recursive, but we have the following theorem of which .s:

Theorem 5.1 is a special case.

*Theorem 5.8 If q(xl,... , xn) is partial recursive and if the set of all n-tuples (x 1, ... , xn) on which q is defined is recursive, then the relation q(x1i ... , xn) ^_ y is recursive.

Proof Exercise. One also defines the minimization py(q(xl,... , xn, y) ^_ 1) of a partial recursive function q(xl, .... xn, y) as follows. If there is some y such that

Chapter 6. Introduction to formal systems and recursion

122

q(xl,... , xn, y) = 1, then we take pyq(xl, ... , xn, y) to be the smallest such

inn

'0-h

.'3

car

coo

y; otherwise 1cy(q(x1 i ... , xn, y)) - 1 is undefined. It is easy to prove that for any partial recursive function q, its minimalization is a partial recursive function. An alternative characterization of the class of partial recursive functions is the smallest class of partial functions that contains all primitive recursive functions and is closed under minimalization. This result, though well known, is not needed for this study. Although many treatments of recursion theory take partial recursive functions as the fundamental objects, we prefer the approach of Post[13] which takes recursively enumerable sets and relations as the fundamental objects. And now we turn to something that will play a fundamental role in the approach that we will be taking.

§6 Finite quantifications and constructive definability For any relation R(xi, ... , xn, y) of numbers, by the relation (Vz < y)R(xl,... , xn, z) we mean the set of all (n + 1)-tuples (x1, ... , xn, y) such that R(x1, ... , xn, z) for all z less than n equal to y. By the relation (]z < y)R(xl,... , xn, z) (read "There exists a number z less than or equal to y such that R(xl, ... , xn, z)") is meant the set of all (n + 1)-tuples (x1, .... xn, y) such that R(x1, ... , xn, z) holds for at least one z less than or equal to y. We abbreviate the relation (Vz < y)R(xl,... , xn, z) by VF(R), and (1]z < y)R(xl, .... xn, z) by IF (R), and we say that VF(R) is .s:

the finite universal quantification of R, and that 3F(R) is the finite existential quantification of R. We also say that VF(R) arises from R by finite universal quantification and that ]F(R) arises from R by finite existential quantification. As previously stated, we abbreviate "recursively enumerable" by "r.e."

Theorem 6.1 (a) If R(xl, ... , xn, y) is r.e., so are the relations IF (R) and VF(R). (b) If R is recursive, so are ZIF(R) and VF(R).

Proof (a) Suppose R(xl,... , xn, y) is r.e. Then the relation (3z < y)R(xi, ... , xn, z) is equivalent to ]z(z < y A R(x1,... , xn, z)), which is r.e., hence ZIF(R) is r.e.

§6 Finite quantifications and constructive definability

123

As to VF(R), let (E) be a dyadic arithmetic in which P represents R and S represents the relation "the successor of x is y." To represent VF(R), take a new predicate Q and add the following axioms.

Pxl,...,xn, 1 - Qxl,...,xn, 1 Qxl, ... xn7 y ` Sy, Y

Pxl,... , xn, y

Qxl,... , xn, y1

(b) This follows from (a), since the complement of EIF(R) is VF(R), and the complement of VF(R) is EIF(R).

Theorem 6.2

(a) If R(xl,... , xn, z, y)

is

r.e. so are the relations (3y < z)R(xl,

.... xn, z, y) and (Vy < z)R(xl,... , xn, Z, y) (b) Same with "recursive" instead of "r.e." Proof

(a) Suppose R(xl,... , xn, z, y) is r.e. Then the relation (3y < z)R(xl, ... , xn, z, y) is equivalent to 3y(y < z A R(xl,... , xn, z, y)), which is

.-:

r.e. As to (Vy < z)R(xl,... , xn, z, y), let S(xl,... , xn, z, w) be the relation (Vy < w)R(xl,... , xn, z, y). By (a) of the preceding theorem, the relation S(xl,... , xn, z, w) is r.e. Then the relation S(xl, ... , xn, z, z), which is explicitly definable from S(xl,... , xn, z, w), is r.e., and this is the relation (Vy < z)R(xl, .... xn, z, y). cad

ova

(b) The proof of this, using (a) is similar to the proof of (b) of Theorem 6.1.

For any relation R(xl, ... , xn, y, z) and function f (x), we write (Ely < f (z))R(xl,... , xn, y, z) to mean zly(y _< f (z) A R(xl,... , xn, y, z)), and (Vy < f , xn, y, z) to mean Vy(y < f (z) D R(xl,... , xn, y, z)). (z))R(xl,...

Theorem 6.3 Suppose R(xl,... , xn, y, z) is r.e. and f (x) is a recursive function. Then

(a) The relations (3y < f (z))R(xl,... , xn, y, z) and (Vy < f (z))R(xl, ... , xn, y, z) are r.e.

(b) If R is recursive, so are the relations (ly < f (z))R(xl,... , xn, y, z) and (by Q112x12y.] Then we add to the n + 1 axioms already at hand, the axioms Xi , ... , X,*. In this dyadic arithmetic, P represents ...

(L)

`"'zoo

WO. Thus, if W is f.r. over K, then WO is r.e. Going in the other direction, suppose Wo is r.e. Without any real loss of generality, we can assume that 1,2 are symbols of K (if not, we use any two others in their place). Since WO is f.r. over the sub-alphabet D of K, it is f.r. over K (by Theorem 3.4). Also by (a) of Theorem 8.1, the relation

g(x) = y is f.r. over K, and since W = g(Wo), then W is f.r. over K by Theorem 2.4. We thus have

Theorem 8.3 W is f.r. over K if WO is r.e. Next we get

Theorem 8.4 Suppose that K has at least two symbols and that L is an extension of K. Then for any relation W of words in K, W is f.r. over L if W is f.r. over K.

Chapter 6. Introduction to formal systems and recursion

132

Proof We already know that if W is f.r. over K then it is f.r. over L (Theorem 3.4). Now suppose that W is f.r. over L. Let us order L so that the symbols of K occur at the beginning; let gl be the dyadic Godel numbering of the strings of K and let 92 be the dyadic Godel numbering of the strings of L. Thus 92 is an extension of the function gl, i.e., for any string X in K, 91(X) = 92 (X). Since W is f.r. over L, then g2(W) is r.e. (by Theorem 8.3), hence gi(W) is r.e. (since g1(W) = 92(W)), hence W is f.r. over K by Theorem 8.3.

§9 Lexicographical Godel numbering and n-adic notation We now wish to show that for each n > 2, a numerical relation is formally representable in n-adic notation if it is r.e. (formally representable 0..

in dyadic notation). For each n > 2, let Zn(x, y) be the relation "x is an n-adic numeral, y is a dyadic numeral, and the number designated by x in n-adic notation is the same as the number designated by y in dyadic notation."

Lemma 9.0 For each n > 2 the relation Zn (x, y) is formally representable over the alphabet Dn of n-adic digits.

Proof We are assuming that D2 is a sub-alphabet of Dn. By Theorem 4.1, the successor relation is f.r. in n-adic notation over Dn. Also by Theorem 6.1, the successor relation is f.r. in dyadic notation over D2, hence by Theorem 3.4, it is f.r. in dyadic notation over the larger alphabet D. Let (E) be an elementary formal system over Dn in which P represents the successor relation in dyadic notation and Q represents the successor relation in n-adic notation. [Thus, Px, y is provable in (E) if x and y are dyadic numerals and for some number k, x is the dyadic numeral for k and y is the dyadic numeral for k + 1, whereas Qx, y is provable in (E) if x and y are n-adic numerals for some k and k + 1 respectively.] Now take a new predicate Z and add to (E) the axioms

Z1,1 Zx, y -> Qx, X1 -> Py, yl -4 Zx1, yl

Then Z represents the relation Zn(x, z).

Theorem 9.1 For each n > 2, a numerical relation R(x1, ... , xt) is f.r. in n-adic notation if it is r.e.

§9 Lexicographical Godel numbering and n-adic notation

133

Proof Its

(a) Suppose R is representable in n-adic notation. Let (E) be an E.F.S. over Dn in which "R" represents R and "Z" represents the relation Zn(x, y) of the above lemma. Add a new predicate B and the axiom

Rxl,...,xt -4 Zxi,yi -+ ... -4 Zxt,yt -4 Byl,...,yt Then B represents the relation R in dyadic notation but over the alphabet Dn. However, R is then f.r. in dyadic notation over the smaller alphabet D2 by Theorem 8.4.

(b) Conversely, suppose R is f.r. in dyadic notation over D2. Then by Theorem 3.4, it is formally representable in dyadic notation over the

larger alphabet Dn. Let (E) be an E.F.S. over Dn in which R is represented in dyadic notation by a predicate "R" and in which "Z" represents the relation Zn(x, y). Then take a new predicate C and add the axiom

Ryi,...,yt -> Zxi,yi -+ ...Zxt,yt -4 Cxl,...,xt Then C represents R in n-adic notation (over the alphabet Dn).

Corollary 9.2 For each n > 2, a numerical relation is solvable in n-adic notation if and only if it recursive. n-adic concatenation For any n > 2, by x *n y we mean the number designated in n-adic notation

by the n-adic numeral for x followed by the n-adic numeral for y. The relation x *n y = z - which we call n-adic concatenation - is obviously f.r. in n-adic notation - it is represented by "C" in that E.F.S. over Dn whose sole axiom is Cx, y, xy. Hence by Theorem 9.1 we have

Corollary 9.3 For each n > 2, the relation of n-adic concatenation is r.e.

Lexicographical Godel numbering

Another method of Godel numbering that we have found useful is the following. For K an ordered alphabet (al, ... , an), arrange all strings in K in lexicographical order and then assign to each string X in K its position in that sequence. We call that number the lexicographical Godel number

Chapter 6. Introduction to formal. systems and recursion

134

Nip

-O-

'.Y

of X. Equivalently, if we look at the symbols a1,.. . , an as n-adic digits, then the number thus designated (in n-adic notation) is the lexicographical Godel number of X - in other words if X is a string a;,,.a;,,._1 ... ai0, then the lexicographical Godel number of X is io + iln + i2n2 + ... + irnr. We note that in this Godel numbering, all positive integers are Godel numbers. We let h(X) be the lexicographical Godel number of X. [We still sue.

write g(X) for the dyadic Godel number of X.] And for any relay tion W(xl,... , xt) of strings in K, we let Wh be the corresponding relation of lexicographical Godel numbers, i.e., the set of all t-tuples (h(X1),... , h(Xt)) such that W(X1,... , Xt). It is immediate that W is f.r. over K if Wh is a numerical relation that is f.r. in n-adic notation, ms's

31°

which is (by Theorem 9.1) in turn the case if Wh is r.e. From this it also follows that W is solvable over K if Wh is recursive. And so we have

Theorem 9.4 For h the lexicographical Godel numbering of all strings .'3

in an ordered alphabet K, a set or relation W of words in K is f.r. over K if Wh is r.e., and is solvable over K if Wh is recursive. On the whole, dyadic Godel numbering will play a much more important role in this volume than lexicographical Godel numbering.

Chapter 7

A universal system and its applications Part I A universal system We now wish to construct a "universal" system (U) in which we can, so to speak, express all propositions of the form that such and such a number is in such and such a recursively enumerable set, or more generally that

such and such an n-tuple of numbers is in such and such a recursively enumerable relation. A complete knowledge of which sentences in this C/)

system are true, and which are not, would imply a complete knowledge of all formalized mathematics. But we will see that the problem is undecidable - a result that Post[13] calls "Godel's Theorem in Miniature." Also, our universal system will be the basis for two fundamental results in recursion theory - the Enumeration Theorem and the Iteration Theorem, which are indispensable for many of the chapters that follow. (1)

§1 The universal system (U) Preparatory to the construction of (U), we need a device for "transcribing" all dyadic arithmetics (elementary formal systems over the alphabet {1, 2}) into one single finite alphabet. We now define a transcribed dyadic arith-

metic - abbreviated "T.A." - to be a system like a dyadic arithmetic mar,

except that instead of taking our variables and predicates to be individual symbols, we take three signs v, ',p and define a transcribed variable 'T3

- abbreviated "T.A. variable" - to be any of the strings v', v", v"', ..., and we define a transcribed predicate - abbreviated "T.A. predicate" to be a string of p's followed by a string of accents; the number of p's is to indicate the degree of the predicate. A transcribed dyadic arithmetic,

Chapter 7. A universal system and its applications

136

cad

then, is like a dyadic arithmetic except that we use transcribed variables instead of individual symbols for variables, and transcribed predicates instead of individual symbols for predicates. We thus have one single alphabet K7 (namely the seven symbols 1 2 v ' p , -+ ) from which all T.A.'s are constructed. It is obvious that representability in an arbitrary dyadic arithmetic is equivalent to representability in one in transcribed notation. We define "T.A. term," "atomic T.A. formula," "T.A. formula," as we do term, atomic formula, formula, sentence, using "transcribed variable", "transcribed predicate" instead of "variable," "predicate," respectively. We now construct our system (U) as follows. We first extend K7 to an alphabet K8 by adding the symbol *. To any T.A. whose axioms are X1,.. . , XK we wish to associate a string in K8; this string shall be X1*X2* ... *Xk (or X1 alone, if k = 1). Such a string B is called a base (after Post). The T.A. formulas X1,... , Xn are called the components of the base X1* ... *Xk. Next, we extend K8 to an alphabet K9 by adding the symbol I-, and we define a sentence (of (U)) to be an expression of the form B F- X, where B is a base and X is a T.A. sentence. We define the sentence B F- X to be true if X is provable in that T.A. whose axioms are the components of B. Thus X1* ... *Xk F- X is true if X is provable in that transcribed dyadic arithmetic whose axioms are X1i ... , Xk. By a predicate H of (U) (not to be confused with a T.A. predicate!) we shall mean an expression of the form B F- P, where B is a base and P is a transcribed predicate; the degree n of H is, by definition, that of P. A predicate H of (U) of degree n is said to represent in (U) the set of all n-tuples (al, ... , an) of numbers (dyadic numerals) such that the sentence Ha 1, . . . , an is true. Thus a predicate X1* ... *Xk F- P of degree n represents in (U) the set of all n-tuples (al, ... , an) such that Pal,... , an is provable in that T.A. whose axioms are X1, ... , Xk - in other words it represents the very same relation as P represents in that T.A. whose axioms are X1, ... , Xk. And thus a relation is representable in (U) if it is r.e. In this sense, (U) is called universal for the collection of r.e. relations. 'c7

ova

tip

(On

°-r

coo

O..

ova

.,,

coo

car

((DD

(`p

h-] .-.

car

+1'

+O'

We let T be the set of true sentences and To be the set of Godel numbers of the true sentences (under the dyadic Godel numbering). We now wish to prove the basic result that the set T is formally representable over K9 (and ..j

Boa

cm.

thus (U) is a formal system, though not an elementary formal system), and that the set To is recursively enumerable. [The entire development of recursive function theory from the viewpoint of elementary formal systems rests on this result, which is a variant of a result of Post[13].] We shall now construct an E.F.S. W over K9 in which the set T is represented.

§1 The universal system (U)

137

Construction of the system W

USA

We must first note that the implication sign of W is to be distinct from the implication sign of T.A.'s. We could continue to use "-p" for the implication sign of (U); we prefer however to use ....... for the implication sign of W, since it will occur so frequently, and we shall now take the implication sign of T.A.'s to be "imp." Similarly, we shall now use the ordinary comma for our punctuation sign of W, and "com" for the punctuation sign of T.A.'s. For variables of W (not to be confused with T.A. variables) we will use the letters x, y, z, w, with or without subscripts. Predicates of W (not to be confused either with predicates of (U) or T.A. predicates) will be introduced as needed. We now introduce the axioms of W in groups, first explaining what each newly introduced predicate of W is to represent. N represents the set of numbers (dyadic numerals). p.,

Ni N2

Nx-+ Ny-Nxy Acc represents the set of strings of accents. Acc' Acc x -> Accx'

V represents the set of T.A. variables. Accx -+ Vvx P represents the set of T.A. predicates.

Accx-Ppx Px -+ Ppx t represents the set of T.A. terms.

Nx-+tx tx-+ty-4 txy FO represents the set of atomic T.A. formulas.

Accx -+ty -+Fopxy Fox ->ty-4 Fopxcomy

Chapter 7. A universal system and its applications

138

F represents the set of T.A. formulas.

Fox->Fx "ti

Fox -+ Fy -4 Fx imp y So represents the set of atomic T.A. sentences.

Accx ->Ny ->Sopxy Sox-+ Ny ->Sopxcomy F-+

S represents the set of T.A. sentences.

Sox-+Sx Sox

->Sximpy

d represents the relation "x and y are distinct variables."

Vx-+Accy->dx,xy dx,y->dy,x Sub represents the relation "x is any string compounded from numerals, T.A. variables, T.A. predicates, com, imp; y is a variable; z is a numeral and w is the result of substituting z for all occurrences of y in x (that is, all occurrences which are not followed by more accents).

Nx->Vy-Nz-Subx,y,z,x Vx Sub x, x, z, z dx, y -+ Nz -+ Sub x, y, z, x

Subx,y,z,x Vy - Nz -+ Sub com, y, z, com Vy - Nz -4 Sub imp, y, z, imp Sub x, y, z, w -+ Sub xl, y, z, wl -> Sub xxl, y, z, wwl

pt represents the relation "x is a T.A. formula and y is the result of substituting numerals for some (but not necessarily all) variables of x (such a formula y is called a partial instance of x).

Fx -> Subx,y,z,w -+ ptx,w

ptx,y-+pty,z-4ptx,z in represents the relation "x is a T.A. formula and y is the result of substituting numerals for all variables of x (y is an instance of x).

ptx,y-+Sy-4 inx,y

§2 The recursive unsolvability of (U)

139

B represents the set of bases of (U).

Fx->Bx Bx -+Fy -+ Bx*y C represents the relation "x is a component of the base y."

Fx -+ Cx, x

Tr represents the set of true sentences of (U). Cx, y -4 in x, xl -+ Tr y F- x1

TryF- x -iTryF- ximpz ->Sox ->Tryl- z This concludes the construction of the system W in which T is represented. To represent To over {1, 2}, just take all the above axioms and replace each symbol of K9 by its dyadic Godel number. And so we have proved:

Theorem 1 The set T of true sentences of (U) is formally representable over K9 and its set To of Godel numbers is recursively enumerable.

§2 The recursive unsolvability of (U) One of the fundamental results of recursion theory is that there exists a recursively enumerable set that is not recursive. Well, we are about to see that To is such a set. The norm operation introduced in Chapter 1 will

car

Ova

now be used for a good purpose. For any string X in K9, we define its norm to be the string XXo, i.e., X followed by its own Godel number (written in dyadic notation). We note that if H is a predicate of (U) of degree 1, then its norm HHo is a sentence

that is true if the Godel number of H lies in the set represented by H in (U). If n is a dyadic numeral, then n itself has a Godel number no, hence also

a norm nno. Since our Godel numbering is an isomorphism with respect to concatenation, it follows that if n is the Godel number of X, then its norm

nno is the Godel number of the norm Xn of X. Thus the Godel number of XXo is XoXoo We let r(x, y) be the number xyo. Thus the norm of x is r(x, x). For any set A of numbers we let A# be the set of all numbers x whose norm is ..-"r

in A.

Chapter 7. A universal system and its applications

140

Lemma 2.0 (a) The function r(x, y) is recursive,

(b) If A is r.e. so is A#.

Proof (a) By (b) of Proposition 8.1 of Chapter 6, the relation xo = y is r.e. cad

Then r(x, y) = z is the r.e. relation 3w (w = yo A xw = z), hence r is a recursive function.

(b) Suppose that A is r.e. Then x c A# if ly(r(x,x) = yAy c A). Thus A# is r.e. Godel sentences

Call a sentence X of (U) a Gddel sentence for a set A of numbers if either X is true and its Godel number is in A, or X is false and its Godel number

is not in A - in other words X is true

X0 E A

Theorem 2.1 For any r.e. set A there is a Godel sentence X for A. Proof Suppose A is r.e. Then so is A# by the above lemma. Then A# is represented in (U) by some predicate H. Then for every number n,

Hn is true H n E A#, hence Hn is true H nno E A Taking h for n, where h is the Godel number of H, we have: Hh is true

hhoEA. However, hho is the Godel number of Hh, and so Hh is a Godel sentence for A.

Remark

Having proved (b) of Lemma 2.1, Theorem 2.0 is actually a corollary of Theorem 1 of Chapter 1. Can the reader see why this is so?

Exercise 1 Why is this so? Also, how is Theorem 2.1 a special case of a result proved in Chapter 4? What representation system is appropriate for the system (U)?

§3 The enumeration theorem

141

The non-recursiveness of To

There obviously cannot be a Godel sentence for the complement To of To, CDn

for such a sentence would be true if and only if its Godel number was not the Godel number of a true sentence, which is impossible. And so by Theorem 2.1, it follows that To is not r.e. Thus we have

Theorem 2.2 The set To is recursively enumerable but not recursive.

m_.

Exercise 2 A set A is said to be (many-one) reducible to a set B if there is a recursive function f (x) such that for all x, x E A f (x) E B (in other words, A = f-1(B)). A set B is called universal if every r.e. set is reducible to B. Show that the set To is universal,

boy

Exercise 3 Show that To has the stronger property that there is a recursive function f (x, y) such that for every r.e. set A, there is a number i such that for all x: x E A H f (i, x) E To.

Exercise 4 Show that every universal set B has this stronger property. [Hint: To is reducible to B.] C71

Exercise 5 Show that no universal set can be recursive. [This generalizes Theorem 2.2.]

Exercise 6 Show that if A is reducible to B and A is non-recursive, then B is non-recursive.

Part II Enumeration and iteration theorems We now turn to the study of two basic theorems of recursion theory - the enumeration theorem, due to Post and to Kleene, and the iteration theorem (a variant of the s-m-n theorem of Kleene, also discovered independently by Martin Davis). After having proved these two theorems, the universal system (U) will have served its purpose and will no longer be needed.

§3 The enumeration theorem We wish to arrange all r.e. sets in an infinite sequence w1, w2, ... , w,n, .. . (allowing repetitions) in such a way that the relation "wx contains y" is

Chapter 7. A universal system and its applications

142

an r.e. relation. For this purpose we use the universal system (U). We are letting T be the set of true sentences of (U), and To the set of Godel cad

numbers of the true sentences of (U). We are letting r(x, y) be the number xyo, and we have already shown that the function r is recursive, and since To is an r.e. set, then the relation r(x, y) E To is r.e. We now define wn to

be the set of all x such that r(n, x) E To. Thus X E wn H r(n, x) E To. Since the relation r(x, y) E To is r.e. and the r.e. relations are closed under introduction of constants, then for each n, wn is an r.e. set. We call n an index of wn. Thus every set with an index is r.e. Conversely, every r.e. set

ran

has an index, because if A is r.e., then it is represented in (U) by some predicate H with Godel number h, and so for all x, x E A Hx E T 2 we define rn+1(x, y1, ... , yn) oho

to be the number xz1cz2c... czn, where for each i < n, zi is the Godel number of (the numeral) yi. Thus if x is the Godel number of some expression X of (U), then r(x, yi, ... , yn) is the Godel number of the expresCSI

sion X Y1 com yl corn ... com yn. We could then define Rz (x1, ... , xn) if r(i, x1i ... , xn) E To. Then every r.e. relation R(xl,... , xn) is represented in (U) by some predicate H with Godel number h, and so R = R. However, we prefer the indexing using the functions J,- (xl, ... , xn),

since it is a technical advantage that R(xl,... , xn) is equivalent to J(xl, ... , xn) E wi Exercise 7 We have proved the enumeration theorem for r.e. relations of positive integers. From this we can obtain an enumeration theorem for r.e. relations of natural numbers as follows. For any positive n and natural number i define Si as the set of all n-tuples (x1, ... , xn) of natural numbers such that R+1(x1 + 1, ... , xn + 1). [Thus S? (xl, ... , xn) R +1(x1 + 1, ... , xn + 1).] Show that for every r.e. relation S(xl,... , xn) of natural numbers, there is a natural number i such that S = Sin.

Exercise 8 Let us go back to the enumeration theorem for r.e. relations of positive integers. Given an r.e. set wi, the number i is not necessarily the Godel number of 4.,

some predicate which represents wi in (U). Prove that there is a recursive function q(x) such that for any number i, 0(i) is the Godel number of a predicate which represents wi in (U). [Informally, this means that given an r.e. set, in the sense of given an index of it, we can effectively find a predicate of (U) to represent it.] In fact there is such a recursive function O(x) with the additional property that x < O(x), for every x. Prove this. cad

Exercise 9 Prove that every r.e. set has infinitely many indices.

Chapter 7. A universal system and its applications

144

§4 Iteration theorem The second basic tool of recursive function theory is the iteration theorem, which we will shortly state and prove. To fully appreciate the power and significance of this theorem, the reader should first try solving the following exercises. 'C7

Exercise 10 We know that for any two r.e. sets A and B, their union 04'

AU B is r.e. Is there a recursive function O(x, y) such that for all numbers i and j, wb(ij) = wiUwj? [The question can be phrased informally: given two r.e. sets (in the sense that we are given indices of them), can we effectively find an index of their union?]

Exercise 11 Show that there is a recursive function O(x) such that for every number i, R0(i) (x, y) is the inverse of the relation Ri (x, y) (i.e., for all x and y: Rb(i) (x, y)

r.e. or not, are of sufficient importance to warrant a name.] Of course car

-,e

Post's creative set C is immediately completely creative - a fact that Post should have capitalized on. Actually, creative sets and completely creative sets have turned out to be the same thing, but the proof of this involves a fixed point result known as the recursion theorem, which we study in Part III of this volume. In recursion theory, every universal r.e. set is generative, hence creative. A celebrated result of John Myhill is that every creative set is universal, and the proof of this appears to require the recursion theorem. As pointed out to me many years ago by Daniel Lacombe, no fixed point argument is required to show that every completely creative set is universal, and we shall study this later on. "'S


All I have said generalizes to standard universal relational systems, and we shall carry the above terminology "creative," "productive," "coproductive," "completely creative" intact, replacing "r.e. set" by "R-set," and "recursive function" by "E-function."


Part II Double indexing

§3 Double enumeration and double universality

Until further notice we shall assume that R is universal and that Σ contains a pairing function J(x, y) and inverse functions K(x), L(x) such that KJ(x, y) = x, LJ(x, y) = y, and J(K(x), L(x)) = x (a concrete example of such a pairing function is sketched below). We shall also assume that, as in the case of recursion theory, Σ contains a "separation" function σ(x, y) such that for all i and j in N: (i) ω_σ(i,j) is disjoint from ω_σ(j,i); (ii) ωᵢ - ωⱼ ⊆ ω_σ(i,j); (iii) if ωᵢ is disjoint from ωⱼ, then ωᵢ = ω_σ(i,j). Then, as in recursion theory (see §3, Chapter 8), we take αᵢ to be ω_σ(K(i),L(i)) and βᵢ to be ω_σ(L(i),K(i)). Then every pair (A, B) of disjoint R-sets is (αᵢ, βᵢ) for some i (and of course for every i, αᵢ is disjoint from βᵢ). Such a system R might be called doubly indexable. For the rest of this chapter, R will be assumed to be both doubly indexable and universal. From this it follows that the relations "αₓ contains y" and "βₓ contains y" are R-relations.

We will also assume that our indexing of R-relations is "nice", i.e., that Rⁿᵢ(x₁, ..., xₙ) ↔ Jₙ(x₁, ..., xₙ) ∈ ωᵢ. Hence it follows that for all n, and any elements i and j:
(1) Rⁿ_σ(i,j) is disjoint from Rⁿ_σ(j,i);
(2) Rⁿᵢ - Rⁿⱼ ⊆ Rⁿ_σ(i,j);
(3) if Rⁿᵢ is disjoint from Rⁿⱼ, then Rⁿᵢ = Rⁿ_σ(i,j).
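The pairing function assumed above can be made concrete. The following sketch (not from the text) uses the familiar Cantor pairing function, shifted to the positive integers, and checks the three identities KJ(x, y) = x, LJ(x, y) = y, and J(K(x), L(x)) = x on a small range.

```python
import math

def J(x: int, y: int) -> int:
    """Pair two positive integers into one positive integer (a bijection)."""
    a, b = x - 1, y - 1                       # shift to natural numbers
    return (a + b) * (a + b + 1) // 2 + b + 1

def K(z: int) -> int:
    """First inverse: K(J(x, y)) = x."""
    t = z - 1
    w = (math.isqrt(8 * t + 1) - 1) // 2      # diagonal index
    b = t - w * (w + 1) // 2
    return (w - b) + 1

def L(z: int) -> int:
    """Second inverse: L(J(x, y)) = y."""
    t = z - 1
    w = (math.isqrt(8 * t + 1) - 1) // 2
    b = t - w * (w + 1) // 2
    return b + 1

# sanity check of the three identities on a small range
for x in range(1, 30):
    for y in range(1, 30):
        assert K(J(x, y)) == x and L(J(x, y)) == y
for z in range(1, 500):
    assert J(K(z), L(z)) == z
```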

Next, for every n ≥ 2 we define Aⁿᵢ = Rⁿ_σ(K(i),L(i)) and Bⁿᵢ = Rⁿ_σ(L(i),K(i)). Since our indexing is "nice," it follows that Aⁿᵢ(x₁, ..., xₙ)

By the type of a sequential system S we shall mean the type of Σ. In all that follows, our sequential systems will be assumed to be at least of type 0. [In most of our applications, the sequential systems involved will be of types 1 and 2.]

Let us note that if Σ is of type 0, then any Σ-function g(x₁, ..., xₙ) can be expressed as a Σ-function of x₁, ..., xₙ, y₁, ..., yₘ, for the function f(x₁, ..., xₙ, y₁, ..., yₘ) = g(x₁, ..., xₙ) is explicitly definable from g. Also, for i ≤ n, the function f(x₁, ..., xₙ) = xᵢ is explicitly definable from the identity function I (f(x₁, ..., xₙ) = I(xᵢ)), and so if Σ is of type 0, then each of x₁, ..., xₙ can be expressed as a Σ-function of x₁, ..., xₙ.

Connectivity

Now we come to a key notion: we shall say that S is weakly connected if for every element a and any Σ-functions f₁(y₁, ..., yₙ), ..., f_k(y₁, ..., yₙ) (n ≥ 1) there is an element b such that for all y₁, ..., yₙ:

b, y₁, ..., yₙ → a, f₁(y₁, ..., yₙ), ..., f_k(y₁, ..., yₙ)

In all that follows, S will be assumed to be at least weakly connected (and at least of type 0, as already noted).

Illustrations

(A) Consider an indexed relational system R. We shall say that R is weakly connected if for every positive m, the sequential


system Sₘ(R) is weakly connected, which is to say that for every R-relation R(x₁, ..., x_k, y₁, ..., yₘ) and any Σ-functions f₁(x₁, ..., xₙ), ..., f_k(x₁, ..., xₙ), the relation R(f₁(x₁, ..., xₙ), ..., f_k(x₁, ..., xₙ), y₁, ..., yₘ) is again an R-relation. Of course this condition holds for the indexed relational system of recursion theory, i.e., when the R-relations are the r.e. relations and Σ is the class of recursive functions. [Indeed, any indexed relational system that is standard, as defined in Chapter 10, is weakly connected.]

(B) Weak connectivity for applicative systems (i.e., for the sequential systems associated with applicative systems) boils down to this: For any positive n and any elements a, c₁, ..., c_k there is an element b such that for all elements x₁, ..., xₙ: bx₁...xₙ = a(c₁x₁...xₙ)...(c_kx₁...xₙ). This condition is a special case of what is called combinatory completeness, which is roughly this: given any well formed expression W(x₁, ..., xₙ) in variables x₁, ..., xₙ (and possibly also names of elements of N), there is an element b in N such that for all elements a₁, ..., aₙ the equation ba₁...aₙ = W(a₁, ..., aₙ) holds. As is well known, combinatory completeness is equivalent to the existence of elements S and K such that for all x, y, z: Kxy = x and Sxyz = xz(yz). [We give the proof of this in a later chapter.]

(C) For a sentential system (T), to say that S₀(T) is weakly connected is to say that for any positive integers n and k and any element a and any Σ-functions f₁(x₁, ..., xₙ), ..., f_k(x₁, ..., xₙ) there is a number b such that for any numbers a₁, ..., aₙ, the sentence H_b(a₁, ..., aₙ) is equivalent to H_a(f₁(a₁, ..., aₙ), ..., f_k(a₁, ..., aₙ)). [This condition holds for the sentential system associated with Peano Arithmetic, with Σ the class of recursive functions. It also holds for the complete theory (N), with Σ the class of arithmetic functions. Indeed, in both cases, S_r(T) is weakly connected for any natural number r.]
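Item (B) above can be made concrete in code. The following sketch (not from the text) models the combinators S and K as curried Python functions and checks the defining equations Kxy = x and Sxyz = xz(yz), together with the classical fact that SKK behaves as the identity.

```python
# The combinators S and K as curried functions: Kxy = x, Sxyz = xz(yz).
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

I = S(K)(K)   # SKK, which acts as the identity: SKKx = Kx(Kx) = x

assert K("a")("b") == "a"
assert I(42) == 42

# Sxyz = xz(yz), checked on concrete functions:
f = lambda z: lambda w: (z, w)   # plays the role of x
g = lambda z: z + 1              # plays the role of y
assert S(f)(g)(10) == f(10)(g(10)) == (10, 11)
```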

Exact functions

For any n ≥ 2, we shall call a function E(x₁, ..., xₙ) an exact function (for S) if for all elements x₁, ..., xₙ:

E(x₁, ..., xₙ) → x₁, ..., xₙ

Let us see what exact functions are like in some of our intended applications.


(A) Given an indexed relational system R, for any positive k and any n ≥ 2, to say that E(x₁, ..., xₙ) is an exact function for S_k(R) is to say that for all elements a₁, ..., aₙ, x₁, ..., x_k: R_E(a₁,...,aₙ)(x₁, ..., x_k) ↔ R_a₁(a₂, ..., aₙ, x₁, ..., x_k)

For recursion theory, the existence of exact recursive functions for all n ≥ 2 and all positive k is given by the iteration theorem (or the Sᵐₙ theorem, for partial recursive functions instead of r.e. relations). Indeed, if R is any standard system (as defined in Chapter 10), then for each positive k and n ≥ 2, there is an exact function E(x₁, ..., xₙ) for S_k(R).
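The effective content of the iteration theorem mentioned in (A) can be pictured in a toy model (not from the text) in which a "program" is a Python source string defining a function f, and that string plays the role of an index. The helper names below are illustrative only.

```python
# Toy model (not from the text): programs are source strings defining f.
def run(index, *args):
    env = {}
    exec(index, env)
    return env["f"](*args)

# The iteration idea: from an index of a two-argument program and a fixed
# first argument a, compute (by text manipulation alone, without running
# anything) an index of a one-argument program with the same behaviour.
def s_1_1(index, a):
    return (index
            + "\n_f_orig = f"
            + "\ndef f(y):\n    return _f_orig(" + repr(a) + ", y)")

two_arg = "def f(x, y):\n    return x * 10 + y"
one_arg = s_1_1(two_arg, 7)
assert run(two_arg, 7, 3) == run(one_arg, 3) == 73
```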


(B) For applicative systems, if N contains an element I (an identity combinator) such that Ix = x for all x, then for any positive n, [I]ₙ(x₁, ..., xₙ) = x₁x₂...xₙ, hence [I]ₙ is an exact function. And so if N contains an identity combinator, then it is trivial that exact functions of n arguments exist in Σ for all n ≥ 2.

(C) For a sentential system (T), an exact function for S₀(T) is a function E(x₁, ..., xₙ) such that for all numbers a₁, ..., aₙ: S_E(a₁,...,aₙ) is equivalent to H_a₁(a₂, ..., aₙ). Well, we simply take E(a₁, a₂, ..., aₙ) to be the index of the sentence H_a₁(a₂, ..., aₙ), and then S_E(a₁,...,aₙ) is the sentence H_a₁(a₂, ..., aₙ). In standard applications, the Godel numbering is such that for each n ≥ 2, the function E(a₁, ..., aₙ) is a recursive function (in fact a primitive recursive function), and so if Σ contains all recursive functions (or even all primitive recursive functions) then Σ contains exact functions of two or more arguments for S₀(T).

More generally, for any natural number k, to say that E(x₁, ..., xₙ) is an exact function for S_k(T) is to say that for all elements x₁, ..., xₙ and every k-tuple θ of elements of N, H_E(x₁,...,xₙ)(θ) is equivalent to H_x₁(x₂, ..., xₙ, θ). Again, in standard applications, with standard Godel numberings, there is for each n ≥ 2 and each k ≥ 0 a (primitive) recursive function E(x₁, ..., xₙ) such that H_E(a₁,...,aₙ)(θ) is the sentence H_a₁(a₂, ..., aₙ, θ).

Some notations


We are using the symbol θ to denote any finite, possibly empty, sequence of elements of N. As an example of our use, to say that S is weakly connected is to say that for any Σ-functions f₁(x, θ), ..., f_k(x, θ) of n + 1 arguments (n ≥ 0)


and any element a there is an element b such that for every x and every n-tuple θ:

b, x, θ → a, f₁(x, θ), ..., f_k(x, θ)

To say that a function E(x, y, θ) of n + 2 arguments (n ≥ 0) is an exact function is to say that for all elements x, y of N and for any n-tuple θ:

E(x, y, θ) → x, y, θ. [For n = 0, this reduces to E(x, y) → x, y.]


Exercise 1 Suppose that we are not given that S is weakly connected, but that we are given that → is transitive and that for every element a and every Σ-function f(x₁, ..., xₙ) and every m ≥ 0, the following two conditions hold:

(1) There is an element b such that for all x₁, ..., xₙ and every m-tuple θ: b, x₁, ..., xₙ, θ → a, f(x₁, ..., xₙ), θ

(2) There is an element c such that for all x₁, ..., xₙ and every m-tuple θ: c, x₁, ..., xₙ, θ → a, x₁, ..., xₙ, f(x₁, ..., xₙ), θ

Prove that S is weakly connected.

Part III Some fixed point properties

§2 The weak fixed point property

Now things will get more interesting.

Diagonalizers

We shall define a function D(x, θ) of n + 1 arguments (n ≥ 0) to be a diagonal function, or more briefly, a diagonalizer, if for every element x and every n-tuple θ: D(x, θ) → x, x, θ. [For n = 0, this of course reduces to D(x) → x, x.] It is obvious that if E(x, y, θ) is an exact function, then the function E(x, x, θ) is a diagonal function. [We shall sometimes refer to the function E(x, x, θ) as the diagonalization of the function E(x, y, θ).] Let us note that if E(x, y, θ) is a Σ-function, so is its diagonalization E(x, x, θ)


(which is explicitly definable from E(x, y, θ)), and so, since Σ is of type 0, we see that for any natural number n, if Σ contains an exact function of n + 2 arguments, then Σ contains a diagonalizer of n + 1 arguments. We shall define a function Q(x, θ) of n + 1 arguments (n ≥ 0) to be a quasi-diagonalizer if for every element a there is an element a' such that for every n-tuple θ: Q(a', θ) → a, a', θ. [Obviously any diagonalizer is also a quasi-diagonalizer - just take a' = a.] We might call a function F(x, y, θ) quasi-exact if for every a there is some a' such that for all y, θ: F(a', y, θ) → a, y, θ. An exact function is also quasi-exact (take a' = a) and its diagonalization is a quasi-diagonalizer. We have observed (somewhat to our surprise) that many of the things normally accomplished using diagonalizers can be accomplished with quasi-diagonalizers. [A little foretaste of this was given in Part III of Chapter 2. Another example is Theorem 1.1 below.] We recall that we are defining S to have the weak fixed point property if for every a there is some b such that b → a, b.

Theorem 1.1 If Σ contains a quasi-diagonalizer Q(x) of one argument, then S has the weak fixed point property.

Proof Assume the hypothesis. Since S is weakly connected, given any element a there is some element c such that for every element x:

(1) c, x → a, Q(x)

Also, since Q(x) is a quasi-diagonalizer, there is some element c' such that Q(c') → c, c'. Now, by (1), taking c' for x, we have c, c' → a, Q(c'). Thus Q(c') → c, c' → a, Q(c'), and since → is transitive, we have Q(c') → a, Q(c'). Thus b → a, b, where b = Q(c').
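In the sentential applications, this kind of diagonal construction is exactly what produces self-referential sentences. A classic concrete analogue (not from the text) is a quine: a template plays the role of a predicate, and applying it to a quotation of its own text is the diagonalization that yields a program whose output is its own source.

```python
# A classic illustration (not from the text) of the diagonal construction: a
# Python quine.  The string `template` plays the role of a predicate H(x);
# template.format(template) applies it to a quotation of itself, and the
# resulting two-line program prints exactly its own source (ignoring these
# comment lines).
template = 'template = {!r}\nprint(template.format(template))'
print(template.format(template))
```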

Corollary 1.2 If Σ contains a diagonal function of one argument, then S has the weak fixed point property.

Corollary 1.3 If E contains an exact - or even quasi-exact - function E(x, y) of two arguments, then S has the weak fixed point property. The following theorem can be obtained as a corollary of Theorem 1.1.

Theorem 1.4 For n ≥ 1, suppose Σ contains a function Q(x) such that for any element a there is an element a' such that for all x₁, ..., xₙ:

Q(a'), x₁, ..., xₙ → a, a', x₁, ..., xₙ


Then for every a there is some b such that for all x₁, ..., xₙ: b, x₁, ..., xₙ → a, b, x₁, ..., xₙ.

Proof One can, of course, prove this from scratch by only a light modification of the proof of Theorem 1.1, but the following proof (using Theorem 1.1, which has already been proved) is more interesting. Given n, we define a new relation →ₙ on non-empty finite sequences as follows. We define a₁, ..., a_k →ₙ b₁, ..., b_r if for all elements x₁, ..., xₙ:

a₁, ..., a_k, x₁, ..., xₙ → b₁, ..., b_r, x₁, ..., xₙ

We then let Sₙ be the sequential system (N, Σ, →ₙ). It is a routine matter to verify that →ₙ is transitive (since → is) and that Sₙ is weakly connected (since S is). The hypothesis of the theorem is then that Q(x) is a quasi-diagonalizer for the system Sₙ, and so by Theorem 1.1 applied to Sₙ, for any a there is some b such that b →ₙ a, b, which means that for all x₁, ..., xₙ:

b, x₁, ..., xₙ → a, b, x₁, ..., xₙ

Exercise 2 Verify that in the above proof, →ₙ is transitive and that Sₙ is weakly connected.

Next we have:

Theorem 1.5 The weak fixed point property is equivalent to the following condition: for any element a and any Σ-functions f₁(x), ..., f_k(x) there is an element b such that b → a, f₁(b), ..., f_k(b).

Proof Obviously the weak fixed point property is a special case of this condition - the case k = 1 and f₁(x) = x (f₁ is the identity function). But the weak fixed point property implies this more general condition as follows: Suppose S has the weak fixed point property. Then given a and Σ-functions f₁(x), ..., f_k(x) there is some element c such that for all x: c, x → a, f₁(x), ..., f_k(x). But by the weak fixed point property there is some b such that b → c, b. Also c, b → a, f₁(b), ..., f_k(b), so, since → is transitive, b → a, f₁(b), ..., f_k(b).

Exercise 3 Show that the hypothesis of Theorem 1.4 implies that for any Σ-functions f₁(x, y₁, ..., yₙ), ..., f_k(x, y₁, ..., yₙ), and any element a, there is some b such that for all x₁, ..., xₙ:

b, x₁, ..., xₙ → a, f₁(b, x₁, ..., xₙ), ..., f_k(b, x₁, ..., xₙ)


§3 Universal systems and the weak Kleene property

For any positive n, we shall say that S is n-universal if there is an element e - which we call an n-universal element - such that for all x₁, ..., xₙ: e, x₁, ..., xₙ → x₁, ..., xₙ. We call S universal if S is n-universal for every positive n.

Examples

(A) Consider an indexed relational system R. To say that S(R) is n-universal is to say that there is an element e such that for all x₁, ..., xₙ: e, x₁, ..., xₙ → x₁, ..., xₙ, which is to say that for all x, x₁, ..., xₙ: R_e(x, x₁, ..., xₙ) iff R_x(x₁, x₂, ..., xₙ) - in other words, the relation R_x(y₁, ..., yₙ) (as a relation among x, y₁, ..., yₙ) is itself an R-relation. We call R universal if for each n, Sₙ(R) is universal. The indexed system of r.e. relations is universal, as we have seen. [An important example of an indexed relational system that is not universal is the system of arithmetic relations, as indexed by the Godel numbers of formulas that express them.]
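The universality of the system of r.e. relations in item (A) comes down, computationally, to the existence of a universal program, i.e. an interpreter. A toy rendering (not from the text), with program source strings playing the role of indices:

```python
# Toy model (not from the text): "programs" are Python source strings defining
# a one-argument function f; one universal procedure simulates any program
# from its index.
def universal(index, x):
    env = {}
    exec(index, env)      # decode the index
    return env["f"](x)    # run the indexed program on x

square = "def f(x):\n    return x * x"
succ   = "def f(x):\n    return x + 1"
assert universal(square, 7) == 49 and universal(succ, 7) == 8
```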

(B) For an applicative system A, if N contains an identity combinator I, then S(A) is obviously universal (for each n, just take e = I).

(C) For a sentential system (T), to say that S_k(T) is n-universal (k ≥ 0, n ≥ 1) is to say that there is a number e such that for every x₁, ..., xₙ and every k-tuple θ, the sentence H_e(x₁, ..., xₙ, θ) is equivalent to H_x₁(x₂, ..., xₙ, θ) - that is, for n > 1, H_e(x₁, ..., xₙ, θ) is equivalent to H_x₁(x₂, ..., xₙ, θ), but for n = 1, H_e(x₁, θ) is equivalent to H_x₁(θ). For n = 1 and k = 0, this reduces to H_e(x) being equivalent to S_x, for all x. We shall say that (T) is k-n universal if S_k(T) is n-universal, and we shall say that (T) is universal if (T) is k-n universal for every k ≥ 0, n ≥ 1.

Note

The definition of universality for a sentential system (T) is independent of the class of Σ-functions; it depends merely on the representation system Z and the equivalence relation ~. And so for a representation system Z and an equivalence relation ~ on the sentences of Z, we can speak of the pair (Z, ~) as being universal or not. If it is, then we shall say that Z is universal with respect to the equivalence relation ~. Similar remarks apply to "k-n universal" in place of "universal." Consider now the representation system Z_P of Peano Arithmetic. It is indeed universal with respect to weak equivalence (assuming consistency),


but it is definitely not universal - not even 0-1 universal - with respect to very strong equivalence, i.e., there is no predicate H such that for all n, the sentence H(n̄)

Also, for any element a and any Σ-functions h₁(x, y), ..., h_k(x, y) there is a Σ-function φ(x) such that for all x: φ(x) → a, h₁(x, φ(x)), ..., h_k(x, φ(x))


(B) If S is unified - or even of type B - then S has the extended recursion property: for any element a and any Σ-functions g₁(x), ..., g_k(x) there is a Σ-function φ(x) such that for all x:

φ(x) → a, x, φ(g₁(x)), ..., φ(g_k(x))

(C) If S is unified - or even of type C - then S has the Myhill property - and, more generally, for any n ≥ 1 there is a Σ-function φ such that for all x₁, ..., xₙ: φ(x₁, ..., xₙ) → x₁, ..., xₙ, φ(x₁, ..., xₙ). Also, under the same hypothesis, for any Σ-function g(x) there is a Σ-function ψ(x) such that for all x: ψ(x) → g(x), ψ(x).

(D) If S is unified and universal - or even if S is universal and of either type A or type B - then S has the strong Kleene property: for any Σ-function F(x, y) there is a Σ-function φ(x) such that for all x: φ(x) → F(x, φ(x)).

Applications


We shall call an indexed relational system R unified if for every n ≥ 1, the sequential system Sₙ(R) is unified. An example is the indexed relational system of recursion theory - or, for that matter, any system that is standard in the sense of Chapter 10. We shall say that a sentential system (T) is unified if for every n ≥ 0 the sequential system Sₙ(T) is unified. An example is Peano Arithmetic (with Σ the class of recursive functions). We shall call an applicative system A unified if the sequential system S(A) is unified. Any complete applicative system (such as combinatory logic) is unified. Here are some applications:

I Indexed relational systems

For any unified indexed relational system R:

(a) [Recursion theorem] For any R-relation R(x, y, z₁, ..., zₙ) there is a Σ-function φ(x) such that for all x, y₁, ..., yₙ: R_φ(x)(y₁, ..., yₙ) ↔ R(x, φ(x), y₁, ..., yₙ)


More generally, for any R-relation R(x₁, ..., xₙ, y, z₁, ..., zₘ) there is a Σ-function φ(x₁, ..., xₙ) such that for all x₁, ..., xₙ, y₁, ..., yₘ: R_φ(x₁,...,xₙ)(y₁, ..., yₘ) ↔ R(x₁, ..., xₙ, φ(x₁, ..., xₙ), y₁, ..., yₘ)

(b) [Extended recursion theorem] For any R-relation R of k + n + 1 arguments (k ≥ 1, n ≥ 1) and any Σ-functions g₁(x), ..., g_k(x) there is a Σ-function φ(x) such that for all x, y₁, ..., yₙ: R_φ(x)(y₁, ..., yₙ) ↔ R(x, φ(g₁(x)), ..., φ(g_k(x)), y₁, ..., yₙ)

(c) [Myhill properties] For any n ≥ 1 there is a Σ-function φ(x) such that for all x, y₁, ..., yₙ: R_φ(x)(y₁, ..., yₙ) ↔ R_x(φ(x), y₁, ..., yₙ)

More generally, for any Σ-function g(x) there is a Σ-function φ(x) such that for all x, y₁, ..., yₙ: R_φ(x)(y₁, ..., yₙ) ↔ R_g(x)(φ(x), y₁, ..., yₙ)

(d) If R is universal, then for every n ≥ 1 and any Σ-function F(x, y) there is a Σ-function φ(x) such that for all x, Rⁿ_φ(x) = Rⁿ_F(x,φ(x)).

II Sentential systems

For any unified sentential system (T):

(a) For any n ≥ 0 and any predicate H of n + 2 arguments there is a Σ-function φ(x) such that for all x and every n-tuple θ, H_φ(x)(θ) is equivalent to H(x, φ(x), θ). In particular (taking n = 0), this reduces to S_φ(x) being equivalent to H(x, φ(x)).

(b) For any predicate H of k + n + 1 arguments (k ≥ 1, n ≥ 0) and any Σ-functions g₁(x), ..., g_k(x) there is a Σ-function φ(x) such that for all x, y₁, ..., yₙ: H_φ(x)(y₁, ..., yₙ) ≡ H(x, φ(g₁(x)), ..., φ(g_k(x)), y₁, ..., yₙ)

(c) For every n ≥ 0 there is a Σ-function φ(x) such that for every x and every n-tuple θ: H_φ(x)(θ) ≡ H_x(φ(x), θ)

[For n = 0 this reduces to S_φ(x) ≡ H_x(φ(x)).]


(d) If (T) is also universal then for any Σ-function F(x, y) there is a Σ-function φ(x) such that for every number x, S_φ(x) ≡ S_F(x,φ(x)). More generally, if (T) is unified and universal, then for any n ≥ 0 and any Σ-function F(x, y) there is a Σ-function φ(x) such that for all x and every n-tuple θ: H_φ(x)(θ) ≡ H_F(x,φ(x))(θ).

III Applicative systems

For any complete applicative system A there is an element b (called a "fixed point combinator") such that for every element x, bx = x(bx) (for every x, the element bx is a fixed point of x). [This is the Myhill property.]

Also, for any element a there is an element b such that for all x, bx = ax(bx). [This is the recursion property.]
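For a concrete analogue (not from the text) of the fixed point combinator of item III, here is the strict-evaluation variant, usually called the Z combinator, in Python; Z(f) is a fixed point of f up to the extra lambda needed to delay evaluation under strict semantics.

```python
# Z = λf.(λx.f(λv.x x v))(λx.f(λv.x x v)); Z(f) is a fixed point of f
# up to eta-expansion, usable under Python's strict evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# f maps a "recursive call" to the factorial body; Z(f) is then factorial.
fact_body = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
factorial = Z(fact_body)
assert factorial(5) == 120
```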

Chapter 14

Multiple fixed point properties

In this chapter we unify various "double" fixed point properties, and other multiple fixed point properties that arise in recursion theory, combinatory logic, and metamathematics.

Part I Double fixed points

§1 The weak double fixed point property

We shall say that S has the weak double fixed point property if for any elements a₁, a₂ there are elements b₁, b₂ such that:
(1) b₁ → a₁, b₁, b₂
(2) b₂ → a₂, b₁, b₂

Let us see what the weak double fixed point property is in applications.

(A) For an indexed relational system R, to say that S₁(R) has the weak double fixed point property is to say that for any R-relations R_a₁(x, y, z), R_a₂(x, y, z) there are elements b₁ and b₂ such that ω_b₁ = {x : R_a₁(b₁, b₂, x)} and ω_b₂ = {x : R_a₂(b₁, b₂, x)}. More generally, for any positive n, to say that Sₙ(R) has the weak double fixed point property is to say that for any R-relations R_a₁(x₁, x₂, y₁, ..., yₙ), R_a₂(x₁, x₂, y₁, ..., yₙ), there are elements b₁, b₂ such that for all y₁, ..., yₙ: R_b₁(y₁, ..., yₙ) ↔ R_a₁(b₁, b₂, y₁, ..., yₙ), and R_b₂(y₁, ..., yₙ) ↔ R_a₂(b₁, b₂, y₁, ..., yₙ). As we will see, the indexed relational system of recursion theory has this property (this is known as the weak double recursion theorem),


and so does the indexed system of arithmetical relations - and indeed, so does any standard system.
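A classical concrete instance of the weak double fixed point property in recursion theory is a pair of "mutual quines": each of two programs prints the source of the other. The following sketch (not from the text) builds such a pair in the toy model where programs are Python source strings.

```python
# Mutual quines (not from the text): run(A) is the source of B and vice versa.
import io, contextlib

t = 'print("t = " + repr(t) + "\\nflag = " + str(1 - flag) + "\\n" + t)'
prog = lambda flag: "t = " + repr(t) + "\nflag = " + str(flag) + "\n" + t

def run(src: str) -> str:
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src, {})
    return buf.getvalue().rstrip("\n")

A, B = prog(0), prog(1)
assert run(A) == B and run(B) == A
```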


(B) For an applicative system A, to say that S(A) has the weak double fixed point property is to say that for any elements a₁, a₂ there are elements b₁, b₂ such that a₁b₁b₂ = b₁ and a₂b₁b₂ = b₂. As we will see, a complete applicative system (such as that of combinatory logic) has this property (this result is known as the double fixed point theorem for combinatory logic).
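Item (B) can be illustrated (not from the text) in Python, where application must be delayed by a lambda, so the equations b₁ = a₁b₁b₂ and b₂ = a₂b₁b₂ hold only up to that thunking; mutually recursive even/odd testing arises as such a double fixed point.

```python
# Double fixed point (not from the text), up to thunking under strict evaluation.
def double_fix(a1, a2):
    b1 = lambda *args: a1(b1, b2)(*args)
    b2 = lambda *args: a2(b1, b2)(*args)
    return b1, b2

# mutually recursive even/odd as a double fixed point
a_even = lambda ev, od: lambda n: True if n == 0 else od(n - 1)
a_odd  = lambda ev, od: lambda n: False if n == 0 else ev(n - 1)
is_even, is_odd = double_fix(a_even, a_odd)
assert is_even(10) and is_odd(7)
```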

(C) For a sentential system (T), to say that S₀(T) has the weak double fixed point property is to say that for any two-place predicates H_a₁, H_a₂, there are numbers b₁, b₂ such that S_b₁ ≡ H_a₁(b₁, b₂) and S_b₂ ≡ H_a₂(b₁, b₂). More generally, to say that Sₙ(T) has the weak double fixed point property is to say that for any predicates H_a₁, H_a₂ of n + 2 arguments, there are numbers b₁, b₂ such that for every n-tuple θ, H_b₁(θ) ≡ H_a₁(b₁, b₂, θ) and H_b₂(θ) ≡ H_a₂(b₁, b₂, θ). [This condition for Peano Arithmetic is a special case of the generalized diagonal lemma of Boolos[2].]

Now let us get back to sequential systems in general.

Theorem 1.1 If E contains a quasi-diagonalizer Q(x, y) of two arguments, then S has the weak double fixed point property.

Proof Suppose Q(x, y) is a Σ-function and a quasi-diagonalizer. By Lemma Q of the last chapter, Q(x, y) is a fixed point function, hence for any a₁ and a₂ there are elements c₁, c₂ such that for every x:

(1) Q(c₁, x) → a₁, Q(c₁, x), Q(x, c₁)
(2) Q(c₂, x) → a₂, Q(x, c₂), Q(c₂, x)

In (1) we take c₂ for x and in (2) we take c₁ for x and we have:

(1)' Q(c₁, c₂) → a₁, Q(c₁, c₂), Q(c₂, c₁)
(2)' Q(c₂, c₁) → a₂, Q(c₁, c₂), Q(c₂, c₁)

And so we take b₁ = Q(c₁, c₂); b₂ = Q(c₂, c₁).

Corollary 1.2 If E contains an exact - or even quasi-exact - function E(x, y, z) of three arguments, then S has the weak double fixed point property.


Discussion

I would like to point out another proof of the above corollary, modeled after Boolos' proof of his generalized diagonal lemma[2]. Given elements a₁, a₂, take elements c₁, c₂ such that for all x and y:

(1) E(c₁, x, y) → a₁, E(x, x, y), E(y, x, y)
(2) E(c₂, x, y) → a₂, E(x, x, y), E(y, x, y)

Then take b₁ = E(c₁, c₁, c₂) and b₂ = E(c₂, c₁, c₂).

Exercise 1 Show that the weak double fixed point property is equivalent to the following property: for any elements a₁, a₂ there are elements b₁, b₂ such that:
(1) b₁ → a₁, b₁, b₂
(2) b₂ → a₂, b₂, b₁

Exercise 2 Suppose Σ contains a quasi-diagonalizer Q(x, y, z) of three arguments. Prove that S has the weak "triple" fixed point property, i.e., for any a₁, a₂, a₃ there are elements b₁, b₂, b₃ such that:
(1) b₁ → a₁, b₁, b₂, b₃
(2) b₂ → a₂, b₁, b₂, b₃
(3) b₃ → a₃, b₁, b₂, b₃

Solution (sketch) Take elements c₁, c₂, c₃ such that:
(1) Q(c₁, x, y) → a₁, Q(c₁, x, y), Q(x, y, c₁), Q(y, c₁, x)
(2) Q(c₂, x, y) → a₂, Q(y, c₂, x), Q(c₂, x, y), Q(x, y, c₂)
(3) Q(c₃, x, y) → a₃, Q(x, y, c₃), Q(y, c₃, x), Q(c₃, x, y)

Then take b₁ = Q(c₁, c₂, c₃), b₂ = Q(c₂, c₃, c₁), b₃ = Q(c₃, c₁, c₂).

Exercise 3 Suppose Σ contains a quasi-diagonalizer Q(x₁, ..., xₙ) of n arguments. Show that S then has the weak "n-fold" fixed point property, i.e., for any elements a₁, ..., aₙ there are elements b₁, ..., bₙ such that for each i ≤ n: bᵢ → aᵢ, b₁, ..., bₙ.



Exercise 4 An easier way of achieving the n-fold fixed point property is possible if E contains a quasi-exact function E(x, yl, ... , yn) of n + 1 arguments. For then, given elements al, ... , an, for each i < n, let ci be such that E(ci, yl, .. , yn) ai, E(yl, yl, , yn), E(y2, yl, , yn), E(yn, Y1.... , yn). Then take bi = E(ci, c1, ... , cn) and show that for each i < n, bi -> ai, b1,... , bn.

Exercise 5 Suppose E is of type 1 and contains a normalizer N(x) of one argument. Does S necessarily have the weak double fixed point property? [The solution is very tricky and will emerge from a result of a later chapter.] Another weak double fixed point property

Theorem 1.3 If Σ contains a quasi-diagonalizer Q(x, y, z, w) of four arguments then for any elements a₁, a₂ there are elements b₁, b₂ such that:
(1) b₁ → a₁, a₂, b₁, b₂
(2) b₂ → a₂, a₁, b₁, b₂

Proof By the quasi-diagonal lemma, given a1 and a2 there are elements c1, c2 such that for all x, y, z: (1) Q(cl, x, y, z) -+ a1, z, Q(cl, x, y, z), Q(x, ci, y, z) (2) Q(c2, x, y, z) -> a2, y, Q(x, c2, y, z), Q(c2, x, y, z)

In (1) we substitute c2 for x, a1 for y and a2 for z. In (2) we substitute c1 for x, a1 for y and a2 for z, and we have: (1)' Q(cl, c2, al, a2) -. al, a2, Q(cl, c2, al, a2), Q(c2, cl, al, a2) (2)' Q(c2, c1, a1, a2) --a a2, a1, Q(c1, c2, a,, a2), Q(c2, cl, a,, a2)

And so we take b1 = Q(c1i c2, a1, a2) and b2 = Q(c2i c1, a1, a2).

The weak double Kleene property

We will say that S has the weak double Kleene property if for any Efunctions F, (x, y), F2(x, y) there are elements b1 and b2 such that b1 -> F1 (b1, b2) and b2 -. F2 (b1, b2).

Theorem 1.4 If S is universal and has the weak double fixed point property, then S has the weak double Kleene property.


Proof Exercise. Exercise 6 Prove Theorem 1.4 and give the applications.

Part II Double recursion properties

§2 Double recursion

There are three properties that we call "double recursion properties" that a system S may or may not have, which will interest us:

DR0: For any elements a₁ and a₂ there are Σ-functions φ₁(x), φ₂(x) such that for all x:
(1) φ₁(x) → a₁, x, φ₁(x), φ₂(x)
(2) φ₂(x) → a₂, x, φ₁(x), φ₂(x)

DR1: For any elements a₁ and a₂ there are Σ-functions φ₁(x, y), φ₂(x, y) such that for all x and y:
(1) φ₁(x, y) → a₁, x, φ₁(x, y), φ₂(x, y)
(2) φ₂(x, y) → a₂, y, φ₁(x, y), φ₂(x, y)

DR2: For any elements a₁ and a₂ there are Σ-functions φ₁(x, y), φ₂(x, y) such that for all x and y:
(1) φ₁(x, y) → a₁, x, y, φ₁(x, y), φ₂(x, y)
(2) φ₂(x, y) → a₂, x, y, φ₁(x, y), φ₂(x, y)

These three properties are of increasing strength: DR1 obviously implies DR0 (just identify the variables x and y), and DR2 implies DR1, because given a₁ and a₂ we can take a₁', a₂' such that for all x, y, z, w: a₁', x, y, z, w → a₁, x, z, w and a₂', x, y, z, w → a₂, y, z, w, and so applying DR2 to a₁', a₂' we have DR1 for a₁ and a₂. We will now prove the following "double" analogues of Theorem 1.2 of the last chapter.

Theorem 2.1 Suppose S is of type 1. Then (a) If E contains a normalizer N(x, y) of two arguments then S has property DR1 (and hence also DRo).

(b) If E contains a normalizer N(x, y, z) of three arguments then S has property DR2 (and hence also DR1 and DRo).


Proof Suppose S is of type 1.

(a) Suppose Σ contains a normalizer N(x, y). Then N(x, y), N(y, x) are both Σ-functions of x and y, hence for any elements a₁ and a₂ there are elements a₁', a₂' such that for all x, y and z: a₁', x, y, z → a₁, x, N(y, z), N(z, y) and a₂', x, y, z → a₂, x, N(z, y), N(y, z). Now take Σ-functions t₁(x), t₂(x) such that N(t₁(x), y) → a₁', x, t₁(x), y and N(t₂(x), y) → a₂', x, t₂(x), y. Then

(1) N(t₁(x), y) → a₁, x, N(t₁(x), y), N(y, t₁(x))
(2) N(t₂(x), y) → a₂, x, N(y, t₂(x)), N(t₂(x), y)

In (2) we interchange x and y and get:

(2)' N(t₂(y), x) → a₂, y, N(x, t₂(y)), N(t₂(y), x)

In (1) we take t₂(y) for y and in (2)' we take t₁(x) for x and we have:

(1)'' N(t₁(x), t₂(y)) → a₁, x, N(t₁(x), t₂(y)), N(t₂(y), t₁(x))
(2)'' N(t₂(y), t₁(x)) → a₂, y, N(t₁(x), t₂(y)), N(t₂(y), t₁(x))

We thus take φ₁(x, y) = N(t₁(x), t₂(y)) and φ₂(x, y) = N(t₂(y), t₁(x)).

(b) Suppose Σ contains a normalizer N(x, y, z). Given a₁, a₂, by Lemma N, §1 of the last chapter, there are Σ-functions t₁(x), t₂(x) such that for all x, y, z:

(1) N(t₁(x), z, y) → a₁, x, y, N(t₁(x), z, y), N(z, t₁(x), y)
(2) N(t₂(x), z, y) → a₂, x, y, N(z, t₂(x), y), N(t₂(x), z, y)

We then take φ₁(x, y) = N(t₁(x), t₂(x), y) and φ₂(x, y) = N(t₂(x), t₁(x), y).

From Theorem 2.1 and from Proposition 1.1 of the last chapter we have:

Theorem 2.2 Suppose S is integrated and of type 1. Then (a) If E contains a diagonalizer (or even a near diagonalizer) D(x, y), then S has properties DR1 and DRo. (b) If E contains a diagonalizer (or even a near diagonalizer) D(x, y, z), then S also has property DR2.


Corollary 2.3 Suppose S is integrated and of type 1. Then (a) If E contains an exact function E(x, y, z) of three arguments then S has properties DR1 and DRo.

(b) If E contains an exact function E(x, y, z, w) of four arguments then S has property DR2. Question

Suppose S is integrated and of type 1 and that E contains a diagonal Q)-

function D(x, y) of two arguments. Does S necessarily have property DR2? We do not know (but see exercise 7 below).


Exercise 7 Call S n-integrated (n > 1) if for every natural number m and every element a there is a E-function t(xl, ... , xn) such that for all x1, ... , xn and every m-tuple 9: t(xl,... , xn), 0 -. a, x1, ...,x,9. [Thus 1-integrated is what we have been calling "integrated."] Suppose S is 2-integrated and of type 1 and that E contains a diagonalizer D(x, y) of two arguments. Prove that S has property DR2. coy

§3 A type 2 approach

As with recursion properties, we have a type 2 approach to double recursion

properties as well as a type 1 approach. The type 2 approach to double recursion is particularly neat.

Theorem 3.1 Suppose E is of type 2. Then (a) If E contains a quasi-diagonalizer Q(x, y, z) of three arguments, then S has property DRo. (b) If E contains a quasi-diagonalizer Q(x, y, z, w) of four arguments, then S has property DR2 (and hence also DR1 and DRo).

We first prove

Lemma QQ [Double quasi-diagonal lemma] Suppose Q(x, y, 9) is a quasi-diagonalizer of n+2 arguments (n > 0). Then for any elements a1 and a2 and any E-functions fl (x, y, 9), ... , f k (x, y, 9), 91(x, y, 9), ... , 9, (x, y, 9) of n + 2 arguments there are elements b1, b2 such that for every n-tuple 9:


(1) Q(bi, b2, 0) -> ai, fl (bl, b2, 0), ... , fk (bl, b2, 0) (2) Q(b2, b1, 8) -+ a2, 9i (bi, b2, 0), ... , 9r (bi, b2, 0)

Proof By Lemma Q, Chapter 13, there are elements b1, b2 such that for every 'y and n-tuple 8: (1) Q(bi, y, 8) -> ai, fi (bi, y, 0), ... , fk (b1, y, 8) (2) Q(b2, y, 8) -+ a2,91(y,b2,0),...,9r(y,b2,8)

Then take b2 for y in (1) and b1 for y in (2) and the conclusion follows. Proof of Theorem 3.1

Suppose Σ is of type 2.

(a) The functions z, Q(x, y, z), Q(y, x, z) are all Σ-functions of x, y, z, so by the above lemma there are elements b₁, b₂ such that for all x:

(1) Q(b₁, b₂, x) → a₁, x, Q(b₁, b₂, x), Q(b₂, b₁, x)
(2) Q(b₂, b₁, x) → a₂, x, Q(b₁, b₂, x), Q(b₂, b₁, x)

And so we take φ₁(x) = Q(b₁, b₂, x) and φ₂(x) = Q(b₂, b₁, x).

(b) Again by Lemma QQ there are elements b₁, b₂ such that for all x and y:

(1) Q(b₁, b₂, x, y) → a₁, x, y, Q(b₁, b₂, x, y), Q(b₂, b₁, x, y)
(2) Q(b₂, b₁, x, y) → a₂, x, y, Q(b₁, b₂, x, y), Q(b₂, b₁, x, y)

And so we take φ₁(x, y) = Q(b₁, b₂, x, y) and φ₂(x, y) = Q(b₂, b₁, x, y).

Corollary 3.2 Suppose E is of type 2. Then (a) If E contains a quasi-exact function of four arguments, then S has property DRo.

(b) If E contains a quasi-exact function of five arguments then S has property DR2.


Discussion a--

I would like to point out another proof of the above corollary. Suppose E is of type 2 and contains a quasi-exact function E(x, y, z, w) of four arguments. Given al, a2, take cl so that for all y1 i Y2, x: E(cl, y1, y2, x) -* al, x, E(yl, y1, y2, x), E(y2, y1, y2, x)

and take c2 such that E(c2i y1, y2, x) -> a2, x, E(y1, y1, y2, x), E(y2, y1, y2, x)

Then take 01(x) = E(cl, cl, c2i x) and 02(X) = E(c2i c1, c2, x). We then get (a). We get (b) the same way, everywhere replacing "x" by "x, y."

Exercise 8 Prove that S has property DR, if and only if for every al, a2 there are E-functions 01(x, y), 02(x, y) such that for all x and y: (1) 01(x, y) - al, x, 01(x, y), 02 (x, y)

(2) 02(x, y) -a2, y, 02(x, y), 01(x, y)

We will later need

Theorem 3.3 Suppose S has property DRo. Then for any elements a1 and a2 and any E-functions gl (x, y), g2 (x, y) there are E-functions 01(x), 02(x) such that for all x: (1) 01(x) -> a1,x,g1(01(x),02(x)) (2) 02(x) _+ a2,x,92(01(x),02(x))

Proof Take ai, a2 such that ai, x, y, z - a1, x, g1(y, z) and a2/,x , y, z -> a2i x, g2(y, z). Then take E-functions 01(X), 02(X) such that 01(x) -* al, x, 01(x), 02(x) and 02(X) - a2, x, 01(x), 02(x)

Exercise 9 Suppose S has property DR2.

Show that for any ele-

ments al, a2 and any E-functions gl (x, y), g2(x, y) there are E-functions 01(x, y), 02(x, y) such that:

(1) o1(x,y) - a1,x,y,gl(o1(x,y),02(x,y)),92(01(x,y),02(x,y)) (2) 02 (x, y) -* a2, x, y, 9101(x, y), 02(x, y)), 9201(x, y), 02 (x, y))


Unified systems and applications

We see from Corollaries 2.3 and 3.2 that if S is either of type 1, integrated and has an exact function of four arguments, or of type 2 and has an exact function of five arguments, then S has the double recursion properties DR2, DR1, and DR0. This gives two very different ways of proving that any unified sequential system has the double recursion properties, which in turn has the following applications:

(A) For any R-relations M₁(x, y, z₁, z₂, w₁, ..., wₙ) and M₂(x, y, z₁, z₂, w₁, ..., wₙ) of a unified indexed relational system R, there are Σ-functions φ₁(x, y), φ₂(x, y) such that for all x, y, w₁, ..., wₙ:

(1) R_φ₁(x,y)(w₁, ..., wₙ) ↔ M₁(x, y, φ₁(x, y), φ₂(x, y), w₁, ..., wₙ)
(2) R_φ₂(x,y)(w₁, ..., wₙ) ↔ M₂(x, y, φ₁(x, y), φ₂(x, y), w₁, ..., wₙ)

(B) For a unified sentential system (T), for any predicates K₁, K₂ of k + 4 arguments (k ≥ 0) there are Σ-functions φ₁(x, y) and φ₂(x, y) such that for every x, y and k-tuple θ:

(1) H_φ₁(x,y)(θ) ≡ K₁(x, y, φ₁(x, y), φ₂(x, y), θ)
(2) H_φ₂(x,y)(θ) ≡ K₂(x, y, φ₁(x, y), φ₂(x, y), θ)

(C) The double property DR2 for complete applicative systems is of less interest than the double Myhill property, which we consider in Part III, but for the sake of the record, for any elements a₁, a₂ there are elements b₁, b₂ such that for all x and y: b₁xy = a₁xy(b₁xy)(b₂xy), and b₂xy = a₂xy(b₁xy)(b₂xy).

Strong double Kleene property


We will say that S has the strong double Kleene property if for any Efunction Fl (x, y, z, w) and F2 (x, y, z, w) there are E-functions 01(x, y), 02(x, y) such that for all x and y: (1) q51(x, y) -* F1(x, y, 01(x, y), 02 (x, y))

(2) 02(x, y) -> F2(x, y, 01(x, y), 02(x, y))

Theorem 3.4 If S is universal and has property DR2, then it has the strong double Kleene property.


Exercise 10 Prove Theorem 3.4. Conclude that every unified universal sequential system has the strong double Kleene property. What does this tell us about unified indexed relational systems and unified sentential systems?

Exercise 11 Suppose S is universal and has property DRa. Show that for any E-functions Fi(x, y, z) and F2(x, y, z) there are E-functions 01(x) and 02 (x) such that for all x: 01(x) -. Fl (x, 01(x), 02 (x) ), and 02 (x) -* F2(x, 01(x), 02(x))

Part III Double Myhill properties and related results §4 The double Myhill property We shall say S has property DM (the double Myhill property) if there are E-functions 01(x, y), 02(x, y) such that for all x and y: (1) 01(x, y) -* x, 01(x, y), 02 (x, y)

(2) 02(X, y) - y, 01(x, y), 02 (x, y)

It is obvious that property DM implies the weak double fixed point coo

property (it is the uniform version of it - given a1, a2, take b1 = 01(a1, a2) and b2 = 02(a,, a2)). It is also obvious that if S universal and has property DR1i then it has property DM(just take al to be a three-universal element and a2 = a1). We are now interested in seeing how DM can be obtained without universality. ...


As the reader might expect, we will prove that if E is of type 1 and contains a strong fixed point function F(x, y) of two arguments, then S has the double Myhill property. As we are at it, we will prove something more general.

For any k > 0 we will say that S has property DMk if there are Ecad

functions 01(x, y, 0), 02 (x, y, 0) of k2 arguments such that for every x, y and k-tuple 0:

(1) q1(x,y,B)-*x,0,01(x,y,B),02(x,y,e) (2) 02(x, y, 0) - y, 0, 01(x, y, 0), 02(x, y, 0)

Thus DMo is the double Myhill property DM. Property DM1 is a uniform version of the double recursion property DRo, and implies it if E is of type 2. DM2 is a uniform version of DR2, and implies it, if E is of type 2. Thus


for a system S of type 2, property DM2 is even stronger than DR2, and DM1 is even stronger than DRo.

Theorem 4.1 If E is of type 1, then for any natural number k, if E contains a strong fixed point function F(x, y, 0) of k + 2 arguments, then S has property DMk.

Proof [Very much like that of (a) of Theorem 2.1] - Take E-functions t1(x), t2(x) such that for every x, y, and k-tuple 0: (1) F(ti(x), y, 0) -* x, 0, F(ti(x), y, 0), F(y, ti(x), 0) (2) F(t2(y), x, 0) -* y, 0, F(x, t2(y), 9), F(t2(y), x, 0)

Then take q1(x, y, B) = F(tl (x), t2 (y), B) and 02 (x, y, B) = F(t2 (y), ti (x), 0)

By Theorem 4.1 and by Lemma F of the last chapter we have:

Theorem 4.2 Suppose S is strongly connected and of type 1. Then for any natural number k: (a) If E contains a diagonalizer D(x, y, 0) of k + 2 arguments, then S has property DMk.

(b) If E contains an exact function E(x, y, z, 0) of k + 3 arguments, then S has property DMk. Remark

We will subsequently prove stronger versions of (a) and (b) above - that is, the hypotheses lead to stronger conclusions than those stated. The special case (k = 0) of Theorem 4.2 is:

Theorem 4.3 Suppose S is strongly connected and of type 1. Then (a) If E contains a diagonal function D(x, y) of two arguments, then S has the double Myhill property. (b) If E contains an exact function E(x, y, z) of three arguments, then S has the double Myhill property.


It of course follows that any unified sequential system has the double Myhill property. This has the following applications.

(A) For a unified indexed relational system R, for any n ≥ 1 there are Σ-functions φ₁(x, y), φ₂(x, y) such that for all x, y, w₁, ..., wₙ:

(1) R_φ₁(x,y)(w₁, ..., wₙ) ↔ R_x(φ₁(x, y), φ₂(x, y), w₁, ..., wₙ)
(2) R_φ₂(x,y)(w₁, ..., wₙ) ↔ R_y(φ₁(x, y), φ₂(x, y), w₁, ..., wₙ)

(B) For a unified sentential system (T), for any k ≥ 0 there are Σ-functions φ₁(x, y), φ₂(x, y) such that for every x, y, and k-tuple θ:

(1) H_φ₁(x,y)(θ) ≡ H_x(φ₁(x, y), φ₂(x, y), θ)
(2) H_φ₂(x,y)(θ) ≡ H_y(φ₁(x, y), φ₂(x, y), θ)

(C) For a complete applicative system A, there are elements γ₁, γ₂ such that for all elements x and y:

(1) γ₁xy = x(γ₁xy)(γ₂xy)
(2) γ₂xy = y(γ₁xy)(γ₂xy)

[This is a uniform version of the double fixed point theorem of combinatory logic.]

The property DM'

Later we shall need a variant of the property DM. We shall say that S has property DM' if for any E-functions g1(x), g2(x) there are E-functions 01(x), 02(x) such that for all x: r-1

(1) 01(x) - 91(x), 01(x), 02(x) (2) 02(x) -* 92(x), 01(x), 02(x)

Theorem 4.4 Suppose S is of type 1 and has the double Myhill property. Then G".

(a) For any E-functions 91(X),92(X) there are E-functions V)1(x, y), b2(x, y) such that for all x and y: (1) 01(x, y) - 91(x), 4'1(x, y), 02(x, y)

(2) 02(x, y) - 92(y), 01(x, y), 02(x, y)

(b) S has property DM'.


Proof (a) Given E-functions 01(x, y), 02(x, y) that witness the double Myhill property, take 01(x,y) = 01(91(x),92(y)), and 02(x,y) = 02(91(x), 92(y))

(b) This follows from (a), by taking 01(x) = 01(x,x) and 072(x) _ 02(x, x).

Remark

We will see in the next chapter that if E is of type 1 and contains a pairing function J(x, y) with inverses K(x), L(x), then the properties DM and DM' are equivalent. Later we will need: C-'

Theorem 4.5 If S has property DM' and is strongly connected and of F'S

type 1, then for any E-functions g1(x), 92(x), hl(x, y), h2(x, y) there are E-functions 01(x) and 02(x) such that for all x: (1) 01(x) -. 91(x), h1(01(x), 02(x)) -0-

(2) 02(x) -+ 92(x), h2(01(x), 02(x))

Proof (sketch)

Assume the hypothesis. Since S is strongly connected then there are Efunctions t1(x), t2(x) such that for all x, y, z: (1) tl (x), y, z - 91(x), hl (y, z) (2) t2(x), y, z -. 92(x), h2(Y, z)

Then there are E-functions 01(x), 02(x) such that for all x: (1)' 0/,1(x) -4 t1(x), 01(x), 02(x) - 91(x), h1(01(x), 02(x)) (2)' 02(x) -* t2(x), 4'1(x), 02(x) - 92(x), h2(01(x), 02(x))


§5 The properties DMk*

For any natural number k, we will say that S has property DMk* if there are Σ-functions φ₁(x, y, θ), φ₂(x, y, θ) of k + 2 arguments such that for all x, y, and every k-tuple θ:

(1) φ₁(x, y, θ) → x, θ, φ₁(x, y, θ), φ₂(x, y, θ)
(2) φ₂(x, y, θ) → y, θ, φ₂(x, y, θ), φ₁(x, y, θ)

[The significance of these properties will be seen in Part IV.] An obvious modification of the proof of Theorem 4.1 yields:

Theorem 4.1* If Σ is of type 1 and contains a strong fixed point function F(x, y, θ) of k + 2 arguments then S has the property DMk*. We remark that we will see in Part IV that the above hypothesis yields an even stronger conclusion.

It of course follows that if S is strongly connected and of type 1 and Σ contains a diagonal function D(x, y, θ) of k + 2 arguments, then S has the property DMk*. But this also follows from (a) of Theorem 4.2 and the following result:

Theorem 5.1 If S is strongly connected and of type 1, then the properties DMk and DMk* for S are equivalent.

Proof Suppose S is strongly connected and of type 1. We will show that DMk implies DMk (the proof that DMk implies DMk is quite similar). And so suppose that S has property DMk. Since S is strongly connected, there is a E-function g(y) such that for any y, z, w, and any k-tuple B: g(y), B, z, w -+ y, B, w, z. Now take E-functions 01(x, y, B), 02(x, y, B) such

that (1) 01(x, y, e)

x, B, 01(x, y, 0),'2(x, y, e)

(2) 02 (X, y, B) -' y, e, 02(x, y, B), 061(x, y, B)

Then (1)' 01(x, 9(y), B) -* x, e,01(x, 9(y), e),'b2(x, 9(y), e) (2)'

02(x, 9(y), B) -* 9(y), B,02(x, 9(y), e),'b1(x, 9(y), e) - y, e, 01(x, 9(y), e), 02(x, 9(y), B)

And so we take 1(x, y, B) = 01(x, g(y), B) and 02 (x, y, B) = 02 (x, g(y), B) and so we have property DMk.


Exercise 12 Without assuming strong connectivity, or that E is of type 1, prove:

(a) If S has property DM then S has the weak double fixed point property.

(b) If S is of type 2, then: (1) property DM1 implies DR0; (2) property DM2 implies DR2.

§6 Very Strongly Connected Systems .-y

We know that if E contains an exact function of n + 2 arguments, then E contains a diagonalizer of n + 1 arguments. We now show: .'3


Proposition 6.0 If S is very strongly connected of type 1 and E contains an exact function E(x, y, B) of n + 2 arguments, then E contains a diagonalizer D(x, y, B) of n + 2 arguments.

Proof Assume the hypothesis. Then there is a E-function t(x) such that for every x, y, and n-tuple B: t(x), y, 8 -* x, x, y, B, hence E(t(x), y, B) -> x, x, y, B, and so E(t(x), y, B) is a diagonalizer.


By (b) of Theorem 4.2, if S is strongly connected of type 1 and if E contains an exact function E(x, y, z, B) of k + 3 arguments, then S has the property DMk. But now, by the above proposition and (a) of Theorem 4.2 we have: CAD


Theorem 6.1 If S is very strongly connected of type 1 and if E contains an exact function E(x, y, B) of k + 2 arguments, then S has the property DMk.

§7 Integrated systems of type 1 The following theorem contains "double analogues" of Theorem 5.1, Chapter 13.

Theorem 7.1 Suppose S is integrated and of type 1. Then (a) If S has property DM', it has property DRo.

(b) If S has property DM, it has property DR1.


Proof Suppose S is integrated and of type 1. (a) Suppose S has property DM'. Then given elements a1, a2 there are E-functions tl (x), t2 (x) such that for all x and y: tl (x), y -. al, x, y and t2(x), y -> a2i x, y. Since S has property DM' then there are E-functions 01(x), 02(x) such that: (1) 01(x) - t1 (x), 01(x), 02(x) - > a1, x, 01(x), 02(x)

(2) 02(x) -* t2 (X), 01(x), 02(x) - a2, x, 01(x), 02(x)

(b) Suppose S has property DM. Since S is integrated, there are Efunctions tl (x), t2 (x) such that for all x, y, and z: tl (x), y, z -* a1i x, y, z and t2(x), y, z - > a2i x, y, z. By Theorem 4.4 there are Efunctions 01(x, y), 02(x, y) such that for all x and y: (1) 01(x, y) -* t1(x), 01(x, y), 02(x, y) . a1, x, 01(x, y), 02(x, y) (2) 02(X, Y) - > t2(y), 01(x, y), 02(x, y) -* a2, y, 01(x, y), 02(x, y)

Thus S has the property DR1. Remark

The hypothesis that S is of type 1 was necessary for (b), but not for (a).

Part IV Symmetric functions and nice functions am.

We now consider two other approaches to double fixed point properties which in fact give strengthenings of earlier results.

§8 Symmetric functions ..r

By a symmetric function of k + 2 arguments we mean a function S(x, y, 9) satisfying the following condition: S(x, y, 8) -. x, 8, S(x, y, 8), S(y, x, 8) r0,

Of course this condition also implies: S(y, x, 8) -. y, 8, S(y, x, 8), S(x, y, 9) '-'

And so if we take 01(x, y, 9) = S(x, y, 8) and 02(x, y, 9) = S(y, x, 9) we have:


(1) 01(x,y,0) - x, 0,01(x,y,0),02(x,y,0) (2) ct2 (x, y, 8) - y, 0, g52 (x, y, e), 01(x, y, B) (CD

Therefore, if S(x, y, 0) is a E-function, then 01(x, y, 0), 02 (x, y, 0) are both E-functions, hence S then has the property DMk, and then by Theorem 5.1 we have:

Proposition 8.0 If S is strongly connected of type 1 and E contains a 2S'

symmetric function S(x, y, 8) of k+2 arguments then S has property DMk. Now comes the crucial fact: ',3

.'3

F-+

Theorem 8.1 If E is of type 1 and contains a fixed point function car

F(x, y, 0) of k+2 arguments then E contains a symmetric function S(x, y, 0) of k + 2 arguments.

Proof Assume hypothesis. Then there is a E-function t(x) such that for every x, y, and k-tuple 0: F(t(x), y, 0) -> x, 0, F(t(x), y, 0), F(y, t(x), 8) '.O

Hence F(t(x), t(y), 0) -> x, 8, F(t(x), t(y), 0), F(t(y), t(x), 8). And so we take S(x, y, 0) = F(t(x), t(y), 0).


The above theorem, together with Lemma F of the last chapter, gives the following strengthening of (a) of Theorem 4.2.

Theorem 8.2 Suppose S is strongly connected of type 1 and E contains a diagonal function D(x, y, 0) of k + 2 arguments. Then (1) E contains a symmetric function S(x, y, 0) of k + 2 arguments.

(2) S has the property DMk (and also DMk).


From (1) of the above theorem it of course follows that for a unified system S, for every k ≥ 0, Σ contains a symmetric function S(x, y, θ) of k + 2 arguments. This has the following applications:

(A) For any unified indexed relational system R, for every k ≥ 0, n ≥ 1, there is a Σ-function S(x, y, θ) of k + 2 arguments such that for every x, y, z₁, ..., zₙ and k-tuple θ: R_S(x,y,θ)(z₁, ..., zₙ) ↔ R_x(θ, S(x, y, θ), S(y, x, θ), z₁, ..., zₙ).¹

¹ For recursion theory, this was stated and proved in R.M.


(B) For a unified sentential system (T), for any k > 0, n > 0, there is a E-function S(x, y, 0) of k + 2 arguments such that for every x, y, k-tuple 0, and n-tuple 01: Hs(x,y,e) (01) = HH(9, S(x, y, 9), S(y, x, 0), 01)

In particular (for k = 0, n = 0), there is a E-function s(x, y) such that for all x and y: III

S8(X,y) = H.(s(x, y), s(y, x)) f')

(C) For any complete applicative system A, there is an element II such that for all x and y: IIxy = x(IIxy)(IIyx) car

CD'

Also, for any n > 1 there is an element II such that for all x,y,z1,...,zn: Ilxyz1 ... zn = x(Ilxyzl ... zn)(Ilyxz1 ... zn)

§9 Nice functions

We now come to a particularly attractive approach to double fixed point properties. By a nice function of k + 3 arguments (k ≥ 0) we mean a function H(x, y, z, θ) such that for all x, y, z, and every k-tuple θ:

(*) H(z, x, y, θ) → z, θ, H(x, x, y, θ), H(y, x, y, θ)

If we substitute x for z, we get

(1) H(x, x, y, θ) → x, θ, H(x, x, y, θ), H(y, x, y, θ)

If in (*) we instead substitute y for z, we get

(2) H(y, x, y, θ) → y, θ, H(x, x, y, θ), H(y, x, y, θ)

Thus, if we let φ₁(x, y, θ) = H(x, x, y, θ) and φ₂(x, y, θ) = H(y, x, y, θ), we get:

(1)' φ₁(x, y, θ) → x, θ, φ₁(x, y, θ), φ₂(x, y, θ)
(2)' φ₂(x, y, θ) → y, θ, φ₁(x, y, θ), φ₂(x, y, θ)

Thus, if H is a Σ-function, so are φ₁ and φ₂, and we then have property DMk! And so we have:


Theorem 9.1 If E contains a nice function H(x, y, z, 8) of k + 3 arguments, then S has property DMk.

Corollary 9.2 If E contains a nice function H(x, y, z) of three arguments, then S has the double Myhill property. We showed (Theorem 4.2, (a)) that if S is strongly connected of type 1 and E contains an exact function E(x, y, z, 8) of k + 3 arguments, then S has property DMk. But now we have the following better result.

Theorem 9.3 If S is strongly connected of type 1 and E contains an exact function E(z, x, y, 8) of k + 3 arguments, then E contains a nice function H(z, x, y, 0) of k + 3 arguments. .4"

Proof Assume hypothesis. Then there is a E-function t(x), such that for all z, x, y, and every n-tuple 8:

t(z),x,y,8 -+ z,8,E(x,x,y,0),E(y,x,y,8) Then E(t(z), x, y, 0) -> z, 0, E(x, x, y, 0), E(y, x, y, 0). Hence

E(t(z), t(x), t(y), 8) -* z, 8, E(t(x), t(x), t(y), 8), E(t(y), t(x), t(y), 8) (CD

And so we take H(z, x, y, 8) to be E(t(z), t(x), t(y), 0), and H is a nice function To summarize, suppose S is strongly connected and of type 1. Then for

anyk>0: (a) If E contains a diagonal function of k + 2 arguments, then E contains a symmetric function of k + 2 arguments and S has property DMk (and also DMk).

(b) If E contains an exact function of k + 3 arguments, then the conclusions of (a) follow, and also E contains a nice function of k + 3 arguments.

From (b) above, it of course follows that if S is unified, then for every k > 0, E contains a nice function H(x, y, z, 8) of k + 3 arguments. This has the following applications:

(A) For a unified indexed relational system R, for any k > 0, n > 1 there is a E-function H(z, x, y, 0) of k + 3 arguments such that for all


X, y, z, w1, ... , w,,, and every k-tuple 0: RH(z,x,y,e) (w1, ... , wn) Rz(8,H(x,x,y,0),H(y,x,y,0),wl,...,wn)2

(B) For a unified sentential system, for any k > 0, r > 0 there is a E-


function O(z, x, y, 0) of k + 3 arguments such that for every z, x, y, k-tuple 0, and r-tuple 01: Hq,(z,x,y,e) (0l) = Hz Mx, x, y, 0), O(y, x, y, 0), 01) CAD

(C) For a complete applicative system A, there is an element N (a "nice" combinator) such that for all z, x, y:

Nzxy = z(Nxxy)(Nyxy)3

§10 Universal systems of type 2 For universal systems of type 2, there is the following neat way of obtaining nice functions and symmetric functions: F".


Theorem 10.1 If S is universal of type 2 and if E contains a quasidiagonalizer Q(w, z, x, y, 9) of k + 4 arguments, then E contains a nice function H(z, x, y, 0) of k + 3 arguments.

Proof By Corollary Q2 of the last chapter there is some element b such that

Q(b,z,x,y,0) --> z,Q(b,x,x,y,0),Q(b,y,x,y,0) And so we take H(z, x, y, 0) = Q(b, z, x, y, 0).

It is of course a corollary of Theorem 10.1 that if S is universal of type 2 and if E contains an exact function of k + 5 arguments, then E contains a nice function of k + 3 arguments - a result of apparently incomparable strength with Theorem 9.3. 2For recursion theory, this was stated in [30] and proved in R.M. 3This was stated and proved in [29] and [30].


Theorem 10.2 If S is universal of type 2 and if E contains a quasiQ21

diagonal function Q(w, x, y, 8) of k + 3 arguments, then E contains a symmetric function S(x, y, 8) of k + 2 arguments.

Proof Exercise.

§11 Multiple fixed point properties 'JO

Virtually all the preceding results of this chapter can be generalized from double fixed point properties to n-fold fixed point properties, n > 2. We shall give the appropriate definitions and theorems here; proofs of these theorems will be left as exercises.

For n ≥ 2, k ≥ 0, by a grand function of type n-k we shall mean a function G(z, x₁, ..., xₙ, θ) of n + k + 1 arguments such that for every z, x₁, ..., xₙ, and k-tuple θ:

G(z, x₁, ..., xₙ, θ) → z, θ, G(x₁, x₁, ..., xₙ, θ), G(x₂, x₁, ..., xₙ, θ), ..., G(xₙ, x₁, ..., xₙ, θ)

[Note that a nice function of k + 3 arguments is thus a grand function of type 2-k.]

By a multiple symmetric function of type n-k (n ≥ 2, k ≥ 0) we shall mean a function S(x₁, ..., xₙ, θ) of n + k arguments such that

S(x₁, ..., xₙ, θ) → x₁, θ, S(x₁, ..., xₙ, θ), S(x₂, ..., xₙ, x₁, θ), ..., S(xₙ, x₁, ..., xₙ₋₁, θ)

We shall say that S has the n-k Myhill property (n > 1, k > 0) if there are E-functions 01(x1, ... , xn, 8), ... , On(xl,.. , xn, 8) of n + k arguments such that for each i < n and every X1.... , xn and k-tuple 8:

Oi(x1,...,x-n,) - xi,8,Ol(xli...,xn,8),...,On(x1,...,xn,0) [Note: the 2-k Myhill property is DMk.]


We shall say that S has the n-m recursion property (n > 1, m > 1) if for any elements al,... , an there are E-functions 01(x1, . , xm), ... , On(xl, ... , xm) such that for each i < m and all x1, ... , xm,: Oi(x1, ... , xm,) -4 ai,x1,...,Xm701(x1,...,xm),... On,(x1,...,xm)

Note: The 1-1 recursion property is simply the recursion property. The 2-2 recursion property is DR2. Now, here are the facts:


Theorem 11.1 Suppose S is strongly connected and of type 1. Then: (a) If E contains an exact function E(z, x1 i ... , xn, 8) of n + k + 1 arguments then E contains a grand function G(z, x1i ... , xn, B) of type n-k. urn

(b) If E contains a grand function of type n-k then S has the n-k Myhill property.

Theorem 11.2 .ti

(a) If E is of type 1 and contains a fixed point function F(xl,... , xn, B) of n + k arguments (n > 2) then E contains a multiple symmetric function of type n-k. (b) If S is strongly connected of type 1 and E contains an exact function of n + k + 1 arguments then E contains a multiple symmetric function of type n-k.

(c) If S is strongly connected of type 1 and E contains a multiple symmetric function of type n-k then S has the n-k Myhill property.

Theorem 11.3 If S is of type 2, then for any positive n and m, if S has the n-(n + m) property, then S has the n-m recursion property.

Theorem 11.4 Suppose that S is universal of type 2. Then: F".

(a) If E contains a quasi-diagonal function Q(w, X1.... , xn, 8) of n+k+2

arguments then E contains a grand function G(z, xl, ... , xn, B) of type n-k. h-4

(b) If E contains a quasi-diagonal function Q(w, x1, ... , xn, B) of n+k+l arguments, then E contains a symmetric function S(x1i... , xn, B) of

type n-k.

Exercise 13 Prove Theorems 11.1 through 11.4 Exercise 14 State and prove appropriate generalizations of Theorems 3.4 and 4.4.

Chapter 15

Synchronization and pairing functions


In this chapter we consider a totally different approach to double and multiple fixed point properties. This approach is an outgrowth of the original form of the double recursion theorem[28] which involves a pairing function J(x, y) and its inverse functions K(x), L(x) for its very statement. We will deal with this in Part III of this chapter.

Part I Synchronized fixed point properties

§0 Synchronized sequences of functions

Let II be a finite sequence (II1, ..., IIn) of E-functions of one argument. We will say that II is synchronized if the following two conditions hold:

(1) For any n-tuple (a1, ..., an) of elements of N, there is an element a of N such that for each i ≤ n, IIi(a) = ai (thus (a1, ..., an) is the sequence (II1(a), ..., IIn(a))).

(2) For any n-tuple (f1, ..., fn) of E-functions of the same number of arguments - say, k arguments - there is a E-function f of k arguments such that for each i ≤ n, fi(x1, ..., xk) = IIi(f(x1, ..., xk)).

Synchronization arises with systems in which E contains a pairing function J(x, y) and inverse functions K(x), L(x), which we will study in Part III of this chapter. If II is a synchronized sequence II1(x), ..., IIn(x), we call II a synchronizer of order n (for S) or an n-synchronizer. We let ord(II) be the order of II. We shall say that S is n-synchronized if there is an n-synchronizer II


for S. As we will see later, if S is 2-synchronized, then for every positive n, S is n-synchronized.

Synchronized fixed point properties


We let S be a sequential system (N, E, -->) and II a synchronizer for S - let m be its order. We now define the sequential system SII to be the ordered triple (N, E, -->II), where -->II is defined as follows. For any sequences θ1, θ2, we say that a, θ1 -->II b, θ2 if for every i ≤ m, IIi(a), θ1 --> IIi(b), θ2. [We will shortly see that if II is synchronized, then -->II is transitive and SII is weakly connected (assuming of course that --> is transitive and S is weakly connected), and so SII then satisfies our minimal assumptions concerning sequential systems.]

We will now say that S has property SW - the synchronized weak fixed point property - if for every synchronizer II, the system SII has the weak fixed point property - which means that for every element a there is an element b such that for all i ≤ ord(II): IIi(b) --> IIi(a), b

We will say that S has property SR - the synchronized recursion property - if for every synchronizer II, the system SII has the recursion property, which means that for every element a there is a E-function φ(x) such that for every i ≤ ord(II) and element x: IIi(φ(x)) --> IIi(a), x, φ(x)

We say that S has the property SM - the synchronized Myhill property - if for every synchronizer II the system SII has the Myhill property, which means that there is a E-function φ(x) such that for every i ≤ ord(II) and every element x:

IIi(φ(x)) --> IIi(x), φ(x)

More generally, we might say that for any property P of sequential systems, a system S has property SP if for every synchronizer II, the system SII has property P. Let us note the obvious fact that if S has property SP, then it certainly also has property P (because if II is the trivial synchronizer of order 1, SII is the system S). What we shall now do is to show that in several of our earlier theorems in which a certain hypothesis implies a certain fixed point property P, the same hypothesis implies the stronger property SP.

One reason for our interest in synchronized fixed point properties is their relation to double fixed point properties. Later we will see that if S is of type 1 and II is a synchronizer for S of order 2, then SII has the recursion


property if and only if S has the double recursion properties DR2, DR1, DRo, and SII has the Myhill property if S has the double Myhill properties DM and DM' (and also SII has the weak fixed point property if and only if S has the weak double fixed point property).

§1 Synchronized fixed point theorems


At this point it will be helpful to summarize some of the results proved (or all but proved) in Chapters 12 and 13.

Summary A

(1) If E contains a quasi-exact function F(x, y) of two arguments then S has the weak fixed point property (Corollary 1.3, Chapter 12).

(2) If S is integrated of type 1 and E contains an exact function E(x, y) of two arguments then S has the recursion property (Theorem 1.4, Chapter 13).

(3) If S is of type 2 and E contains a quasi-exact function F(x, y, z) of three arguments, then S has the recursion property (Theorem 2.2, Chapter 13).

(4) If S is strongly connected of type 1 and E contains an exact function E(x, y) of two arguments then S has the Myhill property. [This is immediate from Theorem 4.2, Chapter 13.]

(5) If S is very strongly connected of type 1 then S has the Myhill property (Theorem 6.1, Chapter 13).

We now wish to prove the following "self-strengthening" of Summary A.

Theorem A* For S of type 1, all five parts of Summary A hold good, replacing "weak fixed point property" by "synchronized weak fixed point property", "recursion property" by "synchronized recursion property", and "Myhill property" by "synchronized Myhill property".

Although the five parts of Summary A are but special cases of the corresponding parts of Theorem A*, we will derive Theorem A* from Summary A by showing that for S of type 1, in each of the five parts (1)-(5) of Summary A, if the hypothesis holds for S, then it also holds for SII, where II is any synchronizer, and hence the conclusion also holds for SII. However, we must first show two lemmas.


Lemma 1.0 If II is a synchronizer for a sequential system S, then SII is a sequential system, i.e., -->II is transitive and SII is weakly connected.


Proof Transitivity is obvious. Next, since S is weakly connected, then for any element a and E-functions f1(x1, ..., xn), ..., fk(x1, ..., xn), for each i ≤ ord(II) there is an element bi such that for all x1, ..., xn: bi, x1, ..., xn --> IIi(a), f1(x1, ..., xn), ..., fk(x1, ..., xn). Then, since II is synchronized, there is an element b such that for each i ≤ ord(II), IIi(b) = bi, and so IIi(b), x1, ..., xn --> IIi(a), f1(x1, ..., xn), ..., fk(x1, ..., xn), which means that b, x1, ..., xn -->II a, f1(x1, ..., xn), ..., fk(x1, ..., xn). Thus SII is weakly connected.

Now we have

Lemma 1.1 [main lemma] Suppose that S is of type 1 and that II is a synchronizer for S. Then

(a1) If E contains an exact function of n + 2 arguments for S (n ≥ 0) then E contains an exact function of n + 2 arguments for SII.

(a2) Same with "quasi-exact" in place of "exact".

(b) If S is integrated, so is SII.

(c) If S is strongly connected, so is SII.

(d) If S is very strongly connected, so is SII.

Proof We let m be the order of II.


(a1) and (a2) Given a E-function E(x, y, θ) of n + 2 arguments, for each i ≤ m the function E(IIi(x), y, θ) is a E-function (since S is of type 1), and so by synchronization there is a E-function F(x, y, θ) such that for each i ≤ m: IIi(F(x, y, θ)) = E(IIi(x), y, θ).

(i) Suppose E(x, y, θ) is exact for S. Then E(IIi(x), y, θ) --> IIi(x), y, θ, hence for each i ≤ m, IIi(F(x, y, θ)) --> IIi(x), y, θ, and so F(x, y, θ) -->II x, y, θ. Thus F(x, y, θ) is an exact function for SII.

(ii) Suppose E(x, y, θ) is only quasi-exact. Let a be any element. Then for each i ≤ m, there is some element bi such that E(bi, y, θ) --> IIi(a), y, θ; hence by synchronization there is some b such that IIi(b) = bi for all i ≤ ord(II), and so E(IIi(b), y, θ) --> IIi(a), y, θ.


Hence for all i ≤ m, IIi(F(b, y, θ)) = E(IIi(b), y, θ) --> IIi(a), y, θ, and so F(b, y, θ) -->II a, y, θ. Thus F(x, y, θ) is a quasi-exact function for SII.

(b) Suppose S is integrated. Let a be any element and n be any natural number. Then for every i ≤ m, there is a E-function ti(x) such that for all x and every n-tuple θ: ti(x), θ --> IIi(a), x, θ. Then by synchronicity there is a single E-function t(x) such that for all i ≤ m, IIi(t(x)) = ti(x), and hence IIi(t(x)), θ --> IIi(a), x, θ, and therefore t(x), θ -->II a, x, θ. Thus SII is integrated. [Note: the hypothesis that S is of type 1 is not necessary for this part.]

(c) Suppose that S is strongly connected. Given E-functions f1(y1, ..., yn), ..., fk(y1, ..., yn), there is a E-function h(x) such that for all

x, y1, ..., yn: h(x), y1, ..., yn --> x, f1(y1, ..., yn), ..., fk(y1, ..., yn), and so for each i ≤ m, h(IIi(x)), y1, ..., yn --> IIi(x), f1(y1, ..., yn), ..., fk(y1, ..., yn). Then by synchronization there is a E-function t(x) such that for each i ≤ m, IIi(t(x)) = h(IIi(x)), hence IIi(t(x)), y1, ..., yn --> IIi(x), f1(y1, ..., yn), ..., fk(y1, ..., yn). Thus t(x), y1, ..., yn -->II x, f1(y1, ..., yn), ..., fk(y1, ..., yn), and so SII is strongly connected.

(d) Suppose S is very strongly connected. Then given E-functions f1(x, y1, ..., yn), ..., fk(x, y1, ..., yn) and a E-function g(x), for each i ≤ ord(II) there is a E-function ti(x) such that for all x, y1, ..., yn: ti(x), y1, ..., yn --> IIi(g(x)), f1(x, y1, ..., yn), ..., fk(x, y1, ..., yn), hence there is a E-function t(x) such that for all i ≤ m, IIi(t(x)), y1, ..., yn --> IIi(g(x)), f1(x, y1, ..., yn), ..., fk(x, y1, ..., yn), and so t(x), y1, ..., yn -->II g(x), f1(x, y1, ..., yn), ..., fk(x, y1, ..., yn). Thus SII is very strongly connected.


This concludes the proof of Lemma 1.1, and so the proof of Theorem A* is now complete.

§2 Relation to multiple fixed point properties

Next we have:

Theorem B Suppose that II is a 2-synchronizer for S. Then

(1) If SII has the weak fixed point property then S has the double weak fixed point property.

(2) If SII has the recursion property and S is of type 1, then S has the property DRo.


(3) If SII has the Myhill property and S is strongly connected of type 1 then S has property DM'.

While we are at it, we can just as easily prove the following generalization:

Theorem B* Suppose that II is an n-synchronizer for S (n > 1). Then

(1) If SII has the weak fixed point property then for any elements a1, ..., an there are elements b1, ..., bn such that for each i ≤ n:

bi --> ai, b1, ..., bn

(2) If SII has the recursion property and S is of type 1, then for any elements a1, ..., an there are E-functions φ1(x), ..., φn(x) such that for each i ≤ n and all x:

φi(x) --> ai, x, φ1(x), ..., φn(x)

(3) If SII has the Myhill property and S is strongly connected of type 1 then for any E-functions g1(x), ..., gn(x) there are E-functions φ1(x), ..., φn(x) such that for all i ≤ n and all x:

φi(x) --> gi(x), φ1(x), ..., φn(x)

Proof Suppose that II is an n-synchronizer for S.

(1) Suppose that SII has the weak fixed point property. Let a1, ..., an be any elements of N. For each i ≤ n let ai' be such that for all x: ai', x --> ai, II1(x), ..., IIn(x). Then let a be such that for all i ≤ n: IIi(a) = ai' (such an a exists since II is n-synchronized). By hypothesis there is some b such that b -->II a, b, and so for each i ≤ n: IIi(b) --> IIi(a), b. But IIi(a) = ai', hence IIi(b) --> ai', b --> ai, II1(b), ..., IIn(b). We thus take bi = IIi(b) and we have bi --> ai, b1, ..., bn.

(2) Suppose SII has the recursion property and S is of type 1. Given elements a1, ..., an we can now take a such that for each i ≤ n: IIi(a), x, y --> ai, x, II1(y), ..., IIn(y). [We first take ai' such that ai', x, y --> ai, x, II1(y), ..., IIn(y), then take a such that for each i ≤ n: IIi(a) = ai'.] Then there is a E-function φ(x) such that for all x, φ(x) -->II a, x, φ(x), which means that for each i ≤ n, IIi(φ(x)) --> IIi(a), x, φ(x), hence IIi(φ(x)) --> ai', x, φ(x) --> ai, x, II1(φ(x)), ..., IIn(φ(x)). We thus take φi(x) = IIi(φ(x)) and we have φi(x) --> ai, x, φ1(x), ..., φn(x).


(3) Suppose S is strongly connected of type 1 and that SII has the Myhill property. Given E-functions g1(x), ..., gn(x) let g(x) be a E-function such that for all i ≤ n: IIi(g(x)) = gi(x). Since S is strongly connected, so is SII (by (c) of Lemma 1.1), hence there is a E-function h(x) such that for all x, y: h(x), y -->II x, II1(y), ..., IIn(y). Then h(g(x)), y -->II g(x), II1(y), ..., IIn(y). We let t(x) = h(g(x)), and since S is of type 1, t(x) is a E-function. Thus we have a E-function t(x) such that t(x), y -->II g(x), II1(y), ..., IIn(y).


Since SII has the Myhill property, then by Theorem 4.4, Chapter 13 applied to SII, there is a E-function φ(x) such that for all x: φ(x) -->II t(x), φ(x). Thus φ(x) -->II g(x), II1(φ(x)), ..., IIn(φ(x)) (since t(x), φ(x) -->II g(x), II1(φ(x)), ..., IIn(φ(x))). Thus for each i ≤ n: IIi(φ(x)) --> IIi(g(x)), II1(φ(x)), ..., IIn(φ(x)). Thus IIi(φ(x)) --> gi(x), II1(φ(x)), ..., IIn(φ(x)). We thus take φi(x) = IIi(φ(x)) and we have φi(x) --> gi(x), φ1(x), ..., φn(x).


Exercise 1 Suppose that S is of type 1 and that there is a 2-synchronizer for S. Prove that for all n > 2 there is an n-synchronizer for S. [Use mathematical induction.]

Exercise 2 Suppose S is of type 1 and SII has the recursion property, where II is a 2-synchronizer for S. Then by Theorem B, S has the double recursion property DRo. Does S necessarily have the property DR2?

Exercise 3 Suppose S is strongly connected of type 1 and II is a 2-synchronizer for S and SII has the Myhill property. Does S necessarily have the double Myhill property DM?

Part II Pairing functions

We shall say that S is of type 1* (type 2*) if it is of type 1 (type 2) and E contains a 1-1 function J(x, y) and functions K(x), L(x) such that for all x and y:

(1) K(J(x, y)) = x
(2) L(J(x, y)) = y

We do not require that J be onto N, nor that J(K(x), L(x)) = x. [These requirements can be met in recursion theory but not in combinatory logic.]
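For readers who want a concrete instance, the following sketch (not from the text) shows the standard Cantor pairing function of recursion theory, which satisfies conditions (1) and (2) and is moreover onto N; the names J, K, L simply echo the notation above.

```python
# Illustrative sketch (not from the text): the Cantor pairing function on the
# natural numbers, a standard concrete choice of J, K, L in recursion theory.
from math import isqrt

def J(x: int, y: int) -> int:
    """Cantor pairing: a 1-1 (in fact onto) map N x N -> N."""
    return (x + y) * (x + y + 1) // 2 + y

def K(z: int) -> int:
    """First inverse: K(J(x, y)) = x."""
    w = (isqrt(8 * z + 1) - 1) // 2      # w = x + y
    y = z - w * (w + 1) // 2
    return w - y

def L(z: int) -> int:
    """Second inverse: L(J(x, y)) = y."""
    w = (isqrt(8 * z + 1) - 1) // 2
    return z - w * (w + 1) // 2

# Sanity check of the two defining conditions.
assert all(K(J(x, y)) == x and L(J(x, y)) == y
           for x in range(50) for y in range(50))
```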


n-tupling functions

We assume S to be of type 1*. As in recursion theory, for every n ≥ 2 we define the functions Jn(x1, ..., xn) by the following inductive scheme:

J2(x1, x2) = J(x1, x2)
J3(x1, x2, x3) = J(J2(x1, x2), x3)
...
Jn+1(x1, ..., xn+1) = J(Jn(x1, ..., xn), xn+1)

As in recursion theory, for each n ≥ 2 we define the inverse functions K1^n, ..., Kn^n by the following inductive scheme:

(1) K1^2 = K and K2^2 = L
(2) For i ≤ n, Ki^(n+1)(x) = Ki^n(K(x)). But K(n+1)^(n+1)(x) = L(x).

An obvious induction argument shows that for all n ≥ 2:

K1^n(Jn(x1, ..., xn)) = x1
...
Kn^n(Jn(x1, ..., xn)) = xn

i.e., for each i ≤ n: Ki^n(Jn(x1, ..., xn)) = xi. It is obvious that for every n ≥ 2 the sequence K1^n(x), ..., Kn^n(x) of E-functions is synchronized, for given any E-functions g1(θ), ..., gn(θ) we can take g(θ) = Jn(g1(θ), ..., gn(θ)), and so for each i ≤ n: Ki^n(g(θ)) = gi(θ).

Thus if S is of type 1*, then for every n > 1 the system S is n-synchronized. Thus all results of Part I of this chapter have applications to systems of type 1*, which we will consider in Part III of this chapter.
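Continuing the illustrative sketch above (and assuming the J, K, L defined there), the inductive scheme for Jn and Ki^n can be rendered as follows; this is only a concrete check, not part of the text's abstract development.

```python
# Illustrative sketch: the n-tupling functions Jn and their inverses Ki^n,
# built by the inductive scheme in the text (assumes J, K, L from the earlier
# Cantor-pairing sketch).
def J_n(*xs: int) -> int:
    """Jn(x1, ..., xn) = J(Jn-1(x1, ..., xn-1), xn), with J2 = J."""
    z = xs[0]
    for x in xs[1:]:
        z = J(z, x)
    return z

def K_n(i: int, n: int, z: int) -> int:
    """Ki^n(z): the i-th inverse (1-based) of the n-tupling function."""
    if i == n:
        return z if n == 1 else L(z)
    # Ki^n(z) = Ki^(n-1)(K(z)) for i <= n-1
    return K_n(i, n - 1, K(z))

xs = (3, 1, 4, 1, 5)
assert all(K_n(i, len(xs), J_n(*xs)) == xs[i - 1] for i in range(1, len(xs) + 1))
```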


We might note that any 2-synchronized system of type 1 is of type 1*, for suppose S is of type 1 and II is a 2-synchronizer for S. Let f1(x, y) = x and f2(x, y) = y. The functions f1 and f2 are both E-functions (since they are explicitly definable from the identity function) and so there is a E-function f(x, y) such that II1(f(x, y)) = f1(x, y) and II2(f(x, y)) = f2(x, y); hence II1(f(x, y)) = x and II2(f(x, y)) = y. We can thus take J(x, y) = f(x, y), K(x) = II1(x), L(x) = II2(x), and so S is of type 1*. It then further follows that if S is of type 1 and is 2-synchronized, then for every n ≥ 2, S is n-synchronized - a fact mentioned earlier in this chapter.

§3 Exact and quasi-exact functions

Theorem 3.1 Suppose that S is of type 1*. Then


(1) If E contains a quasi-exact function of two arguments, then for every n ≥ 2, E contains a quasi-exact function of n arguments.

(2) If E contains an exact function of two arguments and S is strongly connected, then for every n ≥ 2, E contains an exact function of n arguments.

Proof Suppose S is of type 1*.

(1) Suppose E(x, y) is quasi-exact. Let n ≥ 2 and let F(x, x1, ..., xn) = E(x, Jn(x1, ..., xn)). We assert that F is also quasi-exact.

To prove this, given any element a there is an element a' such that for any element y: a', y --> a, K1^n(y), ..., Kn^n(y). Then there is some b such that for all y: E(b, y) --> a', y, hence E(b, Jn(x1, ..., xn)) --> a', Jn(x1, ..., xn) --> a, x1, ..., xn. Thus E(x, Jn(x1, ..., xn)) is quasi-exact.

(2) Suppose E(x, y) is exact and S is strongly connected. Then there is a E-function t(x) such that t(x), y --> x, K1^n(y), ..., Kn^n(y), hence E(t(x), y) --> x, K1^n(y), ..., Kn^n(y). Then for all x, x1, ..., xn: E(t(x), Jn(x1, ..., xn)) --> x, x1, ..., xn, and so the function E(t(x), Jn(x1, ..., xn)) is exact.

Corollary 3.2 Suppose S is of type 1*. Then (a) If E contains a quasi-exact function of two arguments, then for every positive n, E contains a quasi-diagonalizer of n arguments.

(b) If S is strongly connected and E contains an exact function of two arguments, then for every positive n, E contains a diagonalizer of n arguments.

We next wish to prove that if S is integrated and of type 1* and if E contains an exact function E(x, y) of two arguments, then for every positive

n, E contains a normalizer of n arguments. To this end we introduce a definition:

Let us call a function M(x, θ) of n + 1 arguments a master function if for every element a there is a E-function t(x) such that for every x and every n-tuple θ: M(t(x), θ) --> a, x, θ.


Lemma 3.3

(a) If S is integrated, then every exact function is a master function.

(b) If M(x, y, θ) is a master function then its diagonalization M(x, x, θ) is a normalizer.

Proof Obvious.

Lemma 3.4 Suppose that S is of type 1* and that E contains a master function of two arguments. Then for every n ≥ 2, E contains a master function of n arguments.


Proof This proof is similar to that of (2) of Theorem 3.1. Suppose that S is of type 1* and that E contains a master function M(x, y). We assert that for every n ≥ 2: M(x, Jn(x1, ..., xn)) is a master function. For given a, let a' be such that a', x, y --> a, x, K1^n(y), ..., Kn^n(y). Then let t(x) be a E-function such that M(t(x), y) --> a', x, y. Then M(t(x), y) --> a, x, K1^n(y), ..., Kn^n(y). Then M(t(x), Jn(x1, ..., xn)) --> a, x, x1, ..., xn. And so M(x, Jn(x1, ..., xn)) is a master function.

Note

If E(x, y) is an exact function and S is strongly connected of type 1*, then there is a E-function t(x) such that E(t(x), Jn(x1, ..., xn)) is a master function, whereas if M(x, y) is a master function, then it is M(x, Jn(x1, ..., xn)) itself that is a master function.

From Lemmas 3.3 and 3.4 we have:

Theorem 3.5 Suppose that S is of type 1* and integrated and that E contains an exact function of two arguments. Then for every positive n, E contains a master function of n + 1 arguments and a normalizer of n arguments.

We next have the following rather curious result:

Theorem 3.6 Suppose that S is of types 1* and 2* and that E contains a quasi-exact function of two arguments. Then for every positive n, E contains a normalizer of n arguments (in fact for any n > 0, E contains a master function of n + 1 arguments).


We first prove:

Lemma 3.7 Suppose S is of type 2*. Then if F(x, y, θ) is quasi-exact, then the function F(K(x), L(x), θ) is a master function.

Proof Suppose S is of type 2* and that F(x, y, θ) is a quasi-exact function for S. Let M(x, θ) = F(K(x), L(x), θ). Since F(x, y, θ) is quasi-exact, then given any element a there is an element b such that for all x, θ: F(b, x, θ) --> a, x, θ. Also M(J(b, x), θ) = F(b, x, θ) (because M(J(b, x), θ) = F(K(J(b, x)), L(J(b, x)), θ) = F(b, x, θ)).

Hence M(J(b, x), θ) --> a, x, θ. We let t(x) = J(b, x). Since S is of type 2 then t(x) is a E-function, and M(t(x), θ) --> a, x, θ. Thus M(x, θ) is a master function.

Proof of Theorem 3.6 Assume hypothesis. Since S is of type 1* and E contains a quasi-exact function of two arguments, then by (1) of Theorem 3.1, for any n, E contains a quasi-exact function F(x, y, θ) of n + 2 arguments. Let M(x, θ) = F(K(x), L(x), θ). Since S is of type 2*, then M(x, θ) is a master function (by Lemma 3.7), and it is a E-function, since S is of type 1.

§4 Some consequences

We proved in Chapter 14 that if S is integrated of type 1 and E contains a diagonalizer of three arguments, then S has the double recursion properties DR2, DR1, and DRo. Therefore if S is integrated of type 1 and if E contains an exact function of four arguments then S has these double recursion properties.


Now, suppose that S is integrated of type 1* and that E contains an exact function of two arguments. Then by Theorem 3.5 E contains a normalizer of three arguments, and hence by (b) of Theorem 2.1, Chapter 14, S has the double recursion properties. And so we have:


Theorem 4.1 If S is integrated of type 1* and if E contains an exact function E(x, y) of two arguments, then S has the double recursion properties DR2, DR1, DRo.

Remarks

As we will see, an entirely different proof of Theorem 4.1, not using any material of Chapter 14, can be obtained from some of the synchronized results of Part I; and in the next chapter, using yet another method, we


will prove the following strengthening of Theorem 4.1: if S is integrated of type 1* and if E contains a diagonalizer D(x) of one argument, then S has the properties DR2, DR1, and DRo.

We also proved in Chapter 14 (Corollary 3.2) that if S is of type 2 and E contains a quasi-exact function of five arguments, then S has the properties DR2, DR1, DRo. And now we have:

Theorem 4.2 If S is of types 1* and 2* and E contains a quasi-exact function of two arguments, then S has properties DR2, DR1, and DRo.

Proof 1 Assume hypothesis. Then by (a) of Corollary 3.2, E contains a quasi-diagonalizer of four arguments. The conclusion then follows by (b) of Theorem 3.1, Chapter 14.


Proof 2 Under the same hypothesis, by Theorem 3.6, E contains a normalizer of three arguments. The conclusion then follows by (b) of Theorem 2.1, Chapter 14.

Next we have:

Theorem 4.3 If S is of type 1*, then the properties DR2, DR1, DRo for S are all equivalent.


Proof Assume hypothesis. We already know from Chapter 14 (without the assumption that S is of type 1*) that DR2 implies DR1, which implies DRo.

But now that we are given that S is of type 1*, we can show that DRo implies DR2. So suppose S has property DRo. Given elements a1, a2 there are elements a1', a2' such that for all x, y, z: a1', x, y, z --> a1, K(x), L(x), y, z and a2', x, y, z --> a2, K(x), L(x), y, z. Then by DRo there are E-functions ψ1(x), ψ2(x) such that ψ1(x) --> a1', x, ψ1(x), ψ2(x) and ψ2(x) --> a2', x, ψ1(x), ψ2(x). Then take φ1(x, y) = ψ1(J(x, y)) and φ2(x, y) = ψ2(J(x, y)), and we have φ1(x, y) = ψ1(J(x, y)) --> a1', J(x, y), ψ1(J(x, y)), ψ2(J(x, y)) --> a1, x, y, φ1(x, y), φ2(x, y)

Similarly, φ2(x, y) --> a2, x, y, φ1(x, y), φ2(x, y), and so we have property DR2.

§5 Strongly connected systems of type 1*

Theorem 5.1 Suppose S is strongly connected of type 1* and that E contains an exact function of two arguments. Then S has the double Myhill


properties DM and DM'.

Proof Assume hypothesis. Then by (b) of Corollary 3.2, E contains a diagonalizer of two arguments. Then by (a) of Theorem 4.3, Chapter 14, S has property DM. Then by (b) of Theorem 4.4, Chapter 14, S also has property DM'.

Theorem 5.2 If S is of type 1* then the properties DM and DM' for S are equivalent.

Proof We already know that DM implies DM'. Now suppose S is of type 1* and has property DM'. Then, taking g1(x) = K(x) and g2(x) = L(x), there are E-functions ψ1(x), ψ2(x) such that for all x: ψ1(x) --> K(x), ψ1(x), ψ2(x) and ψ2(x) --> L(x), ψ1(x), ψ2(x). Then for all x and y: ψ1(J(x, y)) --> x, ψ1(J(x, y)), ψ2(J(x, y)) and ψ2(J(x, y)) --> y, ψ1(J(x, y)), ψ2(J(x, y)), and so we take φ1(x, y) = ψ1(J(x, y)) and φ2(x, y) = ψ2(J(x, y)) and we have property DM.

By (2) of Theorem 3.1 and by the results of Chapter 14 we have:

Theorem 5.3 Suppose S is strongly connected of type 1* and that E contains an exact function of two arguments. Then for every n, E contains

a nice function of n + 3 arguments and a symmetric function of n + 2 arguments (and hence S has properties DMn and DMn').

Exercise 4 Suppose S is strongly connected and of type 1*. (a) Show that if S has property DM3 then for every n > 3, S has property DMn.


(b) Show that if E contains a symmetric function of three arguments, then for every n > 3, E contains a symmetric function of n arguments.

(c) Show that if E contains a nice function of four arguments, then for every n > 4, E contains a nice function of n arguments.


Part III Synchronized recursion resumed

§6 The property DR*

In what follows, we shall let σ be the 2-synchronizer (K, L), i.e., the domain of σ is {1, 2} and σ1(x) = K(x), σ2(x) = L(x).

We shall say that S has property DR* if for any elements a1, a2 there is a E-function φ(x) such that for all x:

(1) K(φ(x)) --> a1, x, φ(x)
(2) L(φ(x)) --> a2, x, φ(x)

Equivalently, for any element a there is a E-function φ(x) such that for all x:

(1) K(φ(x)) --> K(a), x, φ(x)
(2) L(φ(x)) --> L(a), x, φ(x)

Or, more concisely, for any element a there is a E-function φ(x) such that for all x: φ(x) -->σ a, x, φ(x)

Thus S has property DR* if and only if Sσ has the recursion property.

The original form of the double recursion theorem[28] was that for R the indexed relational system of recursion theory, for any n > 0, the system

Sn(R) has property DR* - which means that for any r.e. relations M1(x, y, z1, ..., zn) and M2(x, y, z1, ..., zn) there is a recursive function φ(x) such that for all x, z1, ..., zn:

(1) RK(φ(x))(z1, ..., zn) <-> M1(x, φ(x), z1, ..., zn)
(2) RL(φ(x))(z1, ..., zn) <-> M2(x, φ(x), z1, ..., zn)

We have the following generalization for sequential systems:

Theorem 6.1 If S is integrated of type 1* and if E contains an exact function E(x, y), then S has property DR*.

Proof By Theorem A*.

Theorem 6.1 is also a consequence of the following apparently stronger result, which we will prove from scratch.


Theorem 6.1# Suppose S is integrated of type 1* and that E contains functions d1(x) and d2(x) such that for all x:

(1) d1(x) --> K(x), x
(2) d2(x) --> L(x), x

Then S has property DR*.

Proof Assume the given conditions. Let d(x) = J(d1x, d2x). Then d(x) is a E-function and Kdx = d1x and Ldx = d2x. Thus K(d(x)) --> K(x), x and L(d(x)) --> L(x), x.

Next, given elements a1 and a2, since S is integrated, there are E-functions t1(x), t2(x) such that for all x and y:

(1) t1(x), y --> a1, x, d(y)
(2) t2(x), y --> a2, x, d(y)

Next, we let t(x) = J(t1(x), t2(x)), and so Ktx = t1x and Ltx = t2x. Then for any x:

(1)' Kdtx --> Ktx, tx = t1x, tx --> a1, x, dtx
(2)' Ldtx --> Ltx, tx = t2x, tx --> a2, x, dtx

And so we take φ(x) = d(t(x)), and we have Kφx --> a1, x, φx and Lφx --> a2, x, φx.¹

Of course, the hypothesis of Theorem 6.1 implies the hypothesis of Theorem 6.1# (just take d1(x) = E(K(x), x) and d2(x) = E(L(x), x)), and so Theorem 6.1 is indeed a consequence of Theorem 6.1#.

Theorem 6.1 is also a consequence of Theorem 4.1 and the following basic result:

Theorem 6.2 If S is of type 1*, the properties DR2, DR1, DR0, and DR* for S are all equivalent.

Proof Suppose S is of type 1*. By Theorem 4.3 the properties DR2, DR1, DR0 for S are equivalent. We now show that DRo for S is equivalent to DR* for S. That DR* for S implies DRo for S follows from (2) of Theorem B (taking σ for II). For the converse, suppose S has property DRo. Then

¹ For recursion theory, this is essentially the original proof of the double recursion theorem.


by Theorem 3.3, Chapter 14, taking g1(x, y) = g2(x, y) = J(x, y), for any elements a1, a2 there are E-functions ψ1(x), ψ2(x) such that for all x: ψ1(x) --> a1, x, J(ψ1(x), ψ2(x)) and ψ2(x) --> a2, x, J(ψ1(x), ψ2(x)). We let φ(x) = J(ψ1(x), ψ2(x)), and so K(φ(x)) --> a1, x, φ(x) and L(φ(x)) --> a2, x, φ(x), and so S has property DR*.

Discussion

We now see that Theorem 6.1 can be alternatively derived as a consequence of Theorem 4.1, by virtue of Theorem 6.2. And so a synchronized approach is not necessary for a proof of Theorem 6.1. But it is just as true that Theorem 4.1 can be derived as a corollary of Theorem 6.1 - again by virtue of Theorem 6.2. Thus Theorem 4.1 can be proved without appeal to any results of Chapter 14, but instead, by a synchronous fixed point argument.

Next, suppose S is of types 1* and 2* and that E contains a quasi-exact

function of two arguments. Then by (1) of Theorem 3.1, E contains a quasi-exact function of three arguments, and since S is of type 2, it follows from (3) of Theorem A* that S has property DR*. And so we have:

Theorem 6.3 If S is of type 1* and 2* and if E contains a quasi-exact function of two arguments, then S has property DR*.


Alternatively, Theorem 6.3 can be proved as follows. By Theorem 4.2 the hypothesis implies that S has the double recursion properties DR2, DR1, and DRo, and so S then has property DR* by Theorem 6.2. Let us note that, alternatively, Theorems 6.3 and 6.2 yield another proof of Theorem 4.2.

Exercise 5 Let DW be the weak double fixed point property and DW* the property that Sσ has the weak fixed point property (S is assumed to be of type 1*). Prove that DW and DW* are equivalent.

§7 The property DM*

We shall say that S has property DM* if Sσ has the Myhill property, i.e., if

there is a E-function φ(x) such that for all x: K(φ(x)) --> K(x), φ(x) and L(φ(x)) --> L(x), φ(x). By Theorem A* we have:

Theorem 7.1 For S of type 1*, if either S is very strongly connected, or if S is strongly connected and E contains an exact function of two arguments,


then S has property DM*. We next wish to prove:

Theorem 7.2 If S is strongly connected of type 1*, then the properties DM*, DM, and DM' for S are all equivalent.

Proof Suppose S is of type 1* and strongly connected. Then by Theorem 5.2, the properties DM and DM' for S are equivalent. Also DM* implies DM' by (3) of Theorem B. It remains to show that DM' implies DM*.

And so suppose that S has property DM'. Then by Theorem 4.5, Chapter 14 (taking g1(x) = K(x), g2(x) = L(x), h1(x, y) = J(x, y), h2(x, y) = J(x, y)) there are E-functions ψ1(x), ψ2(x) such that for all x: ψ1(x) --> K(x), J(ψ1(x), ψ2(x)) and ψ2(x) --> L(x), J(ψ1(x), ψ2(x)). We take φ(x) = J(ψ1(x), ψ2(x)) and we then have K(φ(x)) --> K(x), φ(x) and L(φ(x)) --> L(x), φ(x), and so S has property DM*.

§8 Integrated systems of type 1*

Lemma 8.0 Suppose S has property DM* (S, of course, is of type 1*). Then for any E-functions g1(x), g2(x) there is a E-function φ(x) such that for all x:

(1) K(φ(x)) --> g1(x), φ(x)
(2) L(φ(x)) --> g2(x), φ(x)


Proof Assume hypothesis. Given E-functions g1(x), g2(x) we let g(x) = J(g1(x), g2(x)), and so K(g(x)) = g1(x) and L(g(x)) = g2(x). By property DM* there is a E-function ψ(x) such that K(ψ(x)) --> K(x), ψ(x) and L(ψ(x)) --> L(x), ψ(x). Then K(ψ(g(x))) --> K(g(x)), ψ(g(x)) = g1(x), ψ(g(x)) and L(ψ(g(x))) --> L(g(x)), ψ(g(x)) = g2(x), ψ(g(x)). And so we take φ(x) = ψ(g(x)).

Remark

Of course property DM* is a special case of the conclusion of the lemma (the case g1(x) = K(x) and g2(x) = L(x)). And so DM* is equivalent to this seemingly stronger property.

Theorem 8.1 If S is integrated of type 1* and if S has property DM* then S has the double recursion properties DR2, DR1, DR0, and DR*.


Proof Suppose S is integrated of type 1* and has property DM*. Since S is integrated, then for any elements a1 and a2 there are E-functions g1(x), g2(x) such that for all x and y: g1(x), y --> a1, x, y and g2(x), y --> a2, x, y. Since S has property DM*, then by Lemma 8.0 there is a E-function φ(x) such that K(φ(x)) --> g1(x), φ(x) and L(φ(x)) --> g2(x), φ(x), and so K(φ(x)) --> a1, x, φ(x) and L(φ(x)) --> a2, x, φ(x), and so S has property DR*. The rest follows by Theorem 6.2.

From Theorems 7.1 and 8.1 we have:

Theorem 8.2 If S is integrated and very strongly connected and of type 1*, then S has the double Myhill properties DM, DM', DM* and the double recursion properties DR2, DR1, DRo, and DR*.

By Proposition 7.1, Chapter 13, if S is universal and integrated it is also very strongly connected (in fact hyperconnected), hence by Theorem 8.2 we have:

Theorem 8.3 If S is universal, integrated, and of type 1*, then S has properties DM, DM', DM*, and DR2, DR1, DRo, and DR*.

Chapter 16

Some further relations between fixed point properties

Part I Single and double fixed point properties compared

We shall now establish some interconnections between single and double fixed point properties. First we will show that if S has the recursion

property, then S also has the weak double fixed point property. [This generalizes results that are known for recursion theory and for combinatory logic.] Suppose S has the recursion property; does it necessarily have the double recursion properties DR2, DR1, and DRo? This appears

doubtful, but we will show that if S is of type 1*, then the answer is yes. Our proof will be divided into two parts as follows. For m ≥ 1, we let Rm be the 1-m recursion property, i.e., the property that for every element a there is a E-function φ(x1, ..., xm) such that φ(x1, ..., xm) --> a, x1, ..., xm, φ(x1, ..., xm). [Thus R1 is the recursion property.] Well, we will show that for S of type 1 (not necessarily of type 1*), R2 implies the double recursion property DR1 (it can also be shown that R3 implies DR2), and then (the easy part) we will show that for S of type 1*, R1 implies R2 (in fact R1 then implies Rn for any n ≥ 1). We will also discuss some related results on Myhill and double Myhill properties.

§1 Recursion and double fixed points

We first need two lemmas.


Lemma 1.0 If S has the recursion property, it also has the weak fixed point property.


Proof Suppose S has the recursion property. Given a, take a' such that a', x, y --> a, y. Then there is a E-function φ(x) such that for all x: φ(x) --> a', x, φ(x), hence φ(x) --> a, φ(x). Pick any particular x (say a) and let b = φ(x). Then b --> a, b.

Lemma 1.1 If S has the weak fixed point property then for any element a and any E-function g(x), there is an element b such that b --> a, b, g(b).

Proof Let a' be such that for all x: a', x --> a, x, g(x). Then let b be such that b --> a', b. Then b --> a, b, g(b).

Remark

Actually this lemma is immediate from Theorem 1.5, Chapter 12, taking k = 2, f1 the identity function, and f2 = g.

Now we can prove:

Theorem 1.2 If S has the recursion property then S has the weak double fixed point property.

Proof Suppose S has the recursion property. Take any elements a1 and a2. Since S has the recursion property then there is a E-function φ(x) such that for all x:

(1) φ(x) --> a2, x, φ(x)

Next, since S has the recursion property, it has the weak fixed point property (by Lemma 1.0) and then by Lemma 1.1 there is an element c such that:

(2) c --> a1, c, φ(c)

Then by (1), taking c for x, we have:

(1)' φ(c) --> a2, c, φ(c)

We thus take b1 = c and b2 = φ(c), and by (2) and (1)' we have b1 --> a1, b1, b2 and b2 --> a2, b1, b2, and so S has the weak double fixed point property.


Corollary 1.3 If S is of type 1 and E contains a normalizer N(x) of one argument, then S has the weak double fixed point property.

Proof This follows from the above theorem and Theorem 1.2 of Chapter 13.

Remark

This solves Exercise 5, Chapter 14.

§2 Recursion and double recursion

We now wish to prove that for S of type 1, R2 implies DR1. The following lemma plays the analogous role for this proof that Lemma 1.1 played in the proof of Theorem 1.2.

Lemma 2.1 Suppose that S has property R2. Then for any element a and any E-function ψ(x, y) there is a E-function t(x, y) such that for all x and y:

t(x, y) --> a, x, t(x, y), ψ(y, t(x, y))

Proof Suppose that S has property R2. Then given an element a and a E-function ψ(x, y) there is an element a' such that for all x, y, and z: a', x, y, z --> a, x, z, ψ(y, z). By R2 there is a E-function t(x, y) such that for all x, y:

t(x, y) --> a', x, y, t(x, y) --> a, x, t(x, y), ψ(y, t(x, y))


Theorem 2.2 Suppose S is of type 1 and has property R2. Then S has property DR1.

Proof Assume hypothesis. Take any elements a1 and a2. By R2 there is a E-function ψ(y, z) such that for all y, z:

(1) ψ(y, z) --> a2, y, z, ψ(y, z)

By Lemma 2.1 there is a E-function φ1(x, y) such that for all x, y:

(2) φ1(x, y) --> a1, x, φ1(x, y), ψ(y, φ1(x, y))


In (1) we replace z by φ1(x, y) and we have

(1)' ψ(y, φ1(x, y)) --> a2, y, φ1(x, y), ψ(y, φ1(x, y))

We thus take φ2(x, y) = ψ(y, φ1(x, y)) and we have

φ1(x, y) --> a1, x, φ1(x, y), φ2(x, y)
φ2(x, y) --> a2, y, φ1(x, y), φ2(x, y)

Thus S has the double recursion property DR1.

Exercise 1 Prove that if S is of type 1 and has the property R3, then S has property DR2.

§3 Systems of type 1*

Now we need:

Lemma 3.1 If S is of type 1* and has the recursion property R1, then for every n ≥ 1, S has property Rn.


Proof Assume hypothesis. We are given R1. Now let n be any number ≥ 2 and let a be any element. Let a' be such that for all x and y: a', x, y --> a, K1^n(x), ..., Kn^n(x), y. Then take a E-function φ(x) such that for all x, φ(x) --> a', x, φ(x) and take ψ(x1, ..., xn) = φ(Jn(x1, ..., xn)), and then

ψ(x1, ..., xn) --> a', Jn(x1, ..., xn), ψ(x1, ..., xn) --> a, x1, ..., xn, ψ(x1, ..., xn)


Suppose now that S is of type 1* and has the recursion property. Then by the above lemma it also has property R2, hence by Theorem 2.2 it has property DR1. From this and Theorem 6.2, Chapter 15 we have:

Theorem 3.1 If S is of type 1* and has the recursion property, then S has the double recursion properties DR2, DR1, DRo, DR*.

By the above theorem and Theorem 1.2, Chapter 13 we have:

Theorem 3.2 If S is of type 1* and if E contains a normalizer N(x) of one argument then S has the double recursion properties DR2, DR1, DRo, DR*.



We proved (Theorem 4.1, Chapter 15) that if S is integrated of type 1* and E contains an exact function E(x, y) then S has the double recursion properties. But now from Theorem 3.2 above and Proposition 1.1 of Chapter 13, we have the following stronger result (which we announced in Chapter 15 right after the proof of Theorem 4.1):

Theorem 3.3 If S is integrated of type 1* and E contains a diagonal function D(x) of one argument, then S has the double recursion properties.

mom"

§4 Myhill and double Myhill properties

We shall let Mk^1 (k ≥ 0) be the 1-k Myhill property. Thus M0^1 is the Myhill property (there is a E-function φ(x) such that for all x: φ(x) --> x, φ(x)) and M1^1 is the property that there is a E-function φ(x, y) such that for all x and y: φ(x, y) --> x, y, φ(x, y). We will now show that if S is strongly connected of type 1, then if S has property M1^1, it also has the double Myhill property DM. We will also show that for a very strongly connected system of type 1*, the Myhill property implies property M1^1. But we showed in

Chapter 13 (Theorem 6.1) that a very strongly connected system of type 1 has the Myhill property. Putting all this together, it will follow that a very strongly connected system of type 1* has the double Myhill property! We first need the following analogue of Lemma 2.1:

Lemma 4.1 Suppose S is strongly connected of type 1 and has the property M1^1. Then for any E-function ψ(x, y) there is a E-function t(x, y) such that for all x and y: t(x, y) --> x, t(x, y), ψ(y, t(x, y))

Proof Assume hypothesis. By strong connectivity, given a E-function ψ(x, y), there is a E-function g(x) such that for all x, y, z: g(x), y, z --> x, z, ψ(y, z). Then by M1^1 there is a E-function φ(x, y) such that φ(x, y) --> x, y, φ(x, y), therefore φ(g(x), y) --> g(x), y, φ(g(x), y) --> x, φ(g(x), y), ψ(y, φ(g(x), y)). We thus take t(x, y) = φ(g(x), y) and we have t(x, y) --> x, t(x, y), ψ(y, t(x, y)).

Theorem 4.2 If S is strongly connected of type 1 and has property M1^1, then S also has the double Myhill property DM.

Proof Like that of Theorem 2.2, deleting "a1" and "a2" and using Lemma 4.1 in place of Lemma 2.1.


Exercise 2 Fill in the details of the above proof.

Theorem 4.3 If S is very strongly connected of type 1* then S has property M1^1.

Proof Assume the hypothesis. By very strong connectivity, there is a E-function t(x) such that for all x and y:

(1) t(x), y --> K(x), L(x), y

Also, by Theorem 6.1, Chapter 13, S has the Myhill property, hence

by Theorem 4.4, Chapter 13, there is a E-function ψ(x) such that for all x, ψ(x) --> t(x), ψ(x), but also t(x), ψ(x) --> K(x), L(x), ψ(x) and so ψ(x) --> K(x), L(x), ψ(x). Taking J(x, y) for x, we have ψ(J(x, y))

--> KJ(x, y), LJ(x, y), ψ(J(x, y)) = x, y, ψ(J(x, y))

We thus take φ(x, y) = ψ(J(x, y)), and we have φ(x, y) --> x, y, φ(x, y).

By Theorems 4.2, 4.3 above we have:

Theorem 4.5 If S is very strongly connected of type 1* then S has the double Myhill property.

Part II More on systems of type 1*

§5 Bar fixed points

In what follows, S will be assumed to be of type 1* and, furthermore, for all x and y, J(K(x), L(x)) = x. As in Chapter 10, for any x we define x̄ = J(L(x), K(x)). Thus the involute of J(x, y) is J(y, x); K(x̄) = L(x); L(x̄) = K(x); and the involute of x̄ is x. [We have called x̄ the involute of x.]

For any natural number n we will say that S has property Bn if there is a E-function φ(x, θ) of n + 1 arguments such that for every x and every n-tuple θ:

φ(x, θ) --> K(x), θ, φ(x, θ), φ(x̄, θ)
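As a concrete aside (not from the text), the involute operation can be illustrated with the Cantor pairing J, K, L of the earlier sketch; that J is onto N, so it also satisfies J(K(x), L(x)) = x, as this section requires.

```python
# Illustrative sketch: the involute simply swaps the two components of x
# (assumes J, K, L from the earlier Cantor-pairing sketch).
def involute(x: int) -> int:
    return J(L(x), K(x))

x = J(3, 7)
assert K(involute(x)) == L(x) and L(involute(x)) == K(x)
assert involute(involute(x)) == x
```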

Exercise 3 Suppose S is of type 1*. (a) Show that if S has property Bo then S has the weak double fixed point property.


(b) Show that if S has property B1 and S is also of type 2, then S has the double recursion properties.

(c) Show that if S has property B1 then for every n ≥ 1, S has property Bn.

(d) Show that if E contains an exact function of two arguments then for every n, S has property Bn.

The following theorem (together with earlier theorems) yields solutions of (a) and (b) of Exercise 3.

Theorem 5.1 Suppose S is of type 1*. Then for any natural number n the following two conditions are equivalent:

(1) S has property Bn.
(2) E contains a symmetric function S(x, y, θ) of n + 2 arguments.

Proof Suppose S is of type 1*.

(1) Suppose there is a E-function φ(x, θ) of n + 1 arguments such that φ(x, θ) --> K(x), θ, φ(x, θ), φ(x̄, θ). Let S(x, y, θ) = φ(J(x, y), θ). Then S(x, y, θ) = φ(J(x, y), θ) --> K(J(x, y)), θ, φ(J(x, y), θ), φ(J(y, x), θ) = x, θ, S(x, y, θ), S(y, x, θ) (since the involute of J(x, y) is J(y, x)). Thus S(x, y, θ) is a symmetric function.

(2) Conversely, suppose E contains a symmetric function S(x, y, θ) of n + 2 arguments. Let φ(x, θ) = S(K(x), L(x), θ). Then φ(x, θ) = S(K(x), L(x), θ) --> K(x), θ, S(K(x), L(x), θ), S(L(x), K(x), θ) = K(x), θ, φ(x, θ), φ(x̄, θ) (since K(x̄) = L(x) and L(x̄) = K(x)).

Thus S has property Bn.

Corollary 5.2 Suppose S is strongly connected of type 1*. (a) If S has property Bo then S has the double Myhill properties DM, DM', DM*.

(b) If S has property B2 then S has property DM1 (and hence also the double recursion properties, if S is also of type 2).

(c) If E contains an exact function of two arguments, then for every positive n, S has property Bn.


Exercise 4 Suppose S is of type 1*. Show that if E contains a symmetric function of three arguments then for every n > 3, E contains a symmetric function of n arguments. How does this solve (c) of Exercise 3?


Exercise 5 Suppose that S is universal of type 1* and that for every element a, E contains a function φ(x) such that for all x: φ(x) --> a, x, φ(x), φ(x̄). Show that E contains a symmetric function S(x, y) of two arguments.

§6 Further topics

1 The extended recursion theorem revisited

We will say that S has property EM - the extended Myhill property - if for any E-functions f1(x), ..., fr(x), g1(x), ..., gk(x) there is a E-function φ(x) such that for all x:

φ(x) --> f1(x), ..., fr(x), φ(g1(x)), ..., φ(gk(x))

We showed in Chapter 13 (Theorem 3.3) that if S has the extended recursion property then for any a and any E-functions f1(x), ..., fr(x), g1(x), ..., gk(x) there is a E-function φ(x) such that for all x: φ(x) --> a, f1(x), ..., fr(x), φ(g1(x)), ..., φ(gk(x)), from which it obviously follows that if S has the extended recursion property and if S is universal, then (by suitable choice of a) S has the extended Myhill property.

The extended Myhill property is of interest in that if S is of type 1* and has the extended Myhill property, then it not only has the double Myhill

properties - and in fact properties DMn for all n - but also E then contains nice functions and symmetric functions of any desired number of arguments (and all this without assuming strong connectivity). More generally (and we will soon see why this is more general) we have the following useful result.

Theorem EM Suppose S is of type 1* and has the extended Myhill property. Then for any n ≥ 2 and any E-functions f1(x1, ..., xn), ..., fr(x1, ..., xn), g1(x1, ..., xn), ..., gk(x1, ..., xn) there is a E-function φ(x) such that for all x1, ..., xn:

φ(Jn(x1, ..., xn)) --> f1(x1, ..., xn), ..., fr(x1, ..., xn), φ(g1(x1, ..., xn)), ..., φ(gk(x1, ..., xn))


Before proving Theorem EM, we note the following. Let S be of type

1*.

For any function f(x1, ..., xn), n ≥ 2, we let f̄(x) = f(K1^n(x), ..., Kn^n(x)). [The function f̄ is sometimes called the 'collapse' of f.] Then f̄(Jn(x1, ..., xn)) = f(x1, ..., xn). Of course for any E-function f of two or more arguments, its collapse f̄ is also a E-function.
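A small concrete illustration of the collapse (not from the text), built from the J_n and K_n of the earlier sketches:

```python
# Illustrative sketch: the collapse of an n-argument function is the
# one-argument function that unpacks its input as an n-tuple
# (assumes J_n and K_n from the earlier n-tupling sketch).
def collapse(f, n: int):
    return lambda x: f(*(K_n(i, n, x) for i in range(1, n + 1)))

def f(x, y, z):
    return x * 100 + y * 10 + z

f_bar = collapse(f, 3)
assert f_bar(J_n(1, 2, 3)) == f(1, 2, 3)
```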

Proof of Theorem EM

Assume hypothesis. Then given the functions f1, ..., fr, g1, ..., gk, by property EM (applied to their collapses) there is a E-function φ(x) such that for all x:

φ(x) --> f̄1(x), ..., f̄r(x), φ(ḡ1(x)), ..., φ(ḡk(x))

Replacing x by Jn(x1, ..., xn) we get our desired conclusion.


Now, let EM* be the property given by the conclusion of Theorem EM. Then the existence of nice functions and symmetric functions is not only a consequence of property EM*, but a special case of it, as we can now see:

(a) A special case of EM* is that for every n ≥ 0 there is a E-function φ(x) such that for all x and y and every n-tuple θ:

φ(Jn+2(x, y, θ)) --> x, θ, φ(Jn+2(x, y, θ)), φ(Jn+2(y, x, θ))

Then φ(Jn+2(x, y, θ)) is a symmetric function.

(b) Another special case of EM* is that for any n ≥ 0 there is a E-function φ(x) such that for every x, y, z, and every n-tuple θ:

φ(Jn+3(z, x, y, θ)) --> z, θ, φ(Jn+3(x, x, y, θ)), φ(Jn+3(y, x, y, θ))

Then φ(Jn+3(z, x, y, θ)) is a nice function.

Still more generally, property EM* implies that E contains grand functions and multiple symmetric functions of all types, and hence also that for any n > 1, k > 0, S has the n-k Myhill property.

Exercise 6 Prove the above assertion.


2 Ideal functions

Call a function ψ(x, θ) of k + 1 arguments ideal (k ≥ 0) if for any E-functions f1(x, θ), ..., fn(x, θ) there is an element b such that for every k-tuple θ:

ψ(b, θ) --> f1(b, θ), ..., fn(b, θ)

Call the system S ideal if for every k > 0, E contains an ideal function of k + 1 arguments.

Exercise 7 (a) Show that if S is universal then every quasi-diagonalizer is an ideal function.

(b) Show that if S is of type 2, then every ideal function is a quasi-diagonalizer.

(c) Suppose that S is ideal. Show that E contains grand functions and multiple symmetric functions of any number of arguments and that S has the extended Myhill property.

3 Another type of synchronization


There is another type of synchronization that is applicable to doubly indexed relational systems in the sense of Chapter 10. Define Sn(R) (n ≥ 1) to be the triple (N, E, -->n), where x, θ -->n y, Γ is defined to mean that for all z1, ..., zn, Ax(θ, z1, ..., zn) implies Ay(Γ, z1, ..., zn) and also that Bx(θ, z1, ..., zn) implies By(Γ, z1, ..., zn). It is easily verified that for any n > 0, Sn(R) is a unified sequential system (assuming that the double indexing satisfies all the given conditions of Chapter 10) and hence that Sn(R) has the recursion property. In particular, S1(R) then has the recursion property, which means that for any element a, there is a E-function φ(x) such that for all x and i: x ∈ ωφ(i) <-> Aa(x, i, φ(i)) and x ∈ ω*φ(i) <-> Ba(x, i, φ(i)). The special case of this for recursion theory is Lemma A of Chapter 11. We remark that this lemma is also an easy consequence of the fact that S1(R) has property DR*.

IV

Combinators and Sequential Systems

Chapter 17

Some special fixed point properties of combinatory logic Although many fixed point theorems of combinatory logic are special cases of fixed point theorems of sequential systems, others are not - in fact they are not even statable in terms of sequential systems. Applicative systems have very special features which we will look at in this and the next two chapters. In our final chapter, we will consider some other fixed point theorems for combinatory logic that do have generalizations to sequential systems.

In Part I of this chapter we give an introduction to combinatory logic by means of a series of problems concerning various relations between com-

binators - how some can be derived from others. Solutions are given at the end of the chapter. In Parts II and III we turn to various fixed point theorems that are special to applicative systems.

Part I Interdefinability of some 11.2

combinators Preliminaries

We recall that an applicative system A consists of a set N together with an operation called application that assigns to each ordered pair (x, y) of elements of N an element of N denoted xy. We also recall that parentheses are to be associated to the left, e.g., xyz is (xy)z; xyzw is ((xy)z)w, and so forth.

Given a set a1,... , a,,, of elements of N, an element b is said to be derivable from the set - or from the elements of the set - if it belongs

Chapter 17. Fixed point properties of combinatory logic

316

to every class C of elements such that C contains each of the elements It-

a1, ... , an, and such that C is closed under application, i.e., for any elements x, y in C, the element xy is in C. Thus the following facts hold:

(1) Each of the elements a1,... , a,,, is derivable from the set {al,... , a,,}.

(2) If x and y are derivable from the set {al, ... , ate,}, so is the element xy. (3)

No element is derivable from the set {a1, ... , an } unless its being so is a consequence of conditions (1) and (2). An element b is said to be derivable from an element a if b is derivable

from the unit set {a}.

§1 The combinator B We now consider an applicative system in which the set N of elements contains an element B satisfying the following condition:

Bxyz = x(yz)

..o

Of course the presence of B guarantees that the set E1 of all E-functions of one argument is closed under composition, for given any two such functions [a]1, [b]1, if we take c = Bab, then for all x, cx = Babx = a(bx), and so [c]1(x) = [a]1([b]1(x)).
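As a purely illustrative aside (not how the text formalizes things), the defining equation of B and its composing effect can be made concrete by modelling application as ordinary function call:

```python
# Illustrative sketch (not from the text): a combinator as a curried Python
# function, with application written as function call.
B = lambda x: lambda y: lambda z: x(y(z))        # Bxyz = x(yz)

# B composes one-argument functions: c = Bab behaves as a after b.
a = lambda n: n + 1
b = lambda n: 2 * n
c = B(a)(b)
assert all(c(n) == a(b(n)) for n in range(10))
```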

Problem 1 From B, derive elements D, B2, E, B3, D1, D2, D3, F satisfying the conditions below (for all elements x, y, z, w, v, etc.).

(a) Dxyzw = xy(zw)
(b) B2xyzw = x(yzw)
(c) Exyzwv = xy(zwv)
(d) B3xyzwv = x(yzwv)
(e) D1xyzwv = xyz(wv)
(f) D2xyzw = x(y(zw))
(g) D3xyzwv = x(yz)(wv)
(h) Fxy1y2y3z1z2z3 = x(y1y2y3)(z1z2z3)


Problem 1.1 Using mathematical induction, prove the following generalization of (b) and (d) above. For every n > 1 there is a combinator Bn derivable from B satisfying the following condition (for all elements

x, y1, ..., yn, z): Bnxy1 ... ynz = x(y1 ... ynz)

[For example, one can derive an element B4 satisfying the condition: B4xy1y2y3y4z = x(y1y2y3y4z).]

§2 We add T

We continue to assume that N contains B. Now we assume further that N contains an element T satisfying the following condition:

Txy = yx

The element T (unlike B) has a permuting effect. From B and T we can derive a host of combinators that parenthesize and permute.

Problem 2 (a) Show that from B and T we can derive an element R satisfying the condition:

Rxyz = yzx

(b) Show that from R (and hence from B and T) one can derive an element C satisfying

Cxyz = xzy

(c) If one expresses C in terms of B and T via R, one obtains an expression of nine letters. It can be shortened by one letter. How?

(d) We have derived C from R. Also, if we start with C we can obtain R (that is, an element R satisfying Rxyz = yzx). How?

Note

For any element x, Cx = RxR and Cx = B(Tx)R.
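The two identities of the Note can be checked mechanically; in the sketch below (not from the text) R and C are written straight from their defining equations rather than derived, so nothing of Problem 2 is given away.

```python
# Illustrative sketch (not from the text): verifying the Note with combinators
# modelled as curried Python functions.
B = lambda x: lambda y: lambda z: x(y(z))        # Bxyz = x(yz)
T = lambda x: lambda y: y(x)                     # Txy  = yx
R = lambda x: lambda y: lambda z: y(z)(x)        # Rxyz = yzx
C = lambda x: lambda y: lambda z: x(z)(y)        # Cxyz = xzy

# Apply everything to a traceable two-argument function f and plain tokens.
f = lambda a: lambda b: f"f({a},{b})"

assert C(f)("y")("z") == f("z")("y")             # Cxyz = xzy
assert R(f)(R)("y")("z") == C(f)("y")("z")       # (RxR)yz = (Cx)yz
assert B(T(f))(R)("y")("z") == C(f)("y")("z")    # (B(Tx)R)yz = (Cx)yz
```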

Problem 2.1 Show that from B and C we can derive an element Q satisfying the relation

Qxyz = y(xz)


Note

Q is more usually written B' and is a standard combinator. We will have a special use for it in fixed point problems.

Problem 2.2 It then follows that Q is derivable from B and T (since C is). Reducing C to B and T as in the solution to Problem 2(c), and expressing Q in terms of B and C, as in the solution to the last problem, we obtain an expression of nine letters. But this can be simplified to an expression of only six letters. How?

Problem 2.3 From Q alone, derive an element Qo satisfying the condition Qoxyzw = xz(yw)

Problem 2.4 It then follows that Qo is derivable from B and C, but it can be done with an expression of only four letters. How?

Problem 2.5 From B and C - also from Q and C - derive elements Q1, Q2, Q3, Q4, Q5, Q6 satisfying the following conditions:

(a) Q1xyz = x(zy)
(b) Q2xyz = y(zx)
(c) Q3xyz = z(xy)
(d) Q4xyz = z(yx)
(e) Q5xyzw = z(xyw)
(f) Q6xyzw = w(xyz)

[We will be using Q3, Q5, and Q6.]

Problem 2.6 (a) We know that Q is derivable from B and C. Show that B is derivable from Q and C.

(b) It is also true that B can be derived from Q and T - moreover by an expression of only four letters. How?

(c) Since B is derivable from Q and T, and C is derivable from B and T, it follows that C is derivable from Q and T, but in fact it can be done with an expression of only four letters. How?


Some terminology and discussion

An element A is called a proper combinator if there is some word


W(x1, ..., xn) in the variables x1, ..., xn (but with no constants) such that the equation Ax1x2 ... xn = W(x1, ..., xn) holds. Any element derivable from proper combinators is called a combinator. All the combinators we have derived so far are proper. An example of a combinator that is not proper is TT, for TTx1 ... xn is x1Tx2 ... xn, which cannot be reduced further (and likewise TTx1 = x1T, which cannot be reduced further). A proper combinator A is called regular if it satisfies an equation of the form Axy1 ... yn = xW1W2 ... Wk, where W1, ..., Wk are words in the variables y1, ..., yn only. For example B and C are regular; T, R and Q are not.

Order and bases

By the order of a proper combinator A is meant the smallest n such that there is a word W(x1, ..., xn) in n variables such that the equation Ax1 ... xn = W(x1, ..., xn) is satisfied. As examples, T is of order 2; B and C are of order 3. For any combinators X1, ..., Xn, by {X1, ..., Xn}+ we shall mean the class of all combinators derivable from X1, ..., Xn (they are all combinators, of course). We now see that the classes {B, T}+ and {Q, T}+ are the same, and also {B, C}+ = {Q, C}+. Also {B, C}+ ⊆ {B, T}+ (since C is derivable from B and T), but {B, C}+ ≠ {B, T}+ - {B, C}+ is a strictly smaller class than {B, T}+ since it is known that T is not derivable from B and C. An important point to note about B and Q is that for any elements x and y, Bxy = Qyx. The reason is that we are taking CB as our definition of Q and so Qyx = CByx = Bxy. We shall have several occasions to make use of this fact.

The combinator G

We shall have special use for a combinator G satisfying the following condition:

Gxyzw = xw(yz)

This combinator is derivable from B and T - in fact from B and C.

Problem 2.7 Derive G from B and C.


The combinator V

This combinator is used to construct pairing functions, and will play a key role in the next several chapters. It is defined by the condition:

Vxyz = zxy
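The following sketch (not from the text) shows in miniature why V builds pairing functions: Vxy packages x and y and hands them to whatever selector it meets. The selectors used here are the standard K (Kxy = x) and KI (with Ix = x), which are only introduced later in this chapter, so their appearance here is purely illustrative.

```python
# Illustrative sketch (not from the text): V as a pairing constructor.
V = lambda x: lambda y: lambda z: z(x)(y)        # Vxyz = zxy
K = lambda x: lambda y: x                        # Kxy  = x
I = lambda x: x                                  # Ix   = x
KI = K(I)                                        # (KI)xy = Iy = y

pair = V("first")("second")
assert pair(K) == "first" and pair(KI) == "second"
```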

Problem 2.8 Derive V from B and C. [It is easier to derive it from B, C, and T.]

Problem 2.9 Show that for each n ≥ 2 there is a combinator Vn derivable from B and C satisfying the condition: Vnxy1 ... ynz = xzy1 ... yn

Problem 2.10 Show that for each n > 2 there is a combinator Cn satisfying the condition: Cnxzy1 ... Y. = xy1 ... ynz

Problem 2.11 Show that from B and C one can derive a combinator F satisfying the following condition:

Fxyz = zyx

§3 We add M We now assume that in addition to B and T, we have a combinator M of order 1 satisfying the following condition:

Mx = xx This element M is sometimes called a duplicator and plays a basic role in fixed point constructions.

A basic standard combinator is W, defined by the condition: Wxy = xyy.

W can be derived from B, T, and M - in fact from B, C, and M. It is easier to first derive its converse W', defined by the condition:

W'xy = yxx

§3 We add M

321

Problem 3 (a) Derive W' from B, M, and R. Then express W' in terms of B, M, and C. Then express W' in terms of B, M, and T - which can be done in two ways, each involving only five letters.

(b) Then express W in terms of W' and C. (c) Now express W in terms of B, T, and M. This can be done with an expression of only ten letters, and there are two such expressions. Remarks (CD

Alonzo Church[3] was the first to show that W is derivable from B, T, M, and the identity combinator I. [Ix = x.] His method was both bizarre and ingenious - his expression involved twenty-four letters and thirteen

pairs of parentheses. Shortly after, J. Barkley Rosser found a ten letter expression, which moreover did not use the identity combinator I.

Problem 3.1 We have seen that W is derivable from B, T, and M. If we started instead with B, T, and W, could we derive M?

Problem 3.2 (a) Show that from B, T, and M, one can derive an element L satisfying the following condition:

Lxy = x(yy)

In fact L can be derived from B, C, and M, or alternatively from B, R, and M.

(b) Show that it is also true that L is derivable from B and W. Then express it in terms of Q, C, and W. (c) Also, L is derivable from M and Q. How? [This is a particularly neat derivation of L.] Remark

The combinator L (introduced in T.M.M.) plays a particularly significant role in the study of fixed points, as we will see.

Chapter 17. Fixed point properties of combinatory logic

322

Problem 3.3 (a) Show that from B and M one can derive a combinator M2 satisfying the following condition: M2xy = xy(xy)

(b) One can also derive such a combinator M2 from Q and W, and this fact will prove to be quite useful! What is the derivation? [It is tricky!]

Problem 3.4 Show that for any positive integer n, one can derive from B and W a combinator W,,, satisfying the condition: Wnxy1 ... 2Jnz = x7J1 ... Yn ZZ

Problem 3.5 We shall have good use for combinators P, P1, P2, and P3 satisfying the conditions:

(a) Pxyz = z(xyz) (b) Pixyz = y(xxz)

(c) P2xyz = x(yyz) (d) P3xyz = y(xzy)

They can be derived from B, T, and M - in fact from B, Q, and W (and hence from B, C, and W). How? The combinator S

0-0

rte

The classic combinator S is defined by the condition Sxyz = xz(yz). [We have already remarked that an applicative system is complete if and only if it contains S and the combinator K defined by Kxy = x.] There are several ways of deriving S from B, C, and W (and hence from B, T, and M). We shall consider several such derivations.

Problem 3.6 We will first derive the standard expression for S in terms of B, C, and W. We will do this in stages.

(a) From B, C, and W derive an element So satisfying the condition Soxyz = xzyz.

§3 We add M

323

(b) From So and B derive an element S' satisfying the condition S'xyzw = xywzw.

(c) Then construct S from B and S'. (d) Then reduce to B, C, and W.

Problem 3.7 The standard expression thus obtained has seven letters. In T.M.M. we found a more economical expression for S which has only six letters. It will be helpful to use the combinator G of Problem 2.7 (Gxyzw = xw(yz)). S can be easily derived from B, G, and W. How?

Problem 3.8 We know that S is derivable from B, C, and W. It is also true that W is derivable from B, C, and S - in fact from just C and S. How? [It helps to derive W' from S and R first.] Remark

We now see that the classes {B, C, W J+ and {B, C, S}+ are the same.

Problem 3.9 Since W is derivable from S and C, and C is derivable from B and T, then W is derivable from S, B, and T. Actually, W is derivable from just S and T - and quite economically. Also M is derivable from S

and T - in fact from W and T (a) Derive W from S and T. (b) Derive M from W and T. (c) Now write an expression for M in terms of S and T. Remark

We now see that the classes {B, T, W J+ and {B, T, S}+ are the same.

Some derivatives of B and S

From B and S can be derived the standard combinator (D satisfying the condition:

(Dxyzw = x(yw)(zw)

Then for each n > 2 one can derive from B and 4) a combinator (Dn satisfying the condition: (Dnxyzw1 ... wn = x(ywl ... wn)(zwl ... ton)

324

Chapter 17. Fixed point properties of combinatory logic

Problem 3.10 Derive (D, (D2i ... , (Dn,... (that is, show how for each n, (bn can be derived).

Problem 3.11 From B and S (and any other combinators already derived from them) derive combinators S1, S2, and S3 satisfying the following conditions:

(a) Sixyzw = xyw(zw) (b) S2xyzw = xzw(yzw) (c) S3xyzwv = xy(zv)(wv)

§4 Two special combinators We now turn to two combinators that will play a special role in the study of fixed point combinators. The first is the combinator 0 defined by the condition: Oxy = y(xy).

Problem 4 Derive 0 from Q and W. Problem 4.1 Show that P of Problem 3.5 can be derived from 0 and B.

Turing combinators

By a Turing combinator we shall mean a combinator U satisfying the condition

Uxy = y(xxy) Such a combinator U (discovered by Alan Turing[36]) allows for a fantastically simple construction of a fixed point combinator, as we will see.

It can be derived from B, Q, and W - in fact from L, Q, and W - in many different ways. It can also be derived from M, Q, and W, also from Q, M, and 0, and also from B, M, and 0, as we shall now see.

Problem 4.2 (a) Derive a Turing combinator from W1 and P1.

(b) Derive one from L and O.

(c) Derive one from P and W.

(d) Derive one from P and M.

§5 The combinator I

325

Problem 4.3 (a) Using (a) of the last problem, derive a Turing combinator from Q, L, and W, and reduce the expression to one in Q, B, and W. -4-=

(b) Using (b) of the last problem, derive a Turing combinator from L and 0 and then reduce to L, Q, and W (getting a different expression from that of (a)).

(c) Derive a Turing combinator from W, B, and 0 and then reduce to B, Q, and W (thus getting different expressions from those of (a)). (d) Derive a Turing combinator from Q, M, and 0, and then reduce to Q, M, and W.

§5 The combinator I This combinator (the "identity" combinator) is defined by the condition:

Let us now assume that I is present.

Problem 5 (a) Derive T from C and I. (b) Derive M from W and I. (c) Derive M from L and I.

(d) Derive M from S and I.

(e) Derive 0 from S and I.

(f) Derive M from 0 and I. (g) Derive 0 from P and I. (h) Derive M from P and I.

Chapter 17. Fixed point properties of combinatory logic

326

Bases

Let us say that a set {X1,.. X,,,} of combinators is a basis for the .

class {X1,..

.

,

,

X,,,}+ of all combinators derivable from X1,. .. , X,,,. Let

us call two bases equivalent if they are bases of the same class in other words, any combinator derivable from elements of the one is derivable from elements of the others. We now know that the bases

+'d

{B, T, M, I}, {B, C, W, I}, {B, C, S, I} are all equivalent (also the bases {B, T, I} and {B, C, I} are equivalent, since we have just seen that T is derivable from C and I). The combinators derivable from any of these three equivalent bases are known as combinators of the Al-calculus we might call them AI-combinators. Alonzo Church took {B, T, M, I} as the basis. Curry preferred the basis {B, C, W, I}. The class of AIcombinators is not combinatorially complete - it doesn't contain a combinator K satisfying the condition Kxy = x. The essential property of the AI-combinators is that for any word W (xl, ... , x7,,) which contains all the variables x1,. .. , x.,,,, there is a )I-combinator A such that the equation Ax, ... x,,, = W (xl, ... , x,,,) holds (e.g., there is a )I-combinator A such

that Axyz = (z(xxzy))(zx), but there is no )J-combinator A satisfying V.)

the condition Axyz = xz(zxx)z, since the variable y is missing in the expression on the right hand side of the equation). A proof of this partial completeness can be found in many books on combinatorial logic, or in

yeti

T.M.M. Another basis for the class of AI-combinators due to Rosser[21] is {I, J},

where J satisfies the condition Jxyzw = xy(xwz). For we can take T

to be JII, C to be JT(JT)(JT), B to be C(JIC)(JI), and M to be C(C(C(BJT)T)T)T.

§6 A basis equivalent to {B, T, I} The sub-class of the )J-combinators consisting of those derivable from just B, T, and I has been studied by Rosser and others. We showed in T.M.M. that these combinators B, T, and I could be replaced by the two combinators G and I. [We recall that Gxyzw = xw(yz).] Thus the bases {B, T, I} and {G, I} are equivalent.

Problem 6 Show that B and T are derivable from G and I. [Hint: first derive Q3 of Problem 2.5 from G and I, then C, then Q, then B, then T.]

§7 Some curiosities Problem 7 We say that x commutes with y if xy = yx.

§8 We now consider K

327

Find an element A derivable from B, T, and M that commutes with every element.

Problem 7.1 Is there an element X derivable from B, T, and M such that XXX = X (XX )?

Problem 7.2 Find an element A derivable from B, T, and M such that for all x, Ax = A(xx).

§8 We now consider K We recall that K is defined by the condition Kxy = x.

Problem 8 Show that B, T, M, and I are all derivable from S and K. Remark

We have stated that the class of combinators derivable from S and K is complete. In the next chapter we will give the standard proof of this.

Problem 8.1 (a) Prove the following cancellation law for K: if Kx = Ky then x = y. (b) Show that unless K is the only element of N, there is no element x such that Kx = K. (c) Assuming that K is not the only element of N, prove that there are infinitely many elements derivable from K. Some inequalities

Assuming that there is more than one element present, many inequalities can be derived - for example the following: K=A I

SCI S#K

CC =A C

VV #V W1 #1

T#K SSrS

RRrR QQrQ O0,

We leave the proofs of these inequalities to the reader'. Actually it 'They can be found in T.M.M., Chapter 22.

Chapter 17. Fixed point properties of combinatory logic

328

can be shown that no two of the elements B, C, W, S, R, T can be identical (assuming there is more than one element).

A pairing function and its inverses

For any elements x and y, we take the "representative" of the ordered pair (x, y) to be Vxy (V is a combinator satisfying the condition Vxyz = zxy. We are taking BCT for V). The element Vxy is sometimes written: "[x

.7]')

Problem 8.2 (a) Prove that the function [V]2 is 1-1, i.e., if Vxy = Vzw then x = z and y = w. sm.

(b) Find combinators K1 and K2 such that for all x and y, K1 (Vxy) = x and K2(Vxy) = y. Note

It is not the case that for all z there are elements x and y such that Vxy = z, and so we don't have the law: V(Klx)(K2x) = x. Klx is sometimes written (x)o and K2x, (x)1. Propositional combinators

We let t = K and f = KI. We might call t and f truth-combinators.

Problem 8.3 Find combinators neg (negation), c (conjunction), d (disjunction), i (material implication), and e (equivalence), all derivable from B, T, K, and I, such that for any elements p and q, each of which is either t or f, the following conditions hold: (a) negt = f and neg f =t (b) cpq = t iff p = t and q = t; otherwise cpq = f (c) dpq = t iff p = t or q = t; otherwise dpq = f

(d) ipq = t iff p = for q = t; otherwise ipq = f (e) epq = t if p = q; otherwise epq = f

§9 Idempotent and voracious elements

329

Problem 8.4 We call an applicative system extensional if for all elements a and b, if ax = bx for all x, then a = b.

Show that if A is extensional then neg can be taken to be a proper combinator. C09

§9 Idempotent and voracious elements We call an element X idempotent if XX = X. Of course I, is idempotent.

Problem 9 (a) Find an idempotent element that is derivable from B and M. (b) Find one derivable from L and M. (c) There is even one derivable from only L! Can you find one? We call an element X voracious if X y = X for every y.

Problem 9.1 171

(a) Find a voracious element that is derivable from B, K, and M.

(b) Find one that is derivable from K and L.

Part II Fixed points #A. The weak fixed point property §10 Some sufficient conditions We recall that in an applicative system, b is called a fixed point of a if ab = b.

Problem 10 (a) Show that if B and M are present, then every element has a fixed point.

(b) Show that if just L is present, then every element has a fixed point.

Chapter 17. Fixed point properties of combinatory logic

330

§11 Fixed points in relation to idempotent and voracious elements We shall now see how the solutions to Problems 9 and 9.1 were found.

Problem 11 (a) Show that any fixed point of M must be idempotent. This, together with the solution of Problem 10 yields solutions of (a) and (b) of Problem 9. How?

(b) Show that if x is a fixed point of LL, then xx must be idempotent. How is this related to (c) of Problem 9?

Problem 11.1 Prove that any fixed point of K must be voracious. This, together with the solutions of Problem 10, yield solutions to (a) and (b) of Problem 9.1. How?

§12 A fixed point principle We are considering an applicative system that is not necessarily combinatorially complete. We shall call a function f (xl,... , x,,,) from, N'z into N realizable if there is an element a such that axi ... x,,, = f (xi, ... , xn,) for all xi, ... , xn in N. We will say that such an element a realizes the function f. [What we call "realizable" is more commonly called "representable," but we prefer to use the latter term in a different context, which will be discussed in a later chapter.] A result that is commonly called the "fixed point theorem of combinatory logic," or sometimes the first fixed point theorem of combinatory logic," is the following: Tai

Theorem F If A is a complete applicative system, then for any expression W (x, yl,

. ,

yn,) in the variables x, yi, .

.

.

, yn

(and possibly constants

as well) there is an element a such that for all elements yi, ... , yn the cad

equation ayi ... yn = W (a, yl, ... , yn,) holds.

As an illustration of the use of this theorem, if we take the expression xlx(x3x2x)x, there is a combinator a such that for all elements xi, x2i x3: axix2x3 = xia(x3x2a)a. This basic result is a corollary of either of the following two theorems (which hold for a system not necessarily combinatorially complete).

§13 Some fixed point combinators

Theorem F1

331

If every element has a fixed point, then for every realizable

function f (x, y1, ... , yn) there is an element a such that for all Y1.... , yn,

ayl,...,yn = f(a,yl,...,yn). Theorem F2 For any function f (x, y1, ... , yn), if the function f (xx, Y1, , yn) is realizable, then there is an element a such that for all

y1,...,yn: ayl,...,yn=f(a,yl,...,yn). Theorems F1 and F2 provide different proofs of the fixed point theorem of combinatory logic.

Problem 12 Prove Theorems F1 and F2 and show how each one provides a proof of the fixed point theorem of combinatory logic.

#B. Some fixed point combinators We recall that we are defining a fixed point combinator as an element B such

that Ox = x(Ox), for all x. A fixed point combinator will also be called a sage2.

Here are some of the combinators of Part I (all derivable from B, M, and T - and all but M derivable from B, C, and W) with their defining characteristics, that will be useful in constructing sages.

Bxyz = x(yz) Rxyz = yzx Cxyz = xzy Qxyz = y(xz) Qoxyzw = xz(yw) Gxyzw = xw(yz) Lxy = x(yy)

Wxy = xyy M2xy = xy(xy) Sxyz = xw(yz) Oxy = y(xy) Pxyz = z(xyz) P1xyz = y(xxz) Uxy = y(xxy)

§13 Some fixed point combinators Problem 13 (a) Find a sage derivable from M, B, and R. (b) Now express one in terms of B, C, and M. (c) Find one derivable from M, B, and L. 21 introduced this term in T.M.M., but it has subsequently been used by others.

Chapter 17. Fixed point properties of combinatory logic

332

(d) Find one derivable from M, B, and W.

Problem 13.1 One of our favorite constructions of a sage is in terms of Q and M. How can this be done?

Let us define an assistant sage as a combinator A such that for every x, Axx is a fixed point of x.

Problem 13.2 (a) Show that if A is an assistant sage, then a sage is derivable from A and W. -ti

(b) Find an assistant sage that is derivable from Q and L. This with (a) yields a sage derivable from Q, L, and W. What sage is that? (c) One can not only derive an assistant sage from Q and L, but one can even derive one from Qo and L. How?

(d) There is also a combinator X derivable from Q and W such that XL

is a sage. What X is that? [This yields another proof that a sage can be derived from Q, L, and W.]

Problem 13.3 (a) Derive an assistant sage from G and L.

(b) Derive an assistant sage from B, C, and L.

Problem 13.4 Derive a sage from B, G, and W. Problem 13.5 Derive a sage from B, C, and W. Problem 13.6 Derive a sage from S and L. ova

Problem 13.7 Derive a sage from S, B, and W. [This is Curry's sage.]

§14 Turing sages

333

Problem 13.8 (a) Derive a sage from 0, B, and M. (b) Derive one from L and 0.

(c) Derive one from M and P. [Recall Pxyz = z(xyz).] (d) Derive one from P and W. (e) Derive one from P and I.

Problem 13.9 (a) There is a combinator X that we have derived from B, T, and M such that for any Y, if XYY = YY, then YY is a sage. What X is that? (b) Show that for any X with the above property, WX(WX) must be a sage.

§14 Turing sages We let U be a Turing combinator.

Problem 14 The marvelous thing about a Turing combinator U is that a sage can be derived from just U alone! How?

Problem 14.1 Prove that for any X, if UX = XX then XX must be a sage.

§15 Some special properties of 0 Problem 15 As a preliminary problem, show that if B is a sage and A is an element such that Ax = Ox for all x, then A must also be a sage. Remarks

ti'

L".

If course for an extensional system, the above problem is pointless, since the hypothesis would imply that A and B are the same element. But it is of interest that the solution of the above problem is valid without the hypothesis of extensionality.

Chapter 17. Fixed point properties of combinatory logic

334

Problem 15.1 Prove that if 0 is a sage, so is 00. Problem 15.2 0 has the marvelous property that every fixed point of 0 is a sage! Why is this? .'3

Problem 15.3 If the system is extensional, then, conversely, every sage is a fixed point of 0, and thus the fixed points of 0 are precisely the fixed point combinators! Prove this.

Problem 15.4 Without the assumption of extensionality, we do not know whether or not all sages are fixed points of 0 (even assuming combinatory completeness). [This might be an interesting open problem.] However, Cs'

even without the hypothesis of extensionality, some of the sages that we have derived can be shown to be fixed points of 0. Which ones?

Problem 15.5 Sate whether the following is true or false: for any sage B, the element 90 is a sage.

Problem 15.6 By a sage producer let us mean a combinator A that for every element x, the element Ax is a sage.

Construct a sage producer from B, C, and M. [Hint: it can be constructed from P2, M2, and C.]

§16 Sages from B and W We have seen that a sage can be constructed from the one proper combiowe

nator U, but U, though proper, is not regular. In T.M.M. we left open the following two questions: (1) can a sage be constructed from just one regular combinator?; (2) if not, can a sage at least be constructed from B and just one other regular combinator? Richard Statman has answered the second question affirmatively (and, I believe, the first one negatively). He constructed a sage from B and W. Since then a host of B, W-sages have been discovered by Wos and McCune

at Argonne National Laboratories. We shall now turn to the study of Statman's sage constructed from just B and W.

Problem 16 (a) From B and W construct a combinator L1, satisfying the condition:

Lixyz = x(yyz)

335

§17 n-sages

.ti

[In fact, L1 can be constructed from B and L and then reduced to an expression in B and W. Indeed, it is simpler this way.] (b) Show that Lix(Lix)(Lix) is a fixed point of x.

(c) From B and W construct a combinator M3 with the property: M3xy = xy(xy)(xy).

(d) Now construct a sage from M3 and L1 and reduce to B and W. Note 090

I also raised the problem of whether a sage can be constructed from B and M. According to Larry Wos, who is an expert on sage constructions, this particular problem has not been solved to this day (16 June, 1992), though he states that some progress may be in sight. Another problem I would like to suggest: is there an effective means of deciding which finite sets of combinators are sufficient to produce a sage?

§17 n-sages Looking at applicative systems from the viewpoint of sequential systems, to say that A has the recursion property is to say that for every element a there is an element b such that for all x, bx = ax(bx) ([b]1(x) -+ a,x, [b]1(x)). A

stronger property is that there is an element r such that for all x and y, rxy = xy(rxy). Such an element r we will call a 2-sage. Actually, the existence of a 2-sage is equivalent to the sequential system S(A) having the 1-1 Myhill property (the existence of a E-function O(x, y) such that for all x and y, q(x, y) -+ x, y, q(x, y)), hence every combinatorially complete system has a 2-sage. But we are interested in applicative systems that are not necessarily combinatorially complete.

Problem 17 (a) Suppose that every element has a fixed point and that the combinator S is present. [Sxyz = xz(yz).] Prove that there is a 2-sage. (b) Specifically, construct a 2-sage from S and L.

Problem 17.1 Show that a 2-sage can be constructed from B and W. [Tricky!]

Chapter 17. Fixed point properties of combinatory logic

336

Problem 17.2 Suppose A is extensional. We know that there is then a combinator derivable from B, C, and W (namely, 0) whose fixed points are those and only those elements that are sages. Is there necessarily a combinator derivable from B, C, and W whose fixed points are those and only those elements that are 2-sages (assuming extensionality)?

By an n-sage we mean a combinator I' satisfying the condition: Pxl ... xn, = xl ... xn(rxl ... xn). Problem 17.3 Find two regular combinators whose presence is enough to guarantee the existence of n-sages for all n > 1.

Part III Multiple fixed points In preparation for the study of multiple fixed point properties, it will be handy to have some more proper combinators available for immediate use.

§18 Some more useful combinators Problem 18 From B, C, and W, derive combinators el, e2, S4, C1, C2, D1, D2 satisfying the following conditions.

(a) elxy = xyx (b) e2xy = yxy

(c) S4xyzwv = z(xwv)(ywv)

(d) Clzxy = z(yxy)(xyx) (e) C2zxy = z(xyx)(yxy)

(f) Dlxyzw = xz(yw)(xz) (g) Dlxyzw = xw(yz)(xw)

Double fixed points §19 The cross-point property We will say that A has the cross-point property if for any elements al and a2 there are elements bl and b2 such that albs = b2 and a2b2 = bl.

§20 The weak double fixed point property

337

Problem 19 Show that if the combinators M and B are present then A has the cross-point property.

Problem 19.1 We have just, seen that the cross-point property can be obtained using two proper combinators (M and B). Actually, it can be obtained with only one proper combinator derivable from B, C, and W. What combinator works? cap

§20 The weak double fixed point property

a°;

E-+

The weak double fixed point property for an applicative system A is that for any elements a1 and a2, there are elements b1 and b2 such that a1b1b2 = b1 and a2b1b2 = b2. We will call a pair (b1, b2) of such elements a double fixed point of the pair (al, a2). For a complete applicative system, there are many interesting ways to establish the double fixed point property, as the following problems reveal. For applicative systems not necessarily complete, the different ways yield different results.

Problem 20 Show that if the combinators C1 and C2 of Problem 18 are present, then A has the weak double fixed point property.

Problem 20.1 Suppose that every element has a fixed point and that for any element a there are elements b1 and b2 such that for all x and y: bixy = a(xy)(yx) and b2xy = a(yx)(xy). Show that the system has the weak double fixed point property.

Problem 20.2 The following is a more interesting one: Show that if every element has a fixed point and if S is present, then A has the weak double fixed point property. [This of course implies that if S and L are present, then A has the weak double fixed point property.]

Problem 20.3 Show that if B and W are present then A has the weak double fixed point property.

Problem 20.4 Here is a way of deriving the weak double fixed point property using the pairing combinators V and its inverse combinators K1 and K2 (see Problem 8.2). Suppose that V, K1, and K2 are present and that every element has a fixed point and that the combinator S2 is present. [S2xyzw = xzw(yzw)

Chapter 17. Fixed point properties of combinatory logic

338

- Problem 3.11.] Show that the system has the weak double fixed point property.

Problem 20.5 For those who like curiosities, suppose that there are elements al and a2 such that for all z, x, y, w: (1) alzxy = x(yy)(zzxy) (2) a2zwxy = w(yy)(zxy) Prove that A has the weak double fixed point property.

§21 The property DM o`)

The double Myhill property for an applicative system A is that there exists combinators M1 and M2 satisfying the conditions:

(1) Mlxy = x(Mlxy)(M2xy) (2) M2xy = y(Mixy)(M2xy) Such a pair (Ml, M2) will be called a DM-pair. We say that such a pair (M1, M2) is derivable from a given set of combinators if M1 and M2 are both derivable from the set. There are many different ways of deriving a DM-pair from B, C, and W. Let us consider some.

Problem 21 Derive a DM-pair from the combinators C1i C2, Dl, D2 of Problem 18.

Cdr

Problem 21.1 There exist two proper combinators A1, A2 derivable from Z.,"

B, C, and W, such that a DM-pair is derivable from Al and A2. Find such combinators Al and A2. [The ones we have found are of order four.]

---

We recall the combinator 0 satisfying the condition: Oxy = y(xy), and we know that every fixed point of 0 is a sage. This fact has a double analogue, namely that in a complete applicative system, there are combinators 01 and 02 such that every double fixed point of (01, 02) is a DM-pair. Indeed, 01 and 02 can be derived from B, C, and W. [This yields yet another method of obtaining a DM-pair from B, C, and W.]

Problem 21.2 Construct such a pair (01, 02) from B, C, and W.

§22 Nice combinators and symmetric combinators

339

§22 Nice combinators and symmetric combinators By a nice combinator we shall mean a combinator N satisfying the condition:

Nzxy = z(Nxxy)(Nyxy) By a symmetric combinator we shall mean a combinator s satisfying the condition: sxy = x(sxy)(syx)

There are several ways in which nice combinators and symmetric combinators can be derived from B, C, and W:

Problem 22 Find proper combinators n1 and n2 derivable from B, C, and W such that (a) A nice combinator is derivable from n2.

(b) All fixed points of n1 are nice combinators, and if the system is extensional, then all nice combinators are fixed points of n1.

Problem 22.1 Find proper combinators sl and s2 derivable from B, C, and W such that: (a) A symmetric combinator is derivable from s2.

(b) All fixed points of s1 are symmetric combinators, and if the system is extensional, then all symmetric combinators are fixed points of sj. Discussion

Nice combinators and symmetric combinators provide another approach to double fixed point properties. It is obvious that the presence of a nice combinator implies the weak double fixed point property, and so does the presence of a symmetric combinator together with the combinator C. Also, from a symmetric combinator s together with B and C, a DM-pair can be constructed (Problem 22.2 below), and a DM-pair can also be constructed from a nice combinator N together with C, W, and W1.

Problem 22.2 Prove the last statement.

Chapter 17. Fixed point properties of combinatory logic

340

Discussion: Grand and multiple symmetric combinators

For any n > 2 there is a )I-combinator A (in fact one derivable from B, C, and W) satisfying the following condition:

Awzxl ... xn = z(wxlxl ... xn)(wx2xl ... xn) ... (wxnxl ... xn,) Any fixed point g of A satisfies the condition: gzxl...xn = z(9xlxl...xn)(9x2xl...xn)...(gxnxl...xn)

Such a combinator g we could call a "grand" combinator. [Thus a nice combinator is a grand combinator for n = 2.1

Of course the presence of g guarantees the weak n-fold fixed point property. But also, for each i < n, there is a combinator gi of the )Jcalculus (in fact one derivable from B, C, and W) satisfying the condition:

gill ... xn = gxnxl ... xn. And therefore we have the "uniform" n-fold fixed point property (the n-0 Myhill property) that for all i < n: 9ixl...xn = xi(91x1...xn)...(gnxl...in)

`°a

Of course for a combinatorially complete system (the AK-calculus) the existence of such combinators g, gl, ... , gn follows from earlier results on unified sequential systems, but the point of the above discussion was to indicate that all this could be done in the )J-calculus3 (in fact using just

B,C, and W). Also for each n > 2, from B, C, and W one can get a "multiple symmetric" combinator h satisfying the condition: hxl...1n = xl(hxl...xn)(hx2...xnxl)...(hxnxl...xn-1)

We leave this to the reader.

Solutions 1 (a) It is sometimes easiest to work these problems backward. We are

looking for an element D such that for all x, y, z, w: Dxyzw = (xy)(zw). Let us look at the expression (xy)(zw) and see how we can get back to Dxyzw, where D is the element to be found. Well, we look at the expression (xy) as a unit - call it A - and so (xy) (zw) is A(zw), which we recognize as BAzw, which is B(xy)zw. So the first step of this "backward" argument is to recognize (xy)(zw) as 3For the observation that my scheme works in the AI-calculus I am indebted to Henk Barendregt.

Solutions

341

B(xy)zw. Next, we look at the front end B(xy) of the expression and recognize it as BBxy, and so B(xy)zw = BBxyzw. Therefore we take D to be BB. Let us check by running the argument forward:

Dxyzw = BBxyzw, since D = BB B(xy)zw, since BBxy = B(xy) (xy)(zw) = xy(zw) (b) Since we have already found D from B, we are free to use it. In other words, in any solution for B2 in terms of B and D, we can replace D by BB, thus getting a solution in terms of B alone. Again we will work the problem backward. -ti

x(yzw) = x((yz)w) = Bx(yz)w, but Bx(yz) = DBxyz, and so Bx(yz)w = DBxyzw. And so we take B2 to be DB, which in terms of B is (BB)B (which we can also write BBB). So our solution is BBB. Let us check: DBxyzw = Bx(yz)w = x(yzw). In the remaining parts of this problem, we will just give solutions, leaving the checking to the reader.

(c) Take E = BB2 (= B(BBB))

(d) Take B3 = EB (= B(BBB)B) - also = BB2B (e) Take D1 = BD (= B(BB)) (f) Take D2 = D1B (= B(BB)B) (g) Take D3 = DD (= BB(BB))

(h) Take E = EE (= B(BBB)(B(BBB))) 1.1

Take B1 = B.

Now suppose that n is such that there

is

a combinator Bn satisfying the condition: Bnxy1 ... ynz = x(yl ... YnZ). Then take Bn+1 = BBnB. Then Bn+lxy1 ... ynyn+lz = BBn Bxy1 ... ynyn+lz = Bn(Bx)y1 ... ynyn+lz = Bx(y1 ... ynyn+l)z (because

. . ynyn+l = Bx(yi . . . ynyn+l) by inductive hypothesis), and Bx(y1i... , yn, yn+1)z = x(y1 ... ynyn+lz). Thus Bn+ixyi ... ynyn+lz

Bn(Bx)y1

.

= x(y1 ... ynyn, z). This completes the induction. And so B1 = B, B2 = BB1B2, B3 = BB2B, B4 = BB3B, and so forth.

342

Chapter 17. Fixed point properties of combinatory logic

C'1

2 (a) Take R = BBT. [Check: BBTxyz = B(Tx)yz = Tx(yz) = yzx.] (b) Take C = RRR. [Check: RRRxyz = RxRyz = Ryxz = xzy.]

(c) Thus C = BBT(BBT)(BBT) = B(T(BBT))(BBT). (d) Given C, we can take R = CC (since CCxyz = Cyxz = yzx). 2.1

Take Q = CB. [Check: CBxyz = Byxz = y(xz).]

2.2 CB = B(T(BBT))(BBT)B = T(BBT)(BBTB) = T(BBT) (B(TB)) = B(TB)(BBT). 2.3 Take Qo = QQ(QQ). [Check: QQ(QQ)xyzw = QQ(Qx)yzw = Qx(Qy)zw = Qy(xz)w = xz(yw).]

2.4 Take Qo = BCD (= BC(BB)). [Check: BCDxyzw = C(Dx)yzw = Dxzyw = xz(yw).] We henceforth leave the checking to the reader.

2.5 (a) Take Q1 = BCB (alternatively Q(CQ)C).

(b) Take Q2 = CQ1 (= C(BCB)) - alternatively C(Q(CQ)C). (c) Take Q3 = BC(CB) - alternatively QQC. [In terms of B and T, we could simply take Q3 = BT.]

(d) Take Q4 = CQ3 (= C(BC(CB))) - alternatively C(QQC). (e) Take Q5 = BQ.

(f) Take Q6 = B(BC)Q5 (= B(BC)(BQ)).

2.6 (a) Take B = CQ. (b) Take B = QT(QQ). (c) Take C = QQ(QT).

2.7 Take G = BBC. 2.8 Take V = C(BB(CC))(CC) (alternatively BCT).

Solutions

343

2.9 We use the combinators Bn, of Problem 1.1 (B,,,xy1 ... y,,,z = x(y1 ... y,,z)). First we take V2 = B(BC)C. Then V3 = B3CV2; V4 = B4C

V3i...,Vn+1 = B.+iCV.. 2.10 Take C2 = BC(BC); C3 = BC(BC2), ... , Cn+1 = BC(BC,,) (n > 2).

2.11 TakeF=CV. 3 (a) We can take W' to be BMR, or BM(CC). In terms of B, M, and T, BMR is BM(BBT). We could also take Wto be B(BMB)T. (b) We take W = CW'.

(c) W = CW' = RRRW' = RW'R = BBTW'R = B(TW')R = If we take BM(BBT) for W' we get B(TW')(BBT). B(T(BM(BBT))(BBT). If instead we take B(BMB)T for W', we get B(T(B(BMB)T))(BBT), which is Rosser's expression for W in terms of B, T, and M. 3.1

Yes, it is even derivable from T and W. Take M = WT (thus WTx =

Txx = xx).

3.2 (a) We can take L = CBM, or RMB. We can also take L = B(TM)B. (b) We can take L = BWB. Then we can take CQ for B, and L becomes CQWB, which is QBW, which becomes Q(CQ)W.

(c) Take L = QM.

3.3 (a) Take M2 = BM. (b) Take M2 = Q(WQo)W - which is Q(W(QQ(QQ)))W. 3.4 Take W1 = BW; W2 = BW1i W3 = BW2,... , W.+1 = BWn. A simple induction shows that these combinators work.

3.5 (a) Take P = W3Q5 (= B(BW)(BQ)), or B(BWQ), or B(QQW).

(b) Take P1 = LQ - alternatively W(BQ). (c) Take P2 = CP1 (= C(LQ)).

Chapter 17. Fixed point properties of combinatory logic

344

(d) Take P3 = BCP.

3.6 (a) Take So = B(BW)C (which is W2C). '+1

(b) Take S' = BSo which is B(B(BW)C).

(c) Take S = S'(BB). ,1.

(d) Then S becomes B(B(BW)C)(BB) (the standard expression). .may

3.7 We can take S = W2G, which is B(BW)G, which is B(BW)(BBC).

6U0

3.8 We can take W' to be SRR. Then we take W to be CW' = C(SRR). Taking CC for R, we get C(S(CC(CC))), which we can take for W.

3.9 (a) Take W = ST. (b) Take M = WT. (c) Thus we can take M to be STT. 3.10 Take D = B(BS)B. Then take '4 = BD31..... 4n+l = B4),,-D.

3.11 (a) Take S1 = BS. (b) Take S2 = DS. (c) Take S3 = B-D. 4 4.1

Take O = QQW. Take P = BO.

4.2 (a) W1P1

(b) LO (c) WP

(d) PM

'D2

B-D,P, 4)3 = BDA,

345

Solutions

4.3 (a) W1P1 is BWP1, which is also QP1W, which is Q(LQ)W, which

is also Q(W(BQ))W. Also Q(W(BQ))W = BW(W(BQ)). Thus Q(LQ)W is a Turing combinator, and so are BW(W(BQ)) and Q(W(BQ))W.

(b) LO = L(QQW), so L(QQW) is a Turing combinator (also LO = .fir

L(BWQ)). And so Q(LQ)W and L(QQW) are two different expressions for a Turing combinator in terms of L, Q, and W.

(c) In WP, we can take BO for P, thus W(BO) is a Turing combinator. Replacing 0 by QQW, we get W(B(QQW)). Alternatively, we can take W(B(BWQ)), and so we have two other expressions for a Turing combinator in terms of B, Q, and W.

(d) PM becomes BOM (taking BO forP); alternatively we can take QMO, which in turn is QM(QQW). 5

6

(a) (b) (c) (d)

Take T = CI. Take M = WI. Take M = LI. Take M = SII.

(f)

Take O=SI. Take M=0I.

(g)

Take 0

(e)

(h)

P1.

Take M=PII.

First take Q3 to be GI. Then take C to be GGII. Then take Q =

G(CC)Q3 (which is G(CC)(GI)). Then, of course, we take B = CQ, then T = CI (or more simply GII).

7 M(BTM) is such an element X. Another is LT(LT). [A further discussion of this problem follows the solution of Problem 10.]

7.1 Obviously L is such an element.

7.2 LL(LL) is such an element. 8

Take I = SKK. Then take M = SII. Then take T = S(K(SI))K.

Then take B = S(KS)K.

8.1 (a) Suppose Kx = Ky. Then for any z, Kxz = Kyz. But Kxz = x

and Kyz=y, sox =y. (b) Suppose Kx = K. Then for any element y, Kxy = Ky, hence x = Ky

(since x = Kxy). Then for any elements yi and Y2, Ky1 = Ky2

Chapter 17. Fixed point properties of combinatory logic

346

(since they are both equal to x), hence yl = Y2 (by (a) above). Thus any elements yl and y2 are the same, so N has only one element. (c) For each positive n, we define Kn, by the following inductive scheme:

K1 = K; K.+1 = KKn, (e.g., K4 = K(K(KK))). We show that for n 0 m, Kn, 0 Km, and hence the set {K1i K2, ... , K,,,, ...} contains infinitely many distinct elements.

It follows from the cancellation law (a) that if K,,,,+i = Kn,+1, then

Km = Kn, (because Kn,,+1 = K,,,+1 implies KKm = KKn,). By repeated applications of this, it follows that if K, = Km+k then K1 = K1+k (e.g., if K3 = K5, then K2 = K4, hence K1 = K3), but K1 = Kl+k means that K = K(Kk), which is not possible unless K is the only element of N. And so if K is not the only element of N, then infinitely many elements are derivable from K.

8.2 We first do (b). Take K1 = TK and K2 = T(KI). Then for any x, K1x = xK and K2x = x(KI). Then Ki(Vxy) = VxyK = Kxy = x; K2(Vxy) = Vxy(KI) = Klxy = ly = y. Thus Ki(Vxy) = x and K2(Vxy) = y.

Part (a) is then immediate, because if Vxy = Vzw then Ki(Vxy) _ Ki(Vzw), hence x = z. Likewise K2(Vxy) = K2(Vzw), hence y = w. 8.3 (a) Take neg = V f t.

(b) Take c = Rf (Rxyz = yzx). (c) Take d = TK.

(d) Take i = RK. (e) Take e = CSneg.

8.4 For an extensional system, we can take neg to be C. 9 We now give combinators that work, but how these were found will be explained in Part II of this chapter.

(a) M(BMM) is idempotent. (b) LM(LM) is idempotent. (c) LLL(LLL)(LLL)(LLL) is idempotent.

347

Solutions

9.1

Again we give the combinators, but explanations are given in Part II.

(a) M(BKM) is voracious. (b) LK(LK) is voracious.

10 (a) We showed in Chapter 2 that if M is present and if the system has the compositional closure property that for any elements a and b there is an element c that composes a with b (i.e., is such that for all x, cx = a(bc)) then every element has a fixed point. More specifically, ono

if y composes x with M, then yy is a fixed point of x. Well, BxM composes x with M, and so BxM(BxM) is therefore a fixed point of x. Hence also M(BxM) is a fixed point of x. We can verify directly: M(BxM) = BxM(BxM) = x(M(BxM)). (b) Since for any x and y, Lxy = x(yy) then we can take Lx for y and we have Lx(Lx) = x(Lx(Lx)), and so Lx(Lx) is a fixed point of x. Discussion ova

We saw in the solution to Problem 7 that M(BTM) and LT(LT) are both elements that have the property of commuting with all elements. Also, they are both fixed points of T! Actually, if A is any fixed point of T, then A commutes with all elements, because then A = TA, hence Ax = TAx = xA. .--.

11 (a) Suppose x is a fixed point of M. Thus Mx = x. But also Mx = xx. Hence xx = x, so x is idempotent. We know by the solution to Problem 9 that for any x, M(BxM) is a fixed point of x. Hence M(BMM) is a fixed point of M, hence is idempotent. Also, LM(LM) is a fixed point of M, so LM(LM) is idempotent. [This is how we found solutions to (a) and (b) of Problem 9.]

(b) Suppose LLx = x. Then L(xx) = x (since LLx = L(xx)). Hence L(xx)x = xx. But L(xx)x = xx(xx), hence xx(xx) = xx, which means that xx is idempotent. Now, a fixed point of LL is L(LL)(L(LL)), which is also LLL(LLL) (since L(LL) = LLL). Therefore LLL(LLL)((LLL)(LLL)) is idempotent. [This is how I found a solution to (c) of Problem 9. Shorter solutions have been found by Larry Wos and others at Argonne National Laboratories. I believe a solution has been found with only seven letters.]

Chapter 17. Fixed point properties of combinatory logic

348

Fm's

11.1 Suppose Kx = x. Then for any y, Kxy = xy, but Kxy = x, hence xy = x, so x is voracious. Since M(MKM) is a fixed point of K, it is voracious. Since LK(LK) is a fixed point of K, it is voracious. [This is how solutions were found for Problem 9.1.] 12 [Proof of F1] Suppose f (x,xl, ... , x,,,) is realizable; let b be an element that realizes it. If now every element has a fixed point, let a be a fixed point of b. Then since a = ba, then ax, ... xn, = baxl ... x = f (a, x1, ... , xn).

[Proof of F21 Suppose that b realizes the function f (xx, yl, ... , yn). Then bxy1 ... yn = f (xx, Y 1 ,- .. , yn) for all x, yl, ... , yn, hence bby1 ... yn = f (bb, yl, . , yn), and so we take a = bb. .

.

,.y

The following is an obvious corollary of Theorem F2. Suppose that for every realizable function f (x, yl, . , yn) the function f (xx, yl, ... , yn) is realizable. Then for every realizable function f (x, yl,... , yn) there is an element a such that for all y1, ... , yn, ayi ... yn = f (a, yl, ... , yn) Obviously every complete applicative system satisfies the hypothesis of this corollary, and also the hypothesis of Theorem F1. . .

E-+

13 (a) We have shown that M(BxM) is a fixed point of x, so we seek a combinator 9 such that 9x = M(BxM) for every x. Well, BxM =

.-,

RMBx, hence M(BxM) = M(RMBx) = BM(RMB)x. Thus BM(RMB) is a sage.

(b) Also, BxM = CBMx, hence BM(CBM) is a sage. (c) We now use the fact that Lx(Lx) is a fixed point of x, and Lx(Lx) _ M(Lx) = BMLx. Hence BML is a sage. Thus also M2L is a sage. (d) Since we can take BWB for L (Problem 3.2(b)) then BM(BWB) is a sage (as the reader can directly verify).

13.1 We again use the fact that M(LX) (which is LX(LX)) is a fixed point of x. But M(Lx) = QLMx, hence QLMx is a fixed point of x, and so QLM is a sage. But we can take L to be QM (Problem 3.2(c)) and so Q(QM)M is a sage. Actually, without bringing L into it, we can reason more directly as follows. We use the first fact that M(BxM) is a fixed point of x. Now,

M(BxM) = M(QMx) = BM(QM)x = Q(QM)Mx, so Q(QM)M is a [Alternatively, we have already shown that BM(CBM) is a sage, but BM(CBM) = BM(QM) = Q(QM)M.] sage.

349

Solutions

40H

As we are at it, let us note that if we take L to be QM, then BxM and Lx are the same element (since BxM = QMx = Lx), and so Lx(Lx) and M(BxM) (which is BxM(BxM)) are really the same.

13.2 (a) If A is an assistant sage, then WA is a sage, since then WAx = Axx, which is a fixed point of x. In fact A is an assistant sage if and only if WA is a sage.

(b) QL(QL) is an assistant sage, for QL(QL)xx = QL(Lx)x = Lx(Lx), which is a fixed point of x. And so W(QL(QL)) is a sage. (c) QoLL is also an assistant sage, for QOLLxx = Lx(Lx). Actually, QoLL

same sage as W(QL(QL)).

-per

is the same element as QL(QL), for Qo = QQ(QQ), so QoLL = QQ(QQ)LL = QQ(QL)L = QL(QL). Thus also W(QoLL) is the (d) M2 is such an X! We already know that M2L is a sage, and we can take M2 to be Q(WQ0)W, as in the solution of Problem 3.3(b). And so Q(WQo)WL is a sage. Actually, this is the same sage as the one we just got in (c), since Q(WQ0)WL = W(WQOL) = W(QoLL) (since WQOL = QoLL).

13.3 (a) GLL is an assistant sage, since also GLLxx = Lx(Lx), which is a fixed point of x. [Thus W(GLL) is a sage, which can also be expressed as W(WGL).]

coax

(b) One way is to use the fact that QL(QL) is an assistant sage and to take CB for Q, thus obtaining CBL(CBL), which can be shortened to B(CBL)L. Another way is to use the fact that GLL is an assistant sage and to take BBC for G, thus obtaining BBCLL, which can be shortened to B(CL)L. [This is more economical than B(CBL)L.]

13.4 We have seen that W(WGL) is a sage, and we can take L to be BWB, and so W(WG(BWB)) is a sage.

13.5 On the one hand, B(CBL)L is an assistant sage (Problem 13.3), so W(B(CBL)L) is a sage, and if we take BWB for L we get W(B (CB(BWB))(BWB)). But it is more economical to take the sage W(WGL) and replace G by BBC (Problem 2.9) and L by BWB, thus getting W(W(BBC)(BWB)).

Chapter 17. Fixed point properties of combinatory logic

350

13.6 SLLx = Lx(Lx), hence SLL is a sage.

13.7 We take BWB for L in SLL, and so S(BWB)(BWB) is a sage. This can be shortened to WS(BWB), which is Curry's expression for a fixed point combinator.

13.8

(a) M(BOM) (b) LO(LO)

(c) M(PM) (d) WP(WP) (e) PII(P(PII)) (from (c), taking P11 for M). [How these were obtained will be revealed after Problem 14.] r-1

1 (a) P is such a combinator. [Pxyz = z(xyz) - Problem 3.5.] Suppose PYY = YY. Then for any element x, PYYx = YYx, but PYYx = x(YYx), hence x(YYx) = YYx, which means that YY is a sage.

(b) Suppose X has this property. Then WX(WX) = X(WX(WX)), and since X has this property, WX(WX) must be a sage. Since P has this property (by (a)) then WP(WP) is a sage, as we have already seen in the last problem.

14 Uxy = y(xxy), so UUy = y(UUy), thus UU is a sage! This is surely the simplest construction of a sage of any. Remarks

Now we shall explain how solutions to Problem 13.8 were found. In the sage UU, if we take PM for U (Problem 4.2(d)), we get PM(PM), which can be shortened to M(PM). This gives (c) of Problem 13.8. We can also take BO for P (Problem 4.1), and then M(PM) becomes M(BOM), which is (a) of Problem 13.8. As for (b) of Problem 13.8, we take LO for U (Problem 4.3(b)), and UU becomes LO(LO). As for (d), we can take WP for U in UU (Problem 4.2(c)), and we get WP(WP) (which we also got as a corollary of the solution of (b) of Problem 13.9).

351

Solutions

14.1 Suppose UX = X X . Then for any element y, UXy = X X y. But also UXy = y(XXy). Hence XXy = y(XXy), so XX is a sage.

Suppose 0 is a sage and that Ax = Ox for every x. Then for every x, x(Ax) = x(Bx), but x(Ox) = Ox (since 0 is a sage), so x(Ax) = 9x, so x(Ax) = Ax (since Ox = Ax), which means that A is a sage. 15

15.1 Suppose 0 is a sage. Then for any x, 09x = x(Ox) = Ox. Thus OBx = Ox for all x, and so by the last problem (taking 00 for A), OB is a sage.

15.2 Suppose A is a fixed point of 0, i.e., OA = A. Then for any x, OAx = Ax, hence x(Ax) = Ax, hence A is a sage. 15.3 Suppose B is a sage. Then for any x, x(Ox) = Ox, thus OBx = Ox. If now the system is extensional, then 00 = 0, which means that B is a fixed point of O.

15.4 Suppose A is a combinator with the property that for any element

x, Ax = O(xx). Then A is certainly a Turing combinator, since then t-+

Axy = O(xx)y = y(xxy). Let us call such a combinator a special Turing combinator. [In an extensional system, there is only one Turing combinator,

and so there is then no difference between a Turing combinator and a special Turing combinator. But we find it of interest to consider applicative systems that are not necessarily extensional.] Now, if A is a special Turing

combinator, then AA is not only a sage, but is even a fixed point of 0 (because, since Ax = O(xx) for every x, then AA = O(AA)). It so happens that the Turing combinators W1P1i LO, WP, and PM that we studied in Problem 4.2 are even special Turing combinators. Let us verify this:.

(a) W1P1x = BWP1x = W(Pix) = W(LQx) = W(Q(xx)) _ BWQ(xx) = O(xx) (since BWQ = 0). (b) LOx = O(xx).

(c) WPx = Pxx = BOxx = O(xx). (d) PMx = BOMx = O(Mx) = O(xx). And so if U is any of these four combinators, then UU is a fixed point of O.

Chapter 17. Fixed point properties of combinatory logic

352

15.5 It is true: BOx = 0(0O)x = x(BOx). [In fact BO = 0(00), so BO is a fixed point of 0, hence is a sage by Problem 15.2.]

Cps

15.6 We use P2i M2, and C. [P2xyz = x(yyz); M2xy = xy(xy).] We take A to be C(M2P2). Then for any x, Ox is a sage, since for all y, Oxy = C(M2P2)xy = M2P2yx = P2y(P2y)x = y(P2y(P2y)x) = y(Axy) Thus for all y, Axy = y(Oxy), so Ox is a sage.

16 (a) We can take L1 = BLB, which in terms of B and W is B(BWB)B. [Also we could take L1 = W1B2, where W1 is BW and B2 is BBB. Thus BW(BBB) could also be taken for L1, but we prefer BLB.] (b) Obvious.

(c) Take M3 = B(WW).

(d) Then M3L1 is a sage. Reduced to B and W, it is B(WW)(B(BW B)B). [Alternatively, we could take B(WW)(BW(BBB)).] 17 (a) Suppose that S is present and that every element has a fixed point. We will first show that A has the recursion property. Well, given an element a, let b be a fixed point of Sa. Then b = Sab. Hence for any x, bx = Sabx, but Sabx = ax(bx), and so bx = ax(bx). Thus A has the recursion property. And more specifically, for any element a, if b is a fixed point of Sa, then for all x, bx = ax(bx). We now take S itself for a, and so if y is any fixed point of SS, then for all x, -yx = Sx(-yx), hence -yxy = Sx(-yx)y = xy(yxy). Thus any fixed point y of SS is a 2-sage.

(b) We know that for any x, Lx(Lx) is a fixed point of x, and so L(SS)(L(SS)) is a fixed point of SS, hence by (a), it is a recursion combinator.

The combinator L(SS) - call it "a" - is an interesting one. It is a proper combinator, since axyz = L(SS)xyz = SS(xx)yz = Sy(xxy)z = yz(xxyz). Thus axyz = yz(xxyz). Taking a for x, we see that aayz = yz(aayz), and so as is indeed a 2-sage. 17.1 We have already constructed a sage B from B and W (Statman's sage, Problem 16) and if B is a sage, then BB is a 2-sage (since BOxy = B(xy) = xy(O(xy)) = xy(BBxy)). [Not so tricky, after all!]

Solutions

353

17.2 Yes, the combinator SS does the job. We have already seen in the solution of Problem 17 that every fixed point of SS is a 2-sage. Conversely,

suppose r is a 2-sage. Then SSPxy = Sx(Px)y = xy(Pxy), but xy(Pxy) _ Pxy (since P is a 2-sage), hence SSPxy = Pxy. If now A is extensional, then SSP = P, which means that r is a fixed point of SS.

17.3 B and W will work: we have seen that if 0 is a sage, then BO is a 2-sage. Actually, for any positive n, if 0 is an n-sage, then BB is an

(n + 1)-sage (as is easily verified). And so, starting with Statman's sage 0 constructed from B and W, BB is a 2-sage, B(BB) is a 3-sage, and so forth.

18 (a) Take el = WC. (b) Take e2 = Cel (= C(WC)). (c) Take S4 = BC(C42) [4)2xyzwv = x(ywv)(zwv) - Problem 3.10.] (d) Take C1 = S4e2e1. (e) Take C2 = S4e1e2.

(f) Take Dl = BQo(Bel). [Qoxyzw = xz(yw) - Problem 2.3.] (g) Take D2 = BG(Bel). [Gxyzw = xw(yz) - Problem 2.7.] 19 Since M and B are present, every element has a fixed point (Problem 10(a)). Given a1 and a2, let b1 be a fixed point of Ba2a1. Thus Ba2a1b1 =

bl, so a2(albl) = b1. Let b2 = albs. Thus a2b2 = b1 and albs = b2-

19.1 We use the combinator P3 of Problem 3.5 satisfying the condition:

P3xyz = y(xzy). We temporarily let A be the combinator LP3 - it is proper and satisfies the condition: Axyz = y(xxzy). Then the presence of A guarantees the cross-point property because for any elements a1 and a2, AAa1a2 = al(AAa2a1), and AAa2a1 = a2(AAala2), and so we take b2 = AAa1a2 and b1 = AAa2a1, and we have a1b1 = b2 and a2b2 = b1. 20

Given a1 and a2, take b1 = Clal(C2a2)(Clal) and b2 = C2a2(C1

ai)(C2a2).

Chapter 17. Fixed point properties of combinatory logic

354

20.1 Given a1 and a2, by hypothesis there are elements cl and c2 such

that for all x and y: clxy = ai(xy)(yx) and c2xy = a2(yx)(xy). Let d1 be a fixed point of cl and d2 a fixed point of c2. Then d1d2 = c1d1d2 = ai(did2)(d2di) and d2d1 = c2d2d1 = a2(did2)(d2di). And so we take bi = did2 and b2 = d2d1.

20.2 By Problem 17, the present hypothesis implies that A has the recursion property (in fact there is even a 2-sage). Then given elements a and b, let e be such that (1)

ex = bx(ex) (for all x)

Next, let c be a fixed point of Sae, so c = Saec = ac(ec). Hence: (2)

c = ac(ec)

By (1), taking c for x, we have ec = bc(ec). Thus c = ac(ec) and ec = bc(ec), so if we let d = ec, we have c = acd and d = bed. [This proof should be compared with that of Theorem 1.2, Chapter 16.]

20.3 Since B and W are present, so is BWB (which can be taken for L), so every element has a fixed point. Given a and b, let e be a fixed point

of BW(Ba). Then for all x, ex = BW(Ba)ex = W(Bae)x = Baexx = a(ex)x. And so we have: (1)

For all x, ex = a(ex)x

Now let d be a fixed point of W(Bbe). Then d = W(Bbe)d = Bbedd = b(ed)d, and so (2)

d = b(ed)d

By (1), ed = a(ed)d. We let c = ed, and we have c = acd and d = bcd.

20.4 Let e be a fixed point of S2V(S2aK1K2)(S2bK1K2). Then e

= S2V(S2aK1K2)(S2bK1K2)e = V(S2aK1K2e)(S2bK1K2e)

= V(a(Kie)(K2e))(b(Kie)(K2e))

Hence Kie = a(Kie)(K2e) and K2e = b(Kie)(K2e). And so we take c = Kie and d = K2e and we have c = acd and d = bcd.

20.5 Let j3 = alai. Then /3xy = x(yy)(,3xy) (because,3xy = alaixy = x(yy)(alaixy) = x(yy)(/3xy)).

Solutions

355

Given elements a and b, let h = a2/3ab. Then hx = a2/3abx = a(xx)(/3bx). Hence hh = a(hh)(/3bh). But also /3bh = b(hh)(/3bh) (since for every x and y, /3xy = x(yy)(/3xy)). And so we let c = hh and d = /3bh, and we have c = acd and d = bcd. 21

(D1C1C2i D2C2C1) is a DM-pair.

21.1 We need combinators A1, A2 satisfying the following conditions:

Aizwxy = x(wzwxy)(zwzxy) A2zwxy = y(zwzxy)(wzwxy) We can take Al = W2(V4D4e2e1) A2 = W3(Vs-b4ele2)

Then (A1A2A1, A2A1A2) is a DM-pair.

21.2 We need combinators 01 and 02 satisfying the following conditions:

Oizwxy = x(zxy)(wxy) O2zwxy = y(zxy)(wxy) We can take 01 = W2(V212); 02 = W3(V31'2). Now, suppose a and b are such that Olab = a and O2ab = b. Then

axy = Olabxy = x(axy)(bxy) bxy = O2abxy = y(axy)(bxy)

Thus (a, b) is a DM-pair

22 We want combinators n1, n2 satisfying the conditions:

nlwzxy = z(wxxy)(wyxy) n2wzxy = z(wwxxy)(wwyxy) We take nl = C(V24D3W(W2C))

n2 = Ln1

Then n2n2 is a nice combinator and ni satisfies (b).

Chapter 17. Fixed point properties of combinatory logic

356

22.1 We want combinators s1, 82 satisfying the conditions:

siwxy = x(wxy)(wyx) s2wxy = x(wwxy)(wwyx) We take

s1 = BW(C(C(BSID2)C)) S2 = Lsi Then 8282 is a symmetric combinator and si satisfies (b).

22.2 (C(BBs)C, C(BsC)) is a DM-pair. (WN, W1(CN)) is a DM-pair.

Chapter 18

Formal combinatory logic In preparation for the fixed point theorems of the next chapter, we must turn to some basics of axiomatic combinatory logic.

Part I The axiom systems CL and CLo §1 Formal combinatory logic For formal combinatory logic we take the five symbols

SK() = together with a denumerable sequence xl, x2, ... , xn, ... of variables (or alternatively, two symbols x and ' and take our variables to be x', x", x"', ... ). The class of terms is inductively defined by the following two conditions

(1) The symbols S and K and all variables are terms. These terms are called simple terms.

(2) For any terms X and Y, the expression (XY) is a term. Thus the set of terms is the smallest set containing S, K, and all variables, which is such that for any members X and Y, the expression (XY) is also a member. Terms that are not simple will be called compound. Thus every compound term is of the form (XY), where X and Y are terms. In displaying terms we often omit parentheses with the understanding that they are to be restored with association to the left, as we informally did in the last chapter, and we will also be free to omit outer parentheses. And so, e.g., the expression SKx3(KS) is simply an abbreviation for the term (((SK)x3)(KS)).

Chapter 18. Formal combinatory logic

358

By an equation, or formula, we shall mean an expression of the form X = Y, where X and Y are terms. We now consider the following formal axiom system CL. The axioms of CL are all formulas of one of the following

three forms (where X, Y, and Z are any terms):

Al: SXYZ = XZ(YZ) A2: KXY = X

A3: X=X The inference rules of CL are the following:

R1: From X = Y to infer Y = X.

R2: From X = Y and Y = Z to infer X = Z. R3: From X = Y to infer XZ = YZ. R4: From X = Y to infer ZX = ZY. By a proof in CL is meant a finite sequence F1, . . . , Fn, of formulas such that each member of the sequence is either an axiom of CL, or is derivable from an earlier member by R1, R3, or R4, or is derivable from two earlier members by R2. A formula is said to be provable in CL, or to be a theorem

of CL, if it is a member of a sequence that constitutes a proof in CL. Equivalently, the class of theorems of CL is the least class that contains all axioms of CL and is closed under the rules R1-R4.

Two terms X and Y are called equivalent - in symbols, X - Y - if the formula X = Y is provable in CL. The system CLo

`S~'

"yam

By a combinator is meant a term without variables. Thus all combinators are built from the four symbols S, K, (,). By a sentence is meant a formula in which no variable occurs. Thus a sentence is an expression of the form X = Y, where X and Y are combinators. We let CLo be the system CL whose axioms are restricted to sentences. Thus all theorems of CLo are sentences. Our main interest is in the system CLo. It is very easy to construct an elementary formal system over the alphabet {S, K, (, ), =} in which can be represented the set of combinators, set of sentences, set of axioms, and set of theorems of CLo (Exercise 1 below). Thus CLo is a formal system, and so the set of theorems of CLo is recursively enumerable (under any admissible Godel numbering). The combinators I, B, C, W, and others derivable from them, are definable in terms of S and K, as shown in the last chapter. All results on


applicative systems informally proved in the last chapter can be formally proved in CLo; for example, if we take any of the numerous sages Θ constructed in the last chapter (for example (Q(QM)M)) and write it down explicitly in terms of S and K, then for any combinator X, the sentence ΘX = X(ΘX) is formally provable in CLo.
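Such verifications can be mechanized by treating A1 and A2 as left-to-right rewrite rules. The Python sketch below (illustrative only, not the formal proof system of the text) reduces closed terms and checks two of the definability claims above: SKK behaves as I, and S(KS)K behaves as B. All helper names are assumptions of this sketch.

    def ap(*ts):
        r = ts[0]
        for t in ts[1:]:
            r = (r, t)
        return r

    def step(t):
        """One reduction step; returns (new_term, True) or (t, False)."""
        if isinstance(t, tuple):
            # A2 (K-rule):  K X Y  ->  X
            if isinstance(t[0], tuple) and t[0][0] == "K":
                return t[0][1], True
            # A1 (S-rule):  S X Y Z  ->  X Z (Y Z)
            if (isinstance(t[0], tuple) and isinstance(t[0][0], tuple)
                    and t[0][0][0] == "S"):
                x, y, z = t[0][0][1], t[0][1], t[1]
                return ap(x, z, ap(y, z)), True
            for i in (0, 1):            # otherwise reduce inside a subterm
                new, changed = step(t[i])
                if changed:
                    return ((new, t[1]) if i == 0 else (t[0], new)), True
        return t, False

    def normalize(t, limit=1000):
        for _ in range(limit):
            t, changed = step(t)
            if not changed:
                return t
        raise RuntimeError("no normal form found within the step limit")

    I = ap("S", "K", "K")                    # I = SKK
    B = ap("S", ap("K", "S"), "K")           # B = S(KS)K
    assert normalize(ap(I, "z")) == "z"                            # Iz   -> z
    assert normalize(ap(B, "x", "y", "z")) == ap("x", ("y", "z"))  # Bxyz -> x(yz)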

Exercise 1 Show that the set of theorems of CLo is formally representable. Do the same for CL (couched in the finite alphabet {S, K, (, ), =, x, '}).

Induction principle

To show that a given property of terms holds for all terms, it suffices to show:

(1) The property holds for all simple terms (S, K, and variables).

(2) For any terms X and Y, if the property holds for X and Y, then it holds for (XY).

To show that a given property holds for all combinators, it suffices to show:

(1) The property holds for S and K.

(2) For any combinators X and Y, if the property holds for X and for Y, it holds for (XY).

Combinatory completeness

We stated without proof in the last chapter that any applicative system containing S and K is combinatorially complete. In terms of formal combinatory logic, this means that for any term W and any variables y1, ..., yn that include all variables in W, there is a combinator A such that the formula Ay1...yn = W is provable in CL (and hence for any combinators X1, ..., Xn, the sentence AX1...Xn = W(X1, ..., Xn) is provable, where W(X1, ..., Xn) is the result of substituting the combinators X1, ..., Xn for the variables y1, ..., yn respectively in W). Let us now prove this.

We take I to be SKK, and so for any term X, the formula IX = X is provable in CL. Now, let x be any variable and W any term (in which x may or may not happen to appear). We define the term Wx (also written λx W) by the following inductive scheme:

(1) If W is x itself, we take Wx = I.


(2) If W is any simple term other than x itself, we take Wx = KW.

(3) For a compound term WV, we take (WV)x = SWxVx.

By easy induction arguments we see:

Proposition 1

(a) The variable x does not occur in Wx.

(b) All variables of Wx occur in W.

(c) The formula Wxx = W is provable in CL.


Parts (a) and (b) are obvious. To see that (c) holds, if W is a simple term, then (c) holds by (1) and (2) above. For a compound term WV, if Wxx ≡ W and Vxx ≡ V, then (WV)xx = SWxVxx ≡ Wxx(Vxx) ≡ WV.

Now we wish to show the combinatory completeness of CL, in the sense that for any term W and for any variables y1, ..., yn which include all the variables of W, there is a combinator A such that the formula Ay1...yn = W is provable in CL. Well, for any term W and any variables y1, ..., yn, we write W_{y1y2...yn} for (...((W_{y1})_{y2})...)_{yn}. By mathematical induction, Proposition 1 yields:

Proposition 2

(1) None of the variables y1, ..., yn occur in W_{y1y2...yn}.

(2) All variables of W_{y1y2...yn} occur in W.

(3) W_{yn yn-1 ... y1} y1 ... yn ≡ W.

If now y1, ..., yn includes all the variables of W, then by (1) above, W_{yn...y1} contains no variables, hence is a combinator A, and by (3), Ay1...yn ≡ W. Thus CL is combinatorially complete.
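The abstraction scheme (1)-(3) and its iteration in Proposition 2 are entirely mechanical. The following Python sketch (illustrative only, not from the text; term representation as in the earlier sketches) computes Wx and then iterates it in reverse order of the variables to produce the combinator A.

    def ap(*ts):
        r = ts[0]
        for t in ts[1:]:
            r = (r, t)
        return r

    def abstract(x, w):
        """W |-> Wx:  (1) x_x = I = SKK,  (2) Wx = KW for other simple W,
        (3) (WV)x = S Wx Vx."""
        if w == x:
            return ap("S", "K", "K")                              # clause (1)
        if isinstance(w, str):
            return ("K", w)                                       # clause (2)
        return ap("S", abstract(x, w[0]), abstract(x, w[1]))      # clause (3)

    def combinator_for(w, variables):
        """Abstract the variables in reverse order (Proposition 2(3)); the
        result contains no variables and applied to y1 ... yn yields W."""
        a = w
        for v in reversed(variables):
            a = abstract(v, a)
        return a

    # Example: W = y2(y1 y2); A y1 y2 reduces to W (checkable with the reducer above).
    W = ap("y2", ("y1", "y2"))
    A = combinator_for(W, ["y1", "y2"])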

§2 Consistency and models


By a non-trivial applicative system is meant an applicative system in which N contains more than one element. By a model for CL is meant a non-trivial applicative system that is combinatorially complete. Now, the consistency of CL - in the sense that not all formulas are provable - has been known for some time, but this was not enough to ensure the existence of a model. The existence of models for CL was established by Dana Scott[25]. Simplified models were subsequently found, and the following one is a variant by Meyer[9] of one due to Plotkin[12] as reported in Rosser[23].


Start with a non-empty set A and extend it to the least set B such that for every element x ∈ B and every finite subset β of B, the ordered pair (β, x) is an element of B. The model consists of all subsets of B, and for any subsets C and D of B, we take (CD) to be the set of all ordered pairs (β, x) such that β is a finite subset of D, x is an element of B, and (β, x) ∈ C. We then take K to be the set of all (α, (β, x)) such that α, β are finite subsets of B and x ∈ α. We take S to be the set of all (α, (β, (γ, x))) such that α, β, γ are finite subsets of B and x ∈ αγ(βγ). It is not too difficult to verify that for any subsets C, D, E of B, KCD = C and SCDE = CE(DE). We thus have a model of CL.

§3 Extensional combinatory logic

No more sentences (formulas without variables) are provable in CL than in CLo, but a host of other sentences become provable if we add to CL the following "extensionality" rule:

R5 From Wx = Vx to infer W = V, where W, V are terms and x is a variable that does not occur in either W or V.

With R5 added, the resulting system - extensional combinatory logic - is abbreviated "CL+ext". With R5 added, one can prove, for example, the sentence SKK = SKS (because SKKx = SKSx is provable in CL, since SKKx = x and SKSx = x are both provable in CL).
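The two reductions behind that example are easy to check mechanically. Here is a tiny illustrative check (not the book's formal derivation) that reads S and K as Python functions obeying the S- and K-rules.

    # S f g z -> f z (g z);  K a b -> a
    S = lambda f: lambda g: lambda z: f(z)(g(z))
    K = lambda a: lambda b: a
    x = object()
    assert S(K)(K)(x) is x        # SKKx reduces to x
    assert S(K)(S)(x) is x        # SKSx reduces to x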


As we have said, our main interest is in CLo, and many sentences not provable in CLo are provable in CL+ext. And now comes something amazing: there are four sentences X1, X2, X3, X4, provable in CL+ext but not provable in CL, such that if they are added as further axioms to CLo, then all sentences provable in CL+ext become provable! In fact, if we add X1-X4 as axioms to those of CL, the resulting axiom system - which we call CL1 - is equivalent to CL+ext, in the sense that the same formulas are provable in CL1 as in CL+ext. We will not need this remarkable fact for the next chapter, but its proof is a real gem, and so we give a version of it in the appendix that follows - a version that we believe to be particularly easy to understand by those not familiar with the result.

Appendix on CL1

We let x, y, z be any distinct variables. We first observe that the following four formulas are provable in CL+ext:


Q1: S(Kx)(Ky) = K(xy)
Q2: S(Kx)I = x
Q3: S(S(KK)x)y = x
Q4: S(S(S(KS)x)y)z = S(Sxz)(Syz)

That these are provable in CL+ext follows from the fact that the following four formulas are provable in just CL (which the reader can easily verify):

(1) S(Kx)(Ky)z = K(xy)z
(2) S(Kx)Iy = xy
(3) S(S(KK)x)yz = xz
(4) S(S(S(KS)x)y)zw = S(Sxz)(Syz)w

By the completeness of CL, there are combinators θ1, θ2, I', K', ψ1, ψ2 such that the following six formulas are provable in CL:

(1a) θ1xy = S(Kx)(Ky)
(1b) θ2xy = K(xy)
(2) I'x = S(Kx)I
(3) K'xy = S(S(KK)x)y
(4a) ψ1xyz = S(S(S(KS)x)y)z
(4b) ψ2xyz = S(Sxz)(Syz)

[Specifically, we can take θ1 = C(BB(BSK))K, θ2 = BK, I' = C(BSK)I, K' = BS(S(KK)), ψ1 = BB(BS)(BS(SKS)), ψ2 = C(BB(B(S(B(BS)S)))). But this particular choice is inessential.]
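The reader-verifiable formulas (1)-(4) above can also be checked by machine. The following illustrative Python sketch (not from the text) evaluates both sides under the S- and K-rules, with free variables modelled as symbolic objects that merely record what they are applied to.

    S = lambda f: lambda g: lambda z: f(z)(g(z))
    K = lambda a: lambda b: a
    I = S(K)(K)

    class Var:
        """A free variable; applying it just records the argument."""
        def __init__(self, name, args=()):
            self.name, self.args = name, tuple(args)
        def __call__(self, other):
            return Var(self.name, self.args + (other,))
        def __eq__(self, other):
            return (isinstance(other, Var) and self.name == other.name
                    and self.args == other.args)

    x, y, z, w = Var("x"), Var("y"), Var("z"), Var("w")
    assert S(K(x))(K(y))(z) == K(x(y))(z)                              # (1)
    assert S(K(x))(I)(y) == x(y)                                       # (2)
    assert S(S(K(K))(x))(y)(z) == x(z)                                 # (3)
    assert S(S(S(K(S))(x))(y))(z)(w) == S(S(x)(z))(S(y)(z))(w)         # (4)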

Then by virtue of Q1-Q4, the following formulas are provable in CL+ext:

Q1': θ1xy = θ2xy
Q2': I'x = Ix (since x = Ix is provable)
Q3': K'xy = Kxy (since x = Kxy is provable)
Q4': ψ1xyz = ψ2xyz

Then by further applications of R5 (two of which are required for Q1' and Q3', and three of which are required for Q4') the following four sentences are provable in CL+ext:

X1: θ1 = θ2
X2: I' = I
X3: K' = K
X4: ψ1 = ψ2

These are the "mystery" sentences X1-X4! We let CL1 be CL with X1-X4 added as axioms. Since X1-X4 are provable in CL+ext, all formulas provable in CL1 are provable in CL+ext. We now wish to show that everything provable in CL+ext is provable in CL1. And for this it suffices to show that CL1 is closed under rule R5, i.e., that if Wx = Vx is provable in CL1 and if the variable x does not occur in W or in V, then W = V is provable in CL1.

Since X1-X4 are provable in CL1, then Q1-Q4 are provable in CL1. The formulas Q1-Q4 are the important ones for the proof that follows.

For any terms W1 and W2, we write W1 ≡ W2 to mean now that W1 = W2 is provable in the system CL1. For any term W and variable x, we define Wx as in the proof of combinatory completeness. Until further notice, we let x be a variable fixed for the entire discussion, and it will be notationally convenient to write W* for Wx. Thus, if W = x, then W* = I; if W is any simple term other than the variable x, W* = KW; and for a compound term, (W1W2)* = SW1*W2*. Now we shall see how the formulas Q1-Q4 enter the picture. [Of course they also hold with x, y, z replaced by any terms whatsoever.]

Fact 1 If x does not occur in W, then W* ≡ KW.

Proof By induction on the structure of W: if W is a simple term distinct from x, then W* is KW, hence W* ≡ KW. Now suppose W is a compound term W1W2, where W1 and W2 are both free from x and both such that W1* ≡ KW1 and W2* ≡ KW2. Then W* = SW1*W2* ≡ S(KW1)(KW2), but S(KW1)(KW2) ≡ K(W1W2) (by Q1!). Hence W* ≡ KW (since W is W1W2). This completes the induction.

Fact 2 If x does not occur in W, then (Wx)* ≡ W.

Proof Suppose that x doesn't occur in W. Then by Fact 1, W* ≡ KW. Now, (Wx)* = SW*x* ≡ S(KW)x* = S(KW)I. But S(KW)I ≡ W (by Q2), and so (Wx)* ≡ W.

Fact 3 (KW1W2)* ≡ W1*.

Proof (KW1W2)* = S(S(KK)W1*)W2* (see note below) ≡ W1* (by Q3).


Note

(KW1W2)* = S(KW1)*W2* = S(SK*W1*)W2* = S(S(KK)W1*)W2* (since K* = KK).

Fact 4 (SW1W2W3)* ≡ (W1W3(W2W3))*.

Proof (SW1W2W3)* = S(S(S(KS)W1*)W2*)W3* (see note below) ≡ S(SW1*W3*)(SW2*W3*) (by Q4) = S(W1W3)*(W2W3)* = (W1W3(W2W3))*.

Note

(SW1W2W3)* = S(SW1W2)*W3* = S(S(SW1)*W2*)W3* = S(S(SS*W1*)W2*)W3* = S(S(S(KS)W1*)W2*)W3* (since S* = KS).

Having proved Facts 1-4, the formulas Q1-Q4 (and hence the sentences X1-X4) have now served their purpose, and it will no longer be necessary to refer to them. If F is a formula W = V, by F* we shall mean the formula W* = V*. We now aim to prove that if F is provable in CL1, so is F*.

Lemma 0 If F is provable in CL1 and F contains no variables, then F* is provable in CL1.

Proof Suppose W = V is provable in CL1, and W and V contain no variables. Then KW = KV is provable in CL1; but since neither W nor V contains any variables, W* = KW and V* = KV are both provable in CL1 (by Fact 1). Hence W* = V* is provable in CL1.

Lemma 1 If F is any axiom of CL1, then F* is provable in CL1.

Proof Suppose F is an axiom of CL1. If F is one of the sentences X1-X4, then F contains no variables, hence F* is provable in CL1 by Lemma 0. If F is an axiom of type A3, i.e., if F is W = W for some term W, then F* is the formula W* = W*, which of course is provable in CL1.

If F is of type A2, i.e., of the form KW1W2 = W1, then F* is (KW1W2)* = W1*, which is provable in CL1 by Fact 3. If F is of type A1, i.e., of the form SW1W2W3 = W1W3(W2W3), then F* is (SW1W2W3)* = (W1W3(W2W3))*, which is provable in CL1 by Fact 4. This takes care of all possible cases.


Lemma 2 If W1* = W2* is provable in CL1, so are (W1V)* = (W2V)* and (VW1)* = (VW2)*.

Proof This is obvious. Suppose W1* = W2* is provable in CL1. Then so is SW1*V* = SW2*V*, which means that (W1V)* = (W2V)* is provable in CL1. Likewise SV*W1* = SV*W2* is provable, hence (VW1)* = (VW2)* is provable.

Now, let CL1* be the axiom system CL1 with the following axioms and rules added: for every axiom F of CL1 we take F* as an additional axiom of CL1*. Then we add the rules that from W1* = W2* we may infer (W1V)* = (W2V)* and (VW1)* = (VW2)*. By Lemma 2, these two new inference rules do not enlarge the class of provable formulas, and by Lemma 1, the new axioms of CL1* are already provable in CL1; therefore all formulas provable in CL1* are provable in CL1 (though a proof of a formula in CL1 will in general be considerably longer than a proof of it in CL1*). Now, suppose F1, ..., Fn is a sequence of formulas that constitutes a proof in CL1. Then the sequence F1*, ..., Fn* is a proof in CL1*. Therefore, if F is provable in CL1, then F* is provable in CL1*; but since everything provable in CL1* is provable in CL1, we have

Theorem A0 If F is provable in CL1, so is F*.

Now we easily get:


Theorem A [Main result] If Wx = Vx is provable in CL1 and if x does not appear in W or in V, then W = V is provable in CL1.

Proof Assume the hypothesis. Since Wx = Vx is provable in CL1, so is (Wx)* = (Vx)* (by Theorem A0). But since x doesn't appear in either W or V, (Wx)* = W and (Vx)* = V are provable in CL1 (by Fact 2), hence W = V is provable in CL1.

This shows that the systems CL1 and CL+ext are equivalent.

Chapter 19

A second variety of fixed point theorems

There is a result in combinatory logic known as the "second fixed point theorem", whose statement, proof, and applications to undecidability will be dealt with in Part II of this chapter. It is based on methods of naming combinators within combinatory logic itself, and this fits in ideally with the overall purpose of this volume. Part I of this chapter is devoted to adequate naming functions. The second fixed point theorem itself has a host of variants, all of which have pleasant generalizations to sequential systems, which will be dealt with in our final chapter.


Part I  Self-reference in CLo

As we stated in the last chapter, our main interest is in the system CLo; extensionality is not needed for what we will be doing. For any combinators X and Y, by X ≡ Y we now mean that the sentence X = Y is provable in CLo.

§1 Admissible name functions

Let us consider a 1-1 mapping Π that assigns to each combinator X a combinator ⌜X⌝ that we will regard as the name of X. We shall call the mapping admissible (as a name function) if there are combinators Λ and δ such that for all combinators X and Y the following two conditions hold:

C1: Λ⌜X⌝⌜Y⌝ ≡ ⌜XY⌝

C2: δ⌜X⌝ ≡ ⌜⌜X⌝⌝

We remark that the usual procedure to obtain an admissible name function is to first assign to each natural number n a combinator n̄ as its representative, then to take an effective Godel numbering of all combinators, and then to take ⌜X⌝ to be n̄, where n is the Godel number of X. But we shall take a more direct approach, briefly comparing it later on with the standard approach. Before we exhibit a specific admissible name function, we wish to establish some general principles about name functions. In preparation for this, let us recall a few things. First, we are defining t to be K and f to be KI (§8, Chapter 17), and for any combinators X and Y, tXY ≡ X and fXY ≡ Y. We are taking neg to be Vft (V satisfies the condition Vxyz ≡ zxy), and we recall that neg t ≡ f and neg f ≡ t. We are taking the ordered pair [X, Y] to be VXY, and K1 to be TK and K2 to be TKI (T, not to be confused with t, satisfies the condition Txy ≡ yx), and we recall that K1[X, Y] ≡ X and K2[X, Y] ≡ Y. We recall that the combinators S and K are called simple and all other combinators are called compound.

Pre-admissible mappings

We shall call the mapping X → ⌜X⌝ pre-admissible if there are combinators Z1, Z2, σ1, σ2 satisfying the following conditions:

P1:
(1) Z1⌜K⌝ ≡ t, but for any X ≠ K, Z1⌜X⌝ ≡ f
(2) Z2⌜S⌝ ≡ t, but for any X ≠ S, Z2⌜X⌝ ≡ f

P2: σ1⌜XY⌝ ≡ ⌜X⌝ and σ2⌜XY⌝ ≡ ⌜Y⌝

We call such an element Z1 a K-tester and such an element Z2 an S-tester. The combinators σ1 and σ2 might be called predecessor combinators. [We can think of ⌜X⌝ and ⌜Y⌝ as the first predecessor and second predecessor of ⌜XY⌝ respectively.] Now comes a lovely result, whose proof uses a modification of a brilliant use of fixed points due to Alan Turing[36].

Theorem R [A recursion principle] If the mapping X → ⌜X⌝ is pre-admissible, then for any combinator a and any combinators a1 and a2, there is a combinator β such that:

(1) β⌜K⌝ ≡ a1 and β⌜S⌝ ≡ a2
(2) For any compound combinator XY, β⌜XY⌝ ≡ a(β⌜X⌝)(β⌜Y⌝)

Proof Given a, a1, a2, by the fixed point theorem (Theorem F, §12, Chapter 17) there is a combinator β satisfying the following condition:

βx ≡ Z1xa1(Z2xa2(a(β(σ1x))(β(σ2x))))

Since Z1⌜K⌝ ≡ t, it is easily seen that β⌜K⌝ ≡ a1. Since Z1⌜S⌝ ≡ f and Z2⌜S⌝ ≡ t, β⌜S⌝ ≡ a2. Next, for a compound combinator XY, since Z1⌜XY⌝ ≡ f and Z2⌜XY⌝ ≡ f, we have β⌜XY⌝ ≡ a(β(σ1⌜XY⌝))(β(σ2⌜XY⌝)) ≡ a(β⌜X⌝)(β⌜Y⌝).

As our first application of the recursion principle we have:

Theorem 1.1 Suppose that the mapping X → ⌜X⌝ is pre-admissible and that there is a combinator Λ' such that for all X and Y: Λ'⌜⌜X⌝⌝⌜⌜Y⌝⌝ ≡ ⌜⌜XY⌝⌝. Then the mapping satisfies condition C2 of admissibility, i.e., there is a combinator δ such that for all X: δ⌜X⌝ ≡ ⌜⌜X⌝⌝.

Proof By Theorem R, taking a = Λ', a1 = ⌜⌜K⌝⌝, a2 = ⌜⌜S⌝⌝, there is a combinator δ (called "β" in Theorem R) such that δ⌜K⌝ ≡ ⌜⌜K⌝⌝, δ⌜S⌝ ≡ ⌜⌜S⌝⌝, and δ⌜XY⌝ ≡ Λ'(δ⌜X⌝)(δ⌜Y⌝). [Specifically, we can take δ satisfying the condition δx ≡ Z1x⌜⌜K⌝⌝(Z2x⌜⌜S⌝⌝(Λ'(δ(σ1x))(δ(σ2x)))).] The result then follows by induction. We already have δ⌜K⌝ ≡ ⌜⌜K⌝⌝ and δ⌜S⌝ ≡ ⌜⌜S⌝⌝. Now suppose that X and Y are such that δ⌜X⌝ ≡ ⌜⌜X⌝⌝ and δ⌜Y⌝ ≡ ⌜⌜Y⌝⌝. Then δ⌜XY⌝ ≡ Λ'(δ⌜X⌝)(δ⌜Y⌝) ≡ Λ'⌜⌜X⌝⌝⌜⌜Y⌝⌝ ≡ ⌜⌜XY⌝⌝. This completes the induction.

Corollary 1.2 A sufficient condition for a mapping X → ⌜X⌝ to be admissible is that it be pre-admissible and that there are combinators Λ and Λ' such that for all combinators X and Y:

Λ⌜X⌝⌜Y⌝ ≡ ⌜XY⌝
Λ'⌜⌜X⌝⌝⌜⌜Y⌝⌝ ≡ ⌜⌜XY⌝⌝

More on admissibility

The following will be useful:

Proposition 1.3 If Π is admissible then for every n ≥ 2 there is a combinator Λn such that for all X1, ..., Xn: Λn⌜X1⌝...⌜Xn⌝ ≡ ⌜X1...Xn⌝.

Proof By induction on n ≥ 2: we take Λ2 = Λ. Now suppose there is a combinator Λn satisfying the condition Λn⌜X1⌝...⌜Xn⌝ ≡ ⌜X1...Xn⌝. We then take Λn+1 satisfying the condition Λn+1x1...xnxn+1 ≡ Λ(Λnx1...xn)xn+1. Then Λn+1⌜X1⌝...⌜Xn⌝⌜Xn+1⌝ ≡ Λ(Λn⌜X1⌝...⌜Xn⌝)⌜Xn+1⌝ ≡ Λ⌜X1...Xn⌝⌜Xn+1⌝ ≡ ⌜X1...XnXn+1⌝.

Strong admissibility

We shall call a mapping X → ⌜X⌝ strongly admissible if it is admissible and if also there is a combinator H such that for every combinator X:

H⌜X⌝ ≡ X

Theorem 1.4 If the mapping X → ⌜X⌝ is pre-admissible then there is a combinator H satisfying the condition H⌜X⌝ ≡ X.

Proof Suppose the mapping is pre-admissible. Then by Theorem R, taking a = I, a1 = K, a2 = S, there is a combinator H such that H⌜K⌝ ≡ K, H⌜S⌝ ≡ S, and H⌜XY⌝ ≡ I(H⌜X⌝)(H⌜Y⌝) - hence H⌜XY⌝ ≡ (H⌜X⌝)(H⌜Y⌝). By an obvious induction, H⌜X⌝ ≡ X for every X.

Corollary 1.5 If the mapping X → ⌜X⌝ is both pre-admissible and admissible then it is strongly admissible.

Remarks

We do not know whether or not every admissible mapping is strongly admissible. There are certain fixed point theorems of this chapter whose hypothesis requires only admissibility, and others whose hypothesis appears to require strong admissibility. As a point of minor interest, we have:

Proposition 1.6 If the mapping X → ⌜X⌝ is strongly admissible then there is a combinator Λ' satisfying the condition Λ'⌜⌜X⌝⌝⌜⌜Y⌝⌝ ≡ ⌜⌜XY⌝⌝.

Proof Take Λ' such that Λ'xy ≡ δ(Λ(Hx)(Hy)). Then Λ'⌜⌜X⌝⌝⌜⌜Y⌝⌝ ≡ δ(Λ(H⌜⌜X⌝⌝)(H⌜⌜Y⌝⌝)) ≡ δ(Λ⌜X⌝⌜Y⌝) ≡ δ⌜XY⌝ ≡ ⌜⌜XY⌝⌝.

Full admissibility

A trivial example of a strongly admissible map is the identity function. We shall now define a mapping X → ⌜X⌝ to be fully admissible if it is both admissible and pre-admissible. A fully admissible mapping is strongly admissible by Corollary 1.5.


More on pre-admissibility

For applications, the following proposition will prove handy:

Proposition 1.7 Suppose that there are combinators a1 and a2 such that

(1) If X is simple (K or S) then a1⌜X⌝ ≡ t, but if X is compound, then a1⌜X⌝ ≡ f.
(2) a2⌜K⌝ ≡ t and a2⌜S⌝ ≡ f.

Then there is a K-tester Z1 and an S-tester Z2.

Proof Take Z1 such that Z1x ≡ a1x(a2x)f and Z2 such that Z2x ≡ a1x(neg(a2x))f. Then it is easily verified that Z1 is a K-tester and Z2 is an S-tester.

The above with earlier results gives the main result of this section.

Theorem I A sufficient condition for a mapping X → ⌜X⌝ to be fully admissible (and hence also strongly admissible) is that there are combinators σ1, σ2, a1, a2, Λ, Λ' such that for all combinators X and Y:

σ1⌜XY⌝ ≡ ⌜X⌝
σ2⌜XY⌝ ≡ ⌜Y⌝
a1⌜X⌝ ≡ t if X is simple, otherwise a1⌜X⌝ ≡ f
a2⌜K⌝ ≡ t and a2⌜S⌝ ≡ f
Λ⌜X⌝⌜Y⌝ ≡ ⌜XY⌝
Λ'⌜⌜X⌝⌝⌜⌜Y⌝⌝ ≡ ⌜⌜XY⌝⌝

§1.1 Construction of a fully admissible map

We now exhibit a concrete name function X → ⌜X⌝ and show that it is fully admissible. We inductively define ⌜X⌝ as follows:

⌜K⌝ = [t, t]  (= VKK)
⌜S⌝ = [t, f]  (= VK(KI))
⌜XY⌝ = [f, [⌜X⌝, ⌜Y⌝]]  (= Vf(V⌜X⌝⌜Y⌝))

We see that every name ⌜X⌝ is an ordered pair whose first component is t if X is simple, and f if X is compound. We must now find combinators


σ1, σ2, a1, a2, Λ, Λ' such that the sufficient condition for full admissibility given by Theorem I holds. We take σ1, σ2 such that for all x, σ1x ≡ K1(K2x) and σ2x ≡ K2(K2x). Then it is obvious that σ1⌜XY⌝ ≡ ⌜X⌝ and σ2⌜XY⌝ ≡ ⌜Y⌝ (since K2⌜XY⌝ ≡ [⌜X⌝, ⌜Y⌝]).

As for a1 and a2, if X is simple, then K1⌜X⌝ ≡ t, otherwise K1⌜X⌝ ≡ f, and so we take a1 = K1. Also, K2⌜K⌝ ≡ t and K2⌜S⌝ ≡ f, and so we take a2 = K2. We now have σ1, σ2, a1, a2, and so our mapping is pre-admissible (since by Proposition 1.7 there is then a K-tester Z1 and an S-tester Z2). As for Λ, we obviously take Λ satisfying the condition Λxy ≡ Vf(Vxy). Then Λ⌜X⌝⌜Y⌝ ≡ ⌜XY⌝. As for Λ', we note that ⌜⌜XY⌝⌝ = Vf(V⌜Vf⌝(Vf(V(Vf(V⌜V⌝⌜⌜X⌝⌝))⌜⌜Y⌝⌝))) (as the reader can verify by direct computation). And so we take Λ' such as to satisfy the condition:

Λ'xy ≡ Vf(V⌜Vf⌝(Vf(V(Vf(V⌜V⌝x))y)))

Thus our mapping is fully admissible.
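The data flow of this concrete name function is easy to model. The Python sketch below (illustrative only, not from the text) represents the ordered pair [X, Y] as a Python tuple and t, f as the strings "t", "f"; it shows the roles played by a1, a2, σ1, σ2 on names, not the combinators K1, K2 themselves.

    def name(term):
        """The name: K |-> [t,t], S |-> [t,f], XY |-> [f, [name(X), name(Y)]]."""
        if term == "K":
            return ("t", "t")
        if term == "S":
            return ("t", "f")
        x, y = term
        return ("f", (name(x), name(y)))

    def is_simple(n):          # role of a1: t on names of simple combinators
        return n[0] == "t"
    def which_simple(n):       # role of a2: t on the name of K, f on that of S
        return n[1]
    def first_pred(n):         # role of sigma1: name of X, given the name of XY
        return n[1][0]
    def second_pred(n):        # role of sigma2: name of Y, given the name of XY
        return n[1][1]

    skk = (("S", "K"), "K")
    n = name(skk)
    assert not is_simple(n)
    assert first_pred(n) == name(("S", "K")) and second_pred(n) == name("K")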

Exercise 1 Show that the mapping X → ⌜X⌝ that we have constructed is formally representable over the alphabet of CLo.

*§2 Admissible maps via numeral systems

A more standard type of admissible mapping uses a numeral system together with a Godel numbering, which we now briefly sketch. By a numeral system is meant a 1-1 map that assigns to each natural number n a combinator n̄ such that there exist combinators σ and Z such that:

(1) Z0̄ ≡ t, but for all n ≠ 0, Zn̄ ≡ f
(2) For every natural number n, σn̄ is the numeral of n+ (where n+ = n + 1)

Such a combinator σ is called a successor combinator and such a combinator Z is called a zero tester. A specific numeral system used in Barendregt[1] is inductively defined as follows. We take 0̄ = I and, for any n, we take the numeral of n+ to be [f, n̄]. This is indeed a numeral system, since we can take σ = Vf and Z = Tt. Then obviously Vfn̄ = [f, n̄], the numeral of n+; and Z0̄ = TtI ≡ It ≡ t, but Z[f, n̄] = Tt(Vfn̄) ≡ Vfn̄t ≡ tfn̄ ≡ f. Thus σ is a successor combinator and Z is a zero-tester.

In this numeral system there is a crucial combinator P - called a predecessor combinator - which takes the numeral of n+ to n̄ for every n; namely, take


P = Tf. Then P[f, n̄] = Tf(Vfn̄) ≡ Vfn̄f ≡ ffn̄ ≡ n̄. [Note also that P0̄ ≡ f.]
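The arithmetic of this numeral system is just pair manipulation. The following illustrative Python sketch (not from the text) models 0̄ as I and the numeral of n+ as the pair [f, n̄]; it shows the data flow of the successor, zero-tester, and predecessor, not the combinators Vf, Tt, Tf themselves.

    ZERO = "I"

    def suc(n):                 # role of sigma = Vf:  n-bar |-> [f, n-bar]
        return ("f", n)

    def is_zero(n):             # role of Z = Tt: true exactly on the numeral 0
        return n == ZERO

    def pred(n):                # role of P = Tf:  [f, n-bar] |-> n-bar
        return n[1]

    three = suc(suc(suc(ZERO)))
    assert not is_zero(three)
    assert pred(three) == suc(suc(ZERO))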


Now, a combinator a is said to define a numerical function f(x1, ..., xn) (with reference to a given numeral system) if for all numbers a1, ..., an, the term aā1...ān is equivalent to the numeral of f(a1, ..., an). And f is said to be λ-definable if some combinator a defines it. The beautiful thing is that the λ-definable functions are precisely the recursive functions! This is obvious in one direction. From the fact that CLo is axiomatizable (the set of provable sentences is formally representable) it easily follows that every λ-definable function is recursive (*Exercise 2, below). As for the converse, the key facts are that primi-

tive recursion and minimization can be captured in combinatory logic (see *Exercise 3 below). Under any of the standard Godel numberings, there are recursive functions con(x, y) and num(x) such that for any combinators X and Y with respective Godel numbers x and y, the number con(x, y) is the Godel number of (XY) and num(x) is the Godel number of the numeral x̄ (see T.M.M. for a handy Godel numbering). Then, for any combinator X, one defines ⌜X⌝ to be the numeral x̄, where x is the Godel number of X. The mapping X → ⌜X⌝ is then admissible (in fact strongly admissible). [The proof of strong admissibility is rather elaborate - see Barendregt[1], whose proof uses a one-point base (a non-proper combinator from which S and K are both derivable).] This is the usual procedure. By contrast, our admissible mapping does not use a numeral system or Godel numbering, and the proof of its strong admissibility is relatively simple.
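For concreteness, here is a toy Godel numbering in Python (illustrative only; it is not the numbering of T.M.M. referred to above): write a combinator as its fully parenthesized word over S, K, (, ) and read that word as a base-5 numeral. The function con below then computes the number of (XY) from the numbers of X and Y.

    DIGITS = {"S": 1, "K": 2, "(": 3, ")": 4}

    def godel_number(word):
        n = 0
        for ch in word:
            n = 5 * n + DIGITS[ch]
        return n

    def word_of(n):
        inverse = {v: k for k, v in DIGITS.items()}
        chars = []
        while n:
            n, d = divmod(n, 5)
            chars.append(inverse[d])
        return "".join(reversed(chars))

    def con(x, y):
        """Godel number of (XY), given the Godel numbers x of X and y of Y."""
        return godel_number("(" + word_of(x) + word_of(y) + ")")

    k, s = godel_number("K"), godel_number("S")
    assert word_of(con(k, s)) == "(KS)"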

*Exercise 2 Show that every λ-definable function is recursive.

*Exercise 3 (a) Show that the set of λ-definable functions is closed under composition, explicit transformations, and introduction of constants.

(b1) Suppose f(x) is λ-definable by a. Let c be any number and let g(x) be the function defined by recursion from f and c, i.e., the function satisfying the following conditions:

g(0) = c
g(n+) = f(g(n))

By the fixed point theorem, there is a combinator β satisfying the condition

βx ≡ Zxc̄(a(β(Px)))


Show that β defines the function g(x).

(b2) More generally, suppose g(y1, ..., yn) and h(x, y1, ..., yn, z) are λ-definable functions, and that f(x, y1, ..., yn) is a function defined from g and h by the following recursion:

f(0, y1, ..., yn) = g(y1, ..., yn)
f(x+, y1, ..., yn) = h(x, y1, ..., yn, f(x, y1, ..., yn))

Show that the function f(x, y1, ..., yn) is λ-definable.

(c1) Suppose that p is a combinator such that for every natural number n, either pn̄ ≡ t or pn̄ ≡ f. By the fixed point theorem there is a combinator H satisfying the condition Hx ≡ pxx(H(σx)). Now suppose that n is a number such that pn̄ ≡ t, but for every m < n, pm̄ ≡ f (in other words, n is the smallest number such that pn̄ ≡ t). Prove that H0̄ ≡ n̄.

(c2) Suppose that f(x1, ..., xn, y) is a regular function on the natural numbers, i.e., for every x1, ..., xn there is at least one y such that f(x1, ..., xn, y) = 0. Let g(x1, ..., xn) be the smallest y such that f(x1, ..., xn, y) = 0.

Now suppose that a is a combinator that defines the function f. By the fixed point theorem there is a combinator H satisfying the condition:

Hx1...xny ≡ Z(ax1...xny)y(Hx1...xn(σy))

Let β be a combinator satisfying the condition βx1...xn ≡ Hx1...xn0̄. Show that the function g is definable by β. [This proves that the set of λ-definable functions is closed under minimization, which concludes the proof that all recursive functions are λ-definable.]

Part II  Fixed point theorems of the second type

§3 The second fixed point theorem

In what follows, Π will be an admissible mapping X → ⌜X⌝ fixed for the discussion. [Strong admissibility will not be required till later.] Now, which of the following statements, if true, would surprise you?


(1) There is a combinator X such that ⌜X⌝ ≡ X.

(2) There is a combinator X such that X ≡ ⌜X⌝⌜X⌝.

(3) There is a combinator X such that X ≡ ⌜X⌝X.

(4) There is a combinator X such that X⌜X⌝ ≡ X.

(5) There is a combinator X such that ⌜⌜X⌝⌝ ≡ X.

(6) There is a combinator X such that for every combinator Y, XY ≡ ⌜X⌝.

(7) There is a combinator X such that for every combinator Y, XY ≡ Y⌜X⌝.

Well, all of these statements are true! They are all but special cases of the following theorem.

Theorem 3.1 [Second fixed point theorem of combinatory logic] For every combinator a there is a combinator X such that a⌜X⌝ ≡ X.

Proof We continue to let Λ and δ be combinators such that for all X and Y, Λ⌜X⌝⌜Y⌝ ≡ ⌜XY⌝ and δ⌜X⌝ ≡ ⌜⌜X⌝⌝. We now let Δ be a combinator satisfying the condition Δx ≡ Λx(δx). Then for any combinator X, Δ⌜X⌝ ≡ ⌜X⌜X⌝⌝ (since Δ⌜X⌝ = Λ⌜X⌝(δ⌜X⌝) ≡ Λ⌜X⌝⌜⌜X⌝⌝ ≡ ⌜X⌜X⌝⌝). Next, given a, let Y be a combinator such that for all x, Yx ≡ a(Δx) (we could take Y = BaΔ). Then Y⌜Y⌝ ≡ a(Δ⌜Y⌝) ≡ a⌜Y⌜Y⌝⌝. We hence take X = Y⌜Y⌝, and so X ≡ a⌜X⌝.

Exercise 4 Why are the statements (1)-(7) preceding the theorem consequences of it?
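A programming analogue may help fix the idea (this sketch is illustrative only and not from the text): the theorem produces, for any a, a combinator whose value is a applied to that combinator's own name. In Python, for any function alpha on source strings we can build an expression whose value is alpha applied to its own source text; the helper fixed_point_source is hypothetical.

    def fixed_point_source():
        # "alpha" is looked up in the evaluation namespace supplied to eval
        t = "(lambda q: alpha(q + '(' + repr(q) + ')'))"
        return t + "(" + repr(t) + ")"

    alpha = lambda src: "alpha applied to my own source of length %d" % len(src)
    X = fixed_point_source()
    assert eval(X, {"alpha": alpha}) == alpha(X)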

§4 Applications

We let Π be an admissible mapping fixed for the discussion, and we shall define X to be a Π-fixed point of a if a⌜X⌝ ≡ X. Theorem 3.1 tells us that every combinator has a Π-fixed point. Before turning to some significant applications of this, let us see why statements (1)-(7) preceding the theorem are special cases of it.

For (1), let X be a Π-fixed point of the identity combinator I. For (2), we obviously take X to be a Π-fixed point of the combinator M (Mx ≡ xx). For (3), we take X to be a Π-fixed point of a fixed point combinator Θ. Then X ≡ Θ⌜X⌝ ≡ ⌜X⌝(Θ⌜X⌝) ≡ ⌜X⌝X. For (4), we take X to


be a Π-fixed point of a voracious combinator a (ax ≡ a for all x). Then a⌜X⌝ ≡ X, but also a⌜X⌝ ≡ a, hence X ≡ a. Then, since a⌜X⌝ ≡ X and a ≡ X, we have X⌜X⌝ ≡ X. For (5) we take X to be a Π-fixed point of δ. For (6) we take X to be a Π-fixed point of K, and for (7) we take X to be a Π-fixed point of T.

Representability

We shall say that a combinator a represents the set of all combinators X such that a⌜X⌝ ≡ t. We shall say that a set M of combinators is λ-representable if it is represented by some combinator a.

It is easy to prove that every λ-representable set is formally representable (thus r.e. under every standard Godel numbering), and it can be shown from known results that every formally representable set of combinators is λ-representable. From Theorem 3.1 we get:

Theorem 4.1 For any λ-representable set M of combinators there is a combinator X such that X ≡ t ↔ X ∈ M.

Proof Suppose a represents M. By Theorem 3.1 there is a combinator X such that a⌜X⌝ ≡ X. But also a⌜X⌝ ≡ t ↔ X ∈ M (since a represents M). Thus X ≡ t ↔ X ∈ M.

For any set M of combinators, by its complement M̃ we mean its complement with respect to the set of all combinators, i.e., M̃ is the set of all combinators not in M. We let T be the set of all combinators a such that a ≡ t and T̃ be its complement. Thus T̃ is the set of all a such that a ≢ t.

Corollary 4.2 The set T̃ is not λ-representable.

Exercise 5 For any sentence X = Y of CLo, define its code to be the combinator [X, Y] (i.e., VXY). Call a sentence a Godel sentence for a set M of combinators if the sentence is provable in CLo if and only if its code is in M. Show that for any λ-representable set M, there is a Godel sentence for M. Conclude that the set of codes of the non-provable sentences of CLo is not λ-representable.


We shall say that a combinator a defines a set M if a⌜X⌝ ≡ t for every X ∈ M, and a⌜X⌝ ≡ f for every X ∈ M̃. We call M λ-definable if some


combinator a defines M. It follows from the consistency of CLo that t ≢ f (because if t ≡ f, then for every X and Y, tXY ≡ fXY, which means X ≡ Y, which would mean that CLo is inconsistent). Therefore, if a defines M, then a certainly represents M (since it is immediate that X ∈ M implies a⌜X⌝ ≡ t; and conversely, a⌜X⌝ ≡ t implies that a⌜X⌝ ≢ f, which implies that X is not in M̃, i.e., X ∈ M; and thus X ∈ M ↔ a⌜X⌝ ≡ t). Also, if a defines M, then neg composed with a defines M̃, and so M̃ is also λ-representable. Thus if M is λ-definable then M and M̃ are both λ-representable. [The converse also happens to be true, since λ-representability is equivalent to formal representability and λ-definability is equivalent to formal solvability.] Since T̃ is not λ-representable, as we have just seen, it of course follows that T is not λ-definable. We are about to prove a theorem that generalizes this fact. We shall call a set M of combinators extensional, or closed under equivalence, if for every combinator X in M, all combinators equivalent to X are also in M. The following theorem, which generalizes Corollary 4.2, is essentially Scott's variant[24] of Rice's theorem[19].

Theorem 4.3

(a) If M1 and M2 are disjoint extensional non-empty sets, then no superset of M1 disjoint from M2 is λ-definable.

(b) No extensional set of combinators is λ-definable, except for the empty set and the set of all combinators.

Proof

(a) Suppose (M1, M2) is a disjoint pair of extensional non-empty sets, that M1 ⊆ M, and that M is λ-definable. We show that M cannot be disjoint from M2. Let a1 be any element of M1 and a2 any element of M2, and let a be a combinator that defines M. Now let β be a combinator satisfying the condition βx ≡ axa2a1. Then if X ∈ M, a⌜X⌝ ≡ t, hence β⌜X⌝ ≡ a2; whereas if X ∉ M, then a⌜X⌝ ≡ f, hence β⌜X⌝ ≡ a1. Thus X ∈ M implies β⌜X⌝ ≡ a2, and X ∉ M implies β⌜X⌝ ≡ a1. Now, by Theorem 3.1 we can take X such that β⌜X⌝ ≡ X. Suppose X ∉ M. Then X ≡ a1 (since X ≡ β⌜X⌝ ≡ a1) and therefore X ∈ M1 (since M1 is extensional and contains a1). This is impossible, since M1 ⊆ M. Therefore X ∈ M. Hence β⌜X⌝ ≡ a2, so X ≡ a2, hence X ∈ M2 (since M2 is extensional and contains a2). Therefore X is in both M and M2, and so M cannot be disjoint from M2.


(b) Suppose M is extensional and neither empty nor the set of all combinators. Then M̃ is also extensional and non-empty. The only superset of M disjoint from M̃ is M itself, which by (a) is not λ-definable. Thus M is not λ-definable.

Discussion

It is immediate from (b) above that the set T is not λ-definable, since T is obviously extensional and is neither empty nor the set of all combinators. Indeed, it follows from (b) of Theorem 4.3 that for no combinator X is the set of all combinators equivalent to X λ-definable. In particular, the set of all combinators equivalent to I is not λ-definable - which is essentially Church's theorem[3]. This set is one of the earliest examples of a λ-representable set that is not λ-definable (which under any standard Godel numbering goes over to a recursively enumerable set that is not recursive).

Part (b) of Theorem 4.3 is the combinatory analogue of Rice's theorem[19], which goes as follows. In recursion theory a set A of numbers is called extensional if for every number n in A, all numbers m such that ω_m = ω_n are also in A - in other words, A contains with every number n all numbers m such that m and n are indices of the same r.e. set. Rice's theorem is that no extensional set can be recursive, except the empty set and the set of all natural numbers.

Exercise 6 Using the fact that for any recursive function f(x) there is a number n such that ω_n = ω_{f(n)}, prove Rice's theorem.

A method of simultaneously generalizing Rice's theorem and Scott's theorem is given in the next exercise.

Exercise 7 Let ~ be an equivalence relation on a set N and let E be a collection of functions from N into N having the fixed point property that for every f ∈ E there is some x ∈ N such that f(x) ~ x. Let S be a set of subsets of N such that for every element A ∈ S and any elements a1, a2 of N there is a function f ∈ E such that f(x) ~ a1 for every x ∈ A and f(x) ~ a2 for every x in the complement of A. Call a set A of elements of N closed (with respect to the equivalence relation ~) if A contains with every element x all elements y such that y ~ x.

(a) Prove that no closed set other than N or 0 can be a member of S.


(b) Show that both Rice's theorem and Theorem 4.3 can be obtained as special cases of (a).

§5 Other fixed point theorems of the second variety

We continue to assume that the mapping X → ⌜X⌝ is admissible. The following theorem and its corollary can be proved:

Theorem 5.1 [Second double fixed point theorem] For any combinators a and β there are combinators X and Y such that a⌜X⌝⌜Y⌝ ≡ X and β⌜X⌝⌜Y⌝ ≡ Y.

Such a pair (X, Y) will be called a Π-double fixed point of the pair (a, β).

Corollary 5.2 For any combinators a and β there are combinators X and Y such that a⌜X⌝ ≡ Y and β⌜Y⌝ ≡ X.

We shall not prove these here; they follow as corollaries of more general results to be proved in the next chapter. Likewise, the following theorems about strongly admissible mappings will be seen to be special cases of the theorems proved in the next chapter.

Theorem 5.3 [The uniform second fixed point theorem] If the mapping X → ⌜X⌝ is strongly admissible then there is a combinator Θ - that we will call a Π-sage - such that for every combinator X, Θ⌜X⌝ ≡ X⌜Θ⌜X⌝⌝.

Remark

Without the hypothesis of strong admissibility we do not know whether Theorem 5.3 holds or not. Similar remarks apply to the remaining theorems of this chapter.

Theorem 5.4 Under the same hypothesis (strong admissibility) there is a combinator φ such that for any combinators X and Y, φ⌜X⌝⌜Y⌝ ≡ XY⌜φ⌜X⌝⌜Y⌝⌝.

Theorem 5.5 [Uniform version of Theorem 5.1 - a "double Myhill" theorem of the second type] If the mapping X → ⌜X⌝ is strongly admissible then there are combinators φ1 and φ2 such that for all combinators X and Y:

(1) φ1⌜X⌝⌜Y⌝ ≡ X⌜φ1⌜X⌝⌜Y⌝⌝⌜φ2⌜X⌝⌜Y⌝⌝

(2) φ2⌜X⌝⌜Y⌝ ≡ Y⌜φ1⌜X⌝⌜Y⌝⌝⌜φ2⌜X⌝⌜Y⌝⌝

Such a pair (φ1, φ2) will be called a Π-DM pair.

Nice combinators and symmetric combinators of the second type

As the reader probably suspects, we also have:

Theorem 5.6 Suppose that the mapping X → ⌜X⌝ is strongly admissible. Then

(a) There is a combinator N such that for any combinators Z, X, Y: N⌜Z⌝⌜X⌝⌜Y⌝ ≡ Z⌜N⌜X⌝⌜X⌝⌜Y⌝⌝⌜N⌜Y⌝⌜X⌝⌜Y⌝⌝.

(b) There is a combinator s such that for any combinators X and Y: s⌜X⌝⌜Y⌝ ≡ X⌜s⌜X⌝⌜Y⌝⌝⌜s⌜Y⌝⌜X⌝⌝.

The reader might find it a profitable exercise to try proving some of the above theorems before turning to the next chapter.

Chapter 20

Extended sequential systems

We now turn to a variety of fixed point theorems that simultaneously generalize the fixed point theorems of the last chapter and earlier fixed point theorems for sequential systems.

We let S be a sequential system (N, Σ, →) such that for every n ≥ 2 there is an exact Σ-function E(x1, ..., xn). And now we consider two other items. The first is an equivalence relation ~ on N that satisfies the following two conditions:

E1 If a ~ b then for any Σ-function θ(x, y1, ..., yn), the equivalence θ(a, y1, ..., yn) ~ θ(b, y1, ..., yn) holds for all y1, ..., yn.

E2 If a ~ b, then for any finite sequences θ, τ of elements of N, the equivalence θ, a, τ → θ, b, τ holds.

8,°

Of course ordinary identity is such an equivalence relation. For applications to the formal system CLo of combinators,, we define a, x1,. .. , xn -> b, yl,... , Yk to mean that axl ... xn - by, ... Yk (which, we recall, means that axl ... xn = by, ... Yk is formally provable in CLo, but not necessarily that axl ... xn is the same combinator as by, ... yk). Since

for CLo, a - b implies ac - be and ca - cb, then the equivalence relation between combinators is easily seen to satisfy conditions El and E2. Our second item to be considered is a mapping M that assigns to each element x of N an element Jr- of N. W e shall say that a function fo(xl, ... , xn)

mirrors a function f (X 1, ... , xn) (with respect to M and - understood) if for all elements x1, ... , xn: AG _l, ... , xn) - f (xl, ... , xn). And now we define the mapping M to be admissible if the following two conditions hold: III

M 1 Each exact E-function E(xl,... , xn) (n > 2) is mirrored by a Efunction Eo(xi, ... , xn) [Eo(xi, ... , xn) = E(xl, ... , M2 There is a E-function 6(x) such that for all elements x: 5(x) - x.


We let E be the triple (S, ~, M) and we call E an extended sequential system. We call E of type 1, type 2, or universal if S is respectively of type 1, type 2, or universal. We call E admissible if M is an admissible mapping (with respect to S and ~). In applications to combinatory logic, M is to be an admissible map Π, as defined in the last chapter. By Proposition 1.3 of the last chapter, for each n ≥ 2 there is a combinator Λn such that for all X1, ..., Xn: Λn⌜X1⌝...⌜Xn⌝ ≡ ⌜X1...Xn⌝, and so we take E0(x1, ..., xn) to be Λnx1...xn; and since we are taking E(x1, ..., xn) to be the product x1...xn, we have E0(⌜X1⌝, ..., ⌜Xn⌝) = Λn⌜X1⌝...⌜Xn⌝ ≡ ⌜X1...Xn⌝ = ⌜E(X1, ..., Xn)⌝, and so condition M1 holds. We of course take a combinator δ satisfying the condition δ⌜X⌝ ≡ ⌜⌜X⌝⌝, and we then define δ(x) to be δx, and so condition M2 holds.

Coming back to extended sequential systems in general, let us note the obvious, but nevertheless important, fact that if we take ~ to be ordinary identity and M to be the identity map, then M is trivially admissible (with δ(x) the identity function and E0(x1, ..., xn) = E(x1, ..., xn), where E(x1, ..., xn) is an exact Σ-function). And so all theorems that we will prove about admissible extended sequential systems have applications to sequential systems as well as to combinatory logic with an admissible map Π.

In all that follows, for each n ≥ 2, E(x1, ..., xn) is assumed to be an exact Σ-function.

§0 M-diagonalizers

We shall call a function Δ(x, y1, ..., yk) (k ≥ 0) an M-diagonalizer if for any elements x, y1, ..., yk: Δ(x, y1, ..., yk) ~ E(x, x, y1, ..., yk), where E(x, z, y1, ..., yk) is an exact Σ-function. [For k = 0, this condition is Δ(x) ~ E(x, x).] And now we call E diagonalizable if for each n ≥ 1, E contains an M-diagonalizer of n arguments. The following lemma is basic to all that follows.

Lemma A If E is admissible and of type 1, then E is diagonalizable.


Proof Suppose E is admissible and of type 1. Given n > 1, let E(x, y, z1, ... , zk) be an exact E-function of k + 2 arguments (k = n 1) and let Eo(x, y, z 1 , zk) be a E-function that mirrors the function E(x, y, z1i ... , zk). Take a E-function 6(x) satisfying condition M2

and take 0(x, yl, ... , yk) = Eo(x, 6(x), 6(yl), ... , 6(yk)). Then A is a E-function (since E is of type 1) and we have 0(x yl, ... , yk) =


Eo(x b(x), 6(y1), ... 6(yk)) Eo(x, x, y1 ... yk = E(x, x, y1, ... yk). Thus A is an M-diagonalizer of n arguments. [If n = 1, then k = 0, and we take 0(x) = Eo(x, S(x)), and so O(x) - E(x, x).]


Corollary A If Π is an admissible mapping X → ⌜X⌝ of combinatory logic, then for each n ≥ 1 there is a combinator Δn such that for all combinators X, Y1, ..., Yn-1:

Δn⌜X⌝⌜Y1⌝...⌜Yn-1⌝ ≡ ⌜X⌜X⌝⌜Y1⌝...⌜Yn-1⌝⌝

Of course for combinatory logic, such a combinator Δn is one satisfying the condition Δnxy1...yn-1 ≡ Λnx(δx)(δy1)...(δyn-1). Such a combinator Δn we will call a Π-diagonalizer.

§1 M-fixed points We recall (Chapter 12) that we called an element b a fixed point of a if b -> a, b. We now define b to be an M-fixed point of a if b -4a, b. The following theorem generalizes the second fixed point theorem of combinatory logic (Theorem 3.1, Chapter 19).

Theorem 1.1 If £ is diagonalizable then every element has an M-fixed point.

Proof By hypothesis, E contains an M-diagonalizer O(x). Given a, let c be such that for all x: c, x -4a, O(x). [We are assuming that all sequential systems are weakly connected.] Then E(c, c) -4 c, c -> a, 0(c) --4a, E(c, c) (since 0(c) = E(c, c)). We thus take b = E(c, c) and sob -> a, b. By Theorem 1.1 and Lemma A we have:

Corollary 1.2 If £ is admissible and of type 1, then every element has an M-fixed point.

The second fixed point theorem of combinatory logic is thus a consequence of Corollary 1.2. As another corollary of Theorem 1.1 we have:

Corollary 1.3 If £ is universal and diagonalizable then for any Efunction f (x), there is an element b such that b -+ f (b).


Exercise 1 Prove Corollary 1.3.

Exercise 2 Is the second fixed point theorem of combinatory logic also a consequence of Corollary 1.3? Quasi-M- diagonalizers


Theorem 1.1 has a strengthening to which we will now turn. Call a function g(x, yl, ... , yk) a quasi-M-diagonalizer if for every element x there is an element x1 such that for all y ,7. .. , Ilk: q(-t1, 917... , Yk) E(x, xi 7 y17 ... , yk), where E(x, z, yl, ... , yk) is an exact E-function. [For


k = 0, q(x) is a quasi-M-diagonalizer if for every x there is an element x1 such that q(xl) -> E(x, xl).] Obviously every M-diagonalizer is also a quasi-M-diagonalizer (just take xl = x). Then Theorem 1.1 can be strengthened to

Theorem 1.1# If E contains a quasi-M-diagonalizer q(x) of one argument then every element has an M-fixed point.

Proof Exercise 3 below. III

Let us note that for - the identity relation and M the identity mapping, an M-diagonalizer is simply a diagonal function (as defined in Chapter 12); a quasi-M-diagonalizer is simply a quasi-diagonal function, just as an M-

fixed point is simply a fixed point. In Chapter 12 we proved that if E contains a quasi-diagonal function of one argument then every element has a fixed point. This, then, is but a special case of Theorem 1.10 above.

Exercise 3 Prove Theorem 1.1#.

§2 The double M-fixed point theorem

In the last chapter we stated (Theorem 5.1) that for any combinators a and a there are combinators X and Y such that aXY - X and ,39Y - Y. [-4

The following is a generalization of this.

'C3

Theorem 2.1 If S is diagonalizable - or even if E contains an Mdiagonalizer of two arguments - then for any elements al and a2 there are elements b1 and b2 such that b1 -> al, b17 b2 and b2 -> a2, b1, b2.


Proof Suppose that E contains an M-diagonalizer 0(x, y). Given al and a2, let cl and c2 be elements such that for all x and y:

I{-

(1) cl, x, y -' a1, 0(x, y), 0(y, x) (2) c2, x, y - * a2, A

X), A

Y)

Then E(cl, C1, c2) -4 C1, El, c2 - a,, 0(C1v C2), A(C2, Cl) - a1, E(C1,

Also, E(C2, E2, El) - a2,2,1 -f

a2, L (C1, C2), CSI

el, C2), E(C2, E2, C1).

AA, Z4) -4 a2, E(c1, C1, C2), E(C2, c2, C1). We thus take b1 = E(c1, c1, C2) and b2 = E(c2, c2, c1).

Exercise 4 Show that the conclusion of Theorem 2.1 follows from the weaker hypothesis that E contains a quasi-M-diagonalizer q(x, y). [Note that this generalizes Theorem 1.1 of Chapter 14: if E contains a quasidiagonalizer of two arguments then S has the weak double fixed point property.]

Exercise 5 Show that if the conclusion of Theorem 2.1 holds for £, then for any elements a1 and a2 there are elements b1 and b2 such that b1 -> a1, b2 and b2 --4a2, b1 (we might call this property the M-cross-point property).

Exercise 6 Suppose that E is of type 1 and contains a quasi-Man]

diagonalizer of one argument. Does S necessarily have the M-cross-point property?

Exercise 7 State and prove an n-fold generalization of Theorem 2.1.

§3 Strong M-fixed point properties

As might be expected, we define S to be strongly admissible if £ is admissible and M also satisfies the following condition:

M3 There is a Σ-function H(x) such that for all x: H(x̄) ~ x.

Such a function H(x) we will call an M-neutralizer, or just a neutralizer for short, when the mapping M is fixed for the discussion. For combinatory logic with a strongly admissible mapping X

we of course take H(x) = Hx, where H is a combinator satisfying the condition HX - X. For applications to ordinary sequential systems (which can be looked at as extended sequential systems in which - is the identity relation and M is the identity mapping) we of course take H to also be the identity mapping.


Adequate systems


And now we shall define £ to be adequate if it is of type 2, diagonalizable, and contains an M-neutralizer H(x). By Lemma A, if £ is strongly admissible and of type 1 and type 2, then £ is adequate. The rest of this chapter will be devoted to M-fixed point properties of adequate systems. In what follows, H(x) will be an M-neutralizer fixed for the discussion.

The M-recursion and M-Myhill properties

We shall say that £ has the M-recursion property if for every element a there is a E-function O(x) such that for all x: 0(x)

a, x, 0(x)

Theorem 3.1 If £ is adequate then £ has the M-recursion property.

Proof Suppose £ is adequate. Then E contains an M-diagonalizer 0(x, y) of two arguments. Then, for any element a, take b such that for all y and x: b, y, x -> a, H(x), A(y, x)

Then E(b, 6,x)

->

a, H(x), 0(b, x)

->

a, x,E(b,b,x)

And so we take O(x) = E(b, b, x). [Since S is assumed to be of type 2, then E(b, b, x) is a E-function of x.]


We shall say that £ has the M-Myhill property if there is a E-function O(x) such that for all x, 0(x) --4x, 0(x). [This is a uniform version of the M-fixed point property.] It is obvious that if S is universal and has the M-recursion property, then £ has the M-Myhill property (just take a such that a, x, y -4 x, y). And so as a consequence of Theorem 3.1 we have:

Theorem 3.2 If £ is adequate and universal then there is a E-function O(x) such that for all x: 0 (x) -> x, 0(x)



We note more specifically that if E is adequate and universal, then there is an element b such that for all y and x: b, y, x -> H(x), 0(y, x), and then E(b, b, x) is a E-function O(x) satisfying the condition: O (x) -4 x, 0(x). For combinatory logic, this tells us that for a strongly admissible mapping II, there is a 11-sage - a combinator 0 such that for all X, we have OX - X 0X, and so the uniform second fixed point theorem of combinatory logic (Theorem 5.3, Chapter 19) is now established. We can, in fact, now see how such a combinator 0 can be obtained. We let b be a combinator satisfying the condition: byx Hx(A3y(6y)(Sx)) (so that for any combinators Y and X, bYX - XYYX) and we take 0 = bb.

There is, however, another argument, which does not appear to be


generalizable to extended sequential systems, and which yields the uniform second fixed point theorem as a corollary of the second fixed point theorem.


We take a combinator a satisfying the condition: ayx - Hx(Ay(bx)). We then let 0 be a 11-fixed point of a (0 - a0), and so for any X: OX - aOX HX(A9(6X)) - X(AOX) - XOX. Thus 0 is a 11-sage. This argument reveals an additional fact of interest, namely that there


is a combinator a all of whose Π-fixed points are Π-sages. Coming back to extended sequential systems, Theorem 3.1 (and hence also Theorem 3.2) can be strengthened as follows. For any positive n, let us say that E is n-adequate if E is of type 2 and contains a neutralizer H(x) and an M-diagonalizer Δ(x1, ..., xn) of n arguments. Thus E is adequate if E is n-adequate for all positive n. It is obvious from the proof of Theorem 3.1 that it would still hold if we replaced "adequate" by "2-adequate." And

now let us say that E is quasi-n-adequate if E is of type 2 and contains a neutralizer H(x) and a quasi-M-diagonalizer q(xj, ... , xn) of n arguments. By a slight modification of the proof of Theorem 3.1 it can be seen that if ,F is quasi-2-adequate, then E has the M-recursion property (Exercise 8 below). This not only yields the second fixed point theorem of combinatory logic, but also generalizes Theorem 2.1 of Chapter 13 that says that if E is of type 2 and contains a quasi-diagonalizer Q(x, y) of two arguments, then S has the recursion property.

Exercise 8 Prove that if E is quasi-2-adequate then E has the Mrecursion property.

Exercise 9 Suppose that E is universal and that .6 is adequate - or even quasi-3-adequate. Prove that there is a E-function q(x, y) such that for all x and y: 0(x, y) -> x, y, 0(x, y). [Note that Theorem 5.4, Chapter 19, is a consequence of this.]


§4 Uniform double M-fixed point properties Theorem 4.1 If S is adequate then for any elements al and a2 there are E-functions 01(x, y), 02 (x, y) such that for all elements x and y: (1) 01(x, y) - al, x, y, 01(x, y), 02(x, y) (2) 02(x, y) - a2, x, y, 01(x, y), 02(x, y)

Proof (sketch) Take b1, b2 such that z1, x, y) bl, z1, z2, x, y - al, H(x), H(y), A(z1, z2, x, y), A b2, zl, z2, x, y - a2, H(x), H(y), A(z2, zl, x, y), A(zl, z2, x, y)

Then take 01(x, y) = E(bl, b1, b2, x, y) and 02(x, y) = E(b2, b2, b1, x, y).

Corollary 4.2 If S is adequate then for any elements a1, a2 there are E-functions 01(x, y), 02 (x, y) such that for any x, y:

-> a1, x,

01(x, y), 02(x, y) and 02(x, y) - a2, y, 01(x, y), 02(x, 9) -

Corollary 4.3 If E is adequate, then for any elements al, a2 there are E-functions O1(x), 02(x) such that for all x: 01(x) -> a1i x, q1(x), 2() and 02(x) - a2, x, 01(x), q2() From Corollary 4.2 we have:

Theorem 4.4 If E is adequate and universal then there are E-functions 01(x, y), 02 (x, y) such that for all x, y: 01(x, y)

x, 01(x, y), 02 (x, y)


02 (x, y) - y, 01(x, y), 02 (x, y)

As a consequence of Theorem 4.4, we see that for combinatory logic with a strong admissible mapping X -49 there are combinators 01 and 02 such that for all X and Y, 019Y = X01XY 02(XY) and 02XY Y01XY 02(XY) (Theorem 5.5, Chapter 19). But this is also a consequence of the second double fixed point theorem of combinatory logic (Theorem 5.1, Chapter 19) together with the following:


Theorem 4.5 If II is a strongly admissible map X -49 for combinatory logic, then there exist combinators al, a2 such that any II-double fixed point (01, 02) of (al, a2) is a II-DM pair (i.e., if a,

01 and

02, then for any combinators X and Y, 01XY - Xqi1XY 02XY and 02XY - Y01XY 02XY.

Proof (Sketch) Take al, a2 satisfying the conditions: alzwxy alzwxy

Hx(A3z(6x)(6y))(A3w(6x)(6y)) Hy(A3z(Sx)(6y))(A3w(Sx)(6y))

§5 M-nice and M-symmetric functions Theorem 5.1 If E is adequate and universal then: (a) There is a E-function N(z, x, y) such that for all x and y: N(z, x, y) -> zN(x, x, y), N(y, x, y)

(b) There is a E-function s(x, y) such that for all x and y: x, s(x, y), s(y, x)

Proof (Sketch) (a) Take b such that b, w, z, x, y -> H(z), 0(w, x, x, y), 0(w, y, x, y), and take N(z, x, y) = E(b, b, z, x, y).

(b) Take b such that b, w, x, y -> H(x), 0(w, x, y), 0(w, y, x), and take s(x, y) = E(b, b, x, y).

Exercise 10 Consider combinatory logic with a strongly admissible mapping II.

(a) Show that there is a combinator a such that if N is any II-fixed point

of a, then for all X, Y, Z: NZYX - ZNXXY NYXY. (b) Show that there is a combinator a such that if s is any II-fixed point

of a, then for all X, Y: sXY - XsXY sYX.

References [1] H. P. Barendregt. The Lambda Calculus. North Holland, 1981. [2] George Boolos. The Unprovability of Consistency. Cambridge University Press, 1979.

[3] Alonzo Church. The Calculi of Lambda Conversion. Princeton University Press, 1941.

[4] A. Ehrenfeucht and S. Feferman. Representability of recursively enumerable sets in formal theories. Archiv fur Mathematische Logik und Grundlagenforschung, 5:37-41, 1960.

[5] Melvin Fitting. Fundamentals of Generalized Recursion Theory. North Holland, 1981.

[6] Kurt Godel. Uber formal unentscheidbare Satze der Principia Mathematica und verwandter Systeme I. Monatshefte fur Mathematik und Physik, 38:173-198, 1931.

[7] Stephen C. Kleene. Introduction to Metamathematics. Van Nostrand, 1952.

[8] Andrzej Mostowski. Sentences Undecidable in Formalized Arithmetic. North Holland, 1952.

[9] A. R. Meyer. What is a model of the lambda calculus? Information and Control, 52:87-122, 1982.

[10] John Myhill. Creative sets. Zeitschrift fur mathematische Logik und Grundlagen der Mathematik, 1:97-108, 1955.

[11] Rohit Parikh. Existence and feasibility in arithmetic. Journal of Symbolic Logic, 36:494-508, 1971.

[12] G. D. Plotkin. A set-theoretic definition of application. Technical Report Memorandum MIP-R-95, School of Artificial Intelligence, University of Edinburgh, 1972.


[13] Emil Post. Recursively enumerable sets of positive integers and their decision problems. Bulletin of the American Mathematical Society, 50:284-316, 1944.

[14] Marian Boykan Pour-El. Effectively extensible theories. Journal of Symbolic Logic, 22:39-54, 1968.

[15] H. Putnam and R. Smullyan. Exact separation of recursively enumerable sets within theories. Proceedings of the American Mathematical Society, 11:574-577, 1960. [16] W. V. Quine. Mathematical Logic. Norton, 1940.

[17] W. V. Quine. Concatenation as a basis for arithmetic. Journal of Symbolic Logic, 11:105-114, 1946.

[18] W. V. Quine. The Ways of Paradox, and Other Essays. Random House, 1966.


[19] H. Gordon Rice. Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society, 74:358-366, 1953.

[20] Hartley Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967.

[21] J. Barkley Rosser. A mathematical logic without variables. Annals of Mathematics, 36:127-150, 1935. 0.1

[22] J. Barkley Rosser. Extensions of some theorems of Godel and Church. Journal of Symbolic Logic, pages 87-91, 1936. [23]

J. Barkley Rosser. Highlights of the history of the lambda-calculus. Annals of the History of Computing, 6(4), October 1984.

[24] Dana Scott. A system of functional abstraction. Unpublished, 1963. [25] Dana Scott. Continuous lattices. In Proceedings of the 1971 Dalhousie Conference, volume 274 of Lecture Notes in Mathematics, New York, 1972. Springer-Verlag.

[26] John Shepherdson. Representability of recursively enumerable sets in formal theories. Archiv fur Mathematische Logik and Grundlagenforschung, pages 119-127, 1961. [27] Raymond M. Smullyan. Languages in which self-reference is possible. Journal of Symbolic Logic, 22(1):55-67, 1957.

References

391

[28] Raymond M. Smullyan. Theory of Formal Systems. Princeton University Press, 1961.

[29] Raymond M. Smullyan. Chameleonic languages. Synthese, 60(2):201224, 1984.

[30] Raymond M. Smullyan. Uniform self-reference. Tarski Symposium Volume, Studia Logica, 44(4):439-445, 1985.

[31] Raymond M. Smullyan. Quotation and self-reference. In S. J. Bartlett and P. Suber, editors, Self-Reference, pages 122-144. Martinus Nijhoff Publishers, Nordrecht, 1987. [32] Raymond M. Smullyan. Godel's Incompleteness Theorems. Oxford University Press, 1992.

[33] Raymond M. Smullyan. Recursion Theory for Metamathematics. Oxford University Press, 1993.

[34] A. Tarski, A. Mostowski, and R. Robinson.

Undecidable Theories.

North Holland, 1953.

[35] Alfred Tarski. Der wahrheitsbegriff in den formalisierten sprachen. Studia Philisophie, 1:261-405, 1936.

[36] Alan Turing. The p-function and Ak conversion. Journal of Symbolic Logic, 2:153-163, 1937.

[37] Erik G. Wagner. Uniformly reflexive structures. Transactions of the American Mathematical Society, 144, 1969.

Index

n-k Myhill property, 284 n-m recursion property, 284 n-adic concatenation, 133 n-adic notation, 132 n-ary predicates, 49 n-place predicates, 49 n-sage, 336 n-synchronizer, 286 n-universal, 239 n-universal element, 239 ord(Π), 286 x is part of y, 152

(U), 135


Ai, 182


K-tester, 367 M, 320 M-Myhill properties, 385 M-Myhill property, 385

L', 79 Π-double fixed point, 378 Σa, 124 Σ0-formulas, 149 Σ0-relation, 124 Σ1-formulas, 149 Σ1-relation, 124 A#, 50 A-representable, 375 DMk, 277 ω-consistent, 174 ω-inconsistency, 70 ω-inconsistent, 174

x begins y, 152 x ends y, 152 xBy, 152 xEy, 152

H

Wo, 50 L1, 75


M-diagonalizers, 381 M-fixed points, 382 M-neutralizer, 384 M-nice functions, 388 M-recursion, 385 M-symmetric functions, 388 S-tester, 367 W*, 50


Jn(x1, ..., xn), 128


J(x, y), 126



B!", 182 G,,,, 153

xPy, 152 (many-one) reducible, 178 CLo, 357 DM, 273 DM', 275 DMo, 273 DMk, 273 DRo, 267 DR1, 267 DR2, 267 2-sage, 335

adequate systems, 385 admissible, 92, 380 admissible functions, 67 admissible name functions, 366 admissible notation, 114


almost a semi-D.G. predicate, 206 almost represents, 205 application, 315 applicative system, 25, 230, 315 Arithmetic, 78 arithmetic, 78 assistant sage, 332 associate, 7, 206 axiomatizable, 201

."S.






effective sequence, 210 effectively a Rosser system for sets, 208

effectively inseparable, 193 effectively unconquerable, 224 effectively universal, 203 effectively universal systems, 203 elementary dyadic arithmetic, 113 elementary formal systems, 102 EM, 311 Enumeration Theorem, 142 enumeration theorem, 141 equivalence relation, 17 exact functions, 234 exact separability, 63 exactly separates, 63


D.G., 187 D.G. function, 187 D.G. predicate, 205 D.U., 186 D.U. pairs of sets, 186 decidable, 48, 201 definability, 63 defines, 375 diagonal function, 50, 236 diagonalizable, 381 diagonalization, 3, 17, 50

effective representation system, 20 effective Rosser systems, 219 effective Rosser systems for sets, 208



completely represents, 64 consistent, 48 constructive definability, 122, 124 contrarepresentable, 57 creative, 181 cross-point, 19 cross-point property, 336


Church's theorem, 377 CL, 357 class abstracts, 79 co-productive, 181 combinatory completeness, 359 complete, 48 complete representability, 63 completely creative, 181 completely effective inseparable, 193



bar fixed points, 309 bases, 326



diagonalizer, 17, 236 diagonalizes, 17 double enumeration, 157, 182 double enumeration theorem, 183 double fixed points, 263, 336 double indexing, 182 double iteration theorem, 185 double Myhill property, 273 double quasi-diagonal lemma, 269 double recursion properties, 267 double universality, 182 doubly generative pairs, 187 doubly generative systems, 205 doubly indexable, 182 doubly universal, 186 doubly universal pairs, 158 duplicator, 320 dyadic arithmetic, 113 dyadic concatenation, 117 dyadic Gödel numbering, 129 dyadic Gödel numbers, 153


inverse functions K2 , 128 involute, 186 involution, 186 iteration properties, 177 iteration theorem, 145

f.r. (formally representable), 104 finite existential quantification, 122 finite quantifications, 122 finite universal quantification, 122 first diagonal lemma, 50 first predecessor, 367 first-order definable, 124 fixed point, 52 fixed point combinators, 331 fixed point functions, 249 formal combinatory logic, 357 formal system, 105 formally representable over K, 104 full admissibility, 369 fully admissible, 369

Lemma N, 247 Lemma QQ, 269 lexicographical Gödel numbering, 132, 133

C."

explicit transformations, 106 expresses, 56, 77 expressibility, 77 expressible, 56 extended Myhill property, 311 extended recursion property, 251 extended sequential systems, 380 extension of alphabets, 111 extensional, 329, 376

Kleene pair (K1, K2), 189 Kleene's symmetric incompleteness theorem, 68



Gödel number, 26, 48, 49, 78 Gödel sentences, 26, 79, 140 Gödelizing operations, 91 generative, 147, 179, 204 generative function, 147, 179 generative predicate, 204 generative sets, 179 generative systems, 204 hyperconnected, 257


multiple fixed points, 336 multiple symmetric, 284 Myhill property, 228

near D.G. predicate, 206 near diagonalization, 18 near diagonalizer, 18, 247 near generative predicate, 204 near normalization, 85, 89 near semi-D.G. predicate, 206 negation property, 58 neutralizer, 384 nice combinators, 339 nice functions, 281 norm, 4, 15 normal, 52 normality, 52 normalization, 4 normalization lemma, 247 normalizers, 245


idempotent, 329 incomplete, 48 inconsistent, 48 index, 142 indexicals, 1 inverse functions K and L, 127

master predicate, 210 minimization, 121 mirrors, 380 modus ponens, 103 multiple fixed point properties, 263, 284

omniscient function, 224


one-sided quotation, 7




Second diagonalization lemma, 60 second fixed point theorem of combinatory logic, 374 second predecessor, 367 self-reference in CLo, 366 semi-D.G., 190, 205 semi-D.G. function, 190 semi-D.G. predicates, 206 semi-D.U., 194 semi-reducibility, 193 semi-reducible, 193 semi-Rosser systems, 209 sentential systems, 231 separability, 60 sequential systems, 228 solvability over K, 110 solvable over K, 110 SP, 287 SR, 287 stable, 70 standard systems, 177 stem functions, 209


Scott, 376


quasi M-diagonalizers, 383 quasi-diagonalizer, 237 quasi-exact, 237

sage, 331 sage producer, 334 satisfies, 77

pairing function, 328 Parikh, 72 Peano notation, 115 pre-admissible mappings, 367 predecessor combinators, 367 predicate, 48 predicate of n arguments, 49 predicates of degree n, 49 primitive recursive functions, 119 productive, 181 property Bn, 309 propositional combinators, 328 provability by stages, 69 provable sentences, 48

Rice's theorem, 376, 377 Rosser function, 208, 219 Rosser system for relations, 208 Rosser system for sets, 208 Rosser systems, 207 rule D, 102 rule of detachment, 103









S."


recursively inseparable, 192 reduces, 178, 186 reducible, 147, 186 refutable sentences, 48 regular, 121 repeat, 7 represent, 49 representability, 104 representation function, 51 representation system, 48, 49


r.e., 113 r.e. sequence, 210 r.e. system, 201 recursion property, 229, 245 recursive, 113 recursive enumerability, 113 recursive functions, 118 Recursive pairing functions, 126 recursively enumerable, 113 recursively enumerable relations, 113

strong M-fixed point properties, 384

strong admissibility, 369 strong double Kleene property, 272 strong fixed point functions, 253 strong Kleene property, 258 strong separability, 62 strongly admissible, 369, 384



strongly connected, 253 strongly connected systems, 253 strongly equivalent, 231 strongly separates, 62 successor combinator, 371 SW, 287 symmetric combinators, 339 symmetric functions, 279, 310 synchronization, 286 synchronized fixed point properties, 286, 287 synchronized fixed point theorems, 288



synchronized Myhill property, 287 synchronized recursion property, 287

synchronized sequences of functions, 286 synchronized weak fixed point property, 287 synchronizer of order n, 286 Tarskification, 82 tarskification, 88

term t of (E), 103

term definable, 78 the double M-fixed point theorem, 383

the universal system (U), 135 Theorem EM, 311 truth combinators, 328 Turing sages, 333 type 2 approach, 249, 269 types 0,1,2, 233 unary notation, 115 unconquerable systems, 223 undecidable, 48, 201 uniform double M-fixed point properties, 387 uniform double iteration theorem, 184

uniformly D.U., 186 uniformly semi-D.U., 195 uniformly universal, 178 universal, 147, 178, 239 universal sets, 178 universal systems, 239 unstable, 70

very strongly connected, 256 very strongly connected systems, 278

very strongly equivalent, 231 voracious elements, 329 weak double fixed point property, 263, 337 weak double Kleene property, 266 weak fixed point property, 228 weak fixed point function, 249 weak Kleene property, 229 weakly connected, 233 weakly equivalent, 231 weakly separates, 60

zero tester, 371