A Theory of Conditionals in the Context of Branching Time





Richmond Thomason; Anil Gupta. "A Theory of Conditionals in the Context of Branching Time." The Philosophical Review, Vol. 89, No. 1 (Jan., 1980), pp. 65-90. Stable URL: http://links.jstor.org/sici?sici=0031-8108%28198001%2989%3A1%3C65%3AATOCIT%3E2.0.CO%3B2-T. The Philosophical Review is currently published by Cornell University.


The Philosophical Review, LXXXIX, No. 1 (January 1980).

A THEORY OF CONDITIONALS IN THE CONTEXT OF BRANCHING TIME*

Richmond Thomason and Anil Gupta

In Stalnaker [9] and in Stalnaker and Thomason [10], a theory of conditionals is presented that involves a "selection function." Intuitively, the value of the function at a world is the world as it would be if a certain formula (the antecedent of a conditional) were true. In these two papers, the notion of a possible world is left entirely blank and abstract; worlds are simply treated as points. This approach has the advantage of generality, but could also be misleading. Clearly, in a situation in which there are likenesses among possible worlds the selection function will be affected. Suppose, for instance, that we can speak of those worlds that are like w and those that are unlike it. Then the function should not choose a world unlike w when one like w would do as well. The moral of this is that if we pass to a logical theory in which "possible worlds" are given a certain amount of structure, we can't expect the logic of conditionals to remain unaffected, for this structure may provide some purchase on world similarity.¹ In this paper we want to explore one of the most pervasive and


"Copyright, 1979, Richmond Thomason and Anil Gupta

* This began as a paper by Thomason, written in February 1977, revised and expanded in January 1978, and presented, with comments by Gupta, at the University of Western Ontario in May 1978. The present joint version was completed in October 1978. The basic ideas took initial shape in a series of discussions between Thomason and Walter Edelberg, who deserves a great deal of credit for his insights into the topic. In particular, he was the first to see the importance of the crucial inference we call "the Edelberg inference." Later parts of the paper owe much to interactions with Robert Stalnaker, Bas van Fraassen, and each other.

¹ We hope that our use of terms like 'similarity' isn't misleading. We don't believe that meditating on the notion of similarity among possible worlds is likely to advance our knowledge of conditionals, or that it is very enlightening to explain the world chosen by the selection function as the most similar one in which the condition is true. We're only saying here that sometimes rather coarse kinds of similarity are available, and that when they are available they should be exploited.


important cases of this sort: the interaction of conditionals with tense. This interaction can be rather intricate in even the most commonplace examples of conditionals: consider, for instance, the following two.

(1.1) You'll lose this match if you lose this point.
(1.2) If he loves her then he will marry her.

We believe there is a difference in logical form here: (1.1) has the form FQ > FR (or perhaps, the form F(Q > FR)), while (1.2) has the form Q > FR.² Or consider the following pair.

(1.3) If Max missed the train he would have taken the bus.
(1.4) Max took the bus if he missed the train.

We believe that the form of (1.3) is P(Q > FR) (so that in this sentence the word 'would' is the past tense of 'will'), while that of (1.4) is PQ > PR. (Thus, it is not, on our view, a matter of the form of (1.4) that one would normally expect the bus-catching to have followed the train-missing if (1.4) and its antecedent are true; (1.4) has the same form as 'He took the bus if he came here from the bus station'.) These sentences help to bring out a fundamental point: if you approach tense logic with the theory of conditionals in mind that we have just sketched, it's natural to want a logic capable of dealing with temporal structures that branch towards the future. Take example (1.1), for instance, and imagine you are evaluating it at a moment of time with a single past but various possible futures, some in which you win this point and some in which you lose it. We want the selection function to choose a future course of events (or scenario, or history) in which you lose this point; the truth of (1.1) depends on whether you lose the match on this scenario. Other examples, like (1.3), can lead to scenarios that

² Here, we mean only to give these judgments, without attempting to justify them or to present a theory of how they should be formalized. It would be appropriate to return to these matters after the development of a model theory for tense and conditionals, but we will not attempt this in the present paper. We should mention, however, our working assumption about subjunctive mood: it has no categorematic semantic meaning. For an attempt to use pragmatics to explain mood in conditionals, see Stalnaker [11]. We suspect that this account needs to be supplemented with an explanation of how mood interacts with scope. Like 'any' and 'every', indicative and subjunctive may serve to signal "logical forms" in which operators are arranged in certain ways.


might have occurred, but didn't. Given a form like P(Q > FR), we are led to a past moment, at which we choose a scenario in which Q is true. Now, it may be that there is no such scenario if we confine ourselves to what actually happened. In the example under consideration, this will be the case if Max didn't in fact miss the train. In this case, we want the selection function to choose a scenario that might have been actualized, but wasn't. So we will draw on logical work concerning branching temporal structures. A number of logics are discussed in Prior [6], but the version of tense logic that we will use is that of Thomason [13].

We will begin by developing a theory of tense and conditionals which is a simple adaptation of Stalnaker's theory. The difficulties that this first theory encounters will motivate some of the central ideas of the second and the third theories that we present later on. In Stalnaker's theory a conditional A > B is true at a world w if and only if B is true at a world w′ determined by A and w. Intuitively, w′ is the world at which A is true (i.e., w′ is an A-world),³ and which is the closest possible A-world to w. Formally, the theory posits a function s⁴ which for each antecedent A and world w picks out a world w′ = s(A,w). Then clause (2.1) gives the truth conditions of a conditional formula.

(2.1) A > B is true at w if and only if B is true at s(A,w).

Constraints imposed on the s-function will determine the logical properties of conditionals; in Stalnaker [9] and in Stalnaker and Thomason [10] such constraints are elaborated, and the corresponding properties explored. These constraints can be understood as reflecting the idea that the A-world is the "closest" one in which A is true. When time is brought into the picture, worlds give way to evolving histories. Thus, the truth value of a sentence - and you

³ To simplify things, we ignore - and will continue to ignore - necessarily false antecedents. Imagine that they are treated in some suitable way.

⁴ Stalnaker originally called these "s-functions," by way of abbreviating 'selection function'. But we propose to call them "s-functions," by way of abbreviating 'Stalnaker function'. There is a good reason for this change: later we will be talking a good deal about "choice functions," and these are entirely different from s-functions.
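Clause (2.1) is easy to prototype. The following sketch is ours, not the paper's: worlds are bare labels, formulas are nested tuples, and a Python function stands in for the Stalnaker function s; all names are our own choices.

```python
# An illustrative sketch (ours, not the paper's) of clause (2.1): worlds are
# bare labels, formulas are nested tuples, and select() plays the role of
# the Stalnaker function s.

def make_evaluator(valuation, select):
    """valuation(atom, w) -> bool; select(antecedent, w) -> world."""
    def holds(formula, w):
        op = formula[0]
        if op == "atom":
            return valuation(formula[1], w)
        if op == "not":
            return not holds(formula[1], w)
        if op == ">":          # (2.1): A > B true at w iff B true at s(A, w)
            _, a, b = formula
            return holds(b, select(a, w))
        raise ValueError(op)
    return holds

# Two worlds; Q and R hold only at w2; s picks w2 as the closest Q-world to w1.
valuation = lambda atom, w: (atom, w) in {("Q", "w2"), ("R", "w2")}
select = lambda a, w: "w2"
holds = make_evaluator(valuation, select)
Q, R = ("atom", "Q"), ("atom", "R")
print(holds((">", Q, R), "w1"))    # True: R holds at the selected Q-world
```

The point of the sketch is only that, once s is fixed, clause (2.1) makes the conditional's truth value at w depend entirely on the consequent's value at the single selected world.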


should now think of sentences as including tensed ones, whose truth conditions may refer to past and future moments - will depend on a history h and a moment i along h. That is, formulas are evaluated at moment-history pairs (i,h). Further, in assigning a truth value to A>B, you do not merely want to consider the closest A-world. Rather you want to consider the closest moment-history pair at which A is true. Thus we want the Stalnaker function s to take as arguments a formula A and a moment-history pair (i,h) and to yield as value the closest pair (i′,h′) to (i,h) at which A is true. Conditionals will then be interpreted by (2.2).

(2.2) A > B is true at (i,h) if and only if B is true at (i′,h′), where (i′,h′) = s(A,(i,h)).

Rule (2.2) brings time into the picture, so that the elements we are selecting are complex. This means that we must look more closely at closeness. As a first step, we should assume that i′ is an "alternative present" to i:⁵ 'If I were in Rome . . .' amounts to 'If I were in Rome now . . .', 'If I had been born in Wales . . .' to 'If it were true now that I had been born in Wales . . .', and so forth. Second, we wish to make a claim: closeness among moment-history pairs conforms to the following condition, the condition of Past Predominance.

(2.3) In determining how close (i1,h1) is to (i2,h2) (where i1 and i2 are alternative presents to one another), past closeness predominates over future closeness; that is, the portions of h1 and h2 not after⁶ i1 and i2 predominate over the rest of h1 and h2.⁷

This informal principle is to be interpreted as strongly as possible: if h2 up to i2 is even a little closer to h1 up to i1 than is h3 up to i3, then (i2,h2) is closer to (i1,h1) than (i3,h3) is, even if h3 after i3 is much closer to h1 after i1 than is h2 after i2. Any gain with respect to the past counts more than even the largest gain with respect to the future. Our formal theory will incorporate the hypothesis that (2.3) is correct. Contrast this hypothesis of Past Predominance with the notion that neither the past nor the future predominates in evaluating conditionals. Call this, the most natural rival of (2.3), the Overall Similarity Theory. The two differ in important logical ways. For one thing, the Overall Similarity Theory allows considerations of time and tense to influence the truth conditions of A>B only insofar as they affect the truth conditions of the antecedent A and the consequent B. But on our theory they can influence the truth conditions directly. Thus, when A and B are eternal,⁸ A>B is eternal on the Overall Similarity Theory, but need not be on ours. For another thing, since they motivate different constraints on the ordering of histories and hence on the s-functions, the two theories yield different principles of interaction between tenses and conditionals. This point is most easily seen in connection with a language having metric tense operators Fn and Pn. (Fn, for instance, may be read "it will be the case n minutes hence that.") The Overall Similarity Theory will validate the following distribution principles.

(2.4) Fn(A>B) ⊃ (FnA>FnB)
(2.5) Pn(A>B) ⊃ (PnA>PnB)
(2.6) (FnA>FnB) ⊃ Fn(A>B)
(2.7) (PnA>PnB) ⊃ Pn(A>B)

Take (2.4), for instance. Suppose Fn(A>B) is true at (i1,h); this means that A>B is true at (i2,h), where i2 is the moment n minutes further along h than i1. We can safely assume that A is true at (i2′,h′) for some i2′ copresent with i2 and h′ containing i2′, for otherwise (2.4) is vacuously true. So we conclude that at the closest pair (i2*,h*), A and B both are true. On the Overall Similarity Theory, h* will be the history most resembling h overall, among those histories that meet the condition that A be true on them at the moment copresent with i2.

⁵ Yes, we also mean this principle to apply to cases like 'If it were 5:00. . .' and 'If it were Christmas. . .'. That is one reason why we chose the phrase 'an alternative present' rather than 'simultaneous'.

⁶ In saying 'not after' here, we wish to make clear that 'past closeness' in this context really means 'past-or-present closeness'.

⁷ A similar condition is discussed in Bennett [2]. See also Lewis [4], p. 76, Lewis [5], and Slote [8]. Note that our condition of Past Predominance makes no reference to "the moment" to which the antecedent "refers." If A>B is evaluated at i, then on our proposal it is closeness up to i that predominates.

⁸ To say A is eternal is to say that for all h, if A is true at (i,h) for any i then A is true at (i′,h) for all i′ along h.

RICHMOND H. THOMASON, ANIL GUPTA

But then h* is also the closest history to h overall among those that meet the condition that FnA be true on them at the moment copresent with i1. Thus FnA>FnB is true at (i1,h), since B is true at (i2*,h*). Similar arguments yield the validity of (2.5)-(2.7) on the Overall Similarity Theory. But none of these four formulas is valid on our proposal. Again, take (2.4). This may fail to be true at (i1,h) because the closest history to h, given what has happened before i1, need not be the same as the closest history to h, given what has happened before i2. Readers who are not content with this informal account can add metric tenses to the formal language we interpret below, and show that (2.4)-(2.7) are indeed falsifiable. Differences like this emerge also with ordinary, nonmetric tense operators. The following four formulas, for example, are valid on the Overall Similarity Theory but invalid on the Past Predominance Theory.

(2.8) G(A>B) ⊃ (FA>FB)
(2.9) H(A>B) ⊃ (PA>PB)
(2.10) (FA>G(A>B)) ⊃ F(A>B)
(2.11) (PA>H(A>B)) ⊃ P(A>B)

(G and H are understood respectively as "It will always be the case that" and "It has always been the case that.") These considerations show that the difference between Overall Similarity and Past Predominance is substantive; it affects the logic of tenses and conditionals.⁹ Why do we choose the latter logic? Firstly, because we are not persuaded that (2.4)-(2.11) are logical truths. Consider (2.7). Imagine that David and Max have been playing a simple coin-tossing and betting game. Max flips the coins. Unknown to David, Max has two coins, one with heads on each side and one with tails on each side. If David bets tails, Max flips the first coin; if he bets heads, Max flips the second. Two minutes ago David bet that the coin would come up heads on the next flip. Max flipped the coin and it came up tails. Now the following can be said truly (say, by someone who does not know which way David bet).

⁹ Besides examples like (2.8), that are valid given Overall Similarity but invalid given Past Predominance, there are others that are valid given Past Predominance but invalid given Overall Similarity. One instance of this is (3.16), discussed in the next section.


(2.12) If two minutes ago David bet tails then he wins now.

So, it seems that the formula

(2.13) P2Q > P2R

is true, where Q stands for the sentence 'David bet tails' and R for the sentence 'David wins now'. But the formula

(2.14) P2(Q>R)

is false. If David had bet tails two minutes ago, he would still have lost. So we have a situation in which (2.7) is false. Secondly, Past Predominance explains our intuitions about the truth conditions of English conditionals better than the Overall Similarity Theory. Consider the following variant of an example of Kit Fine's. (See Fine [3].)

(2.15) If button A is pushed within a minute, there will be a nuclear holocaust.

Imagine ways in which the button may be hooked up, or may fail to be hooked up, to a doomsday device. In some of these, (2.15) is true, and in others false. And among the former cases, we can well imagine ones in which the button is not pushed, and no holocaust occurs; say that one of these cases involves a certain moment i and history h. The Overall Similarity Theory has difficulties with such cases. For a history h′ in which the button is disconnected and no holocaust occurs when the button is pressed is much more similar to h, overall, than a history h″ in which there is a holocaust when the button is pressed. But if h′ is used to evaluate (2.15), the sentence is false. Moreover, (2.16) is true.

(2.16) If button A is pushed within a minute, it is already disconnected.

Here, and in other cases too numerous to mention,¹⁰ Overall Similarity would be hard put to explain our intuitions about truth. On the other hand, Past Predominance fits these intuitions. A hypothetical disconnecting of a button that is already connected counts for more than any hypothetical change regarding what will happen. Thirdly, Past Predominance makes possible an approach for explaining differences between indicative and subjunctive conditionals.
We wish to suggest (tentatively) that some examples that have been contrasted simply along the indicative-subjunctive dimension also involve scope differences with respect to tenses. For instance, consider Ernest Adams' lovely pair of examples. (Adams [1], p. 90.)

(2.17) If Oswald didn't shoot Kennedy then Kennedy is alive today.
(2.18) If Oswald hadn't shot Kennedy then Kennedy would be alive today.

Our proposal is that (2.17) should be formalized as

(2.19) PQ>R,

while (2.18) should be formalized as

(2.20) P(Q>R).

In (2.19) and (2.20), Q stands for 'Oswald doesn't shoot Kennedy' and R for the eternal sentence 'Kennedy is alive today', and we understand the past tense operator P to be relativized to an indexically specified interval of time. The difference in the truth conditions of the two sentences arises because (2.19) requires us to maximize closeness to the present moment while (2.20) requires us to maximize closeness only up to some past moment. (2.20), and hence (2.18), are true because at some past moment the corresponding indicative Q>R, i.e., 'If Oswald doesn't shoot Kennedy then Kennedy will be alive. . .', is true. More generally, we want to propose (tentatively) that a subjunctive asserts that the corresponding indicative sentence was true in some contextually determined interval of time.¹¹

¹⁰ See Slote [8] for a discussion of some of these.

We now put these ideas to work by sketching a formal interpretation of a propositional language L with the usual truth-functional connectives, the conditional >, the past and future tense operators P and F, and the "settledness" operator L. As far as tenses and settledness are concerned, we adopt the theory developed in Thomason [13]. Model structures consist of a nonempty set K of moments (to be thought of as world states, and not to be confused with clock times¹²), and two relations < and ≈ on K. The relation <


" A full discussion of the ideas presented in this paragraph requires more space than we have in the present paper. We intend to pursue these themes elsewhere. l2 Since we do not impose a metric on the branches of our structure, there need be nothing corresponding to clock times in these structures. T h e relation of copresence introduced below need not be considered to stand in any simple relation to clock times, either. In effect, we are ignoring in this paper the tech-


orders members of K into a treelike structure, the branches of which give various possible courses of events. We impose the following conditions on <: (1) it is transitive; and (2) if i1, i2 [...] LFQ and L(FQ>FR) true, while FQ>LFR is made false, as is illustrated by the following model structure.

(2.23) [tree diagram: copresent moments i1 and i2, with histories h1 and h2 through i1 and histories h3 and h4 through i2]

You are to assume here that the ordering relation is the strict partial order of the tree, that i1 ≈ i2, and that i3 ≈ i4 ≈ i5 ≈ i6. Let V(Q,i3) = V(Q,i4) = F. Then V^h1_i1(FQ) = V^h2_i1(FQ) = F, and so V^h1_i1(L¬FQ) = T. Let V(Q,i5) = V(Q,i6) = T and V(R,i5) = T but V(R,i6) = F. Now, V^h3_i2(FQ) = V^h4_i2(FQ) = T and therefore,

(2.24) V^h3_i2(LFQ) = T.

Also, since V^h4_i2(FR) = F, we have

(2.25) V^h3_i2(LFR) = F.

Finally, let s1(FQ,i1) = i2 and s2(FQ,i1,h1) = s2(FQ,i1,h2) = h3. This last bit is the crucial part of our model - the part that makes the inference invalid. Notice how h1 and h2 are collapsed


counterfactually into h3, which is only one among two histories for i2 on which FQ is true. Now it is easily seen that the premisses of our inference are true on this model, but the conclusion is false. We have already seen that the first premiss L¬FQ is true at (i1,h1). For the second premiss we have V^h1_i1(FQ>LFQ) = T, for V^h3_i2(LFQ) = T in view of (2.24), and s1(FQ,i1) = i2 and s2(FQ,i1,h1) = h3. To see that the third premiss is true observe that V^h3_i2(FR) = T. Since s2(FQ,i1,h1) = s2(FQ,i1,h2) = h3, we have V^h1_i1(FQ>FR) = V^h2_i1(FQ>FR) = T. Hence V^h1_i1(L(FQ>FR)) = T. Lastly, note that V^h1_i1(FQ>LFR) = F, in view of (2.25). This same model shows that the Edelberg inference is invalid. Grant that this is a bad thing. Can we patch the theory up so that the inference becomes valid? A direct way to do it would be to rule out the kind of situation that makes counterexamples to the inference possible; we could simply require the following.

(2.26) If s1(A,i) = i′ then for all h′ ∈ H_i′ such that V^h′_i′(A) = T there is an h ∈ H_i such that s2(A,i,h) = h′.

This ensures that s2 will not create gaps in such a way as to invalidate the inference. But besides being ad hoc, this condition seems to us to be ugly. What makes it so is the fact that it seems to rule out structures and assignments of truth values that don't at all seem logically impossible. Take the following case.

(2.27) [tree diagram: histories h1-h5]

Here, let V(P,i3) = V(P,i4) = F and V(P,i5) = V(P,i6) = V(P,i7) = T. This makes it combinatorially impossible to match each history through i2 in which P becomes true as an image of some history through i1. And yet nothing seems to prohibit either the structure (2.27) or the truth assignment we have placed on it. It is true that we could rule out such cases with no effect on validity, by building copies: for instance, we could insert a copy


of h2 after i1 to obtain enough scenarios. But this is ugly, and unless there is some independent motivation for this procedure, we find it implausible. We note lastly that the analogue of (2.26) is not so implausible, and does not rule out the situation portrayed in (2.27), if the Stalnaker function s2 yields, as in David Lewis' theory, a class of histories. The problem with this account is that it fails to validate the law of conditional excluded middle,¹⁵

(A>B) ∨ (A>¬B).

So the difficulty that needs to be solved is this: how to validate both conditional excluded middle and the Edelberg inference. We present our solution to this difficulty in the next section.
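The bookkeeping in the verification of the countermodel (2.23) can be checked mechanically. The sketch below is our own finite encoding, not the paper's: histories are tuples of moments, the two s-functions are lookup tables defined only for the antecedent FQ, and the premisses L¬FQ, FQ>LFQ and L(FQ>FR) are read off the verification given in the text.

```python
# Our own finite encoding of structure (2.23); the valuation and s-functions
# follow the assignments stated in the text.
H = {"h1": ("i1", "i3"), "h2": ("i1", "i4"),   # histories through i1
     "h3": ("i2", "i5"), "h4": ("i2", "i6")}   # histories through i2

def hist_through(i):
    return [h for h, ms in H.items() if i in ms]

def after(i, h):
    return H[h][H[h].index(i) + 1:]            # moments after i along h

V = {("Q", "i3"): False, ("Q", "i4"): False,
     ("Q", "i5"): True,  ("Q", "i6"): True,
     ("R", "i5"): True,  ("R", "i6"): False}

s1 = {"i1": "i2"}                              # s1(FQ, i1) = i2
s2 = {("i1", "h1"): "h3", ("i1", "h2"): "h3"}  # h1 and h2 both collapse to h3

def holds(formula, i, h):
    op = formula[0]
    if op == "atom":
        return V.get((formula[1], i), False)
    if op == "not":
        return not holds(formula[1], i, h)
    if op == "F":      # future tense along the history h
        return any(holds(formula[1], j, h) for j in after(i, h))
    if op == "L":      # settledness: true on all histories through i
        return all(holds(formula[1], i, g) for g in hist_through(i))
    if op == ">":      # the antecedent is FQ in every use below
        return holds(formula[2], s1[i], s2[(i, h)])
    raise ValueError(op)

FQ, FR = ("F", ("atom", "Q")), ("F", ("atom", "R"))
premisses = [("L", ("not", FQ)),               # L¬FQ
             (">", FQ, ("L", FQ)),             # FQ > LFQ
             ("L", (">", FQ, FR))]             # L(FQ > FR)
conclusion = (">", FQ, ("L", FR))              # FQ > LFR
print([holds(p, "i1", "h1") for p in premisses])   # [True, True, True]
print(holds(conclusion, "i1", "h1"))               # False
```

Running this confirms the verification: the three premisses come out true at (i1,h1) and the conclusion false, precisely because s2 sends both h1 and h2 to h3 while FR fails on h4.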

This section is going to be rather technical. Readers who are not interested in the technical details may want to skim it.¹⁶ Those who find the brief motivation we give for the technical apparatus unsatisfying may want to read Section 4 first. Central to the theory we will present is the idea that the concept of truth for L should be relativized to a future choice function rather than to a history.

(3.1) A future choice function is a function F from the set K of moments to ∪{H_i | i ∈ K} such that (1) F_i ∈ H_i and (2) if i′ ∈ F_i and i < i′ then F_i′ = F_i.

We let C be the set of all future choice functions (for a fixed model structure). A future choice function F gives at each moment i a unique history through that moment - the history that would be actual if i were actual. Condition (2) ensures that the histories F chooses at later moments are coherent with histories it chooses at earlier ones; without this condition, FA would not imply PFA. One way to understand choice functions is to see them as a natural generalization of histories, one that is required by the transition to a tense logic in which what is true at moments copresent with i can be relevant to what is true at i. A history tells you what will happen only for moments that lie along it; for the rest it leaves the future indeterminate. A choice function,

¹⁵ For a defense of this law see Stalnaker [12].

¹⁶ Advice to skimmers: study carefully the definition of a future choice function, and spend some time on clause (3.13).


on the other hand, tells you what will happen at all moments. A choice function is a richer history; a history is a partial choice function. Now in a tense logic which has only operators like P, F and L, the truth value of a formula A at a moment i depends only on histories that pass through i. You are not forced to consider moments copresent or incomparable with i. So here it is all right to think of the concept of truth as relativized to a history. But when you add conditionals to the language the truth value of a formula A at i, in general, depends also on what will happen at moments i′ copresent with i. Different choices as to what will happen at i′ affect what conditionals hold true at i. Thus in this context histories do not contain enough information; though they tell us what will happen, they do not tell us what would happen. We implement choice functions in our semantics by defining V^F_i(A). We can keep the definitions of model structures and models as they are given in the last section, except that in the present theory the second Stalnaker function s2 takes as arguments a formula A, a moment i and a future choice function F and yields as value a future choice function s2(A,i,F). The recursion clauses for truth functions, tenses and conditionals are adjusted to the new parameter, and except for the clause for settledness remain in essentials similar to those of classical conditional and tense logics. In the definition below of V^F_i(A) we restrict ourselves to choice functions meeting a certain requirement: F must be normal at i, in the following sense.

(3.2) F is normal at i iff for all j < i, F_j = F_i.

We say that a pair (i,F) is normal iff F is normal at i. The normalization of F to i is the choice function F′ such that F′_j = F_i for j ≤ i and F′_j = F_j for j ≰ i. If F is normal at i, F treats i as "actual" from the point of view of moments in the past of i. Now, the definition of satisfaction.

(3.3) V^F_i(Q) = V(Q,i).
(3.4) V^F_i(¬A) = T iff V^F_i(A) = F.
(3.5) V^F_i(A ⊃ B) = T iff either V^F_i(A) = F or V^F_i(B) = T.
(3.6) V^F_i(FA) = T iff for some i′ ∈ F_i such that i < i′, V^F_i′(A) = T.


(3.7) V^F_i(PA) = T iff for some i′ such that i′ < i, V^F_i′(A) = T.
(3.8) V^F_i(A>B) = T iff either s1(A,i) is undefined or V^{s2(A,i,F)}_{s1(A,i)}(B) = T.

The clause for L requires forethought. We should not simply say that a formula LA is true at i with respect to F normal at i iff A is true at i with respect to all choice functions F′ normal at i. This is because LA says that A holds no matter how things will be. Hence, we want F′ to differ from F only on moments that are after i or after some moment copresent with i. (There is also a formal reason for not accepting this account of the truth conditions of LA: on it the Edelberg inference is invalid.) Before we state the clause for L, we need to define some ancillary concepts.

(3.9) i is posterior to j iff there is a moment j′ copresent with j such that i > j′ or i = j′.
(3.10) i is antiposterior to j iff either i is not posterior to j or i ≈ j.¹⁷
(3.11) F agrees with G on moments posterior to i (symbolically, F ∈ Post(G,i)) iff all moments j posterior to i are such that F_j = G_j.
(3.12) F agrees with G on moments antiposterior to i (symbolically, F ∈ APost(G,i)) iff all moments j, k antiposterior to i are such that j ∈ F_k iff j ∈ G_k.

Now clause (3.13) gives the truth conditions for LA.

(3.13) V^F_i(LA) = T iff at all choice functions G normal at i such that G ∈ APost(F,i), V^G_i(A) = T.

We observe that the clauses for F and P, (3.6) and (3.7), are correct only for choice functions F normal at i; if these are generalized to all choice functions then the law A > PFA becomes falsifiable. This is our motivation for restricting the recursive definition above to choice functions that are normal at a given moment. Notice that the clauses for P and F never take us to

¹⁷ The terminology is awkward, and the concepts will probably seem more devious than they should be. The reason for this is that we feel it is important to state a theory without assuming that for all moments i and histories h, there is an i′ such that i ≈ i′ and i′ ∈ h. Thus there may be moments that are neither anterior nor posterior to i. Making the assumption in question would simplify things, but would lose the generality that we feel is appropriate for a tense logic. In this connection, see Note 12, above.


non-normal moment-choice function pairs. The same obviously holds for all other connectives, except the conditional. Here the constraints on the two Stalnaker functions ensure that s2(A,i,F) is always normal at s1(A,i). These constraints are as follows.

(i) If there is an i′ such that i ≈ i′ and a G ∈ C such that G is normal at i′ and V^G_i′(A) = T, then both s1(A,i) and s2(A,i,F) are defined provided F is normal at i; and s2(A,i,F) is normal at s1(A,i). Otherwise both s1(A,i) and s2(A,i,F) are undefined.
(ii) V^{s2(A,i,F)}_{s1(A,i)}(A) = T.
(iii) If V^{s2(A,i,F)}_{s1(A,i)}(B) = T and V^{s2(B,i,F)}_{s1(B,i)}(A) = T, then s2(A,i,F) = s2(B,i,F) and s1(A,i) = s1(B,i).
(iv) If V^{F′}_{s1(A,i)}(A) = T where F′ is the result of normalizing F to s1(A,i), then s2(A,i,F) = F′.
(v) If there is an F ∈ C such that F is normal at i and V^F_i(A) = T, then s1(A,i) = i.
(vi) If there are choice functions F, G such that F is normal at s1(A,i) and V^F_{s1(A,i)}(B) = T, and G is normal at s1(B,i) and V^G_{s1(B,i)}(A) = T, then s1(A,i) = s1(B,i).
(vii) If F ∈ APost(G,i), and s2(A,i,F) and s2(A,i,G) are defined, then s2(A,i,F) ∈ APost(s2(A,i,G),i).
(viii) If G is a choice function normal at s1(A,i) such that (a) G ∈ Post(F,i) where F is normal at i, (b) G ∈ APost(s2(A,i,F′),i) for some choice function F′ ∈ APost(F,i), and (c) V^G_{s1(A,i)}(A) = T, then s2(A,i,F) = G.
(ix) If s1(A,i) is defined then s1(A,i) ≈ i.

Conditions (i)-(iii) are direct analogues of ones from Stalnaker's theory. Condition (iv) requires s2 to distort the choice function as little as possible. Conditions (v)-(vii) embody the Principle of Past Predominance. Condition (v) says that if you can preserve all of the past then you should; Condition (vi) says that you must not choose a more dissimilar past than you have to;¹⁸ and Condition (vii) says that even at counterfactual copresent moments the past should be preserved, if possible. All these conditions have an effect on validity. Thus (iv) and (v) ensure that the inferences

(3.14) From A and B to infer A>B
(3.15) From MA and L(A ⊃ B) to infer A>B

are valid, where MA =df ¬L¬A. Conditions (vi) and (vii) ensure that the inferences (3.16) and (3.17), respectively, are valid.

(3.16) From A>MB, B>MA, A>C, and A>(C>LC) to infer B>C.
(3.17) From A>L(A>B) to infer L(A>B).

Condition (viii) says that the future histories at all copresent moments must be preserved by s2 if doing so is consistent with Past Predominance. This condition helps to ensure that the Edelberg inferences are valid. (See the Appendix for proof.) Finally Condition (ix) ensures that a conditional 'If . . . then . . .' amounts to 'If . . . now then . . .'.

¹⁸ Conditions (i)-(vi), if they were exchanged for conditions on Stalnaker

=,,
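The role of minimal-change conditions such as (iv) and (v) in validating (3.14) can be illustrated in a deliberately simplified setting (ours, not the authors'): a Stalnaker-style selection over possible worlds stands in for the paper's moment/choice-function machinery, and "distance" is crude Hamming distance over atoms. The point carried over is only this: if the antecedent already holds at the actual world, a minimal-change selection returns that world itself, so when the consequent also holds there, A > B is true.

```python
from itertools import product

# Four toy worlds: every assignment of truth values to atoms "a" and "b".
atoms = ("a", "b")
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=2)]

def distance(w, v):
    # Hamming distance: how many atoms w and v disagree on (our crude
    # stand-in for a similarity ordering).
    return sum(w[x] != v[x] for x in atoms)

def select(antecedent, w):
    # Minimal-change selection: a nearest antecedent-world. If w itself
    # satisfies the antecedent, it wins outright (distance 0) -- the flat
    # analogue of Condition (v): preserve everything you can.
    return min((v for v in worlds if antecedent(v)),
               key=lambda v: distance(w, v))

def cond(antecedent, consequent, w):
    # A > B at w: the consequent holds at the selected antecedent-world.
    return consequent(select(antecedent, w))

A = lambda v: v["a"]
B = lambda v: v["b"]

# (3.14): from A and B, infer A > B -- holds at every world where A and B do.
assert all(cond(A, B, w) for w in worlds if A(w) and B(w))
```

This is only a sketch: the paper's s1 and s2 select a moment and a choice function, not a world, and the Principle of Past Predominance has no analogue in this flat setting.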

We now want to discuss some problems that lead to refinements of the theory we just presented, and these in turn provide some fresh perspectives on matters of philosophical interest. We begin with an example of Stalnaker's.19 Suppose two coins are tossed successively, one in Chicago and the other in Bombay.
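The branching structure that such coin-toss examples exploit can be sketched in a toy implementation (ours; all names are invented, and plain histories stand in for the paper's more general choice functions). Moments form a tree; a history is a maximal branch; a sentence about a future toss has a truth value only relative to a moment/history pair, and supervaluating over all histories through a moment yields the truth-value gap characteristic of future contingents.

```python
# Toy branching-time frame for one coin tossed twice: moments form a tree,
# a history is a maximal root-to-leaf branch, and H_i is the set of
# histories passing through moment i. (All names here are our inventions.)
children = {
    "i0": ["H", "T"],          # first toss: heads or tails
    "H": ["HH", "HT"],         # second toss, after heads
    "T": ["TH", "TT"],         # second toss, after tails
    "HH": [], "HT": [], "TH": [], "TT": [],
}

def histories(root="i0"):
    # All maximal branches through the tree, as tuples of moments.
    if not children[root]:
        return [(root,)]
    return [(root,) + rest for c in children[root] for rest in histories(c)]

def H(i):
    # H_i: the histories that pass through moment i.
    return [h for h in histories() if i in h]

def true_at(atom, h):
    # Truth relative to a history: "heads1"/"heads2" say the coin comes up
    # heads on the first/second toss of h.
    if atom == "heads1":
        return h[1] == "H"
    if atom == "heads2":
        return h[2] in ("HH", "TH")
    raise ValueError(atom)

def supervaluate(atom, i):
    # T if true on every history through i, F if false on every one,
    # otherwise a truth-value gap (None): the indeterminacy of the future.
    vals = {true_at(atom, h) for h in H(i)}
    if vals == {True}:
        return "T"
    if vals == {False}:
        return "F"
    return None

print(supervaluate("heads2", "i0"))  # None: a future contingent at i0
print(supervaluate("heads2", "HH"))  # T: settled once HH is reached
```

The paper's choice functions generalize the histories used here, which is why its gaps can arise along a second parameter as well.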

functions taking propositions (rather than formulas) as arguments, would amount to this. There is, for each normal pair (i, F), a well-ordering <(i,F) of normal pairs consisting of moments copresent with i and choice functions, such that (a) (i, F) is the least pair under this ordering and (i1, F1) . . .

. . . B. (He also contrasted material implication with what he called formal implication, but this contrast does not concern us here.) Stalnaker, like Russell and unlike Lewis, examines the truth value of A > B in some one situation, but does not assume this to be a fixed situation, independent of A. This is what we mean by calling Stalnaker's theory variably material. C. I. Lewis takes A > B to be strict, in the sense that A > B holds when A ⊃ B is true in a multiplicity of situations, and he assumes this multiplicity is fixed independently of A. David Lewis relaxes this assumption, while retaining the multiplicity of situations. Note that strict theories result in a conditional that expresses some necessary connection between the antecedent and the consequent. Material theories deny that there is any such necessary connection expressed by a conditional.
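The taxonomy in this note (material, variably material, strict) can be made concrete in a toy possible-worlds model (our construction; the distance measure and tie-breaking order are arbitrary illustrative choices). A variably material reading selects a single nearest antecedent-world, so conditional excluded middle, (A > B) ∨ (A > ~B), holds; a strict reading quantifies over all antecedent-worlds, and conditional excluded middle can then fail.

```python
from itertools import product

atoms = ("p", "q")
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=2)]

def distance(w, v):
    # How many atoms w and v disagree on (an arbitrary similarity measure).
    return sum(w[a] != v[a] for a in atoms)

def select(antecedent, w):
    # Stalnaker selection: the unique nearest antecedent-world, ties broken
    # by enumeration order (an arbitrary choice, for illustration only).
    candidates = [v for v in worlds if antecedent(v)]
    return min(candidates, key=lambda v: (distance(w, v), worlds.index(v)))

def variably_material(antecedent, consequent, w):
    # A > B at w: the consequent holds at the one selected antecedent-world.
    return consequent(select(antecedent, w))

def strict(antecedent, consequent, w):
    # A > B at w: the consequent holds at every antecedent-world.
    return all(consequent(v) for v in worlds if antecedent(v))

A = lambda v: v["p"]
B = lambda v: v["q"]
notB = lambda v: not v["q"]

# Conditional excluded middle holds on the variably material reading...
assert all(variably_material(A, B, w) or variably_material(A, notB, w)
           for w in worlds)
# ...but fails everywhere on the strict reading: some p-worlds are q-worlds
# and some are not, so neither A > B nor A > ~B is strictly true.
assert all(not strict(A, B, w) and not strict(A, notB, w) for w in worlds)
```

C. I. Lewis's strict conditional fixes the multiplicity of worlds in advance; David Lewis's variably strict theory, not sketched here, lets the relevant worlds vary with the antecedent.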


analysis of conditionals, the second Stalnaker function s2 yields at each A, i, h a class of histories s2(A, i, h). Then A > B is true at (i, h) if and only if A ⊃ B holds at (s1(A,i), h') for all h' ∈ s2(A, i, h). We can make the Edelberg inference valid if we require that for all histories h' ∈ H_{s1(A,i)} such that V^{h'}_{s1(A,i)}(A) = T there is a history h ∈ H_i such that h' ∈ s2(A, i, h). On the resulting theory we can keep all the desired model structures, and also have the Edelberg inference valid, but we give up conditional excluded middle.

A third option is to give up the Edelberg inference, but explain its apparent validity in some more or less devious way. One such way is via supervaluations. It is clear that if we accept conditional excluded middle we have to supplement our theory with supervaluations on Stalnaker functions to account for cases such as Quine's Bizet-Verdi example. (See Stalnaker [12] for a discussion of this.) Thus instead of a single Stalnaker function s (where s is a pair (s1, s2)) we now have a set S of such functions compatible with the "facts" about conditionals. The truth is then what is common to all these functions: where V^{(s,h)}_i(A) is the truth value of A relative to s and h, on the theory of Section 2, we let V^h_i(A) = T if V^{(s,h)}_i(A) = T for all s ∈ S; and V^h_i(A) = F if V^{(s,h)}_i(A) = F for all s ∈ S; otherwise V^h_i(A) is undefined.

Now, we can easily make the Edelberg inferences valid, in the sense that if the premisses are true for all members of S then the conclusion is also true for all members of S. The inferences will be valid in this sense if we require that whenever (s1, s2) ∈ S, s1(A,i) = i', and V^{h'}_{i'}(A) = T, where i' ∈ h', then there is an (s1, s'2) ∈ S, where s'2(A, i, h) = h'. However, L(A > B) > (A > L(A > B)) will be invalid; it can fail to have a truth value.

To the extent that we can discover intuitions about whether the Edelberg inference should be merely truth preserving or should provide a valid conditional, these support the latter alternative. Also, since even in those cases where you have to resort to this distinction (e.g., in explaining the validity of Convention T) it is difficult to motivate the distinction as convincingly as one would wish, it is perhaps a good strategy to


avoid using it when it is possible to do so. This secondary consideration lends support to the theory of Section 3.

Note that truth-value gaps can arise in the third theory in two ways, for there are two parameters along which supervaluations can be introduced: 'F' and 's'. Since choice functions are generalizations of histories, the former parameter is what yields the indeterminacy of future contingencies. Thus, a sentence like

(4.2) This coin will come up heads on the second toss if it comes up heads on the first toss

may lack a truth value at a moment i before both tosses, because there is a choice function F1 (assigning i a history on which the coin comes up heads on the first toss and tails on the second) on which (4.2) is false, and there is another choice function F2 (assigning i a history on which the coin comes up heads on both tosses) on which (4.2) is true. And an unconditional sentence like

(4.3) This coin will come up heads on the second toss

will lack a truth value at i for reasons that are exactly the same: F1, for instance, makes (4.3) false and F2 makes it true. Now consider a moment j later than i, at which the coin comes up tails for the second time, and compare the "past tenses" of (4.2) and (4.3) at j.

(4.4) It was the case that if the coin were going to come up heads on the first toss, it would come up heads on the second toss.
(4.5) The coin was going to come up heads on the second toss.

Here we see a difference between conditionals and nonconditionals; conditionals can be unfulfilled, and this may cause their