Nelson Goodman
Philosophy Now
Series Editor: John Shand

This is a fresh and vital series of new introductions to today’s most read, discussed and important philosophers. Combining rigorous analysis with authoritative exposition, each book gives clear, comprehensive and enthralling access to the ideas of those philosophers who have made a truly fundamental and original contribution to the subject. Together the volumes comprise a remarkable gallery of the thinkers who have been at the forefront of philosophical ideas.

Published
Donald Davidson Marc Joseph
Hilary Putnam Maximilian de Gaynesford
Michael Dummett Bernhard Weiss
W. V. Quine Alex Orenstein
Nelson Goodman Daniel Cohnitz & Marcus Rossberg
Richard Rorty Alan Malachowski
Saul Kripke G. W. Fitch
John Searle Nick Fotion
Thomas Kuhn Alexander Bird
Wilfrid Sellars Willem A. deVries
David Lewis Daniel Nolan
Charles Taylor Ruth Abbey
John McDowell Tim Thornton
Peter Winch Colin Lyas
Robert Nozick A. R. Lacey

Forthcoming
David Armstrong Stephen Mumford
Peter Strawson Clifford Brown
Thomas Nagel Alan Thomas
Bernard Williams Mark Jenkins
John Rawls Catherine Audard
Nelson Goodman Daniel Cohnitz & Marcus Rossberg
In memoriam Lothar Ridder, teacher and friend
© Daniel Cohnitz & Marcus Rossberg, 2006

This book is copyright under the Berne Convention. No reproduction without permission. All rights reserved.

First published in 2006 by Acumen

Acumen Publishing Limited
15a Lewins Yard, East Street
Chesham, Bucks HP5 1HQ
www.acumenpublishing.co.uk

ISBN: 1-84465-036-7 (hardcover)
ISBN: 1-84465-037-5 (paperback)
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library. Designed and typeset in Century Schoolbook by Kate Williams, Swansea. Printed and bound by Cromwell Press, Trowbridge.
Contents
Acknowledgements
Abbreviations
1 The worldmaker’s universe
2 If this were an emerald it would be grue: problems and riddles of induction
3 The big picture
4 Particulars and parts
5 From Vienna Station to Boston Terminus
6 Follow the sign
7 Diagnosing art
8 Starmaking
9 Never mind mind, essence is not essential, and matter does not matter
List of symbols
Glossary of technical terms
Further reading
Notes
Bibliography
Index
Acknowledgements
We should like to thank the following colleagues for helpful comments on earlier versions of parts of this book, and various kinds of support ranging from technical help to philosophical discussions that lasted many hours: Stefan Bagusche, Elizabeth Barnes, Aislinn Batstone, Dieter Birnbacher, Mike Bishop, Manuel Bremer, Marc Breuer, Ross Cameron, Curtis Carter, Roy Cook, Lindsay Duffield, Philip Ebert, Catherine Elgin, Gerhard Ernst, Steven Gerrard, Patrick Greenough, Bob Hale, Sune Holm, Carrie Jenkins, Lisa Jones, Andrew Jorgensen, Christoph Kann, Tanja Kosubek, Hannes Leitgeb, Darren McDonald, Brian McElwee, Aidan McGlynn, Thomas Mormann, Daniel Nolan, John Norton, Nikolaj Pedersen, Oliver Petersen, Simon Robertson, Raffaele Rodogno, Oliver Scholz, John Shand, Peter Simons, Barry Smith, Axel Spree, Jakob Steinbrenner, Chiara Tabet, Myung Hee Theuer, R. K. B. Vaas, Michael Weh, Robert Williams, Crispin Wright, the members of the seminar that was held on an earlier version of this book at the University of St Andrews, and two anonymous referees.

We dedicate this book to the memory of Lothar Ridder, our friend and teacher, who died 31 January 2004, aged 51, after a long battle with cancer. Lothar had been our teacher since our undergraduate studies. He received his PhD for a thesis on the ontology of logical atomism (1989) and we first met him when he was writing his “Habilitation” on mereology. Lothar discussed Goodman’s and Leonard’s work in detail in his Mereologie (2003). In Chapter 4 of our book, which deals with Goodman’s mereology and nominalism, you will find some of his ideas and results. Lothar impressed everyone who was privileged to know him with his sharp analytic mind, his meticulous precision in philosophy, his gentleness and immense modesty. He embodied the qualities to which we aspire. We miss him greatly.
Abbreviations
BA Basic Abilities Required for Understanding and Creation in the Arts: Final Report (with David Perkins, Howard Gardner, and the assistance of Jeanne Bamberger et al.) (Cambridge, MA: Harvard University, Graduate School of Education, 1972).
FFF Fact, Fiction, and Forecast, 4th edn (Cambridge, MA: Harvard University Press, [1954] 1983).
LA Languages of Art: An Approach to a Theory of Symbols (Brighton: Harvester, [1968] 1981).
MM Of Mind and Other Matters (Cambridge, MA: Harvard University Press, 1984).
PP Problems and Projects (Indianapolis, IN: Bobbs-Merrill, 1972).
RP Reconceptions in Philosophy and other Arts and Sciences (with Catherine Z. Elgin) (London: Routledge, 1990).
SA The Structure of Appearance, 3rd edn (Boston, MA: Reidel, [1951] 1977).
SQ A Study of Qualities (New York: Garland, [1941] 1990).
WW Ways of Worldmaking (Indianapolis, IN: Hackett, [1978] 1985).
Chapter 1
The worldmaker’s universe
The whole and its parts

Nelson Goodman was a philosopher of enormous breadth. He made important and highly original contributions to epistemology, logic and the philosophy of science, but is also a famous figure in the analytic philosophy of art and in metaphysics. Unlike many other philosophers of the twentieth century, Goodman was not a puzzle-solver who wrote in an almost inaccessible technical style on unrelated highly specialized problems; on the contrary, he worked on the most fundamental problems, contributing original and provocative insights, and his writing was always brilliant in both precision and style. As his students and/or co-authors Catherine Elgin, Israel Scheffler and Robert Schwartz have remarked in retrospect, he:

delighted in making points in the most provocative way possible. There is some controversy over what is the most outrageous thing he ever said. Among the more obvious contenders are the repudiation of sets, the grue paradox, the contention that one wrong note disqualifies a performance as an instance of a musical work, and the claim that there are many worlds if any. In such remarks, Goodman threw down the gauntlet. If you disagree with one (or more) of the above, you need to show what is wrong with the argument that leads to it. All too frequently the initial irritation is compounded as we begin to appreciate the difficulty of evading his conclusions without begging the question, or committing ourselves to claims that seem even more dubious than his. (Elgin et al. 1999: 207)
Goodman clearly ranks among the most important philosophers of the twentieth century. Full appreciation of the value of his work, however, has barely begun. One might go as far as saying that of the most influential philosophers of the twentieth century, Goodman is today the most neglected. This book, for example, is, as far as we know, the first attempt to introduce Goodman’s philosophy as the coherent, interconnected system of ideas it is. Thus one of our major aims here is to show by way of example that studying Goodman systematically and in some detail is worthwhile. What we also hope to achieve is to put some of Goodman’s better-known contributions into the right context. Goodman’s Languages of Art (1968) brought him to the attention of postmodern thinkers, who tend to interpret him (with approval) as a culture relativist with leanings towards irrationalism.1 This is a greatly distorted picture of Goodman’s philosophy, based on a misunderstanding of his constructionalism. Unlike philosophers such as Paul Feyerabend or Richard Rorty, who worked in a certain analytic tradition for part of their lives and then converted to some sort of non-analytic philosophy in later years, Goodman worked within one and the same tradition of analytic philosophy throughout. Of course, Goodman’s views developed over the years (it would be sad if this were not so), but we want to demonstrate that his early work in logic and epistemology and his later work on art and constructionalism actually form a coherent whole, methodologically as well as in their content. This enterprise might be doomed from the start. In what we believe to be Goodman’s last publication he writes:

There is no such thing as the philosophy of Nelson Goodman any more than there is such a thing as the finger of Nelson Goodman. There are many philosophies, but on the other hand there is no nice neat order of different complete philosophies: there are lots of ideas, conjectures about various fields. 
A few months ago, at the Technische Universität in Berlin, … I gave an impromptu talk called “Untangling Nelson Goodman” and I said, ‘Well, here’s all this mess and can I do anything about untangling these things?’. The answer was that I couldn’t do very much. I mean, for instance, I had dealt with certain topics many different times and in many different contexts; but it is not always clear how these relate to one another. And so perhaps I should try to make it clear. All I could do is suggest some of the different attacks that I made on some of the problems at different times and at least note that these were not all part of a well organized
scheme. They were all different attempts to deal with different aspects of the problem. And then it occurred to me that untangling this mess might entail a good deal of loss, the kind of loss you get if you try to untangle a plate of spaghetti: you would end up with some rather uninspiring strings of dough which would not have anything of the central quality of the whole meal. (Goodman 1997: 16–17)

Despite Goodman’s pessimism, we hope to have achieved an introduction to Goodman’s philosophies that points out the coherence and unity that Goodman denies in the quote above. We hope to have “untangled” Goodman a bit, or to have at least provided a starting-point for further studies. The most systematic presentation of Goodman’s work would start with his PhD thesis A Study of Qualities (SQ) and develop everything else from there. A Study of Qualities, however, is one of the most complicated books he ever wrote, which is why we have decided to sacrifice some systematicity on the altar of didactics. Instead, after some remarks about Goodman’s life, personality and historical background in this chapter, we shall introduce Goodman’s so-called “new riddle of induction” – one of Goodman’s major contributions to the philosophy of science – in Chapter 2. Presumably, readers are already familiar with the “new riddle” or have already heard about ‘grue’ and kindred predicates. We hope that learning by way of example how Goodman approaches a philosophical problem is a useful start for understanding his philosophy. Having introduced Goodman’s way of doing philosophy in a specific area of epistemology, we turn to the broader perspectives. In Chapter 3 we reconstruct Goodman’s view on the purpose and aim of philosophy as a professional activity. We will see that Goodman’s philosophy aims at elucidation and understanding in a very specific sense. Philosophically problematic notions are clarified by way of rational reconstructions. 
In the ideal language tradition, the philosophical tradition Goodman belongs to, this gets done by explicating problematic concepts. What explication is, and how it differs from other forms of definition, is explained in the second half of the chapter. In Chapter 4, with a few broad brushstrokes, we introduce the formal basis for Goodman’s nominalism. In 1940 Goodman and Henry S. Leonard published a mereological system called the calculus of individuals, departing from work that Leonard had already done for his PhD (1930). Goodman and Leonard demonstrate that this apparatus is able to make precise, and hence elucidate, important
philosophical concepts with the aid of relations such as ‘overlapping’, ‘part’ or ‘mereological sum’, which are definable in the calculus. This powerful formal tool was put to use in Goodman’s chief work, The Structure of Appearance (1951), a considerably revised version of A Study of Qualities. This book is a thorough analysis and further elaboration of the work of Rudolf Carnap, his most famous predecessor in the ideal-language tradition. In Chapter 5 we characterize Carnap’s project in Der logische Aufbau der Welt (the Aufbau) ([1928] 1961) and also review Carnap’s early conventionalism (of the early 1930s), which arose from Carnap’s collaboration and discussions with Otto Neurath. Goodman’s work is usually classified as being an anti-foundationalist version of Carnap’s project, but Carnap himself endorsed most of Goodman’s later results shortly after the Aufbau (in the period including The Logical Syntax of Language ([1934] 1937), with its principle of tolerance). The main characteristics of this period of Carnap’s are also characteristics of Goodman’s Structure of Appearance and his later work. Since much in The Structure of Appearance is in direct confrontation with Carnap’s Aufbau, it is best, and perhaps only, understood on the basis of a firm knowledge of the key issues of the Aufbau. We therefore take the space to dedicate a good part of this chapter to the pre-Goodman philosophy, from which his own work arises as a criticism and further development. The second formal basis for Goodman’s philosophy, his theory of symbols, will be introduced in Chapter 6. All of our cognitive access to the world, Goodman insists, is governed by the use of symbols of all sorts. In Languages of Art he develops a taxonomy of symbol systems that contribute in different ways to our different cognitive enterprises. All kinds of symbol systems and all ways of reference nevertheless have one thing in common: they facilitate our understanding. 
Chapter 7 will finally take us to Goodman’s philosophy of art. Taking a work of art as a complex symbol, Goodman has a good part of what he needs for his “epistemic turn” in aesthetics (Elgin 1997a: ch. 3). Goodman sees art as an essentially cognitive endeavour, and so the most important thing is to recognize a work of art as promoting our understanding (or making) of a world version. Along with this comes a relocation of what was traditionally taken to be essential to art. After having placed Goodman in a certain branch of analytic philosophy and having already noted the constructionist nature of his solution for the new riddle of induction, in Chapter 8 we shall explain Goodman’s constructionalism and relativism. According to Goodman, worlds and the objects in them are made rather than found. 
The enterprise of cognition is to construct symbol systems, so-called versions. True versions constitute a world. So in choosing a symbol system that, for example, categorizes the celestial bodies, we, in a sense, make the stars. In this chapter we shall present the worldmaking that Goodman describes, for example, in Ways of Worldmaking (1978), and in doing so demonstrate why for Goodman art, philosophy and science are all “ways of worldmaking”. These ways are on a par, but construct different worlds, each appropriate for different epistemic purposes. To aid understanding of the unity of Goodman’s thought, most of the book is conceived as a rather affirmative reconstruction of Goodman’s views. Chapter 9, however, will deal mainly with critical remarks. One often forgotten group of Goodman’s publications is the highly technical papers he wrote, most of which have been published in the Journal of Symbolic Logic. Ullian (1999) gives an overview of them; this book is not the place to introduce them. Another area of Goodman’s work that we cannot adequately treat here is his non-philosophical engagement with the arts and art education, although we shall say something about that in the next section.
Nelson Goodman (1906–1998)2

Henry Nelson Goodman was born on 7 August 1906 in Somerville, Massachusetts, USA, the son of Sarah Elizabeth (Woodbury) Goodman and Henry L. Goodman. In the 1920s he enrolled at Harvard University. As a student, Goodman was interested in philosophy, as well as in mathematics and creative writing; in fact, at the very beginning of his studies he wanted to become a novelist. However, at the end of the first semester, after a seminar on the pre-Socratics, he knew that philosophy was right for him (Scholz 2005). At that time, Harvard was a good place to study philosophy. Alfred North Whitehead, Clarence Irving Lewis and Henry Maurice Sheffer were prominent members of the faculty, as were William Ernst Hocking, Ralph Barton Perry, and James Haughton Woods. Looking back on his early years of study, Goodman wrote:

Our introduction to philosophy included the historic running debate over idealism versus realism between W. E. Hocking and Ralph Barton Perry, and over monism and pluralism in logic between C. I. Lewis and Harry Sheffer. We were absorbed by
Lewis’ courses in the theory of knowledge, based on the just published Mind and the World Order; and we were first exasperated and then enthralled by the nearly incoherent but inspired and profound lectures of James Haughton Woods on Plato. We sharpened our philosophical teeth in almost daily discussions of such matters as Berkeley’s idealism, Plato’s theory of ideas, Whitehead’s extensive abstraction, and problems in logic. (1969b: ix) Goodman graduated from Harvard with a BS (Phi Beta Kappa,3 magna cum laude) in 1928, his PhD, however, which he received in 1941 for A Study of Qualities, took him more than 12 years to complete. There are several possible reasons for the lateness of his PhD. Maybe the most important was that Goodman was Jewish, and therefore not eligible for a graduate fellowship at Harvard (Schwartz 1999, Elgin 2000a, Scholz 2005). He had to work outside the university to finance his studies. This, however, leads to the more often recognized reason for the lateness of his PhD: Goodman’s interest and activity in the art world. In 1937, the German philosopher Rudolf Carnap had just accepted a permanent post at the University of Chicago. Carnap was offered the leadership of a research group with several assistantships. He wrote to his friend Willard Van Orman Quine at Harvard for advice on whom to hire. Quine responded with a list of recommendations, naming J. C. C. McKinsey, Leonard and Goodman: My next recommendation would be H. Nelson Goodman … Candidate for PhD. Age about 30. Very competent and industrious. An authority on the Logischer Aufbau, and conversant with your later work. Well grounded in philosophy and logic. As you know, he is in business – which accounts for the lateness of his PhD. I am not sure that he would not be interested in the assistantship. Teaching experience and pul’ns [publications], none. (Creath 1990: 233) ‘In business’ here refers to Goodman’s activities as the director of an art gallery. 
As Oliver Scholz (2005) reports, Goodman started extensive studies in practical and theoretical aspects of art. In the seminars and colloquia of the collector and connoisseur Paul Joseph Sachs (1878–1965), Goodman learnt about the basics of art history as well as about practical matters of art presentation and art appreciation. In particular he learnt to see the difference between the cursory perception of artworks in a museum or a catalogue and the intensive
appreciation of an original. Sachs’s seminars and lectures created many an art collector and Goodman was one of them. It was a passion that Goodman kept throughout his life: Goodman’s professional role as a gallery director and his private art collecting were sources of great satisfaction. His life-long pursuit of collecting art began in his student days. He was well known in the art world for his discriminating aesthetic perception and equally for his astuteness in negotiating the price of an object. A visit to his home in Weston, Massachusetts would reveal a collector with enthusiasm and in-depth knowledge over a wide range of art. Virtually every corner and closet held yet another group of art treasures. It was not unusual to see hanging on opposite walls an important Flemish Old Master by Jan Van Kessel and an exquisite naïve work by an unknown twentieth century Italian immigrant farmer, Peter Petronzio. His collections included seventeenth century Old Master paintings and drawings, modern art from Picasso to Demuth, ancient Asian sculptures, and Native American arts of the Northwest Coast and the Southwest, even Pre-Columbian. (Carter 2000: 252–3) From 1928 until 1941 Goodman was Director of the Walker-Goodman Art Gallery at Copley Square, Boston. Here he met his later wife Katharine (“Kay”) Sturgis, who came to the gallery in order to exhibit her watercolours and ink drawings.4 From this time on, Goodman would constantly be travelling between the art world and the world of academic philosophy, as at home in museums, at art exhibitions and in auction houses as at congresses and conferences on logic and epistemology. Working as the director of an art gallery at least provided Goodman with a bigger car than teaching at Harvard did for contemporaries such as Quine. When Carnap visited the US in 1935, Quine organized a trip5 to the annual philosophy convention at Baltimore. To Carnap he wrote: “Everything is in order for our trip to Baltimore. 
A man by the name of Goodman, who is busy with your [Aufbau], will take us in his large car” (Creath 1990: 194). Although he was successfully running an art gallery, Goodman still found time for philosophy. During the early 1930s Goodman belonged, with David Prall, Leonard, Charles Stevenson and John Cooley, to the privileged few who attended Quine’s early lectures on Carnap.6 A Study of Qualities, now regarded by some (see Elgin 2000a, Scholz 2005) as the best and most ambitious philosophical PhD
thesis ever submitted at Harvard, was presumably reworked considerably after 1930. (We shall come to that in some detail in Chapter 5.) However, the idea of a calculus of individuals, one of the main technical innovations in Goodman’s thesis, first appeared in Leonard’s doctoral thesis Singular Terms (1930). Leonard’s thesis probably gives a good impression of what Goodman was up to in 1930. It seems easily conceivable that Goodman only drew the connection to Carnap’s work later (we know that even Quine did not know much about the Aufbau until 1932 or so, when he heard about it from Cooley; see Quine 1985: 86). It might also have been surprising for Goodman to learn from Quine in 1935 that Leśniewski had already provided a full theory of parthood relations (see Quine 1985: 122). All of these factors might have forced Goodman to enlarge his project into new directions during the early 1930s to render his PhD thesis an original contribution, since the basic ideas for the calculus of individuals were already in his hands in 1930 (SA: XVII) and a “finished version” of his dissertation was ready by 1933 (SQ: i).7 As Scholz (2005) points out, Goodman’s perfectionism might also have caused some delay. The subsequent delay in publication, however, was mainly due to the Second World War. From 1942 to 1945 Goodman performed military service in the US Army, conducting psychological tests (Scholz 2005, Carter 2000). It then took until 1951 for The Structure of Appearance to appear in print (A Study of Qualities was not published until 1990). After his military service, Goodman taught briefly as “instructor in philosophy” at Tufts College, and was then hired as associate professor (1946–51) and later as full professor (1951–64) at the University of Pennsylvania. He served briefly as Harry Austryn Wolfson Professor of Philosophy at Brandeis University (1964–67), finally returning to Harvard in 1968, where he taught philosophy until 1977. 
In 1967 Goodman founded Project Zero at Harvard: a centre to study and improve education in the arts. Goodman believed that arts learning should be studied as a serious cognitive activity, but that nothing, or “zero”, had been firmly established in the field, hence the name of the project. At Harvard, Goodman was also involved in establishing the Dance Center and the Institute for Arts Administration. Goodman chaired the Arts Orientation Series (1969–71) and was advisor of the Arts for Summer School for seven years (1971–77). In these activities Goodman could combine his interests in the arts with his work as a philosophy professor. Besides art administration, Goodman actively participated as a producer in the art world. Goodman conceived altogether three
multimedia-performance events. Inspired by paintings by his wife, Katharine Sturgis, Goodman first produced Hockey Seen together with composer John Adams and choreographer Martha Armstrong Gray. Hockey Seen was shown in 1972 at Harvard and in 1980 in the Belgian town Knokke-le-Zoute; in 1984 it was recorded on film at Harvard. John Updike’s novel Rabbit, Run served as the template for Goodman’s second work of the same title, which was realized again with choreographer Gray and composer Joel Kabakov. Goodman’s last work was Variations: An Illustrated Lecture Concert, featuring twenty-one of Pablo Picasso’s variations on Diego Velázquez’s painting Las Meninas. The original painting and its variations were displayed with the help of slide projections, while a musical theme and twenty-one analogous variations composed by David Alpher were played. Variations was performed at a number of places, including Helsinki University, Wayne State University, the Rockport (Massachusetts) Chamber Music Festival, Harvard University and Trinity University, Texas (Carter 2000). In 1986 the Gloucester Daily Times (Massachusetts) praised it in a review as “a serious and original work. It succeeds in at least two ways: it forces the audience to think about how each section is a variation on what precedes it, and about the relationship between [visual] art and music.”8

Nelson Goodman’s personality is often described as unapproachable and demanding. Indeed, Goodman – a perfectionist – demanded a great deal from himself and others. W. J. T. Mitchell, now Gaylord Donnelley Distinguished Service Professor in the departments of English and art history at the University of Chicago, remembers his respect for this aspect of Goodman’s reputation at their first meeting:

I first met Goodman in the mid-’80s, having made the pilgrimage to Cambridge to discuss with him the chapter of Iconology that deals with his work. I was scared to death. 
Goodman’s own writing, and his reputation as one of the most demanding, formidable characters in the Harvard philosophy department, was intimidating enough. Then there were the stories of how notoriously difficult he was on doctoral exams. I knew that he did not suffer fools gladly, and I was pretty sure at that moment that I had been a fool to take on the task of expounding his relation to the traditional problem of word and image. His appearance – bald, a massively sculpted head with prominent nose and jaw – and his gruff, peremptory tone of voice only reinforced my terror. (Mitchell 1999)
However, people who knew him well were also acquainted with Goodman’s gentler side.9 As his friend and colleague Hilary Putnam remarked, “he affected to be a sourpuss, but he was really enormously cheerful” (Harvard University Gazette 1998).10 His gentleness extended beyond his fellow human beings. Goodman and his wife were animal-lovers and dedicated animal-welfare activists. Both were members of the World Society for the Protection of Animals and supported campaigns to help animals that were endangered due to wars or natural disasters. Elgin recalls the following story, which nicely combines the two aspects of Goodman’s personality.

It was summer in the early ’80s. Nelson Goodman and I were working on Reconceptions. As always, Nelson adamantly insisted that what he was working on was to be given the highest priority. All other matters were to be shelved until his project, whatever it happened to be, was done. He spent his summers in a cottage in Rockport, Massachusetts, but arranged to meet me in Cambridge to confer about the book. When I arrived, I found a message. Owing to an emergency, he would not be in. Naturally, I was alarmed. He was, after all, an old man. Was he sick? Was he injured? When I telephoned him in Rockport, he seemed surprisingly unruffled. Evidently my view of what qualifies as an emergency and his diverged considerably. He told me that a robin had built her nest and laid her eggs in an apple tree outside his window. This, he remarked in an aside, enabled him to realize the objective, first bruited in Fact, Fiction, and Forecast, of doing ornithology without going out in the rain. That morning the eggs had hatched. This all sounded rather pleasant in a bucolic sort of way, but what, I demanded, was the emergency? “Cats have been sighted,” he replied. That said it all. He stayed at his cottage and patrolled the perimeter until the nestlings flew away. 
Then he returned to work, insisting as adamantly as ever that all other matters had to be shelved until our project was done. (Elgin 1999)

Goodman was not fond of looking back on his achievements, but was always looking forward to the next philosophical problem to tackle. As he once advised Elgin: “When you finish one project, ask yourself what is the most difficult outstanding problem in philosophy. Then work on that” (Elgin 2000: 2).
This might well be the reason why there is nothing like an autobiography and only two interviews with Goodman (see Scholz 2005). He even refused the offer to be honoured with a volume in the prestigious Schilpp Library of Living Philosophers (Elgin 2000a: 2). However, he was revered in other ways (see Carter 2000). He served as President of the American Philosophical Association, Eastern Division, in 1967 and as Vice President of the Association for Symbolic Logic, 1950–52. Goodman was a Fellow of the American Academy of Arts and Sciences and a corresponding Fellow of the British Academy. In 1946 and 1947 he received the Guggenheim Award. His many distinguished lectures and lecture series included: the Sherman Lecture (1953) at the University of London; the Alfred North Whitehead Lecture (1962) at Harvard University; the John Locke Lectures (1962) at the University of Oxford; the Miller Lectures (1974) at the University of Illinois; the Immanuel Kant Lectures (1976) at Stanford University; and the Howison Lecture (1985) at the University of California, Berkeley. His honorary doctoral degrees are from the University of Pennsylvania, Adelphi University, Technical University, Berlin (1990) and the University of Nancy, France (1997). In 1992 the American Society for Aesthetics honoured Goodman with a seminar on his work on their fiftieth anniversary. 
Goodman’s work has been celebrated in Europe with various international colloquia during the last decade of the twentieth century, including: “Author’s Colloquium” (1991) at the Zentrum für interdisziplinäre Forschung, Bielefeld University, Germany (Bieri and Scholz 1993); “Colloquium on Representation” (1991) at Rome University/Tuscia University, Viterbo, Italy; “Nelson Goodman et les Langages de l’Art” (1992) at the National Museum of Modern Art, Georges Pompidou Centre, Paris; “Manière de Faire les Mondes” (1997) at the University of Nancy, Pont à Mousson, France; and “Weisen der Welterzeugung” (1998) at Heidelberg University, Germany (Fischer and Schmidt 2000). Goodman died on 25 November 1998 in Needham, Massachusetts, at the age of 92, after a stroke, and was buried in a family plot in Everett, Massachusetts. His wife had preceded him in death in 1996. In a memorial note, Hilary Putnam, Goodman’s colleague at Harvard, considers him to be “one of the two or three greatest analytic philosophers of the post-World War II period” (Harvard University Gazette 1998). Goodman’s work comprises eight books and numerous articles and smaller works. The content of these will be our topic for the remaining chapters.
Nelson Goodman
The historical background

Before we come to Goodman’s own views, however, we first have to set the stage by providing some of the historical background against which Goodman developed his ideas. Some ideas are easier to grasp when one knows what they contrast with, and some peculiarities are easier to assess when one knows the climate in which they were developed.
The arch-enemy: Henri Bergson

What was the contrast to Goodman’s philosophy? Let us begin with Goodman’s declared “arch-enemy”, the philosopher Henri Bergson, and his followers, the Bergsonians. Bergson (1859–1941) is today not widely known, especially among analytically minded philosophers in Anglophone countries. Although Bergson has enjoyed a recent revival owing to the work of Gilles Deleuze, this revival has been largely confined to postmodernist circles. At the time that Goodman was a young student, however, Bergson was among the most famous and influential philosophers. Bergson’s fame was not restricted to Europe. When he first visited the United States in 1913, his appearance in New York was probably the cause of the first traffic jam in the history of Broadway.11 In 1928 Bergson was even awarded the Nobel Prize for Literature.

Given Bergson’s concern with phenomenology and epistemology in general (we discuss Goodman’s view on this matter in Chapter 5), there might be something like a common philosophical interest with Goodman. Bergson’s role in Goodman’s writings is, however, not that of a representative of a specific philosophical position worth detailed investigation. Whatever Bergson had to say about evolution, perception, memory, religion or morality does not seem to show up in Goodman’s writings. Bergson and his followers rather serve as stereotypes of irrationalism (at the time euphemistically referred to as “mysticism”), of an anti-scientific attitude towards philosophy and of hostility towards formal methods.

Indeed, this has some basis in Bergson’s writings. According to Bergson, all we sense are images. The representations we form of these images, be they perceptions or subtler representations arrived at via analysis and synthesis, are always diminutions of the images we started with. Therefore knowledge that represents the world in a certain way, such as scientific knowledge, cannot be absolute knowledge; it necessarily leaves something out.
The worldmaker’s universe
Whereas philosophers who arrived at similar views have shrugged and turned to the question of the relation representations bear to reality or the thing in itself, Bergson offered a further theory of how we could eventually arrive at absolute knowledge by sidestepping the limits of representations. This is done by way of “intuition”, which is for Bergson another form of experience with the help of which we are able to place ourselves in the position of (parts of) the things. By “entering into” the things we can gain absolute knowledge of them.

This view seems to be the basis for Goodman’s arch-enemy construct. In a hypothetical dialogue in “The Way the World Is” (1960a), Goodman lets his opponent characterize his position in the following words:

All our descriptions are a sorry travesty. Science, language, perception, philosophy – none of these can ever be utterly faithful to the world as it is. All make abstractions or conventionalizations of one kind or another, all filter the world through the mind, through concepts, through the senses, through language; and all these filtering media in some way distort the world. It is not just that each gives only a partial truth, but that each introduces distortions of its own. We never achieve even in part a really faithful portrayal of the way the world is. (PP: 25)

For Goodman this view is totally mistaken. The wrong presupposition of Bergsonian thinking is to believe that there is only one way the world is, namely the reality behind our representations. But for Goodman, as we shall see in detail in Chapter 8, there are many ways the world is and every right version captures one of them:

Since the mystic is concerned with the way the world is and finds that the way cannot be expressed, his ultimate response to the question of the way the world is must be, as he recognizes, silence. Since I am concerned rather with the ways the world is, my response must be to construct one or many descriptions.
The answer to the question “What is the way the world is? What are the ways the world is?” is not a shush, but a chatter. (PP: 31)

Similarly, the use of formal methods to improve clarity and precision is defended against Bergson, who holds “that the application of any precise concepts can only result in distorting and in effect killing actual experience” (PP: 47). Whereas the possibility of error is granted by Goodman (formal methods can be more or less suited to a certain topic), formal methods are employed with a hope: the hope that we might be able to achieve something clearer or better founded
than the “tenuous generalizations that have nourished philosophical controversy for centuries” (ibid.).
The inexplicably hostile ally: the Brits

Surprising as it may be, another contrast to Goodman’s philosophy is to be found within analytic philosophy. As Michael Dummett, who was nineteen years younger than Goodman, reported from his own experience in his youth, analytic philosophy was for a long time divided into two camps: the ideal-language tradition, which goes back to the work of Gottlob Frege and whose main promoter was Carnap; and the ordinary-language tradition, which is usually traced back to the work of the later Wittgenstein, was promoted by Gilbert Ryle and is associated with the philosophers of the University of Oxford. Dummett, who was Ryle’s student at Oxford, explains in his “Can Analytical Philosophy be Systematic, and Ought it to Be?” (1977) that at that time the main opposition to Ryle’s conception of analytic philosophy was not Heidegger or some other continental philosopher, but Carnap. Goodman, as we shall see in the next section, stood in Carnap’s tradition and hence in opposition to British ordinary-language philosophy.

One of the main differences between the two types of analytic philosophy was their attitude towards systematicity in philosophy. As is well known, the later Wittgenstein abjured systematicity in philosophy completely (see his Philosophical Investigations (1953)). As Goodman interpreted him, Wittgenstein regarded philosophical problems as diseases spread via natural language. The philosopher is accordingly a therapist who, in single cases of confusion, comes to help with a cure that is specific to the case at hand. The way Goodman understands this ambulance model of philosophy, it allows philosophers to stop doing philosophy whenever they please. Since they are not interested in constructing a systematic theory, stopping philosophizing will not prevent them from reaching a final goal. Their aims are all temporary: to help the poor souls who find themselves trapped in a puzzle of natural language.12 Goodman rejected this view.
First of all, philosophical puzzles do not arise for the man on the street who just tries to make a living. Philosophical puzzles arise for philosophers, and they arise only because the philosopher has set up standards of understanding that might or might not be met by a literal understanding of natural language. Hence:
the philosopher’s puzzlement about language is always a puzzlement about interpreting ordinary statements in a philosophical way. The puzzlement or confusion is a function not only of the language but of our standards or sense of philosophical acceptability. Wittgenstein triumphantly exclaims that his conception of philosophy allows him to stop doing philosophy whenever he pleases. But he can stop doing philosophy, or at least stop needing to do philosophy, only when all philosophical puzzlement and confusion are resolved. (PP: 43–4)

Thus philosophers cannot stop philosophizing as they please; they can stop philosophizing only if either all philosophical confusions are resolved by a reinterpretation of ordinary language that conforms to the standards of philosophical acceptability, or if they relax their requirements enough to take language as it is. Philosophers are not therapists of ordinary people. They are the ones having the problem of understanding, and they will not be able to stop doing philosophy before such understanding is achieved.

A second criticism that Goodman raises against the British style of analytic philosophy is that it is ill-motivated by the belief that anti-foundationalism implies anti-systematicity:

[T]he rejection of absolutistic justifications for system-building does not of itself constitute justification for the extremely asystematic character of typical current British analysis. Unwillingness to accept any postulates of geometry as absolute or self-evident truths hardly diminishes the importance of the systematic development of geometries. Unwillingness to take any elements as metaphysical or epistemological ultimates does not make pointless all systematic constructions in philosophy. There are virtues in knowing where we began, where we have gone, and where we are going, even if we fully acknowledge that we might as well have begun somewhere else.
(1958a/PP: 44)

Whereas Goodman was opposed to the anti-systematic attitude of the British philosophers of verbal analysis, he was rather surprised by their hostility towards the ideal-language tradition. After all, he thought, the two ways of doing analytic philosophy were complementary:

Verbal analysis is a necessary preliminary and accompaniment to systematic construction, and deals with the same sphere of problems. For example, the verbal analyst may well concern himself
with explaining the vague locution we use when we say that several things are ‘all alike’; and he may well examine the difference between saying that a color is at a given place at a given time and saying that a color is at a given place and at a given time. The constructionalist dealing with qualities and particulars will likewise have to be clear on these points. The analyst, treating these as separate problems, may well miss the intriguing relationship between the two, while a systematic treatment shows them to be cases of a single logical problem. But verbal analysis and logical construction are complementary rather than incompatible. The constructionalist recognizes the anti-intellectualist as an arch enemy, but looks upon the verbal analyst as a valued and respected, if inexplicably hostile, ally. (1956a/PP: 17)

In a sense, the end of philosophy is further away for Goodman than for the “verbal analyst”. While the (stereotypical) “verbal analyst” stops his philosophical endeavour after the analysis, for Goodman this is only the first part of the enterprise; a systematic construction (for instance, like the ones we present in the rest of this book) has to follow.

Here lies a second major difference between Goodman’s understanding of the aim of philosophy and that of Wittgenstein or the ordinary-language philosopher. For the later Wittgenstein there are no real philosophical problems, merely “puzzles about language”. What seems to be a philosophical problem must inevitably turn out to be a misunderstanding of what our words mean, or result from a misuse of those words. Philosophy, therefore, cannot be revisionary. The (correct) use of the words of natural language is the standard that needs to be respected; it cannot be that a philosopher uncovers that this use is not in good order. Goodman, on the other hand, is in for surprises.
In the course of doing philosophy, so-called “common sense” (what Goodman calls “the repository of ancient error”) and the pre-systematic use of words can, and in interesting cases will, be declared defunct.13 Philosophy leads to revisions of, and discovers errors in, for example, ordinary-language categorizations. Goodman’s philosophy is critical in that sense14 in a way that Wittgenstein’s or ordinary-language philosophy is not. For Goodman there are real philosophical problems that can and need to be solved, not just verbal confusions and language puzzles.

The divide between these two traditions narrowed considerably during the twentieth century. We shall say a few words about the contemporary scene in Chapter 9. But now, after having reviewed these contrasts to Goodman’s philosophy, we should turn to his roots and more friendly allies.
Allies I: Carnap and logical empiricism

As we have already said, Goodman clearly belongs in the tradition of analytic philosophy that is usually labelled ‘ideal-language philosophy’. The beginning of that tradition is marked by the year 1879, the year in which Gottlob Frege, a German professor of mathematics at the University of Jena, published his first major work, the Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens [Concept Script: A Formal Language of Pure Thought Modelled upon that of Arithmetic] ([1879] in Frege 1967). In this book Frege developed what we now call predicate logic. Of course, philosophers had done research on logical matters at least since Aristotle. Frege, however, was the first to develop a comprehensive formal system: a whole ideal language. In this formal language large parts of natural language could be reformulated in a precise way. Moreover, it was also possible to reformulate a substantial part of mathematics, so that the idea that mathematics is nothing but logic (so-called logicism) gained a degree of plausibility for the first time. This was a very powerful new tool, not only for foundational problems in mathematics but, as became clear very soon through the work of Bertrand Russell, Whitehead and Wittgenstein, also for all other areas of philosophy.

Frege was not a very prominent figure during his lifetime. What he achieved was appreciated only much later.
However, among the few students in his auditorium in Jena in the 1910s, when Frege was giving his lectures on the Begriffsschrift, was Carnap, a young student from Wuppertal-Ronsdorf (a small town in the west of Germany) who was to become one of the most influential philosophers of the twentieth century.15

Carnap was not only well equipped as a student of Frege; he also came under the influence of Russell’s work on logic and was among the first philosophers to use the symbolic language developed in Russell and Whitehead’s Principia Mathematica (Whitehead & Russell 1910–13) for the clarification of scientific discourse. Russell’s programmatic words in Our Knowledge of the External World (Russell [1914] 1969) set the task for the whole of Carnap’s work:

[T]he study of logic becomes the central study in philosophy: it gives the method of research in philosophy, just as mathematics gives the method in physics. … All this supposed knowledge in the traditional systems must be swept away, and a new beginning must be made. … To the large and still growing body of men engaged in the pursuit of science,
… the new method, successful already in such time-honored problems as number, infinity, continuity, space and time, should make an appeal which the older methods have wholly failed to make. … The one and only condition, I believe, which is necessary in order to secure for philosophy in the near future an achievement surpassing all that has hitherto been accomplished by philosophers, is the creation of a school of men with scientific training and philosophical interests, unhampered by the traditions of the past, and not misled by the literary methods of those who copy the ancients in all except their merits. (Russell [1914] 1969: 243–6; quoted in Carnap 1963: 13)

In Our Knowledge of the External World, Russell not only suggested logic as the primary methodological tool of philosophy, but also declared the logical constitution of assumed entities to be its primary topic. Borrowing Whitehead’s method of extensive abstraction (which we shall come to shortly), Russell sketched how philosophically dubious entities such as abstract objects (e.g. single time points and properties) could be constituted logically from less problematic entities (such as events and sets of mutually similar concrete objects, respectively). Russell’s declared motivation for this was a version of Ockham’s razor (see Chapter 4):

Entities are not to be multiplied without necessity. In other words, in dealing with any subject-matter, find out what entities are undeniably involved, and state everything in terms of these entities. Very often the resulting statement is more complicated and difficult than one which, like common sense and most philosophy, assumes philosophical entities whose existence there is no good reason to believe in. (Russell [1914] 1969: 112)

The project of logical constitution sketched here started a tradition that led directly to Goodman’s chief work, the constructional system of The Structure of Appearance.
In 1924 Carnap made contact with the Viennese philosopher Moritz Schlick, who later was the nominal leader of what became famous as the “Vienna Circle”,16 a heterogeneous group of scientifically minded philosophers who met in Vienna to discuss new achievements in logic and the philosophy of science. Carnap moved to Vienna in 1926. His famous book The Logical Structure of the World (the Aufbau, [1928] 1961), which we shall discuss in some detail in Chapter 5, served for a while as the frame theory of the Vienna Circle. In the Aufbau, Carnap
tried to provide in detail a constitutional system of all subject matter: a very ambitious implementation of Russell’s programme. In 1935, due to the increasing influence of fascism in central Europe, Carnap emigrated to the United States. Carnap was already acquainted with Quine at Harvard and Charles Morris in Chicago, and he quickly took up a permanent position at the University of Chicago, which he held from 1936 to 1952 (then moving to the University of California, Los Angeles). A number of his fellow philosophers from the Vienna Circle also emigrated to the United States. Slowly they began to modernize American philosophy with the help of Quine and other pragmatists, and built up a strong network of analytic philosophers. From the early 1950s it is fair to say that this group ruled American philosophy, with Carnap as the leading figure.

The kind of philosophy represented by that branch of early analytic philosophy can best be characterized with reference to its motifs, basic assumptions and the consequences drawn from them. One basic motivation was to replace Kantian transcendental philosophy, which seemed to be clearly refuted by the recent developments in logic and natural science. On the logical side, Kant’s main argument for assuming synthetic a priori truths had been the alleged nature of mathematics. Mathematical statements such as ‘7 + 5 = 12’ did not seem to be logically true, if judged by Aristotelian syllogistic. So, Kant concluded, as these a priori truths are not logical truths, and did not appear to him to be analytic in any other way either, mathematics had to be synthetic.17 Modern logic made it possible to analyse some mathematical truths as truths of logic18 and thereby undermined Kant’s philosophy of mathematics.
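A sketch of how such an analysis runs may be helpful here (this is our illustration of the general point, not an argument taken from Goodman or Carnap): once the numerals are defined via the successor function S and addition is defined recursively, ‘7 + 5 = 12’ unwinds into a chain of purely definitional identities, with no appeal to intuition:

```latex
% Addition defined recursively (Dedekind–Peano style):
%   a + 0 = a,   a + S(b) = S(a + b)
\begin{align*}
7 + 5 &= 7 + S(4) = S(7 + 4) = S(7 + S(3)) = S(S(7 + 3))\\
      &= \dots = S(S(S(S(S(7))))) = 12
\end{align*}
```

On the logicist reading, each step is licensed by a definition or a logical law alone, which is why the statement no longer looked synthetic in Kant’s sense.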
At the same time, the second major argument for the idea of synthetic a priori truths, the intuitiveness and “obvious” empirical adequacy of Euclidean geometry, was destroyed by Einstein’s theory of relativity: Kant’s eternal a priori truths turned out to be a posteriori falsehoods. This disaster called for a new epistemology, an epistemology that would recognize the new advances in logic as well as the new developments in physics.

A second motivation sprang from discontentment with the way theories had traditionally succeeded one another in philosophy: one absolute system had always merely been replaced by another, incommensurable, absolute system. Philosophical progress seemed impossible if this process continued. Philosophy should rather be done in a way analogous to science. The perception of science was that big projects were subdivided into special problems, so that in a division of labour each part
of the whole of the scientific community could contribute to the big project by solving small specialized problems.19

The two basic assumptions of the early analytic philosophers concerned the foundation of human knowledge and the meaningfulness of expressions. Concerning the foundation of human knowledge, the logical empiricists believed that knowledge could only be achieved through experience. But if that is so, all classical metaphysical research programmes were pointless. Sentences that could not be tested by experience were cognitively and theoretically empty. Metaphysical claims, such as the metaphysical realists’ claim that reality is mind-independent, as well as the idealists’ claim that it is not, are both of this kind. As Carnap pointed out in Pseudoproblems in Philosophy ([1928] 1961), two scientists might well have diverging opinions on this question of metaphysics, but they would clearly agree on all empirical questions, for example, on whether or not there is a mountain in a certain place in Africa. Classical philosophical problems were thus deemed to be pseudoproblems that philosophy should get rid of. Sentences that did not belong to logic or mathematics but nevertheless lacked empirical content were all considered meaningless.

The consequences that follow from these basic assumptions are threefold. First, traditional philosophy appears to be a vain endeavour and we should clear all areas of human thinking of its influences. Secondly, after all philosophical problems are revealed to be mere pseudoproblems, the function of philosophy is the syntactic, semantic and pragmatic analysis of (scientific) discourse: “Philosophy is the syntax of scientific language”.20 Thirdly, philosophy should bring all sciences together into a unity of science.
If scientific expressions are meaningful only if based in experience, and hence reducible to statements about observations or classes thereof, all scientific discourses should prove to be intertranslatable and expressible in a common elementary language.

In contrast to the ordinary-language philosophers, the logical empiricists did not think that ordinary language was basically in good order and that simply a better understanding of it would reveal the dissolution of our philosophical (pseudo)problems. On the contrary, the positivists believed ordinary language to be inexact and misleading, and they engaged in constructing a better substitute. This made necessary a systematic reconstruction of discourse in an artificial ideal language: the language of formal logic.

As we shall see in detail, Goodman’s philosophy did in some respects differ from logical empiricism so characterized. However, logical empiricism defined the problems and the background against which Goodman worked.
The roots at Harvard

Goodman’s philosophical training was already over when logical empiricism washed over America. As we have said, by 1935 Goodman was already considered by Quine to be an expert on Carnap’s Aufbau. Where did Goodman receive this early training? Of course, when Goodman was studying philosophy at Harvard, the university was associated with logic because of Whitehead, Sheffer, C. I. Lewis, Charles Sanders Peirce and Josiah Royce. Although – as Quine said in retrospect – at that time “really the action was in Europe” (Quine 1985: 83), Harvard’s younger generation was soon familiar with the work of Carnap, Kurt Gödel, Jacques Herbrand, Alfred Tarski and others. Cooley introduced Quine to Carnap’s Aufbau around 1932 (ibid.: 86), the year in which the Vienna Circle member Herbert Feigl took up a postdoctoral grant at Harvard. After Quine returned from his Sheldon Fellowship in Europe, he began to lecture on European logical positivism at Harvard as early as winter 1934.

Also interested in logic was Leonard, who, with Goodman, started the project of constructing a calculus of individuals, which later led to The Structure of Appearance (see Chapter 5). Both were members of Quine’s informal seminar on Carnap’s logical syntax. In 1935 Leonard and Goodman told Quine about their project. Quine was immediately interested and supported Goodman’s studies. That night, Quine writes in his autobiography (ibid.: 122), they discussed matters until four in the morning. Thus Goodman was clearly supported by the younger faculty at Harvard, who were all very soon familiar with the work that was being done in Vienna, Prague, Warsaw and Berlin. But Goodman also received support and inspiration from his teacher and supervisor C. I. Lewis, the epistemologist and logician, and probably also from Whitehead.

Let us first have a look at Whitehead’s influence. Born in 1861, Whitehead was already 63 years old when he was hired by Harvard as a professor of philosophy and came to the US.
Until then he had been teaching mathematics for 40 years at Cambridge and London. As a mathematician,21 Whitehead was well known for his work on universal algebra and geometry. However, the mathematician Whitehead is certainly best recognized for Principia Mathematica, the foundation of modern logic, which he wrote with Russell
and which was published in three volumes between 1910 and 1913. Principia Mathematica was the major tool and topic for Goodman’s generation of formal philosophers. As we have said, Carnap had already noticed that the formalism of modern logic greatly helped to clarify philosophical issues, and Goodman was to follow him in this. But Principia Mathematica also served as a source of interesting new problems for Goodman’s generation of graduate students at Harvard. The subtitle of Quine’s PhD thesis The Logic of Sequences ([1932] 1990) was “A Generalization of Principia Mathematica”. Leonard’s thesis Singular Terms (1930) (for details see Chapter 5) was also intended and motivated as a further elaboration of Principia Mathematica. Clearly, Goodman was directly and indirectly influenced by this milestone of formal philosophy.

When Whitehead started to lecture at Harvard, he made his debut as a philosopher. Whitehead the philosopher – to the surprise (and shock)22 of the Harvard philosophy department – turned out to be deeply engaged in a sort of speculative metaphysics that was very unfashionable when considered against the awakening modern analytic philosophy. Nevertheless, aspects of Whitehead’s philosophy might well have influenced Goodman. In particular, four themes from Whitehead can also be found in Goodman’s approach to philosophy: (i) an interest in the construction of a mereological theory; (ii) a distrust of natural (“ordinary”) language and the programme of reconceptualization as a means of solving philosophical problems; (iii) a form of ontological pluralism or anti-foundationalism; and (iv) an interest in forms of symbolization and representation other than science and language.

The first of these four themes, the interest in mereology, was closely related to Whitehead the mathematician.
The missing fourth volume of Principia Mathematica was to include Whitehead’s treatise on geometry (see Simons 1987: 81; Ridder 2001); instead, part of the material entered, in the form of Whitehead’s mereology, into his An Enquiry Concerning the Principles of Natural Knowledge (Whitehead 1919) and his Process and Reality (Whitehead [1929] 1978).23 Whitehead’s motivation for developing a mereology was the construction of geometrical entities such as lines and points from non-geometrical objects of knowledge and perception. This motivation and the resulting theory originated independently of, and before, the (better known) mereology of the Polish logician Stanisław Leśniewski.24 Whitehead’s mereology remained only an informal sketch of a theory that could serve as a basis for his notion of “extensive abstraction” (see
Ridder 2001), a method by which geometrical entities are constructed as an abstraction from complex and concrete entities. Whitehead’s definition of geometrical points as “abstractive classes” of overlapping events later served as a prime example in Goodman’s writings for an explication or definition in his sense (we return to this later). As we said above, it also inspired Russell’s and Carnap’s projects of logical constitution. Leonard’s system in Singular Terms (1930) was a first somewhat rigorous formal outline and further development of Whitehead’s more or less sketchy ideas, even if (see Chapter 5) he applied it to domains other than geometry.25 Although Whitehead’s mereology was (if reconstructed formally) logically distinguishable from the theory of the Polish logicians Leśniewski and Tarski (see Sinisi 1966), in the hands of Leonard and Goodman it was turned into a theory formally equivalent to the Polish system (see Ridder 2001).

Reflecting for a moment on Whitehead’s construction of geometrical points will lead us to the second commonality between Goodman’s and Whitehead’s thought. As is probably evident to the reader, and as is pointed out by Goodman many times, our common notion of a point (if there is any such common notion) is not that of an “abstractive class” of certain volumes or events. But allegiance to ordinary usage or ordinary language is not a criterion of adequacy in Whitehead’s conception of philosophy, just as it is not in Goodman’s. As Whitehead emphasizes in Process and Reality ([1929] 1978), ordinary language is neither stable nor clear enough to allow for interesting philosophical insights on the basis of meaning analyses. Often a philosophical problem can be solved by reconceptualizing the whole problematic area rather than by analysing the ordinary usage of a term. Whitehead’s idea of reconceptions in philosophy is strikingly similar to Goodman’s conception of it.
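Whitehead’s notion of an “abstractive class”, mentioned above, can be given a rough formal sketch (this is our reconstruction in standard mereological notation, not a formula of Whitehead’s or Goodman’s). Writing x ≤ y for ‘x is part of y’ and x < y for proper parthood, a class S of events is abstractive just in case its members are nested and S has no smallest member:

```latex
% Overlap, defined mereologically:
O(x,y) \;\equiv\; \exists z\,(z \le x \wedge z \le y)

% S is an abstractive class: any two members are nested,
% and no event is part of every member of S.
\mathrm{Abs}(S) \;\equiv\;
  \forall x\,\forall y\,\bigl(x \in S \wedge y \in S \wedge x \neq y
    \rightarrow x < y \vee y < x\bigr)
  \;\wedge\;
  \neg\exists z\,\forall x\,(x \in S \rightarrow z \le x)
```

A geometrical point is then not any single event but, roughly, a class of abstractive classes that “converge” to the same location. The construction uses only events and parthood, which is what made it so attractive as a model of definition for Russell, Carnap and Goodman.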
Whitehead’s project to construct points from volumes is not driven by the idea that points “really are” abstraction classes of volumes, but by the idea that volumes are entities of our acquaintance, in terms of which we might better understand what points are. If a better understanding can be achieved in this way, major revisions in our conceptual system are in order (see Hampe 1998). Whitehead, however, did not see the solution to all philosophical problems in a mere logical reconstruction of the traditional terminology. Instead he suggested major terminological revisions in all areas of philosophical discourse. Whitehead’s programme to revise philosophical terminology had a successor in Goodman and Elgin’s Reconceptions in Philosophy and other Arts and Sciences (1988), in
which they plead for the replacement of some of the most central philosophical notions (as we shall see later).

When Whitehead constructs points as abstractions from volumes and takes processes as basic in his ontology, he does not thereby claim that volumes are any more real than points, or that everything but processes is a mere fictional entity. On the contrary, Whitehead claims that points are just as real as the volumes they are abstracted from. This ontological tolerance (or whatever you want to call it) will recur in Leonard’s and Goodman’s writings (see Chapter 5) and constitutes another important feature that Goodman’s and Whitehead’s philosophies share.

Finally, Whitehead’s later work promotes the idea that art, religion and the different sciences all use different “languages”, differing systems of symbolization and representation, dedicated to different perspectives on the world. Clearly, this is a topic that also appears in Goodman’s work, although it is less clear that Goodman was under the direct influence of Whitehead’s views on the matter (see Hampe 1998: 182–3).

The second significant influence on Goodman from the older faculty at Harvard was C. I. Lewis, the supervisor of Goodman’s PhD thesis. Lewis is nowadays best recognized as a grandfather of modern modal logic. His interesting pragmatist epistemology, which combined influences from the German (or Austrian) “positivists” with the pragmatism of Peirce, William James and John Dewey, is often overlooked. It was this aspect of Lewis’s philosophy, however, that influenced Goodman.
In his famous “The Pragmatic Element in Knowledge”, Lewis summarizes his pragmatist epistemology as follows: In short, if human knowledge at its best, in the application of mathematics and in the well-developed sciences, is typical of knowledge in general, then the picture we must frame of it is this: that there is in it an element of conceptual interpretation, theoretically always separable from any application to experience and capable of being studied in abstraction. When so isolated, concepts are like Platonic ideas, purely logical entities constituted by the pattern of their systematic relations. There is another element, the sensuous or given, likewise always separable by abstraction, though we should find it pure only in a mind which did not think but only felt. This given element, or stream of sensation, is what sets the problem of interpretation, when we approach it with our interests of action. The function of thought is to mediate between
such interests and the given. Knowledge arises when we can frame the data of sense in a set of concepts which serves as guides for action, just as knowledge of space arises when we can fit a geometrical interpretation upon our direct perception of the spatial. The given experience does not produce the concepts in our minds. If it did, knowledge would be pure feeling, and thought would be superfluous. Nor do the concepts evoke the experience which fits them, or limit it to their pattern. Rather the growth of knowledge is a process of trial and error, in which we frame the content of the given now in one set of concepts, now in another, and are governed in our final decision by our relative success – by the degree to which our most vital needs and interests are satisfied. ([1926] 1970: 253–4) The first of these two components of knowledge, the conceptual system that interprets what is given in experience, constituted the realm of the a priori. Nevertheless, this conceptual system is neither preconfigured in the mind nor dictated by experience. As Lewis became aware in his study of different systems of logic (e.g. in his “Alternative Systems of Logic” (Lewis [1932] 1970)), which conceptual or logical system we choose is a matter of what we are interested in. His influential Mind and the World-Order (1929) argues that knowledge is in a crucial way relative to pragmatic considerations. We make truths in so far as we interpret what is given in experience using conceptual schemes that are of our own making. There is no objective scheme; our knowledge of the world represents it in a way that serves us best at a certain point in time: [T]he truths of experience must always be relative to our chosen conceptual system in terms of which they are expressed; and that amongst such conceptual systems there may be choice in application. Such choice will be determined, consciously or unconsciously, on pragmatic grounds.
New facts may cause a shifting of such grounds. When this happens, nothing literally becomes false, and nothing becomes true which was not always true. An old intellectual instrument has been given up. Old concepts lapse and new ones take their place. (Lewis [1926] 1970: 257) That truths are relative to conceptual systems, and that the choice of conceptual systems is a pragmatic affair, is also an integral part of Goodman’s philosophy. He would, however, drop one important ingredient of Lewis’s epistemology: the indubitable given.
As Lewis argues in “Logical Positivism and Pragmatism” ([1941] 1970), the main difference between the empiricism of the pragmatists and the empiricism of the logical positivists (especially the Carnap of Philosophy and Logical Syntax (1935)) is that the latter were ready to analyse empirical knowledge fully in the so-called formal mode, as more or less coherent systems of accepted sentences, some of which are “protocols”, some sentences of mathematics and logic, some generalizations and so on. The content of such knowledge would be explicated only in terms of deductive consequences and logical relations, but not in terms of the experiences connected with certain terms and statements. In particular, the formal mode would not distinguish between statements such as ‘This object looks red’ and ‘This object is red’. Instead, logical positivism would (according to Lewis’s interpretation of it) recognize both statements as one and the same “observation-sentence”. For Lewis, this sort of empiricism scarcely deserved its name. After all, the experiential element did not seem to show up at all in this kind of formal analysis. Lewis claims that a proper empiricism must treat sentences of the form ‘This looks red’ as special, indubitable statements. We might err when classifying things as being red, but we cannot err when it comes to recognizing things as looking red. This is “the given” in experience, the phenomenal states we find ourselves in when having experiences. Without such an indubitable element, Lewis feared, our epistemology would necessarily collapse into a coherence theory of truth: [E]ither there must be some ground in experience, some factuality it directly affords, which plays an indispensable part in the validation of empirical beliefs, or what determines empirical truth is merely some logical relationship of a candidate-belief with other beliefs which have been accepted.
And in the latter case any reason, apart from factualities afforded by experience, why these antecedent beliefs have been accepted remains obscure. Even passing that difficulty, this second alternative would seem to be merely a revival of the coherence theory of truth, whose defects have long been patent. (Lewis [1952] 1997: 112–13) Thus, besides the “chicken and egg” problem of why we should ever actually come to accept any system, we would not know how to choose between the many equally coherent systems that are all logically possible. As we shall discuss in much more detail in the following chapters, Goodman was ready to bite that bullet when throwing away the indubitable given. Lewis, the major advocate of pragmatism,
commented on this move by Goodman that his “proposal is, I fear, a little more pragmatic than I dare to be” (ibid.: 118).²⁶
Summary

All this constituted the philosophical background against which Goodman was developing his philosophy. In the following chapters we shall see in more detail what Goodman made of these influences. The influences we have highlighted in this chapter were identified for the most part on the basis of evident personal relations rather than references provided by Goodman. When reading Goodman, one of the things to notice is that there are only relatively few footnotes devoted to the historical origins of a certain problem or of some idea for its solution. This should not be interpreted as a sign of ignorance on his part. Goodman was interested not in the merely historical or exegetical questions of who had said what when, but in the systematic questions of philosophy. Occasionally we shall refer back to the historical background presented here. In Chapter 2, however, we shall jump right into the systematic way of dealing with philosophical problems, with Goodman’s infamous “new riddle of induction”.
Chapter 2
If this were an emerald it would be grue: problems and riddles of induction

That the sun will rise every morning, that bread nourishes, that ravens are black and that matches will light when struck under favourable conditions are all truths about the world that we believe. Clearly, they are very helpful for us. Those just mentioned all have in common that they express regularities that hold in the world. They tell us that a certain state of affairs is always followed by a certain other state of affairs, or that certain things always have certain features. Regularities, expressed by general statements such as ‘all Fs are Gs’, are very valuable things to know when it comes to planning your day, driving a car, or just trying to survive for a while. Without knowledge of regularities, everything would always come as a complete surprise. It is easy to imagine how likely it would be that, without the capacity to have beliefs about regularities, we would all unintentionally kill ourselves within a couple of days (possibly all by the same method, being unable to learn from the failure of others), given that our curiosity can overrule the guidance of our natural instincts. Since regularities play such a big role for us, we developed (by some long process of cultural evolution) the sciences to find as many of them as there are, using methods that (we hope) put our beliefs in regularities on a firm basis. After all, believing a general statement that is in fact false – for example that fly agarics nourish or that if one jumps from a very high building one will safely and smoothly land on one’s feet – might be just as damaging as believing none of them. The methods science uses to find out about regularities are observation, prediction and experiment. By using these methods we try to find true general statements (that do express regularities) and separate them from the false ones. Although we seem to be doing
pretty well at this, it is almost a mystery why we do so well. How can observation, prediction and experiment lead to justified beliefs in general statements?
The Humean riddle of induction

In his Treatise of Human Nature ([1739–40] 2000) and his Enquiry Concerning Human Understanding ([1748] 1999) David Hume attacked all rational bases for belief in any necessary connection that was not a truth of mathematics or logic. The justification for any belief in the necessary connection of causes and effects, and therewith the belief in regularities inferred from past observations, was also subject to Hume’s devastating attack. Hume started his investigation into the foundations of scientific knowledge by distinguishing two types of reasoning: reasoning concerning relations of ideas and reasoning concerning matters of fact and existence. All reasoning falls into one, and only one, of these categories. The first type seems largely unproblematic. Mathematics and logic are the sciences in which the relations of ideas are investigated. Both sciences use deductive arguments to arrive at their conclusions, forms of argument that are necessarily truth-preserving. If the premises of a valid deductive argument are true, so is the conclusion. But not all scientific reasoning is of this kind. If we, for example, infer that the sun will rise tomorrow from the observations that it did so every morning in the past, we make an inference from the observed (the rising of the sun in the past) to the unobserved (the rising of the sun tomorrow). This reasoning concerns matters of fact and existence. In such cases the conclusion (that the sun will rise tomorrow) goes beyond the content of the premise (the report of what has been observed in the past). As we said at the start of this chapter, the general statements that express real regularities are of immense importance for us, and it is their generality that enables them to play this role. These statements do not only talk about events that happened on some specific day at some specific place; they talk about what happens at all times and in all places.
The laws of physics, for example, state generalities that apply not only to the observed events that happened in the laboratory and led to the formulation and confirmation of these laws, but also to events far away from our galaxy in future times. This way we are able to use knowledge that we gained from past observations.
Hume asked what the foundation of our inferences from the observed to the unobserved is, and came to the conclusion that such reasoning is based upon relations of cause and effect. We believe that certain states of affairs will regularly be followed by certain other states of affairs because we think that there is a causal connection between them. Lightning is followed by the sound of thunder, because the former causes the latter. Footprints in the sand indicate that some person recently walked here, because the former is a causal effect of the latter, and so on. But if our apparent knowledge of regularities and our inferences from the observed to the unobserved are based on knowledge of cause–effect relations, how is this knowledge established? The first possibility that Hume considered was that our apparent knowledge of a cause–effect relation is a priori. Can we deduce from a cause what its effect is? A person having no prior experience with fire or snow can obviously not deduce that the former will feel hot and the latter cold. The same holds for the other direction, from effect to cause: merely knowing (de re¹) of some “effect” – that is, knowing that a certain phenomenon occurred – we cannot tell a priori what caused it; pretty much anything might have. Causal relations are not deduced: knowledge of them must be based upon experience. But if we know about causal relations from experience, causality must be something observable, such that we can observe a “cause”, an “effect” and somehow the causal connection between them. But Hume argued that we cannot locate this third item anywhere. Every time we observe what we later extrapolate into general statements, we only observe temporal priority of what we then call the “cause”, spatiotemporal proximity of “cause” and “effect”, and constant conjunction: if we repeat an experiment many times we will find the same “effects” following the same “causes”.
But what we do not observe is some sort of causal relation or causal power that our extrapolation is based on. It is perfectly conceivable that next time the “effect” will not follow the “cause”; nothing in the observation that the “effect” followed the “cause” up to now guarantees that the “effect” will do so in the future. For example, having observed two events in spatiotemporal proximity only once, we cannot tell whether there is a causal connection between the two. Although we feel inclined to assume that there is a causal connection if we observe many times that events of the one kind are followed by events of the other kind, it might nonetheless only be coincidence that one event closely followed the other. Moreover, if the causal connection between the two were itself observable, we would
not need repeated observations; instead we would be able to observe the causal connection in the first instance. But if we cannot deduce “effects” from “causes” nor “causes” from “effects”, and cannot observe causal connections, what else could be the basis for our judgements about causal relations? Hume’s answer was that it is all just a matter of custom or habit. We observe some event e1 and see it followed by some other event e2. On another occasion we see some event e3 followed by some event e4. Events e1 and e3 are events that we group into a certain category, say ‘events of type C’. Events e2 and e4 are classified as ‘events of type E’. Relative to our classification, events of type E often follow events of type C, which is why, after a while, when we notice an event of type C we begin to expect an event of type E to follow it. That this is so is just a brute fact of human psychology, involving no logical necessity whatsoever. Hume’s challenge, or the Humean riddle of induction, was to uncover the following problem. We wanted to know what justification we have for inferring unobserved events from observed ones, or on what basis we are justified in extrapolating generalities. Hume’s observation was that we base our justification on assumed relations of cause and effect. But what is our justification for assuming these? The answer is that we just psychologically anticipate that the future will be like the past, that nature is uniform. But that anticipation cannot be grounded in anything further. If you want to base the conviction in the uniformity of nature on past experiences (“nature has always behaved in a uniform way, therefore it will continue to be uniform in the future”) you merely presuppose what you are attempting to establish, namely that nature is so uniform that it will continue to be uniform.
If we need a justification for trusting observed past uniformities at all, it cannot suffice to point out that trusting such observations has itself been uniformly successful in the past. It would obviously be circular (or would lead to an infinite regress) to argue in that way. So there is no empirical justification forthcoming for the uniformity thesis. On the other hand, there is also no a priori justification to be expected; it is absolutely conceivable that nature is not uniform. Clearly, we do not expect it to be non-uniform, but we can also imagine non-uniformity without noticing any contradiction. What seemed to be the most central method of science, induction from observation, turns out to be unjustifiable. There is no reason why we should have any confidence in scientific predictions; for all we know, any scientific prediction might fail and, indeed, we cannot even
say that scientific predictions are probable. There is – after Hume – no rational basis for placing more confidence in the predictions of science than in wild guesses.
Reactions to Hume’s problem

Of course, philosophers of science could not just shrug off Hume’s conclusion and turn to other matters. Scientific predictions are more trustworthy than wild guesses or the predictions of fortune-tellers. But why is that so? We shall sketch two attempts to escape the old riddle, in order to see how both approaches are equally ineffective as solutions to Goodman’s new riddle (see below), and in order to present Goodman’s own solution of the Humean problem in the light of their failure. One family of solutions tries to save induction. One way to do this is by an ordinary-language dissolution of the problem (A. J. Ayer (1956) and Peter Strawson (1952) tried this), in which one argues that basing beliefs on inductive evidence is part of what it means to be rational, so that the question of why it is rational to do so can no longer arise. Induction is a basic principle of rationality and there are no more fundamental principles we could turn to in order to justify it; that is what it means to be a basic principle. Another approach from the same family (going back to Hans Reichenbach; see Reichenbach 1951: 245) is to vindicate induction by a pragmatic argument, in contrast to validating it, that is, proving it from more fundamental principles. In all possible circumstances we could be in, nature will either be uniform or not.
                          Nature is uniform     Nature is not uniform
We use induction          Success               Failure
We do not use induction   Success or failure    Failure
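The table is in effect a dominance argument from decision theory: in every state nature might be in, using induction does at least as well as not using it. A toy encoding of that reasoning (the numerical ranking of outcomes and all names are ours, purely for illustration):

```python
# Reichenbach's decision matrix, encoded as (act, state) -> outcome.
# Outcomes are ranked: "success" > "success or failure" > "failure".
RANK = {"failure": 0, "success or failure": 1, "success": 2}

matrix = {
    ("use induction", "nature uniform"):     "success",
    ("use induction", "nature not uniform"): "failure",
    ("no induction",  "nature uniform"):     "success or failure",
    ("no induction",  "nature not uniform"): "failure",
}

def weakly_dominates(act_a, act_b):
    """True if act_a does at least as well as act_b in every state of nature."""
    states = {state for (_, state) in matrix}
    return all(RANK[matrix[(act_a, s)]] >= RANK[matrix[(act_b, s)]]
               for s in states)
```

Since `weakly_dominates("use induction", "no induction")` holds but the converse does not, choosing induction is the rational policy even under complete uncertainty about which state obtains, which is the shape of Reichenbach's vindication.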
If nature is not uniform we are unable to find regularities by induction, but we will do no better using any other method. In those possible circumstances in which nature shows uniformity, we will find it by the inductive method in the long run. Hence, it is overall better to use the inductive method if it is unknown whether or not nature is uniform. After all, if (in the bottom-right box) there were a better method (with more predictive success than induction), this, again, would be a regularity that the inductivist could find in the long run
and exploit (using the alternative method found via induction from observed successes – this is sometimes called meta-induction – the meta-inductivist could, in the long run, approximate the predictive success of the others). Therefore, whatever nature is like and whatever alternatives to induction there are, in the long run the inductivist is either better off than a non-inductivist or at least as well off as a non-inductivist. Neither of the accounts in that family is very promising. The first strategy is undermined by the second. If it makes sense to ask whether it will in most cases serve our purposes to reason by induction, the retreat to the fact that induction is a basic principle is of no help (Hume’s problem would reappear in terms of vindication). Clearly, ‘Is it reasonable to be reasonable?’ sounds like a silly question. But we can distinguish two senses of ‘reasonable’ corresponding to two senses of ‘justification’ if we draw a distinction between validation and vindication. One sense of asking whether something is reasonable, the one corresponding to vindication, is to ask whether it is a good means for achieving a desired goal. We shall call this sense ‘reasonable₁’. The other sense of ‘reasonable’, the one corresponding to validation, is a matter of adopting generally accepted basic principles of rationality. This sense we shall call ‘reasonable₂’. Now we can ask, ‘Is it reasonable₁ to be reasonable₂?’ or, in other words, ‘Does using the accepted rules of inductive inference serve our goal of predicting correctly as often as possible?’ Clearly this is just a reformulation of the original Humean problem, not a step further (see Earman & Salmon 1999). The second strategy, however, seems doomed to failure. Worlds are conceivable that are non-uniform, but controlled by an evil omniscient demon who informs only his believers about the next events he has planned to happen in that world.
Because the demon hates inductivists, he takes care to inform a particular believer only if the inductivist would not follow this believer’s predictions (because of the bad track record the believer had while the inductivist followed him). It seems possible that in such a world it would (in terms of predictive success) be overall better to be a believer than to be an inductivist. A second family of solutions accepts Hume’s argument and the conclusion that induction is not justifiable. In this family, however, this is not seen as a devastating problem, for science never used induction as a method. And if induction does not occur in scientific methodology, it is not in need of justification. Popperian deductivism (named after Karl Popper, who championed this approach)
is one example of this strategy. Popper and his followers claimed that science does not need justification of the belief in regularities by induction. Science proceeds instead by inventing bold explanatory hypotheses about the world and testing them against observations. Doing this does not involve induction (only the deduction of testable consequences from the hypothesis). If the accepted empirical results are not in accordance with the prediction deduced with the help of the hypothesis under scrutiny, the hypothesis is accepted as falsified (again, deductively, by a sort of modus tollens). This does not lend positive support to any single hypothesis, but helps us decide which of the alternative hypotheses we presently have at our disposal is the one we should continue to work with. Hypotheses are, in Popper’s view, not confirmed by past observations, but corroborated by having “survived” many rigorous tests. Whether that “solution” is promising is doubtful. If the approach really is to do without any inductive element, the question arises what relevance a high degree of corroboration has for working with a hypothesis in the future. Without reassurance that high corroboration indicates something about the future success of the hypothesis, it remains to be explained why scientists should care about corroboration at all. Moreover, inductive as well as deductive methods do seem to be used in the sciences; if so, it is simply not true that the sciences proceed only deductively.
Goodman’s answer to the Humean riddle

Goodman approaches his own answer to Hume’s problem of induction by first investigating what kind of justification is actually called for when we ask for a justification of our inductive inferences. The problem cannot be to explain how we know that certain predictions will turn out to be true. That is something that we simply do not know in advance. Similarly, the problem cannot be to tell true predictions from false ones in advance. If this were possible, we would have found a method of clairvoyance. Nor can the problem be to tell which prediction is more probable, for even this will either be something we cannot know in advance, namely if ‘being more probable’ expresses a relation between the prediction and certain future events, or something that seems totally irrelevant for matters of justification, namely if ‘being more probable’ has nothing to do with future events.
Following his strategy of changing the question when a problem seems hopeless, Goodman, leaving induction aside for the moment, first asks: what justifies deduction? Well, instances of deductive inference such as

(1) All ravens are black.
(2) Paul is a raven.
(3) Paul is black.

are justified by showing that they are inferences in accordance with accepted rules of inference. If an inference is in accordance with accepted rules of inference, the inference is valid. That does not imply that its conclusion is true. The conclusion (3) might well be false: Paul may be white. The mere validity of the inference does not guarantee the truth of the conclusion. To check whether a deductive inference is justified does not involve checking the truth of any fact that is involved in the inference. All we check is whether the inference was in accordance with the accepted rules. How is that done? To show that the inference of (3) from (1) and (2) is in accordance with the rules, we would first reconstruct the argument in a deductive logical calculus and then check whether the inference above is valid in it. Indeed it can be represented in predicate logic:²

(1) ∀x (Raven(x) ⊃ Black(x))      assumption
(2) Raven(paul)                   assumption
(3) Raven(paul) ⊃ Black(paul)     from (1), universal instantiation
(4) Black(paul)                   from (2) and (3), modus ponens
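The same two-step derivation can be replayed in a proof assistant, whose kernel accepts a proof only if every step instantiates a valid rule. A minimal sketch in Lean 4 (the predicates and the constant paul are our illustrative axioms, not anything from the text):

```lean
-- The raven inference, formalized.
axiom Entity : Type
axiom Raven  : Entity → Prop
axiom Black  : Entity → Prop
axiom paul   : Entity

theorem paul_is_black
    (h1 : ∀ x, Raven x → Black x)  -- (1) All ravens are black.
    (h2 : Raven paul)              -- (2) Paul is a raven.
    : Black paul :=
  (h1 paul) h2  -- universal instantiation of h1, then modus ponens with h2
```

Validity here is purely formal: the theorem goes through whatever Raven, Black and paul happen to mean, which is just the rule-conformity described in the text.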
This inference is in accordance with the inference rules of predicate logic. Why is predicate logic of importance here, and not just some arbitrary set of inference rules? It must, of course, be a system with valid rules, and the rules of predicate logic are valid. Why are they valid? Goodman’s answer to that question struck many as odd. According to Goodman the rules of classical predicate logic, or perhaps of some alternative deductive system, are valid because they are more or less in accordance with what we accept as valid deductive inferences. On the one hand, we have certain intuitions about which deductive inferences are valid, and on the other hand we have rules of inference. When we are confronted with an intuitively valid inference, we check whether it accords with the rules we have already accepted. If it does not, we might reject the inference or the argument based on it as invalid. If, however, our intuition that the purported inference is valid
is stronger than our confidence that the rules are correct, we might also consider amending the rules. This soon becomes a complicated process. We have to take into account that the rules must remain coherent and not too complicated to apply. In logic, for example, we want the rules to be topic neutral, that is, applicable to inferences (as far as possible) independently of the specific subject matter, and so on. On the other hand, we also want to extract as much information from premises as possible, so we do not want to be too cautious in accepting rules. In this process we make adjustments on both sides, slowly bringing our judgements concerning validity into a reflective equilibrium³ with the rules for valid inferences until we finally get a stable system of accepted rules: This looks flagrantly circular. I have said that deductive inferences are justified by their conformity to valid general rules, and that general rules are justified by their conformity to valid inferences. But this circle is a virtuous one. The point is that rules and particular inferences alike are justified by being brought into agreement with each other. A rule is amended if it yields an inference we are unwilling to accept; an inference is rejected if it violates a rule we are unwilling to amend. The process of justification is the delicate one of making mutual adjustments between rules and accepted inferences; and in the agreement achieved lies the only justification for either. (FFF: 64/PP: 374) People who found that odd were quick to point out that the deductive inference rules are not just “intuitively” valid or in accordance with our intuitive judgements of validity.⁴ First, normal folk “intuitively” judge inferences to be valid that are not valid at all; this seems to be completely at odds with Goodman’s proposal.
Does this not show, as, for example, Stephen Stich and Richard Nisbett (Stich & Nisbett 1997)⁵ claim, that the actual “intuitive” judgements of normal folk have nothing to do with validity? They contend that the validity of inferences is an objective affair that we investigate in logic and of which we might have a false or insufficient reconstruction – as, for example, Aristotelian syllogistic appears when viewed from the standard of modern predicate logic. Such a reconstruction might then be in need of revision, despite its not being a reconstruction of what we – as reasoners – believe about validity (we shall call such a position ‘logical realism’). Stich, for example, argues: In each of these cases … it is very likely that, for some people at least, the principles that capture their inferential practices would [be in reflective equilibrium for them]. If this is right, it
indicates there is something very wrong with the Goodmanian analysis of justification. For on that analysis, to be justified is to [be in reflective equilibrium]. But few of us are prepared to say that if the gambler’s fallacy [which is the fallacy of inferring that, for example, in a game of craps the likelihood of rolling a seven with a pair of dice increases each time a non-seven is rolled] is in reflective equilibrium for a person, then his inferences that accord with that principle are justified. (Stich 1998: 100) This objection does not appear to cut any ice, however. It misses the point, since a reflective equilibrium of intuitive judgements and rules is not something that is achieved quickly, by one person alone, concerning one rule and one pattern of inference. We have to take into account a whole range of inferences intuitively judged to be deductively valid, and a whole system of inference rules to represent them. This might well be a complicated affair. As can be seen in the logical derivation above, we needed two rules, and four lines instead of the original three, to represent the inference. Thus the “normal folk” inference of the unregimented argument is represented in formal logic, but not necessarily by anything that closely resembles the original argument. To come up with a somewhat satisfying system of deductive validity took 2,500 years, counted from the first attempts to do so. This indicates that it is not easy to find a consistent set of rules that is in accordance with previously accepted valid rules, intuitive judgements and so on. That normal folk make many mistakes (by our elaborated standards of validity) and do not come up with a system themselves that equals our standards does not come as a surprise. A second argument against Goodman’s position, championed by John Earman and Wesley C.
Salmon, starts from the observation that there does not appear to be any reference to intuition when we establish that a certain system comprises only valid inference rules. In the case of predicate logic, for example, the deductive validity of the inference rules, the so-called soundness of the proof theory, is not checked by taking polls in the supermarket. It is checked by a soundness proof, that is, a proof establishing that the rules allowed in predicate logic can never take us from true premises to a false conclusion (Earman & Salmon 1999: 62). Earman and Salmon suggest abstracting away from the system of rules and sentences we have, the so-called “deductive system”, and looking at the inference rules from a metalevel. What a sentence means – if we follow the argument by Earman and Salmon – is considered to
Nelson Goodman
be independent of what syntactic proof rules apply to that sentence. If it is possible to show that any sentence that is inferred by syntactic rules of the deductive system will always be logically true if inferred from no premises, or true if inferred from true premises, the deductive system is apparently justified: it always leads from truth to truth.6 Such soundness proofs are metatheoretical proofs, since they do not prove the validity of the deductive system within the deductive system, but on a metalevel. If that is still circular (e.g. because the metatheory is deductive), this circularity is at least not as blatant. We are not proving the validity of the system of rules within the very same system, but show that the rules – if applied correctly – always lead from truth to truth. Earman and Salmon write:

Goodman’s claim about deductive logic is difficult to defend. We reject, as fallacious, the form of affirming the consequent because it is easy to provide a general proof that it is not necessarily truth-preserving. The rejection is not the result of a delicate adjustment between particular arguments and general rules; it is based upon a demonstration that the form lacks one of the main features demanded of deductive rules. Other argument forms, such as modus ponens and modus tollens, are accepted because we can demonstrate generally that they are necessarily truth-preserving. (Earman & Salmon 1999: 62)

It is true that we typically check the adequacy of a deductive system by giving a “general” soundness proof. Soundness proofs establish that there is a certain relation between two formal systems. On the one hand there is the deductive system, which tells you from what kind of lines in a proof you are allowed to proceed to what other kinds of lines in a proof. On the other hand there is the model-theoretic semantics, which interprets the formal symbols in the language and all complex expressions built from them.
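Earman and Salmon’s observation – that affirming the consequent can be shown, quite generally, not to be truth-preserving, whereas modus ponens can be shown to be – can be illustrated mechanically. The following sketch (ours, not the authors’) checks every truth-value assignment to p and q by brute force:

```python
from itertools import product

def implies(p, q):
    # The material conditional: false only when p is true and q is false.
    return (not p) or q

def truth_preserving(premises, conclusion):
    """True iff every assignment making all premises true also
    makes the conclusion true (brute force over p and q)."""
    return all(
        conclusion(p, q)
        for p, q in product([True, False], repeat=2)
        if all(premise(p, q) for premise in premises)
    )

# Modus ponens: from (p -> q) and p, infer q.
modus_ponens = truth_preserving(
    [lambda p, q: implies(p, q), lambda p, q: p],
    lambda p, q: q)

# Affirming the consequent: from (p -> q) and q, infer p.
affirming = truth_preserving(
    [lambda p, q: implies(p, q), lambda p, q: q],
    lambda p, q: p)

print(modus_ponens)  # truth-preserving
print(affirming)     # not truth-preserving (p = False, q = True refutes it)
```

Note, however, that such a check is exactly the kind of model-theoretic demonstration whose adequacy is in question in the paragraphs that follow: it presupposes that the truth-table semantics for the conditional is the right one.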
If the deductive system allows you to go from a line P to a line C, the semantics must be such that it never assigns truth (or whatever corresponds to that notion) to the line P when assigning falsity (or whatever corresponds to that notion) to the line C. Soundness proofs establish nothing else; their results are relative to a deductive system and to a semantics. Clearly, this does not tell us anything about any sort of objective or general validity of rules of inference. The next question would be why we think that the model theory that accords with the deductive system is adequate. The question of whether the model theory is adequate, however, is in part a question of what we “intuitively”
If this were an emerald it would be grue
think is the right semantics for certain logical constants (e.g. the conditional), which, in turn, is a question of what role they play in intuitively acceptable inferences.7 Soundness proofs thus do not help us out of the aforementioned regress if we do not possess any independent reason to think that the model theory is adequate, but no such reason is presented by Earman and Salmon; nor does one appear to be forthcoming elsewhere.8 Goodman saw that problem quite clearly9 and offered the best solution he could: a reflective equilibrium of validity judgements and accepted rules. It is important to realize that there might always be more than one reflective equilibrium to which that process leads,10 which is why we nowadays have more than one accepted system of deductive inference rules (and needless to say all of them have their soundness proofs). This shows us that the explication of valid deductive inference is still an open topic in philosophy. This is why there can be no such thing as “the true logic”, according to Goodman’s conception. A system of deductive rules might be pragmatically better than an alternative system, but there simply is no set of objectively valid rules. If what we argued above is true, namely, that any justification of deductive rules must at some point or other be founded on a reflective equilibrium between our intuitions and accepted rules, every logician who subscribes to some sort of logical realism (the view that there is only one true logic) is obviously in trouble. Logical realists, unlike Goodman, have the burden of proving why the specific reflective equilibrium arrived at is justified, if (a) for every initial set of intuitions and antecedently accepted rules more than one equilibrium is possible, and (b) the specific initial set of intuitions started with is different for subjects with other cultural or socioeconomic backgrounds (as Stich and Nisbett would presumably continue to argue).
It now becomes clear how Goodman thought about Hume’s problem. Hume’s solution, namely that induction is just a matter of custom or habit, might be incomplete, but it is basically correct.

[The point concerning the justification of deduction] applies equally well to induction. An inductive inference, too, is justified by conformity to general rules. And a general rule by conformity to accepted inductive inferences. Predictions are justified if they conform to valid canons of induction; and the canons are valid if they accurately codify accepted inductive practice. (PP: 375/FFF: 64)
The remaining task of solving the induction problem is then to explicate the pretheoretic notion of valid inductive inference by defining rules of inference that can be brought into a reflective equilibrium with intuitive judgements of inductive validity. This can be done in a variety of ways; integrating intuitive judgements as well as general considerations might lead to several alternative solutions.11 This project is undertaken in the various attempts to explicate the notion of confirmation and to construct systems of inductive logic.
The new riddle of induction

The comfort we can take in the fact that the Humean challenge just disappeared will not last long. We shall now encounter Goodman’s own challenge to induction: the so-called “new riddle of induction”. Goodman first formulated his new riddle of induction in a brief paper of 1946 titled “A Query on Confirmation”. The paper was directed at the theories of confirmation and induction proposed by Carnap (1945a,b) and Carl Gustav Hempel (1943, 1945). Hempel and Carnap both reacted to the riddle quite quickly (Carnap in print in 1947), but it seems fair to say that this first publication of what later became the new riddle of induction was a non-starter; at first very few people took notice of it. Since its second publication in Fact, Fiction, and Forecast nine years later, however, Goodman’s riddle of induction became one of the most discussed philosophical problems of the twentieth century and is still discussed today. In 1994 Douglas Stalker catalogued 300 learned discussions of it and the literature is still growing. A widespread explanation of the fact that it took a while for the riddle to attract the attention of other philosophers is that the earlier paper had three shortcomings: (i) it looked as if it was only highlighting a technical problem in Carnap’s and Hempel’s theories of confirmation, which were themselves pretty technical theories, and at that time there were not yet many philosophers around who were interested, let alone well trained, in formal logic and formal philosophy of science; (ii) the earlier paper did not discuss the astonishing symmetry between projectible and non-projectible predicates (i.e. the fact that whether a predicate is “positional” depends on which set of predicates you start with – we shall come to the notions of positionality and projectibility later), which might have led to the impression that there is an easy solution to the problem; (iii) nowadays you
need a catchy terminology to get your point across – Quine’s fame is intimately connected with ‘gavagai’, Goodman’s certainly with ‘grue’ – but in the earlier paper it was not ‘grue’ that was used to make the point, but only a clumsy description concerning marbles drawn from an urn being red up to some day. (Some philosophers think that (iii) is doing most of the explanatory work here.) What is ‘grue’? What is the new riddle of induction? Consider the following two (true) statements:

(B1) This piece of copper conducts electricity.
(B2) This man in the room is a third son.

B1 is a confirmation instance of the following regularity statement:

(L1) All pieces of copper conduct electricity.

But does B2 confirm anything like L2?

(L2) All men in this room are third sons.

Obviously it does not. But what makes the difference? Both regularity statements (L1 and L2) are built according to the exact same syntactical procedure from the evidence statements; they are generalizations of B1 and B2. Consider a reformulation in predicate logic to see this point:

(B1*) (PieceOfCopper(a) ∧ ConductsElectricity(a))
(L1*) ∀x (PieceOfCopper(x) ⊃ ConductsElectricity(x))
(B2*) (ManInTheRoom(a) ∧ ThirdSon(a))
(L2*) ∀x (ManInTheRoom(x) ⊃ ThirdSon(x))

In both cases, the relation between the evidence statement and the regularity is syntactically the same. Therefore it does not seem to be for a syntactical reason that B1 confirms L1 but B2 fails to confirm L2. The reason is that statements like L1 are lawlike, whereas statements like L2 are not: L2 expresses only an accidentally true general statement. Lawlike statements, in contrast to accidentally true general statements, have two interesting features: they are confirmed by instances of them; and they support counterfactuals. Statement L1 supports the counterfactual claim that if this thing I have in my hand were a piece of copper, it would conduct electricity. Suppose, in contrast, that L2 is indeed true for all the men in this room.
Still, L2 would not support the claim that if that man on the street were here in the room, he would be a third son. To tell which statements are lawlike
and which statements are not is therefore of high importance in the philosophy of science. A satisfying account of induction (or corroboration) as well as a satisfying account of explanation and prediction needs such a divide. Goodman, however, shows that this is extremely hard to get. Here comes the riddle. Suppose that you do research in gemmology. Your special interest lies in the colour properties of certain gemstones, in particular of emeralds. All the emeralds you have examined before a certain time t were green; your notebook is full of evidence statements of the form ‘Emerald no. xyz found at place yxz date yzx (yzx ≤ t) is green’. It seems that at t, this supports the hypothesis that all emeralds are green, L3. Now Goodman introduces the predicate ‘grue’. This predicate applies to all things examined before some time t just in case they are green but to other things (observed at or after t) just in case they are blue:

(DEF1) x is grue =df x is examined before t and green ∨ x is not so examined and blue

Until t it is obviously also the case that for each statement in your notebook there is a parallel statement asserting that the emerald no. xyz found at place yxz date yzx (yzx ≤ t) is grue! Each of these is analytically equivalent to a sentence in your original notebook.12 All of these grue-evidence statements taken together confirm the hypothesis that all emeralds are grue, L4, and they confirm this hypothesis to the exact same degree as the green-evidence statements confirmed the hypothesis that all emeralds are green. But if that is the case, then the following two predictions are also confirmed to the same degree:

(P1) The next emerald examined after t will be green.
(P2) The next emerald examined after t will be grue.

However, to be a grue emerald examined after t is not to be a green emerald: an emerald examined after t is grue if, and only if (iff), it is blue. Hence we have two predictions.
Both are confirmed to the same degree by the past evidence, but taken together they are nevertheless incompatible. This is the new riddle of induction. It is also obvious that we could potentially define infinitely many other predicates like ‘grue’ that would all lead to new similarly incompatible predictions (and we could do this for all kinds of things, not just emeralds). After Goodman, philosophers of science have cooked up quite a number of these. The immediate lesson should be that we cannot use all kinds of weird predicates to formulate hypotheses or to classify our evidence.
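The structure of the riddle can be made vivid with a small sketch. The cut-off T and the notebook of observations are invented for illustration; the point is only that every pre-t green record is equally a grue record, while the two hypotheses issue incompatible verdicts after t:

```python
T = 10  # Goodman's time t; an arbitrary cut-off for illustration

def green(colour):
    return colour == "green"

def blue(colour):
    return colour == "blue"

def grue(colour, examined_at):
    # DEF1: x is grue iff x is examined before t and green,
    # or x is not so examined and blue.
    return green(colour) if examined_at < T else blue(colour)

# A notebook of emeralds, all examined before t and all found green:
notebook = [("green", time) for time in range(T)]

# Every green-evidence statement has a matching grue-evidence statement:
assert all(green(colour) for colour, _ in notebook)
assert all(grue(colour, time) for colour, time in notebook)

# Yet for the next emerald, examined after t, the hypotheses diverge:
print(green("green"), grue("green", T + 1))  # P1 satisfied, P2 violated
print(green("blue"), grue("blue", T + 1))    # P1 violated, P2 satisfied
```

The same evidence base verifies both sets of evidence statements, so nothing in the records themselves favours ‘green’ over ‘grue’; that is the symmetry the following paragraphs exploit.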
Some predicates (those like ‘green’) can be used for this; other predicates, such as ‘grue’, must be excluded if induction is supposed to make any sense. If there were absolutely no way of excluding such predicates, anything could be predicted on the basis of the same evidence, using the appropriate grue-like predicate. That is already a very interesting result, since now we know that for valid inductive inferences it must matter what predicates we use. The problem this riddle poses is not that we lack a justification for accepting a general hypothesis as true on the basis of only positive instances and no counter-instances (which was Hume’s problem); it is to explain why some general statements (such as hypothesis L3) are confirmed by their instances, whereas others (such as L4) are not. Again, this is a matter of the lawlikeness of L3 in contrast to L4, but how are we supposed to tell the lawlike regularities from the illegitimate generalizations? A reply that immediately comes to mind is that the illegitimate generalization L4 involves a certain temporal restriction (whereas L2 was spatially restricted). That was also Carnap’s first reaction to the problem when he confronted it in the late 1940s (see Carnap [1947] 1997). The idea is that the predicates that cannot be used for induction are all analytically positional, that is, their definitions refer to individual constants (for places or times). A projectible predicate, that is, a predicate that can be used for induction, has no definition that would refer to such individual constants but is purely qualitative (e.g. because it is a basic predicate). The trouble with this reply is that it makes it essentially relative to a language whether or not a predicate is projectible, since it is relative to a language whether ‘grue’ is positional or ‘green’ qualitative.
If we begin with a language containing the basic predicates ‘green’ and ‘blue’ (as in English), ‘grue’ and ‘bleen’ are positional, where ‘bleen’ is defined as follows:

(DEF2) x is bleen =df x is examined before t and blue ∨ x is not so examined and green

But if we start with a language that has ‘bleen’ and ‘grue’ as basic predicates, ‘green’ and ‘blue’ turn out to be positional:

(DEF3) x is green =df x is examined before t and grue ∨ x is not so examined and bleen
(DEF4) x is blue =df x is examined before t and bleen ∨ x is not so examined and grue

The qualitative–positional distinction seems to depend solely on what
language you start with. But then, both languages are completely symmetrical in all their semantic and syntactical properties.13 In other words, the positionality of predicates is not invariant with respect to linguistically equivalent transformations. But if that is the case then there is no semantic or syntactic criterion on the basis of which we could draw the line between projectible predicates and predicates that we cannot use for induction.14 This certainly was terrible news for logical positivists. However, we need to draw such a line. We cannot simply accept that the predicates we use in usual discourse in our language seem fine for induction and just go ahead:

If our definition works for such hypotheses as are normally employed, isn’t that all we need? In a sense, yes; but only in the sense that we need no definition, no theory of induction and no philosophy of science. … The odd cases we have been considering are clinically pure cases that, though seldom encountered in practice, nevertheless display to best advantage the symptoms of a widespread and destructive malady. We have so far neither any answer nor any promising clue to an answer to the question what distinguishes lawlike or confirmable hypotheses from accidental or non-confirmable ones … It is this problem that I call the new riddle of induction. (PP: 386/FFF: 80–81)

This time the problem is not a problem for inductivists only. Although Feyerabend (1968) and others have thought that Goodman’s riddle is no threat to (Popperian) deductivism, it certainly is. Everything that was said about confirmation could also be phrased in terms of corroboration and everything said about “positive instances” could also be phrased in terms of “negative instances” (see Foster [1969] 1997). It is also of no help to retreat to ostensive learnability. Many have thought that exactly those predicates should be projectible that are ostensively learnable.
However, many predicates, such as ‘conducts electricity’, are not so learnable but clearly projectible, whereas grue-like predicates might well be ostensively learnable. Note that they need not involve an implicit reference to a certain date, but might make reference to modes or states of the observer (see Hullett & Schwartz [1967] 1997). Before we learn what Goodman’s own answer to his new riddle of induction is, we shall briefly follow the route by which Goodman came to the problem of induction in order to see what areas of epistemology,
philosophy of language and philosophy of science are involved. This is very instructive, for it will become clear that the new riddle of induction is not merely a technical puzzle for certain kinds of formal theories of confirmation. Following Goodman’s footsteps also shows how Goodman approaches philosophical problems, and how he arrives at solutions to them.
Induction and projectibility: counterfactuals

In Fact, Fiction, and Forecast, Goodman published two lectures in one book. The first lecture was presented to the New York Philosophical Circle on 11 May 1946. In this lecture he discusses the problem of counterfactual conditionals, which eventually leads him to the problem of lawlikeness and to the problem concerning which predicates are projectible (which he could not solve in 1946). The second part of the book contains the Special Lectures in Philosophy that Goodman gave at the University of London on 21, 26 and 28 May 1953. In this part he again departs from the problem of counterfactuals, discusses its connection with dispositions, gives an analysis of modality and then turns to the problem of induction and projectibility (and argues for his solution of the problem). How are these topics connected? The issue of counterfactuals seems to be a problem in the philosophy of language (or formal semantics), the problem of modality seems to be a problem of ontology, whereas the projectibility issue belongs to philosophy of science and epistemology. Let us begin with counterfactuals. Counterfactuals such as

(E1) If this match had been struck it would have lit.

seem to have weird truth-conditions. Typically their antecedent is actually false; the match has not been struck. Moreover, in this case the consequent is false too; the match has not lit. The normal truth-conditions of a conditional do not apply here. The truth-function expressed by the material conditional is displayed in the following truth-table:

p   q   p ⊃ q
T   T     T
T   F     F
F   T     T
F   F     T
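The table can be reproduced mechanically, and doing so makes the difficulty for counterfactuals easy to see. This is only an illustrative sketch:

```python
def material_conditional(p, q):
    # 'p ⊃ q' is false only when p is true and q is false.
    return (not p) or q

# Reproduce the truth table row by row:
for p in (True, False):
    for q in (True, False):
        print(p, q, material_conditional(p, q))

# With a false antecedent (the match has not been struck), the material
# conditional is true whatever the consequent says, so (E1) and (E2)
# would both come out true:
struck = False
print(material_conditional(struck, True))   # (E1): it would have lit
print(material_conditional(struck, False))  # (E2): it would not have lit
```

Both final calls return True, which is exactly the problem the next paragraph states: the material conditional cannot serve as the truth-function for counterfactuals.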
Hence, a material conditional with a false antecedent is always true, which would make

(E2) If this match had been struck it would not have lit.

also true, but it clearly is not if (E1) is true. Goodman identifies two problems with counterfactuals. One is the problem regarding which implicit boundary conditions are stated when a counterfactual is stated. The second problem concerns the quality of the implicit regularity that is stated when a counterfactual is stated. Boundary conditions specify the states of affairs the counterfactual is about. The explicit boundary conditions are that, in the states of affairs the counterfactual is about, there is a match and that it is struck. Some boundary conditions must also implicitly be stated when such a counterfactual is stated. Matches do not always light; they only do so if the match is dry, if there is enough oxygen in their vicinity and so on. Thus (E1) is only true if such boundary conditions were also the case; thus the implicit boundary conditions are necessary truth-conditions of counterfactuals. The set of these boundary conditions, together with the sentence ‘This match is struck’ and the regularity that matches light when struck under favourable conditions, should then lead to the consequent of (E1) expressing that the match is lit. On the other hand, these boundary conditions do not include all that is the case about the match, since one of the things that are the case about the match is that the match is not struck. But then the union of all boundary conditions and the sentence ‘The match is struck’ would imply everything and would, again, also make (E2) true (ex falso sequitur quodlibet: given an inconsistency any formula whatsoever can be derived15). Hence one problem with the truth-conditions of counterfactuals is that we need to know what the right set of boundary conditions is that are implicitly stated by a counterfactual and thus belong to its truth-conditions.
For Goodman, this part of the problem is a dead end. There does not seem to be a way to single out the right boundary conditions without using a counterfactual, but that would render the analysis of counterfactuals viciously circular. The closest we get to an analysis of counterfactuals is that we need to find a set of boundary conditions B such that B together with the antecedent A and a law imply the consequent. But the restrictions on B are the trouble.

(DEF5) A counterfactual with antecedent A and consequent C is true iff there is a set B of true sentences such that:
(i) B is compatible with C and B is compatible with ¬C,
(ii) the set that contains all the sentences in B and also the sentence A is consistent, and all sentences of that set are simultaneously assertable,
(iii) the conjunction of all sentences of B with A, A*, is such that there is a law and (a) ‘A* ⊃ C’ is implied by that law, (b) there is no set B* compatible with C and compatible with ¬C such that B* together with A and a law would imply ¬C, (c) there is no law such that B or B* would be implied by ¬A and that law.

Whether or not this analysis is confronted with further counterexamples is a side issue. The second half of condition (ii), the simultaneous assertability of the sentences of B together with A, is implicitly nothing but a further counterfactual: to check whether B together with A is simultaneously assertable is to check whether ‘If A were true, the conjunction of all sentences in B would be false’ is true, which in turn (by application of DEF5) would mean to check whether there is a set B**, which contains sentences that are simultaneously assertable with A and imply ¬B (by some law), and so on. The second problem with counterfactuals concerns the law that is needed to imply the consequent together with the appropriate B. General statements such as ‘All matches which are struck under favourable conditions light’ do support counterfactuals (e.g. ‘If this match had been struck under favourable conditions it would have lit’), whereas general statements such as ‘All items in your fridge have gone off’ do not support counterfactuals (check ‘If this yoghurt were in your fridge it would have gone off’, unless you have a magical fridge that does not like you at all).
Dispositions

Having convinced himself that the problem of counterfactuals does not seem to have an easy solution and therefore does not promise to be illuminating, Goodman turns to dispositions instead. Disposition statements are usually stated in indicative form; they talk about “inner states” of things rather than external boundary conditions but are nevertheless intimately connected with counterfactuals. In fact one reason to analyse counterfactuals was to find out what disposition statements say. But maybe this way of analysis went in the wrong direction. Counterfactuals seem very complicated; maybe they are more complicated
than disposition statements and therefore make a bad analysans. Maybe the path to explication lies in the other direction. Goodman cannot accept that disposition predicates refer to properties; such an analysis would not help to clarify the issue at all, since talk of “properties” is even more obscure than talk of dispositions. We have disposition predicates and the objects they apply to, but where on earth are the properties of the latter? Properties are on the list of notions Goodman finds in need of explanation, so he could not take any comfort in an explanation that referred to them. As he says at the beginning of Fact, Fiction, and Forecast:

[S]ome of the things that seem to me inacceptable [sic] without explanation are powers or dispositions, counterfactual assertion, entities or experiences that are possible but not actual, neutrinos, angels, devils, and classes. … You may decry some of these scruples and protest that there are more things in heaven and earth than are dreamt of in my philosophy. I am concerned, rather, that there should not be more things dreamt of in my philosophy than there are in heaven or earth. (FFF: 33–4)

The task must therefore be to analyse disposition predicates as expressions that apply to actual objects. However, a disposition predicate such as ‘inflammable’, which applies to the book you are now reading, seems to talk about a possible state of this book; the book does not burn, but under favourable circumstances it could. As is clear from the quote just given, talk about possible states is as unacceptable for Goodman as talk about properties, if this cannot be translated into talk about manifest predicates of concrete things. Goodman’s idea is to analyse disposition predicates as extensions of certain manifest predicates. Imagine a book that is actually burning. Here the predicate ‘burns’ is manifest.
The book you are reading now, on the other hand, is in the extension of inflammable, because the predicate ‘burns’ is so connected with the predicate ‘inflammable’ that extending ‘burns’ by ‘inflammable’ also covers this book. The question is under which circumstances it is allowable to extend ‘burns’ in this way. This might in single cases be a question of which manifest predicates we take as good indicators of inflammability. If having chemical constitution X is taken to be a good indicator, then a partial definition of ‘inflammable’ is ‘burns or has chemical constitution X’. This way we are not talking about things that could happen to this book, but about manifest predicates true of this book. In the general case it is a question of which predicates are projectible.
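The partial definition just described can be sketched as follows. The representation of things as dictionaries and the indicator predicate ‘has chemical constitution X’ are invented for illustration; the point is only that ‘inflammable’ is defined from manifest predicates actually true of things, with no appeal to possible states:

```python
def burns(thing):
    # The manifest predicate: the thing is actually burning.
    return thing.get("burning", False)

def has_constitution_x(thing):
    # A stipulated indicator predicate, standing in for whatever
    # manifest indicator of inflammability we actually adopt.
    return thing.get("constitution") == "X"

def inflammable(thing):
    # The partial definition: 'burns or has chemical constitution X'.
    return burns(thing) or has_constitution_x(thing)

burning_book = {"burning": True, "constitution": "X"}
this_book = {"burning": False, "constitution": "X"}
stone = {"burning": False, "constitution": "Y"}

print(inflammable(burning_book))  # manifestly burning
print(inflammable(this_book))     # covered by the extended predicate
print(inflammable(stone))         # not covered
```

Note that nothing modal occurs in the definition: whether a thing falls under ‘inflammable’ is settled entirely by manifest predicates that are actually true of it.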
Possibilia

Goodman’s analysis ensures that disposition statements talk about actual things, not about possible objects. He extends this into an actualist analysis of possibilia. To see why that is an extremely attractive position, compare the following alternative. Say you have two books on your table, in particular two copies of Heidegger’s Sein und Zeit, which you received as Christmas presents from someone who does not know you well and someone who does not like you much. After reading some pages you decide to turn to more practical matters and set one of the copies on fire. Now, one copy burns, the other does not, but both copies are inflammable. Why is the copy that actually does not burn inflammable? One story philosophers might tell you to explain this is that counterfactual sentences do not directly talk about anything in our actual world, but about things and states of affairs in other merely possible worlds. What follows is the story that a so-called modal realist might tell you to “explain” the semantics of counterfactuals. There are as many worlds as there are possible ways this world could be, all concrete entities just like the world you live in, although causally unconnected with it.16 When we here in our world say that something is possible, we are really talking about what is the case in one of these worlds. These possible worlds are ordered. Some of them are closer to the actual world, not spatially (due to their spatiotemporal isolation from our world), but in the sense that they are more similar in many respects, whereas there are other possible worlds, very different from the actual world and so farther away. In some of the closer worlds there are counterparts of your copies of Sein und Zeit on your counterpart’s table; and, again, in some of them the counterpart of you has decided to burn the other copy of the book or both, or lightning has struck your counterpart’s desk, or your counterpart’s house burns down.
Whatever happened there, the counterpart of the other copy burns. If it burns in all close worlds in which it is set alight under favourable conditions then this is why in your world this copy is inflammable although it is not actually burning. Now, if this kind of talk can be replaced by talking of manifest predicates that apply to the objects there on your table, we do not seem to overcrowd the universe of discourse with all these strange entities and might eventually arrive at an explanation that is worth its name. Modalities are brought back to earth in this way: the non-burning copy is inflammable only because one copy actually burns and because the other copy is actually of a certain chemical constitution. The
question is whether by mere projection of manifest predicates we can produce as many possibilities as we want to use (for semantics, metaphysics and epistemology). Goodman thinks that the possibilities we need for a philosophy of science and epistemology are indeed analysable this way.17 The only remaining question to answer then is which predicates are projectible, and we will be able to tell which things have which dispositions. So we can at last turn to Goodman’s own solution to his riddle of induction.
Goodman’s solution: a theory of projectibility

Of course, ‘projectible’ is itself a disposition predicate and we do not yet know generally how dispositions should be analysed. We know, however, that single disposition predicates can be (partially) analysed without presupposing a general solution for all disposition predicates. This is what we did with the disposition predicate ‘inflammable’ above; it applies to all things that burn and the things that have chemical constitution X (which we take as a good indicator for inflammability). If we can do the same thing with ‘projectible’ we could solve our problems concerning dispositions once and for all. The inflammability example showed us what we have to do if we want to analyse a single disposition predicate. The lesson we learnt there can now help us to solve the problem here. When we know when to project ‘inflammable’, we know which indicators distinguish for us the inflammable things from the non-inflammable ones. Obviously, if we knew when to project ‘projectible’, we would know which indicators distinguish for us the projectible predicates from the unprojectible ones. But this is just what we need to solve the disposition problem once and for all. When we know how to project ‘projectible’, we know when to project any given predicate. But this also will tell us when it is legitimate in general to project a “manifest” predicate to a disposition predicate. We have then reduced the old general problem of how to analyse disposition predicates to the singular problem of how to analyse the disposition predicate ‘projectible’. The project is then to define what distinction our mind is making when it distinguishes legitimate from illegitimate projections. Just as we did with ‘inflammable’ we shall now take some manifest predicate and extend it to cover all hypotheses with predicates that are projectible (for brevity’s sake we shall call a hypothesis with a projectible predicate a ‘projectible hypothesis’).
The first predicate that comes to mind is ‘projected’.
(DEF6) A hypothesis is projected iff (i) it is accepted after the positive test of some of its instances (predictions), (ii) not all of its instances are tested yet, and (iii) no instance falsified the hypothesis.

Projected hypotheses are not necessarily all (and still) projectible. Whereas all burning things obviously are inflammable,18 not all projected hypotheses are projectible (some hypotheses projected at earlier times might now be known to be false, or might now be exhausted). To find the extension we want, we need to exclude all projected hypotheses that are not projectible and include all projectible hypotheses not yet projected.

Let us start with the exclusions. We shall first exclude all hypotheses that are at the moment exhausted or violated by negative instances. But what about our hypothesis L4, ‘All emeralds are grue’? Hypothesis L4 is not yet exhausted, nor is it falsified. However, we know that L4 is in conflict with L3, which says that all emeralds are green. We know that there is no syntactical (or semantical) difference between the two hypotheses that we could use to make a decision. There is, however, a historical difference that we can use. ‘Green’ is much better entrenched in our language, which means that we have projected many more hypotheses with that predicate, or with predicates having the same extension, than we have with ‘grue’ (in fact, we have never projected any hypothesis with ‘grue’ or with any predicate coextensional with it). Thus, a hypothesis is excluded from projection if it conflicts with a hypothesis whose predicate is better entrenched; in that case the hypothesis is overridden. Given these ideas about exclusion and this terminology, we can define the rule by which we decide projectibility, unprojectibility and nonprojectibility:

(DEF7) A hypothesis is projectible iff it is supported, unviolated and unexhausted, and all hypotheses conflicting with it are overridden.
(DEF8) A hypothesis is unprojectible iff it is unsupported, exhausted, violated or overridden.

(DEF9) A hypothesis is nonprojectible if it and a conflicting hypothesis are supported, unviolated, unexhausted and not overridden.

The last definition takes care of the case in which we are confronted with two hypotheses that are in conflict and neither has a better entrenched predicate. Entrenchment can even be further refined to account for cases in which a predicate inherits the entrenchment of another from which it is derived, and so on.

Of course, this makes projectibility essentially a matter of the language we use and have used to describe and predict the behaviour of our world. Is that satisfying? You might think that whether or not a hypothesis is lawlike should depend not on such contingent matters as the way we use and have used our language up to now, but on more fundamental things. Should it matter whether a predicate refers to a real property or whether a kind-term refers to a real substance? What we want is that the predicates of our hypotheses cut nature at its joints, but can we really be sure that the history of our language use will guarantee that? As Goodman puts it:

Are we not trusting too blindly to a capricious Fate to see to it that just the right predicates get themselves comfortably entrenched? … In the case of new predicates … the legitimacy of any projection has to be decided on the basis of their relationship to older predicates; and whether the new ones will come to be frequently projected depends upon such decisions. But in the case of our main stock of well-worn predicates, I submit that the judgment of projectibility has derived from our habitual projection, rather than the habitual projection from the judgment of projectibility. The reason why only the right predicates happen so luckily to have become well entrenched is just that the well entrenched predicates have thereby become the right ones. (FFF: 98)

Making predictions and having theories about lions, emeralds and green and blue things is of interest if we suppose that there are lions, emeralds and green and blue things. But such suppositions do not seem to play the tiniest role in this definition of projectibility. If we recall how Goodman defined his project, however, we find that such further demands are inadequate (if we agree to his definition of the project).
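As an illustration of how these definitions interlock (a sketch of ours, not Goodman’s; all field and function names are invented), the rule in DEF7–DEF9 can be read as a decision procedure over a hypothesis’s current status:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    supported: bool = False   # some instances tested positively (cf. DEF6)
    violated: bool = False    # a negative instance has turned up
    exhausted: bool = False   # all instances have already been examined
    overridden: bool = False  # conflicts with a better-entrenched rival
    conflicts: tuple = ()     # rival hypotheses incompatible with this one


def classify(h: Hypothesis) -> str:
    """Decision rule modelled on DEF7-DEF9 (our reconstruction)."""
    # DEF8: unsupported, exhausted, violated or overridden
    if not h.supported or h.exhausted or h.violated or h.overridden:
        return "unprojectible"
    # DEF9: a supported, unviolated, unexhausted, non-overridden rival remains
    if any(c.supported and not (c.violated or c.exhausted or c.overridden)
           for c in h.conflicts):
        return "nonprojectible"
    # DEF7: supported, unviolated, unexhausted; all conflicts overridden
    return "projectible"


# 'All emeralds are grue' conflicts with the better-entrenched 'green'
# hypothesis and is thereby overridden; the 'green' hypothesis then wins.
grue = Hypothesis(supported=True, overridden=True)
green = Hypothesis(supported=True, conflicts=(grue,))
```

On this toy reading, `classify(grue)` comes out "unprojectible" and `classify(green)` "projectible", while two live, equally entrenched rivals would both come out "nonprojectible", just as DEF9 requires.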
Goodman’s task was to define ‘projectible’ in a way that conforms extensionally more or less with the rule we use to tell projectible hypotheses from unprojectible or nonprojectible ones. To find fault with the result justifiably, one needs to present extensional differences between the defined rule and actual practice large enough to make the definition given above unacceptable as a proper explication. We are not suggesting that this cannot be done. What we want to emphasize is that the mere fact that the definition sounds unfamiliar does not mean that it is inadequate (by Goodman’s own standards).
Summary

In this chapter we have tried to give a first impression of Goodman’s philosophy. We chose Goodman’s new riddle of induction for this purpose because the reader might already have encountered it in an epistemology or philosophy of science class. We hope to have shown three interesting things about Goodman’s philosophy that we want to follow up in subsequent chapters. The first is his technique of detecting dead ends and changing the direction of explication. In Chapter 3, in which we shall say more about Goodman’s general conception of philosophy, we hope to make it somewhat clearer why he thinks that this might sometimes be a good idea. Secondly, and closely connected, there is the whole issue of explication as the main project of philosophy. What exactly is an explication? How does it differ from other kinds of definition? We have said that Goodman thinks it important that an explication is extensionally more or less in accordance with our practice. Why does he think so? This, again, will be a central topic of Chapter 3. Finally, his analysis of projectibility in terms of entrenchment sounds rather constructivist. After all, language is of our making and arbitrary; but is it equally up to us, and arbitrary, whether all emeralds are grue or all emeralds are green? Does Goodman really think that contingent facts about our practice of projecting certain predicates rather than others in the end determine which hypotheses are projectible at all? This is an issue that will be a topic not only in Chapter 3 but throughout the rest of the book.

In this chapter we have tried to present Goodman’s views on induction, the justification of inference rules, counterfactuals, dispositions, modality and projectibility. To keep things at a comprehensible level, we did not follow all the details and aspects of Goodman’s thoughts on these matters and, moreover, have said little (next to nothing) about other people’s views on the topics involved.
In Chapter 9 we shall make good this omission with respect to the views held by others. With respect to further aspects of Goodman’s thought on the matter, some things will be supplemented in the chapters to come; for others you might want to read Fact, Fiction, and Forecast and Problems and Projects.
Chapter 3
The big picture
In this chapter we sketch how Goodman conceives of the purpose, aim and method of philosophy. We also indicate the way in which his thoughts on seemingly unrelated matters, such as art and induction, are connected to form a coherent whole. Few philosophers of the twentieth century have been as influential as Goodman, particularly across such diverse fields as the philosophy of art and the philosophy of science. What we hope to show is that this is due not only to an idiosyncratic coincidence of interests, but also to Goodman’s account of philosophy. What gives unity to Goodman’s thought is his conception of philosophy as an activity directed at understanding and elucidation. For Goodman, such elucidation is achieved by way of definitions. One of the most remarkable aspects of Goodman’s theory of definitions is that, in contrast to the ordinary-language tradition, a definition does not need to be synonymous with (i.e. identical in meaning to) the expression defined. As we shall explain, this is partly due to Goodman’s conviction that no two distinct expressions of a language are identical in meaning. This provocative thesis is central to Goodman’s thought.
What is philosophy good for?

Before turning to the questions of what – according to Goodman – philosophy is about and how it gets done, it might be instructive to see how Goodman conceived of its purpose and use. These days, philosophy is increasingly challenged to prove its worth to society. Not only philosophy, but all sciences are evaluated from that perspective. The more immediate and practical the expected benefit of a given pursuit, the more worthwhile it appears. Accordingly, sciences involved in fundamental research are in some trouble. Although a pharmaceutical research project might be able to promise the development of a very important new drug that will help many people after three years of research, scientists doing fundamental research can usually at best claim only indirect value for their expected outcomes. Quantum mechanics made lasers possible, which in turn led to the invention of CD-players. But CD-players were absolutely unforeseen by the forefathers of quantum mechanics, and it would have been ridiculous if Niels Bohr had written anything like “In fifty years we shall be able to develop new technology that will cause a revolution in the music business and sell like hell” into a proposal for a research grant. Philosophers are even worse off. Although they can justifiably claim that the computer and the whole idea of an artificial programming language go back to philosophers and/or philosophically interested mathematicians, everybody knows that of the very many things that Heidegger (and most other philosophers) said, nothing eventually found its way into technology (which is, presumably, a very good thing) or into any other practical improvement. So what could reasonably be said in favour of philosophy?

To questions about the purpose of such things as philosophy, art, fundamental science and so on, Goodman would (as so often) reply that the question is asked in the wrong way: it does not apply at all to philosophy and the like. Philosophy, art and science answer the question ‘Why do we do x?’; their existence does not raise it. It is nonsense to ask what philosophy is good for. Everything else is good for philosophy. It does not need a purpose; it is the purpose of all other things we do. Many people who stumble over this remark find it very arrogant, or worse.
It seems arrogant because it appears to downgrade the existence of all people who do not happen to work in philosophy or science departments. Is the fact that some philosopher is sitting in his office reflecting on his pet problem what gives meaning to the lives of all those poor fools who work hard to make a living for themselves and their families?

Goodman did not think of philosophy, art or science as the privilege of the few people who later become philosophers, artists or scientists. Education in art and science should basically be available to everyone. That it could be taught, and how such education could be systematically improved, was the subject of Project Zero at Harvard. Project Zero emphasized the cognitive aspect of art and the like, such that art became universally teachable (in contrast to the received view that art is some kind of entertainment) and systematically accessible (in contrast to the equally received view that understanding art is the privilege of a gifted few).

Serious study of education for the arts has also been stunted and side-tracked by the prevalent notion that the arts are instruments of entertainment. Some newspapers list plays, concerts, and exhibitions under “amusements”; and among a week’s amusements may be a Bach Mass, King Lear, and an exhibition of Goya’s Disasters of War. No real progress in attitudes toward education can be hoped for when Cezanne’s pictures are classed with cookouts, and arts programmes with playgrounds. On the other hand, we encounter almost as often the equally detrimental mistake of exalting the arts to a plane far above most human activities, accessible only to an elite. (BA: 5)

Art, philosophy and the sciences enhance our understanding. They do not do this for some extrinsic purpose; understanding is rather a good in itself. This autonomy of science was independently motivated from another perspective. In his talk “Definition and Dogma”, first delivered in 1951, five years after the Second World War, Goodman argues against the charge that esoteric1 science would even undermine the strength of a society and its resistance to totalitarian propaganda:

As the philosopher becomes more scientific in the sense that he becomes more an investigator of problems than an advocate of doctrines, he becomes more and more the object of complaints like those currently raised against the scientist. He is charged with wasting his time on trivial matters of intellectual curiosity rather than on problems of vital and pressing importance to human survival. He is charged with failing to attend to the important problem of establishing a scale of human values.
And his skeptical spirit, his critical examination of even the fondest beliefs, and his exposure of the fallacies in even the most elementary and widely accepted opinion, are held to be responsible for undermining the faith that we need to protect us from the inroads of totalitarian doctrine. (PP: 52)

Goodman’s reply to this challenge is twofold. First he argues that even if science would like to answer the great and grave problems confronting society, it could not possibly do so while the simplest and most familiar matters that fundamental science and philosophy deal with remain unsolved puzzles. “Isn’t this like trying to solve equations in higher algebra before we have the laws of arithmetic straight?” (PP: 53).

Goodman’s second argument is directed against the more specific charge that the sceptical attitude of science undermines faith and thereby the strength of a society to resist the influence of evil-doers and fanatics; that, therefore, the esoteric research undertaken by science does not serve society. Goodman does not deny this; indeed, science and analytic philosophy do not establish a scale of values. They do not produce a counter-dogma or any sort of religion substitute that could serve as a basis for society. But that is not a problem. On the contrary, it is devotion to whatever dogma that makes individuals most susceptible to propaganda:

But let us turn now to the charge that science and analytic philosophy undermine all our faiths and leave us prey to any onset of forceful propaganda. It seems to me that no idea could be more mistaken – that there could be no worse misunderstanding of the dynamics of belief. As a matter of fact, the most promising candidate for apostleship in a new creed is the passionate devotee of an old one, not the skeptic who subjects all proposals to critical scrutiny. There is nothing very surprising about the political fanatic who suddenly becomes a religious zealot. What leaves us open to new dogmas is the habit of uncritical acceptance of beliefs. (PP: 53)

Science, with its sceptical attitude, provides just the right antidote to totalitarian or religious propaganda, whereas to present a counter-religion would be to defeat the purpose. Trying to avoid getting drunk on bourbon by drinking copious amounts of gin is of little help when what you really desire is to be sober. As Goodman says, “What one needs to stay sober is to learn how to distinguish between various offers and how to say ‘no’. This is also just what we need to protect ourselves from propaganda” (PP: 53).
But even if a scientific attitude might be sufficient to resist propaganda, is it also sufficient as cement for society? Does it, in situations of conflict and danger, sufficiently motivate people to defend their society, maybe even risk their lives for it? Goodman thought so, and found encouragement in reports saying that most soldiers in the Second World War were not fighting for “shibboleths or slogans or grandiose ideals, they were not fighting for an abstract idea or for somebody’s system”; what they were fighting for was “to get home and eat some blueberry pie”.
Let no one delude us into thinking that our only choice is between dogma and dogma. As long as we are free to choose, let us choose definition rather than dogma – hypothesis rather than hysteria. It is time enough for hysteria when hope is lost; and time enough for dogma when it is forced upon us. (PP: 55) Having said what purpose philosophy has according to Goodman, we can now turn to its aim. We have already said that philosophy aims at understanding. What this means and how it is to be achieved will be our topic for the rest of the book.
The systematic background: kinds of definitions

As we saw in Chapter 2, Goodman took the task of philosophy to be to provide definitions of certain kinds. Understanding – as the aim of philosophy – is hence achieved when problematic notions (such as, for example, ‘induction’) are given a rigorous definition in unproblematic terms. Goodman thinks the solution to the problem of induction is to provide definitions of inductive rules: “The problem of induction is not a problem of demonstration but a problem of defining the difference between valid and invalid predictions” (PP: 375/FFF: 65). If analytic philosophy is done in this way, we should try to find out what a definition is and when it is acceptable as the solution of a philosophical problem.

First of all, there are different types of definitions, which serve different purposes within philosophy. Moreover, each type has its own standards of adequacy. We shall briefly review them and discuss their differences. The types we shall discuss are nominal definitions, real definitions and explications. The classic distinction is between nominal definitions and real definitions.

We shall first discuss nominal definitions, which are the ones of which you frequently hear that they cannot be true or false: “everybody can define their terms as they like” and so on. Nominal definitions, indeed, are mere stipulations. They usually introduce new words into the language that serve as abbreviations for longer complex expressions of the language. Here is an example:

(D1) The term ‘tiglon’ is an abbreviation of (is synonymous with) ‘offspring of a male tiger and a female lion’.

We shall call the new term, which is defined by the definition, the ‘definiendum’, and the terms of the language by which the term is defined, the ‘definiens’. In the example just given, ‘tiglon’ is the definiendum and ‘offspring of a male tiger and a female lion’ the definiens. A nominal definition stipulates that definiens and definiendum be used synonymously. A nominal definition thus gives meaning to a new term of the language; it assigns the definiendum the meaning of the definiens. Since these definitions assign meanings to terms via other terms, they are expressly about linguistic expressions. A new linguistic expression is introduced as an abbreviation for a longer one. Definition (D1) is not about tigers or lions; it is about the word ‘tiglon’ and the string of words ‘offspring of a male tiger and a female lion’, as well as all strings synonymous with it. Since these kinds of definitions are stipulations, they cannot be true or false. They nevertheless have to obey certain criteria of adequacy, such as:

● Eliminability
A nominal definition has to state the conditions under which the definiendum can be eliminated from every context in which it occurs.

● Conservativeness
A nominal definition must not lead to new truths in the theory; that is, every proposition that is true according to the theory after the definition is introduced must have been so before.

Without such adequacy criteria it would be possible to introduce contradictions into the language. As you can see from the criterion of eliminability, it is important that we can eliminate definienda from all contexts (i.e. complex sentences) in which they occur. To summarize: nominal definitions are about terms; they cannot be true or false, but only correct or incorrect (with respect to the adequacy criteria); and they introduce new terms into the language.

Now let us turn to real definitions. Traditionally, real definitions state the “essence” or the “nature” of things. It is not very clear what is meant by that. Consider the following example:

(D2) x is a living organism iff (i) x is composed of a discrete amount of matter with a definite boundary, (ii) x continuously interchanges matter with its surroundings without manifest alteration of properties over short periods of time and (iii) x came into existence by some process of division or fractionation from one or two pre-existing objects of the same kind.
The term ‘living organism’ is already a term of the language. If this real definition is about linguistic expressions, it does not introduce a new word, but talks about the meaning of an expression that was already part of the language. We shall call real definitions interpreted this way ‘meaning analyses’. A meaning analysis can be true or false. It is true iff the meaning of the definiendum is the same as the meaning of the definiens (to be more precise, a meaning analysis is true for a language S iff the term d1 of S stating the definiendum is synonymous in S with the term d2 of S stating the definiens).

Definition (D2) does not have to be about the linguistic expressions involved, however; it might well be about living organisms. In that case it tells us not what the meaning of ‘living organism’ is in English, but what living organisms are. We shall call these definitions ‘empirical analyses’. An empirical analysis can also be true or false. It is true iff the phenomenon referred to by the definiendum always, with nomological necessity, coincides with the phenomenon referred to by the definiens.

To summarize: meaning analyses are about terms that are already in use in a language; they can be true or false, and their truth-value depends on whether or not they state synonymies in a language. Empirical analyses are about the referents of terms, objects in the world; they too can be true or false, but their truth-value depends on the obtaining of nomological supervenience relations between the referents of definiendum and definiens.

Nominal definitions are of use in philosophy whenever new technical terms are introduced. Examples of such technical terms are ‘a priori’, ‘consequentialism’, ‘supervenience’, ‘nominal definition’ and the like. As we have seen, the logical empiricists thought that philosophy is merely concerned with the clarification of scientific language and not with the empirical analysis that scientific theories undertake.
This was assumed to be the job of the sciences. Therefore, empirical analyses are not within the scope of philosophy. What is within the scope of philosophy is the analysis of meaning. As we have learnt from the brief remarks on Carnap’s philosophy, however, the logical empiricists did not think that a meaning analysis of natural language would in all cases lead to a sufficient clarification. Natural language is unstable. Some speakers might use terms synonymously that others would not use that way; the meaning of some terms might lack sharp boundaries; these terms might be vague; others are ambiguous; and so on. Although meaning analysis is an important part of early analytic philosophy, it is not its end. The aim of ideal-language philosophy was to overcome the ambiguity and vagueness of natural language. This was done with the help of modern logic and by way of explications.

An explication is a type of definition that is a hybrid of nominal definitions and meaning analyses. Since philosophy tries to clarify our way of speaking, it has to start from the terms and expressions we already use in the language. These terms are assumed to be imprecise, and that is why they lead to philosophical problems. If we want to clarify these terms, we have to replace them with terms that are in the relevant respects clearer than the ones we started with. The new terms, however, should of course cover the same subject matter as the old ones. Otherwise philosophy would be too easy.

Consider the following example. Say we have a philosophical problem, stated in normal English, to do with the notion of free will. The philosophical problem is how we can be free in our will (the way we assume we are) when at the same time we are subject to physical laws that, together with initial conditions that were in place long before our birth, absolutely determine what we will do and want (as we also assume). A philosopher might think that it is because of the concept of “free will” that this puzzle arises, and that we should clarify English to get rid of the problem. Nothing seems easier than that. Just restrict English to the fragment that is about elementary algebra and you are done. The problem of free will cannot recur in “clarified English” simply because there are no terms in clarified English with which to state the problem. There is no such term as ‘free will’ to begin with.

Therefore we want the problematic expression of ordinary or prescientific language, the explicandum, to be approximated by the meaning of the substitute, the explicatum. That they will differ to some degree in their meaning is, of course, intended.
After all, the explicandum had a meaning that was too vague or ambiguous for the purposes of science or analytic philosophy. To be as precise as possible in formulating the datum for an explication, the explicandum, a trick must be used. Since the explicandum is vague or ambiguous, it cannot be directly used as a datum. Instead the scientist or philosopher chooses sentences of ordinary, pre-systematic discourse in which the explicandum is used with a meaning that the scientist wishes to preserve. These sentences then serve as additional adequacy criteria for the explication. The explicatum must be such that it is interchangeable salva veritate with the explicandum in all these sentences (i.e. if the explicandum is replaced by the explicatum in these sentences, the sentences do not change their truth-values).

Other requirements for an adequate explication are exactness, fruitfulness and simplicity. The first guarantees that the explicatum is introduced into a well-connected system of scientific discourse. The fruitfulness criterion is satisfied if the new concept is useful for the formulation of many interesting universal statements. Finally, an explicatum should be as simple as possible, as measured by the form of its definition and by the forms of the laws connecting it with other concepts. (We shall say more about simplicity in Chapter 5.)

When Goodman speaks of definitions as the method by which philosophy arrives at rational reconstructions of ‘induction’ or ‘pictorial representation’ and so on, he has explications in mind. As we have seen, explications presuppose a certain familiarity with the explicandum, the term as used in ordinary or pretheoretic discourse. This familiarity can be achieved by the method of conceptual analysis, that is, a meaning analysis of the concepts involved. In this respect Goodman acknowledges the accomplishments of British ordinary-language philosophy. Conceptual analysis of ordinary-language concepts can clarify the explicandum to a degree that makes it possible to judge the similarity of explicandum and explicatum. Conceptual analysis, however, is only halfway towards a satisfying philosophical solution. The next steps involve a creative act: the clever stipulation of a substitute concept that fits nicely into a satisfying system of definitions. But when are such definitions satisfying?
Systems of definitions

Recall Goodman’s solution to the problem of induction from Chapter 2. He intended his system of definitions concerning projectibility (DEF6–DEF9) to be (more or less) coextensional with our pretheoretic notion of projectibility. What does that mean? The grue paradox taught us a certain lesson, namely that a merely syntactic definition of the rules of induction would also justify considering a hypothesis such as:

(L4) All emeralds are grue

to be well supported by the empirical evidence, for only green emeralds were observed up to t. This causes a problem in the definition of induction: it seems that all kinds of incompatible predictions could be made on the same evidence (if they predict incompatible states after t). A theory of justified inference, however, should not be that permissive. Only some predictions should be justified on the basis of given evidence, not virtually everything. We know what has to go: L4 is not justified by the evidence we have. We would not regard it as well justified intuitively. This is an aspect of our pretheoretic notion of induction that has not yet been captured by the syntactic definition of induction. This is why Goodman wants to supplant the latter with a notion of projectibility.

The explicandum now is ‘projectible’. Intuitively L4 is not projectible, whereas L3 clearly is. But is it possible to find a good definition of ‘projectible’? When is a definition a good one? Well, it is good if it can tell L4 apart from L3. We have seen that positionality could not serve as a satisfying explicatum, for it was dependent on which language we start with. If we started with a language having ‘grue’ and ‘bleen’ as basic predicates, positionality would count L4 as projectible and L3 as non-projectible. Instead Goodman offers a definition based on the notion of entrenchment. Entrenchment can be used to define when a hypothesis is overridden and can thus serve as a partial explication of projectibility. Entrenchment is not language dependent in the way positionality is: even if we started with just the ‘grue and bleen’ language, but projected predicates coextensional with the predicates that are in fact entrenched, we would project a hypothesis that is coextensional with L3, but not with L4. So far, so good.

The important point is that this explication is almost coextensional with our pretheoretic concept of projectibility; each concept applies to almost all the cases to which the other one also applies. This is why the definition is deemed satisfying (at least by Goodman). Coextensionality does not imply sameness of meaning.
An explication in Goodman’s sense does not aim at finding a definition that agrees in all actual and logically possible cases with the pretheoretic concept. ‘Organism with a heart’ and ‘organism with kidneys’ are coextensional predicates: every organism with a heart also has kidneys and vice versa. However, the two expressions are not synonymous. Hence, coextensionality does not imply synonymy. There might even be cases in which a definition does not have to be coextensional. Indeed, when it comes to explications, coextensionality might already be too strong a criterion. If the pretheoretic usage is muddled, our explicata should be less muddled. That implies that the explicatum and the explicandum will not agree in all cases. Borderline cases of the explicandum might be clear cases of the explicatum, or they might fall out of its extension altogether, if that seems more fruitful.
Zoologists explicated the term ‘fish’ as it is used in ordinary language, but the explicatum does not include whales or dolphins. The reason was that zoologists found that animals living in water have nowhere near as many properties in common as the animals that live in water, are cold-blooded vertebrates and have gills throughout life. It seemed more fruitful to restrict the extension of ‘fish’ in the latter way.

As we shall see in the following section, even approximate coextensionality might be too strong a requirement for a system of definitions. This is the case when the structure of something is of more interest to us than its specific realization. In such cases it is enough if the system of definitions preserves the important relations between the structure of the explicatum and the structure of the explicandum, that is, if they are structured in the same way in the relevant respects.
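The idea of approximate coextensionality can be pictured with a toy check (ours, not the book’s; all names are invented): compare the verdicts of explicandum and explicatum over a set of test cases and see how often they agree.

```python
# A toy picture (ours, not Goodman's or Carnap's) of measuring how far an
# explicatum agrees extensionally with its explicandum over chosen test cases.

def agreement(explicandum, explicatum, cases):
    """Fraction of cases on which the two concepts give the same verdict."""
    matches = sum(explicandum(x) == explicatum(x) for x in cases)
    return matches / len(cases)

# The 'fish' example from the text: the ordinary concept applies to whales
# and dolphins, the zoological explicatum does not.
ordinary_fish = lambda x: x in {"trout", "cod", "whale", "dolphin"}
zoological_fish = lambda x: x in {"trout", "cod"}

cases = ["trout", "cod", "whale", "dolphin", "sparrow", "frog"]
# The two concepts agree on 4 of the 6 test cases: approximate, not exact,
# coextensionality -- whales and dolphins are where they part company.
print(agreement(ordinary_fish, zoological_fish, cases))
```

On this toy measure, an adequate explication scores high on the datum cases one wishes to preserve, while being allowed to diverge on cases (such as whales) that the pretheoretic concept handled poorly.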
The reconception of philosophy

As we have said, explications replace problematic concepts in order to lead to better theories. In our example above, the meaning of ‘fish’ was replaced by a new zoological meaning, which included fewer animals than the ordinary-language concept did. Sometimes explications might also shift the whole extension over the domain of individuals. In such cases the new concept might have some new instances that the ordinary-language concept did not apply to, and might at the same time cease to apply to others. If such decisions promise to lead to theoretical advances, nothing is wrong with that. Goodman’s reconception of philosophy by way of explication allows for even more radical shifts, as we shall see in the following chapters. We shall, however, partially outline the main idea here. Goodman tried to solve philosophical problems by explication, that is, by reconceptualizing the problematic terms. Whenever Goodman encounters a philosophical problem that seems to be at a dead end, judged by the history of failed attempts to solve it, he tries to tackle it by taking some other perspective on the matter. We have already seen examples of this when discussing the problem of induction: no general solution to the problem of disposition predicates seemed forthcoming until Goodman recognized that definitions for singular disposition predicates are sometimes nevertheless available, and that a partial solution for the disposition predicate ‘projectible’ would yield a general solution after all. Thus shifting the perspective might
The big picture
sometimes lead to new observations and new insights, and might be a promising strategy if we seem constantly to be banging our heads against the same wall. To see how far this reconceptualization might take us, consider the concepts of truth, certainty and knowledge, which are clearly key concepts of philosophy. All three prove to be problematic at best. Truth is problematic if conceived as correspondence of discourse with reality. There does not seem to be a world independent of discourse, and, anyway, correspondence of a description with something undescribed seems hard to grasp. (There will be more on this in Chapter 8.) Certainty is an even less clear notion. If ‘certainty’ means trust or faith, then it is independent of truth, but merely depends on a psychological relation to a given proposition. If it means incontestability, then it is trivially satisfied by all true sentences and even by some false ones whose falsehood cannot be demonstrated. Finally, knowledge seems to be related to truth and certainty and therefore inherits their problems. If, however, knowledge is built not on certainty but on likelihood or high probability, we get into further trouble. Assume that you believe, justified on stochastic considerations, that the first card in a well-shuffled deck of cards is not the ace of hearts. If that is true, did you know it was true? These notions prove to be very problematic when it comes to being captured by satisfying explications. They are, obviously, mutually related. Again Goodman pleads for a replacement of these concepts, but this time a much more radical replacement. Instead of explicating the notions with others that are almost coextensional, Goodman pleads (in a paper jointly written with Elgin (RP: ch. X)) for their replacement with notions that only preserve the interrelatedness in a certain way, but have an otherwise wholly different extension. As a replacement of ‘truth’, Goodman proposes ‘rightness of symbolic function’.
(We shall discuss Goodman’s theory of symbols in detail in Chapter 6.) This notion is much broader: whereas ‘truth’ applies only to linguistic statements, ‘rightness’ applies to all the ways a symbol might work, be it in denotation, depiction, exemplification or expression. ‘Rightness’ sometimes just replaces ‘truth’ when it comes to statements, but sometimes a statement might be “true” (e.g. ‘Snow is white’) but just not “right” (e.g. if ‘Snow is white’ is put forward as an answer to the question ‘What density does granite have?’). Hence, ‘rightness’ is clearly not coextensional with ‘truth’. However, ‘rightness’ might be easier to characterize, since it involves not the explication of the relation between discourse and reality, but the degree of fit of
a symbol into a discourse and the way it is effective in it. Moreover, rightness seems to bear a relation to adoption and understanding similar to the relation truth bears to certainty and knowledge. It is thus proposed to replace ‘certainty’ by ‘adoption’. ‘Adoption’ is as broad as ‘rightness’: we can “adopt” habits, strategies, terminologies and styles as well as statements. Again, this notion is clearly not coextensional with ‘certainty’. However, adoption is clearly connected with entrenchment: entrenchment is nothing but continued adoption. In this way adoption contributes to rightness, since the entrenched predicates are the “right” predicates in a given discourse to represent the world. As a replacement of ‘knowledge’, Goodman proposes ‘understanding’. Again, this notion is sufficiently broad to cover more than just statements or only those entities that are truth-apt; one can “understand” (or fail to understand) questions and orders, for instance, but it does not make much sense to say that one knows a question (this must not be confused with elliptical uses of the phrase, which are short for ‘knowing the answer to that question’, ‘knowing that someone asked that question’, ‘knowing that the quiz master has that question on his list’, ‘knowing that this question will turn up in the exam’, etc.). Thus ‘understanding’ fits with ‘rightness’ and ‘adoption’. Understanding might take the position of knowledge, since it seems to be a reasonable candidate for what our cognitive endeavours aim at. If cognition involves not only truth-evaluable entities but also symbols that are classified only by correctness, as Goodman thinks, knowledge would be far too restricted as the overarching aim of philosophy and of the other arts and sciences.
Meaning, synonymy, analyticity

Goodman’s insistence that meaning analyses impose much too strong criteria on good definitions, “sameness of meaning” in particular, can also be explained from another angle. As we have said, one of the consequences that the logical positivists drew from their basic assumptions was that philosophy, being concerned with the “syntax of scientific language”, was dealing with statements clearly different in kind from the statements the other sciences were concerned with. Whereas the empirical sciences were associated with synthetic statements, philosophy was associated with analytic statements: statements that are true in virtue of their meaning alone. The truths of
logic and logical analysis were considered to be analytic truths, just as the statements in which conceptual analyses are expressed were. The analytic–synthetic distinction seemed clear enough to the logical positivists. One of the major turning points in philosophy during the twentieth century, however, came when Morton White, Quine and Goodman suddenly declared their discontent with the distinction. On 25 May 1947, White, who was at that time Goodman’s colleague at the University of Pennsylvania (and soon to become Quine’s colleague at Harvard), wrote a letter to Quine asking for advice on a paper in which he tried to deal with Alonzo Church’s proposed solution to C. H. Langford’s paradox of analysis. Langford had raised the problem for G. E. Moore’s conception of analysis, but it applies to the notions of analysis in this chapter as well:2

Let us call what is to be analyzed the analysandum, and let us call that which does the analyzing the analysans. The analysis then states an appropriate relation of equivalence between the analysandum and the analysans. And the paradox of analysis is to the effect that, if the verbal expression representing the analysandum has the same meaning as the verbal expression representing the analysans, the analysis states a bare identity and is trivial; but if the two verbal expressions do not have the same meaning, the analysis is incorrect. (Langford 1942: 323)

The way Langford formulates the problem, and the way White discusses it, shows that they are concerned with a paradox of reference analysis, applying to real analyses. The paradox arises easily, however, for meaning analyses as introduced in this chapter. As we noted above, the truth requirement for a meaning analysis says that a meaning analysis is true iff the term d1, stating the definiendum, is synonymous with the term d2, stating the definiens. This requirement seems to lead to paradox: meaning analyses are doomed to be either trivial or false.
The argument that leads to this paradox goes as follows. Consider the following analyses:

(D3) ‘Bachelor’ is synonymous with ‘unmarried man’.

(D4) ‘Bachelor’ is synonymous with ‘bachelor’.

Clearly, (D3) states that two different expressions are synonymous, whereas (D4) just says that one and the same expression is
synonymous with itself. We regard statements like (D4) as trivial and uninformative. Hence:

(P1) (D4) is trivial.

The question of whether (D3) is also trivial could be understood as the question of whether (D3) has the same meaning as (D4) (if the triviality of a statement is understood as a matter of its meaning). Thus:

(P2) All statements that are synonymous with (have the same meaning as) a trivial statement are themselves trivial.

If we turn to (D3) again, the following seems to be presupposed for (D3) and (D4):

(P3) Either (D3) is synonymous with (D4) or it is not.

If we think that (D3) and (D4) have the same meaning and are therefore synonymous, it seems we must, given (P1) and (P2), also accept that (D3) is trivial:

(P4) If (D3) and (D4) are synonymous, (D3) is trivial.

That is already the first horn of the dilemma caused by Langford’s paradox of analysis. To get to the second horn, we need a principle of compositionality3 for meaning and a relatively unproblematic observation concerning (D3) and (D4). First, the principle of compositionality:

(P5) Compositionality: The meaning of a complex expression is solely a function of the meanings of its subexpressions and the way they are combined syntactically.

If (D3) is supposed to be non-trivial, it must not be synonymous with (D4). In other words, there has to be a difference in meaning between (D3) and (D4). Compositionality tells us that two expressions cannot differ in meaning if both have the same syntactical structure and their subexpressions have the same meanings. Statements (D3) and (D4) are, however, of the same syntactical structure:

(P6) The syntactical structure of (D3) and (D4) is the same.

Moreover, (D3) and (D4) have most of their syntactical parts in common. The only difference seems to be the syntactical part ‘unmarried man’ instead of ‘bachelor’.
(P7) The only subexpressions that make a difference between (D3) and (D4) are ‘unmarried man’ and ‘bachelor’.4

Compositionality, however, assures us that:

(P8) If (D3) is not synonymous with (D4), then there is a meaning difference between ‘bachelor’ and ‘unmarried man’.

We already know that:

(P9) If two expressions differ in their meaning, they are not synonymous.

We have also said that:

(P10) Meaning analyses are true iff the term d1 stating the definiendum is synonymous with the term d2 stating the definiens.

But that leads us straight to the second horn of Langford’s dilemma:

(P11) If (D3) is not synonymous with (D4), (D3) is not correct.

Therefore, meaning analyses are either trivial or incorrect. There are a number of ways to avoid this problem. The ways discussed by White had to do with the fineness of grain of the meaning analysis. Clearly, one of the problematic premises is (P2). Maybe it is possible to distinguish between, say, ‘sameness of meaning1’, which is a necessary and sufficient condition for the adequacy of meaning analyses (and applies to (D4)), and ‘sameness of meaning2’, which is a sufficient condition for two terms to be trivial if one of them is. To draw such distinctions, some philosophers of language postulated abstract objects, intensions, such that a difference in intensions of one sort or another could explain the difference between ‘sameness of meaning1’ and ‘sameness of meaning2’. To solve the problem for real analyses one could postulate Fregean senses (as one kind of intension), but in order to solve the problem for meaning analyses we would need something even more finely grained. The details of these accounts do not matter too much for us here. The important point is only that certain abstract objects were postulated to explicate the notion of synonymy.
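The overall shape of Langford’s dilemma can be compressed into a short schematic derivation. This is our own compression of the argument, writing Syn, Triv and Corr as shorthand for synonymy, triviality and correctness:

```latex
\begin{align*}
&\text{(P3)} && \mathrm{Syn}(D_3,D_4) \lor \neg\,\mathrm{Syn}(D_3,D_4)\\
&\text{(P4)} && \mathrm{Syn}(D_3,D_4) \rightarrow \mathrm{Triv}(D_3)\\
&\text{(P11)} && \neg\,\mathrm{Syn}(D_3,D_4) \rightarrow \neg\,\mathrm{Corr}(D_3)\\
&\therefore && \mathrm{Triv}(D_3) \lor \neg\,\mathrm{Corr}(D_3)
\end{align*}
```

By constructive dilemma, every meaning analysis is either trivial or incorrect, which is just the conclusion of the paradox.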
White sent his discontentment with the proposed solutions to Quine (White’s original paper appeared in print in 1948 under the title “On the Church–Frege Solution of the Paradox of Analysis”) and then sent Quine’s answer to Goodman. In 1947 the three discussed the matter by letter,5 until White was eventually chosen to write a
survey of their discussion, which appeared in print in 1950 under the title “The Analytic and the Synthetic: An Untenable Dualism”. Quine presented his view on the matter in an address to the American Philosophical Association, which was published in 1951 as his famous “Two Dogmas of Empiricism” (Quine 1951c). At that time the described postulation of new abstract objects in order to explicate a certain notion of synonymy seemed an unacceptable move for someone with nominalist6 leanings such as Quine and a purist such as White. For Quine, an explication of ‘synonymy’ or ‘analyticity’ and the like should rather be given in behaviouristic terms. The explication should tell us in what way ‘analyticity’ and ‘synonymy’ make a difference in speaker behaviour. To learn that the difference lies in postulated abstract objects did not seem to explicate the notions in any promising way. Goodman’s original discontentment with the whole situation was even more serious. In a letter to White and Quine he claimed that not only did he find the explications of ‘synonymy’ and ‘analyticity’ so far provided to be problematic, but he did not even understand what these terms were supposed to mean pretheoretically:

When I say I don’t understand the meaning of ‘analytic’ I mean that very literally. I mean that I don’t even know how to apply the terms. I do not accept the analogy with the problem of defining, say, confirmation. I don’t understand what confirmation is, or let us say projectibility, in the sense that I can’t frame any adequate definition; but give me any predicate and (usually) I can tell you whether it is projectible or not. I understand the term in extension. But ‘analytic’ I don’t even understand this far; give me a sentence and I can’t tell you whether it is analytic because I haven’t even implicit criteria. … I can’t look for a definition when I don’t know what it is I am defining.
(Goodman in a letter to Quine and White, 2 July 1947, in White 1999: 347)

The official result of their discussion was that any sharp analytic–synthetic distinction is untenable and should just be abandoned:

I think that the problem is clear, and that all considerations point to the need for dropping the myth of a sharp distinction between essential and accidental predication (to use the language of the older Aristotelians) as well as its contemporary formulation – the sharp distinction between analytic and synthetic. (White 1950: 330)
Goodman’s view on the matter had already appeared in print in 1949 under the title “On Likeness of Meaning”. Goodman here proposes a purely extensional analysis of meaning, the upshot of which is that no two different expressions in a language are synonymous. Goodman first compares an intensional theory of meaning with an extensional one. The main difference between the two is epistemic in nature: there is no comprehensible way by which we can know whether two expressions have the same intension, but there is a relatively unproblematic way to find out whether two terms have the same extension. This is so because of the assumed nature of intensions. Here are some well-known examples and their respective problems:

(i) Two terms/expressions have the same intension iff they stand for the same real essence or the same Platonic idea.

This proposal is confronted with the epistemological problem simply because we have no idea how to tell which Platonic idea a term stands for. We clearly need something more practical.

(ii) Two terms/expressions have the same intension iff they stand for the same mental idea or image.

Now we have dragged the problem from Platonic heaven down to human psychology. However, this proposal is still confronted with two problems. First, it does not seem to be very clear what we can and what we cannot imagine. Secondly, many terms (e.g. ‘supersonic’) do not seem to have mental images associated with them. That should lead to a different construction that is more liberal than imaginability:

(iii) Two terms/expressions have the same intension iff we cannot conceive of something (or some state of affairs) that satisfies one but not the other.

Many would object to this analysis that we can conceive of pretty much everything, leaving us with no criterion at all.7 In any case, the only limitation that seems to be imposed on our conceptual powers is possibility.
But if only those conceptions that correspond to some real possibility should count, the criterion becomes possibility rather than conceivability:

(iv) Two terms/expressions have the same intension iff there is nothing possible that satisfies one but not the other.

Here the question is whether we talk of something that actually satisfies one but not the other, or something that possibly satisfies one
but not the other. If we take the former, this proposal is purely extensional: if I know that two predicates are satisfied by the same actual objects, this already excludes the possibility that they are not satisfied by them. If, however, merely possible objects should also matter, rather than only actual objects, we are confronted with the problem of how to tell in any non-circular way what is possible. Conceivability was already ruled out in reaction to proposal (iii). An alternative could be found in consistency, such that, for example, two predicates differ in meaning if ‘is a P or a Q but not both’ can be shown to be consistent. However, this does not help if P and Q are different predicates and the logic at our disposal does not include meaning postulates: in that case, for every pair of predicates P and Q this expression is consistent. In a logic with meaning postulates the problem is not solvable in general if the language is sufficiently rich (see Cohnitz 2005a) and, anyway, where do the meaning postulates come from? That was the original question; we are back where we started. An extensional theory of sameness of meaning is not plagued by these or similar arguments. The claim that:

(v) Two terms/expressions have the same meaning iff they have the same extension

has the advantage that, as Goodman argues, we can “decide by induction, conjecture, or other means that two predicates have the same extension without knowing exactly all the things they apply to” (PP: 225). But an extensional theory is not thereby free of problems. Consider, for example, the expressions ‘unicorn’ and ‘centaur’, which have the same extension (namely the null extension) but differ in meaning. Hence, whereas sameness of extension is a necessary condition for sameness of meaning, it does not seem to be sufficient for it. Goodman proposes an extensional fix to this problem that gives necessary and sufficient conditions for sameness of meaning.
He observes that although ‘unicorn’ and ‘centaur’ have the same extension, simply because of the trivial fact that they denote nothing, ‘centaur-picture’ and ‘unicorn-picture’ do have different extensions. Clearly, not all centaur-pictures are unicorn-pictures and vice versa. Thus the flight to compounds makes an extensional criterion possible: “[I]f we call the extension of a predicate by itself its primary extension, and the extension of its compounds its secondary extension, the thesis is as follows: Two terms have the same meaning iff they have the same primary and
secondary extensions” (PP: 227). The primary extensions of ‘unicorn’ and ‘centaur’ are the same (the null extension), but their secondary extensions differ: the compounds ‘unicorn-picture’ and ‘centaur-picture’ differ in extension. (We shall discuss this proposal in detail in Chapter 6.) If we allow all kinds of compounds equally, we arrive immediately at the result that by our new criterion no two different expressions have the same meaning. Take the expressions ‘bachelor’ and ‘unmarried man’: ‘is a bachelor but not an unmarried man’ is a bachelor-description that is not an unmarried-man-description. Hence, by our criterion, the secondary extensions of ‘bachelor’ and ‘unmarried man’ differ, because the extensions of at least one pair of their compounds do. Since the same trick can be pulled with any two expressions, Goodman is left with the result that no two different expressions are synonymous, but he is ready to bite this bullet. P-descriptions that are not Q-descriptions are easy to construct for any P and Q (provided these are different terms), and these constructions might well be relatively uninteresting. If only such uninteresting constructs are available to make a difference in secondary extension, P and Q, despite not being strictly synonymous, might be more synonymous than a pair of predicates for which we are able to find interesting compounds (as in the case of ‘centaur’ and ‘unicorn’). This turns sameness of meaning of different terms into likeness of meaning, and synonymy and analyticity into a matter of degree.8 This conclusion is, however, in full accordance with the conclusion that was drawn by White as the result of the exchange between White, Quine and Goodman:

I am not arguing that a criterion of analyticity and synonymy can never be given. I argue that none has been given and, more positively, that a suitable criterion is likely to make the distinction a matter of degree.
If this is tenable, then a dualism which has been shared by both scholastics and empiricists will have been challenged successfully. Analytic philosophy will no longer be sharply separated from science, and an unbridgeable chasm will no longer divide those who see meanings or essences and those who collect facts. Another revolt against dualism will have succeeded. (White 1950: 330)
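Goodman’s primary/secondary-extension criterion lends itself to a small toy model. The following sketch is our own illustration, not Goodman’s formalism: the predicate names and the “picture” objects are invented for the example. It shows how two terms can agree in primary extension yet come apart in secondary extension:

```python
# Toy model of Goodman's criterion: two terms have the same meaning iff
# they have the same primary AND the same secondary extension.
# Extensions are modelled as frozensets of (invented) objects.

# Primary extensions: what each predicate actually applies to.
# 'unicorn' and 'centaur' both have the null extension.
primary = {
    "unicorn": frozenset(),
    "centaur": frozenset(),
}

# Secondary extensions: extensions of compounds such as 'unicorn-picture'.
# The picture objects below are hypothetical stand-ins.
secondary = {
    "unicorn": frozenset({"picture-u1", "picture-u2"}),  # unicorn-pictures
    "centaur": frozenset({"picture-c1"}),                # centaur-pictures
}

def same_meaning(p: str, q: str) -> bool:
    """Sameness of meaning: identical primary and secondary extensions."""
    return primary[p] == primary[q] and secondary[p] == secondary[q]

# Same primary extension (both null), yet different secondary extensions,
# so the two terms differ in meaning:
assert primary["unicorn"] == primary["centaur"]
assert not same_meaning("unicorn", "centaur")
```

In the same spirit, adding an invented compound such as ‘is a bachelor but not an unmarried man’ to the model would split the secondary extensions of any two distinct terms, mirroring Goodman’s point that strict synonymy collapses and only likeness of meaning, a matter of degree, remains.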
Summary

In this chapter we have introduced Goodman’s general idea of the purpose, aim and method of philosophy. As to its purpose, Goodman’s answer is negative: philosophy is its own purpose; there is nothing further that philosophy is good for – but there does not need to be, which is what he would say about the arts and the sciences too. All that matters is that our understanding is promoted by these activities, as this is, in itself, valuable. We then turned to Goodman’s conception of the aim and method of philosophy. Philosophy aims at understanding and clarification, and it does so by way of explication. We introduced the notion of an explication, and explained in what respects explications differ from other kinds of definition. Finally, we introduced a vital aspect of Goodman’s philosophy: the rejection of the analytic–synthetic distinction and his purely extensional analysis of meaning.
Chapter 4
Particulars and parts
In Chapter 3 we saw that one consequence of taking philosophy to be directed at understanding and elucidation is that a philosophical explication cannot be given in terms of unintelligible entities. That was the reason why Goodman did not accept an analysis of meaning in terms of intensions. Intensions are not the only philosophical constructions he repudiates, however. Goodman is, among other things, most famous for another radical doctrine: his nominalism. In the twentieth century the debate between realism and nominalism was an almost foolproof source of heated discussion and verbose polemic (and sometimes rather entertaining rhetoric), and it still is today. The conflict is over ontology. The nominalist denies the existence of certain objects, while the realist affirms their existence. There are different ways in which one can be a nominalist, depending on, for example, what kinds of things it is that one does not believe to exist. We shall distinguish these below. Often the nominalist denies the existence of abstract objects (such as numbers, shapes or groups) or universals (such as redness, courage or density). Doing so has a deep impact on disciplines such as philosophy of science and philosophy of mathematics, but also on all areas where metaphysical considerations play a role. We shall also see that there is a specific twist to Goodman’s nominalism. The debate has been rekindled (if it ever really died out) in the past twenty years or so, which makes Goodman’s contribution to the field of particular interest. Goodman stands on the side of the nominalists, and has defended his view by compelling arguments and by way of example, namely, by producing rigorous and fruitful philosophical theories that are acceptable to the nominalist. He has produced some remarkable results in this field, which we shall deal with below.
In this chapter we are mainly concerned with a precise definition of Goodman’s nominalism, and with distinguishing it from related notions. We also introduce the calculus of individuals, Goodman’s mereological system. Once we have equipped ourselves with the necessary techniques, we show, in Chapter 5, how Goodman puts these to use in A Study of Qualities and The Structure of Appearance.
Nominalisms

One of the most controversial features of Goodman’s philosophy is probably his nominalism. It is advisable, though, to pay close attention to characterizing Goodman’s position exactly. A range of different views is routinely subsumed under the label ‘nominalism’, and it is important to be clear about which of these sundry positions Goodman actually holds. In A Study of Qualities, Goodman traces nominalistic thoughts back to Plato’s Sophist (SQ: 22–4). The first important and more or less systematic discussion, though, seems to have occurred in the Middle Ages. The participant in that discussion who is most prominent today is William of Ockham (c.1285–c.1349), who rejected the existence of universals. Universals such as redness or courage are taken to be entities that exist over and above the (particular) things that are red, or (particular) persons that are courageous. The existence of universals is asserted by the medieval realists, and is also fairly common among contemporary philosophers. Ockham insists that the postulation of such entities makes no sense, and that only particulars exist. Today, Ockham is probably best remembered for the principle of ontological parsimony named after him, Ockham’s razor, which is often stated as: “entities are not to be multiplied beyond necessity”. In a very intuitive sense, this principle seems to be acceptable to every philosopher. Why would anyone want to assume the existence of more things than one has to in order to explain everything there is to be explained? (This, at least, is the most common understanding of the necessity mentioned in Ockham’s razor.) One can often read that Ockham’s motivation for his nominalism can be found in his acceptance of this principle, despite the fact that it does not occur in this form in any of his writings.
At least some current Ockham scholars, however, disagree with the claim that Ockham’s nominalism is grounded in his razor.1 This historical debate is not going to concern us here.
We can, in any case, note two positions that are commonly called ‘nominalism’:

(N1) Universals do not exist. Everything is particular.

(N2) The more parsimonious ontology is to be preferred (all other things being equal).

It is easy to see how (N1) is more controversial than (N2), and indeed we already said that probably every philosopher could agree on (N2). It should also be clear, however, that neither follows from the other. That means that everyone is free, in principle, to adopt one view and reject the other, or adopt both, or reject both. We say that (N1) and (N2) are independent of each other. One does not run into a contradiction merely by adopting any one of the possible combinations of these views (as one would if one held that it was true that cats exist, agreed that cats are mammals, but denied that mammals exist). (N1)-nominalism is sometimes rejected by philosophers who seek to explain certain things by appealing to universals. David Armstrong, for example, holds that one needs universals in one’s ontology to give an account of the laws of nature (see e.g. Armstrong 1997: 223–30). If he is right, then assuming the existence of universals is necessary, in the sense of Ockham’s razor as stated above, in order to give an account of these laws, and the parsimony constraint of (N2) would not be violated. There is something else that one can believe that is often called nominalism. It is connected in one way or another with physicalism: the view that everything in the world is physical, made out of the matter that physics talks about, and governed by the laws of nature (the physical laws). What a nominalist in this sense denies is the existence of abstract objects:

(N3) Abstract objects do not exist.

Typical examples of abstract objects include numbers, shapes, directions and lengths, but also sets, groups and maybe football teams (which are not to be confused with their players).
Goodman’s favourite abstract objects are qualia; we return to them in Chapter 5. What exactly an abstract object is supposed to be is up for debate. No definition has been found yet that is generally accepted.2 The rough and ready idea, though, will suffice for our purpose here. Abstract objects are contrasted with concrete objects, such as tables, chairs, you and me; or perhaps electrons, quarks or strings. The claim that abstract
entities exist not only seems to contradict physicalism, but might also give rise to an epistemic worry. As a physicalist especially, one might have a causal theory of knowledge that says, roughly, that we gain our knowledge of things by standing somewhere in a chain of the right sort of causal connections to these things. To pick a simple example, we know that there is an apple in front of us because light bounces off it and into our eyes, where it causes some biochemical reaction that transmits this information to our brains. You, the reader, now have knowledge that at the time of writing this chapter there was an apple in front of us, because this caused us to type it into the computer, which we saved as an electronic file, a copy of which was used somewhere in the process of printing the very book that you hold in your hand now and read by receiving the light that bounces off the pages of this book and into your eyes and so on. If the causal theory of knowledge is right, though, abstract objects are more than dubious: we could never have knowledge of them, since we can never get into causal contact with them,3 simply because that is exactly what it means to be abstract (or, at least, that very often features in attempts to define ‘abstract’). There is certainly a lot of room for manoeuvre here. (N3)-nominalism is not the same as physicalism (although (N3)-nominalists are typically physicalists), and physicalism certainly comes in different varieties as well. (Unfortunately, there is no space here to go into that.) Moreover, neither physicalism nor the rejection of abstract objects is wedded to the causal theory of knowledge. There are many rival theories of knowledge out there that can be adopted. As a reason to reject the causal theory of knowledge, one might point to something that many philosophers believe, namely, that we can have a priori knowledge, that is, roughly, knowledge that is not based on our experience of the world.
Knowledge of the laws of logic, or mathematical knowledge, is often held to be of that sort. To know that ‘2 + 2 = 4’ is true, we do not have to investigate the world; reason alone tells us this, independently of any experience. One can also argue that physicalism is not at all incompatible with the postulation of abstract objects. To reject abstract objects without being a physicalist might be trickier, but ghosts or Cartesian egos may provide examples of things that are supposed to be non-abstract but also non-physical. A tempting thought might be that particulars are always concrete objects, and that if universals are supposed to be, as we said, something over and above these, they have to be abstract. None of this is the case, however. First of all, some philosophers, most notably
Particulars and parts
Frege, believe numbers to be objects, that is, particulars. It seems very obvious, however, that numbers are not concrete, physical objects; we cannot find them in the world like sticks and stones.4 Therefore, they must be abstract, and hence abstract particulars. Goodman also recognizes things that he takes to be abstract particulars: for example, spacetimes and mereological sums of these (dealt with in detail in Chapter 5). So one can believe in abstract particulars but reject universals. Nor need universals be abstract. They might be construed this way, and often are, but Armstrong, to give a famous example, holds that universals are physical, despite being something over and above the particulars that instantiate them (see Armstrong 1978b, 1989). Statements (N2) and (N3), again, are logically independent of each other, for pretty much the same reasons as given for (N1) and (N2). And (N1) and (N3) are also independent of each other. We already mentioned that universals need not be considered abstract. If that is an option, then one could hold (N3) while believing in universals and hence rejecting (N1). Can one also reject universals and embrace abstracta? Yes, one can. We have already brought mathematics into our discussion here as an example. We have now singled out three doctrines, each of which is frequently labelled ‘nominalism’, and each of which is independent of the other two. These are by no means all the different kinds of nominalism that one can distinguish. Armstrong presents (and discusses) a greater variety of nominalisms, in part by distinguishing more finely between views that we lump together here (Armstrong 1978a: chs 2–5). The coarser categorization is sufficient for our purpose, though. Which one of these is Goodman’s nominalism? The surprising answer is: none of them. It is, however, a common misunderstanding, partially caused by Goodman himself, that he held (N3).
We shall investigate how this misunderstanding came about, before we go on to see what Goodman’s nominalism is. (We shall say a little bit more about the historical development of Goodman’s nominalism in Chapter 5.)
The 1947 nominalism of Goodman and Quine

We do not believe in abstract entities. No one supposes that abstract entities – classes, relations, properties, etc. – exist in space-time; but we mean more than this. We renounce them altogether. … Any system that countenances abstract entities we deem unsatisfactory as a final philosophy. (PP: 173)
Thus starts the paper, jointly written by Goodman and Quine in 1947, called “Steps Toward a Constructive Nominalism”. Given this rather strong statement it might seem excusable that one should get the idea that Goodman and Quine held (N3), at least at this point. On the very same page, however, the first page of the article, they retract: abstract universals, for example phenomenal qualities, are admitted as basic building blocks of the world, just as physical objects or events are. And, indeed, Goodman had already used qualia in the construction of his PhD thesis A Study of Qualities and would go on to use them in his later The Structure of Appearance, which appeared three years after the 1947 joint paper with Quine. (We return to this in Chapter 5.) The target of “Steps Toward a Constructive Nominalism” is mathematics. Goodman and Quine show how some notions that were generally believed to be mathematical through and through, with no hope for an account that does not appeal to abstract, mathematical objects, can be recast in a nominalistically acceptable way. (Here and for the rest of this section, ‘nominalistic’ is meant in the sense of (N3) unless stated otherwise.) The paper is still, with very few competitors for the title, one of the most remarkable efforts in that direction, if not the most remarkable one. It can be divided into two parts. In the first part, Goodman and Quine show how some statements that seem to be about abstract mathematical entities can be paraphrased and formalized in such a way that any mention of abstracta disappears. In the second part, they successfully undertake what is still, in spite of their demonstration (which has apparently been largely forgotten by the philosophical community), often believed to be unachievable: the construction of the syntax for a (formal) language by nominalistically acceptable means, and a likewise acceptable definition of ‘theorem’ and ‘proof’.
A full exposition of the material of the paper would go too far here, and some of the constructions are technically rather complicated; this is especially true of the second part. We want to give a few examples, however, of how this “nominalization” works. First, it has to be noted that neither Quine, who is famous for this,5 nor Goodman believes that there are abstract entities or universals that predicates such as ‘is red’ stand for. Thus ‘a is red’ requires no more than that a exists, and that it is red. No universal property redness is required that a instantiates and that ‘is red’ refers to; ‘a is red’ merely requires that ‘is red’ is true of a. (In that sense, Goodman and Quine here also advocate (N1).)
One of the main targets in the paper is classes. Classes, or sets as they are more commonly called today, are collections of objects, which are called their members. Sets are also reified, that is, they are objects themselves. They are abstract particulars. We shall say more about classes below, since Goodman has another axe to grind here. Consider the statement:

(1) Class A has three members.

Class A could, for example, be the class of planets in our solar system that are closer to the Sun than Mars. (There are indeed three: Mercury, Venus and Earth.) On the face of it, sentence (1) talks about a class, and maybe even about numbers (‘three’). Goodman and Quine suggest rephrasing the statement as:

(1′) There are distinct objects, x, y and z, such that anything is A iff it is identical to x or y or z.

Recast in the formalism of predicate logic, the statement looks like this:

(1′′) ∃x∃y∃z(x ≠ y ∧ y ≠ z ∧ x ≠ z ∧ ∀w(Aw ≡ (w = x ∨ w = y ∨ w = z)))

Now ‘A’ in (1′) and (1′′) is a predicate that is true of all and only the things that non-nominalists, or ‘platonists’ as they are commonly called,6 would take to be members of the class A. In our example it would be the predicate ‘is a planet in our solar system that is closer to the Sun than Mars’, which is true of exactly three things (Mercury, Venus and Earth). More generally, since classes can contain all sorts of random things that might not fit under as neat a description as the one for A, but might be sundry things such as Buckingham Palace, the Moon and the present First Minister of Scotland, any specification of the class will do as a predicate, even if it is a mere list of things. In the latter case, for instance, it would be ‘is either Buckingham Palace, the Moon or the present First Minister of Scotland’. In a similar fashion, statements about the relations between classes can also be “nominalized”. A class A is said to be a subclass of a class B, if all members of A are also members of B.
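Over a finite domain, the paraphrase (1′′) can be checked mechanically: the existential condition holds exactly when the predicate is true of three things. The following Python sketch is our own illustration (the domain and predicate names are not from Goodman and Quine):

```python
from itertools import permutations

def has_exactly_three(domain, A):
    """Check (1''): there are distinct x, y, z such that
    anything is A iff it is identical to x, y or z."""
    return any(
        all(A(w) == (w in (x, y, z)) for w in domain)
        for x, y, z in permutations(domain, 3)  # permutations yield distinct triples
    )

# Toy stand-in for the planets example.
domain = ["Mercury", "Venus", "Earth", "Mars", "Jupiter"]
closer_than_mars = lambda p: p in ("Mercury", "Venus", "Earth")

print(has_exactly_three(domain, closer_than_mars))         # True
print(has_exactly_three(domain, lambda p: p == "Mercury"))  # False
```

Note that no class and no number appears in the definition: only quantification over the objects of the domain and the predicate itself, mirroring the nominalist paraphrase.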
That leaves it open whether A and B have exactly the same members. In that case, A and B would be identical. Identity between classes is defined extensionally: A and B are the same class iff they have exactly the same members. If A is a subclass of B, but the two are not identical, we say that A is a proper subclass of B. The class of all mammals, for example, is a proper subclass of the class of all animals. But this statement can
straightforwardly be rephrased in a way acceptable to the nominalist: ‘All mammals are animals, and there are animals that are not mammals’. It is probably obvious now how even some statements that on the surface explicitly claim the existence of a class can be rephrased so as to be palatable to the nominalist. Take for example:

(2) There is a class that class A is a proper subclass of.

To rephrase (2), we just generalize the insight from the mammals example above:

(2′) There is something that is not A.

(2′′) ∃x ¬Ax

Since any objects whatsoever, taken together, form a class, the only thing (2) requires is that there is one further thing that is not a member of A. A class containing that thing and everything that is a member of A will be a class that A is a proper subclass of. That there is such a thing that the predicate A does not apply to is exactly what (2′) and (2′′) express. Goodman and Quine present more nominalistically acceptable rephrasings of platonistic statements, including much more complicated ones. Among these are ‘There are more human cells than humans’ and ‘There are exactly one third as many Canadians as Mexicans’. Rephrasing these sentences requires a bit more technical apparatus than mere predicate logic. Since this will only be introduced in the following section, we leave it at that here, trusting that the impression one gets from the examples we give is sufficient to see what Goodman and Quine are up to in this paper. What more is needed is what is commonly called ‘mereology’: the theory of parts and wholes. Rather than coming back to the examples given in the paper, we shall give some of Goodman’s constructions from The Structure of Appearance as examples of how the mereological apparatus can be put to use. It is not all that important to cover all the examples Goodman and Quine give here.
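Over a finite universe, the agreement between the platonistic reading of (2) and the nominalist paraphrase (2′) can likewise be checked directly; a small Python sketch (the example sets are our own):

```python
from itertools import chain, combinations

def proper_superclass_exists(universe, A):
    """Platonist reading of (2): some collection drawn from the
    universe has A as a proper subclass."""
    subsets = chain.from_iterable(
        combinations(universe, k) for k in range(len(universe) + 1))
    return any(A < set(s) for s in subsets)   # '<' is proper-subset

def something_is_not_A(universe, A):
    """Nominalist paraphrase (2''): there is an x such that not Ax."""
    return any(x not in A for x in universe)

universe = {"mouse", "whale", "sparrow", "trout"}
mammals = {"mouse", "whale"}

# The two readings agree on this universe.
print(proper_superclass_exists(universe, mammals))  # True
print(something_is_not_A(universe, mammals))        # True
print(something_is_not_A(universe, universe))       # False
```

The second function quantifies only over the objects of the universe, which is the point of the paraphrase: no collection beyond the individuals themselves is needed.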
There is no general way, no standard recipe for translating platonistic statements into nominalistically acceptable language; at least there was none known to Goodman and Quine, and there still is none known today.7 The enterprise of their paper was not to recover all of the platonists’ talk; it seems likely that this is impossible, at least with the resources that they allow. (Other resources might be available to the nominalist; see for example our discussion in § “Logic and mereology”, p. 218.) Their aim is to save what is salvageable. Talk about classes is deemed unacceptable in “Steps Toward a Constructive Nominalism”, so Goodman and Quine give some examples of how to
do without it. The hope is that this will suffice to do everything one wants – and needs – to do. One of those things is to construct a syntax and a proof theory for a formal language, since these are necessary tools for the nominalist as well as the platonist. The platonist uses classes to construct a syntax and to give a proof theory for a formal language. This is familiar from every introductory book on mathematical logic. But it is not available to the nominalist. If nominalists cannot even account for these tools, which they need to use, nominalism must be bankrupt. The refutation of this criticism is one of the reasons why Goodman and Quine undertake the project of a nominalistically acceptable construction of syntax and proof theory. There is a second reason, however. If abstract objects, and classes in particular, are basically unintelligible, how is it then that mathematicians can work with them, and agree, or reasonably disagree, about what they are doing? How can mathematics examinations be assessed and mistakes in them penalized? For Goodman and Quine, the

answer is that such intelligibility as mathematics possesses derives from the syntactical or metamathematical rules governing those marks. Accordingly we shall try to develop a syntax language that will treat mathematical expressions as concrete objects – as actual strings of physical marks. (PP: 182)

The mathematical language they take as an example, the language of set theory, is treated as mere “strings of marks without meaning”, so the issue created by the fact that Goodman and Quine find the language unintelligible does not arise; no meaning of the marks has to be assumed. The wide agreement among mathematicians concerning mathematical methods and proofs derives, according to Goodman and Quine in 1947, from the possibility of dealing with them in such a schematic, uninterpreted way. Again, we shall not go into too much detail, but merely describe the first few steps of the constructions.
We shall stop at the point where mereological resources come into play, because we have not yet introduced them. The steps we take will suffice to give a good idea of the flavour of the project. Commonly, the vocabulary of a language would be described as a class of signs (e.g. letters); the formulae would be the class of strings of such signs that are put together according to the formation rules for the language. This is unacceptable to the nominalist not only because of the use of classes, but also because letters and other signs, thus construed, are abstract objects as well. One common, platonistic
manner of speaking is to say that letters are abstract types that have tokens as their instances. The two ns in ‘instance’, for example – yes, the very marks in front of you now, the black ink on this page of this very book – are two tokens of the type n. The tokens are therefore concrete physical objects: marks on paper, in this case. Without resorting to the type–token story, Goodman and Quine build up the whole syntax for the formal language they consider by just talking about such concrete marks, or inscriptions, as they call them. They do so without appealing to types, classes or other abstract objects. The concrete marks are distinguished by their falling under different predicates. A particular v-inscription, for instance, can be distinguished from a ⊃-inscription, because the predicate ‘is a v-shaped inscription’ is true of the v-inscription, but false of the ⊃-inscription.8 Such predicates are innocuous from the nominalist point of view, as we have already seen above. The vs are the variables in the system. One variable, of course, will not be enough, and there is no telling how many are going to be needed. Platonistic syntax often uses the natural numbers (1, 2, 3, …) to index the variables (‘v1’, ‘v2’, ‘v3’, …), but these are not available to the nominalist. Instead, the accent, ‘′’, is used to generate distinct variables, ‘v’, ‘v′’, ‘v′′’, ‘v′′′’, … . An aside is needed at this point. Since inscriptions are physical marks, the number of variables will be constrained by the size of the actual physical universe, which is very big, yet finite as current science tells us. For any actual mathematical practice this seems to be good enough, but full-blooded platonists will balk at these finitist limitations, which they do not have; they have infinitely many numbers, sets, and other abstract objects in their toolkit. In their 1947 paper, Goodman and Quine declare: We decline to assume that there are infinitely many objects. 
… If in fact the concrete world is finite, acceptance of any theory that presupposes infinity would require us to assume that in addition to the concrete objects, finite in number, there are also abstract entities. (PP: 174)

This sounds as if they want to suggest that nominalism and finitism have to go hand in hand, but in fact neither follows from the other. In his 1957 publication “A World of Individuals” Goodman states this explicitly. Nominalism is not incompatible with the rejection of finitism; it is “at most incongruous … The nominalist is unlikely to be a non-finitist only in much the way a bricklayer is unlikely to be a ballet dancer” (PP: 166).9
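The accent device for generating variables can be pictured by modelling inscriptions as strings of marks. The following Python sketch is our own illustration, not part of Goodman and Quine's construction:

```python
def variables():
    """Generate distinct variable inscriptions v, v', v'', ...
    using only the accent mark to tell them apart."""
    inscription = "v"
    while True:
        yield inscription
        inscription = inscription + "'"   # concatenate one more accent

def is_variable(inscription):
    """An inscription counts as a variable iff it is a 'v'
    followed by a (possibly empty) string of accents."""
    return (inscription.startswith("v")
            and set(inscription[1:]) <= {"'"})

gen = variables()
print([next(gen) for _ in range(4)])  # ['v', "v'", "v''", "v'''"]
print(is_variable("v''"))   # True
print(is_variable("vv'"))   # False
```

No numerals index the variables here: distinctness is carried entirely by the shape of the mark, which is the spirit of the inscriptional approach.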
Let us return to the construction of the nominalistically acceptable syntax. The device used to chain the signs of the language together is called ‘concatenation’. To say that signs are concatenated basically means that they stand next to each other. In the construction, ‘C’ is used to abbreviate concatenation. ‘Cxyz’ says that x is an inscription that consists of y followed by z. For example, ‘v′’ is the concatenation of ‘v’ and ‘′’. On the basis of this, longer concatenations can be defined. Next, the syntactic rules of the language are provided. First, the basic pieces of vocabulary are called ‘characters’ (i.e. all inscriptions that fall under the predicates describing the shapes of the inscriptions, ‘is a v-shaped inscription’, for example), and a predicate ‘Char’ is introduced that is true of all and only the characters. Next follows the definition of an inscription. Something is an inscription iff either ‘Char’ is true of it, or it is the concatenation of two inscriptions. Meticulously, the whole syntax is built up like this, using only resources acceptable to the nominalist. Goodman and Quine define what a variable is (either a ‘v’, or a concatenation of a ‘v’ and a string of accents, where a string of accents is either ‘′’ or a concatenation of a string of accents with ‘′’), what a quantifier is (the concatenation of a left parenthesis and a variable and a right parenthesis), and so on, step by tedious step, until they can say what a formula is, namely, an inscription in which the signs are put together in the right way. Formulae are basically the well-formed sentences of a formal language. In predicate logic, for example, our (2′′) is a formula, but ‘x ( ⊃∀ ¬ F’ is not, although of course the latter is still an inscription, that is, a concatenation of a series of characters. To cut the rest of the story short, a theorem (i.e.
a statement that is provable in a formal system) is defined as a formula for which there is a proof, and a proof is a series of inscriptions, the lines of the proof, such that each line is a formula that is an axiom of the system, or a direct consequence of the lines that precede it, according to the (syntactic) inference rules of the system. Shortly after the joint paper Quine abandoned the “no classes” stance. In his 1975 paper “On the Individuation of Attributes” he says that he came to “grudgingly admit” classes into his ontology (Quine 1981: 100). Quine came to believe that classes were indispensable after all, despite all the efforts he and Goodman had made, and that Goodman continued to make throughout his career. But this is not the only difference between Goodman and Quine. Even though the nominalist tendencies of both converged in the 1947 rebellion against classes, the motivation for each of them was different. We shall see in Chapter 5
that Goodman admits qualia, which he takes to be abstract entities, as a basis for his construction in A Study of Qualities, and continued to do so in The Structure of Appearance. Quine, on the other hand, had already proposed a series of class theories, for example in his “New Foundations for Mathematical Logic” in 1937.10 So it might seem that this joint paper was more of a flirtation with strict nominalism for Quine, a mere experiment to see how far one could get, although this might not do justice to the situation. In Of Mind and Other Matters, Goodman retrospectively reviews the situation like this:

From the beginning, our formulations of the basic principles of nominalism differed. For Quine, nominalism could countenance nothing abstract but only concrete physical objects. For me, nominalism could countenance no classes, but only individuals. This difference, noted in the second paragraph of our joint paper “Steps Toward a Constructive Nominalism”, in no way affected our interest in or the value of the constructions undertaken in that paper. … Since “Steps”, Quine has somewhat reluctantly adopted the platonist’s luxuriant apparatus, while I inch along in stubborn austerity. Actually, the difference is not that marked; for Quine, given the chance, would gladly trade any platonistic construction for a nominalistic one, and I sometimes make use of platonistic constructions as temporary expedients awaiting eventual nominalization. (MM: 50–51)

We shall return to the question of why Quine stepped away from nominalism below. Here, however, we have finally arrived at a formulation of Goodman’s nominalism. He says, “Nominalism for me consists specifically in the refusal to recognize classes” (PP: 156).

(N4) No classes are allowed in constructing a system of the world.11
Goodman’s nominalism

Goodman also rejects many things that the typical (N1)- or (N3)-nominalist rejects. Properties, for example, are often taken to be universals, and Goodman sees no room for them in a final philosophy. He mainly dislikes properties because they are intensional,12 though, and
not merely because they are universals. Goodman is also generally suspicious of universals, but he does not consider this to constitute his nominalism, which only refuses to recognize classes. For Goodman, universals should generally be shaved off by Ockham’s razor. He does not consider it necessary to assume anything over and above individuals. The classes that Goodman rejects according to (N4) are sometimes taken to be universals (by Quine, for example), but probably more often they are taken to be particulars. They are, however, also quite often taken to be abstract objects. An (N3)-nominalist who believes this will, like Goodman, reject them. Goodman does not hold (N3), though; he accepts qualia as abstract universals (see Chapter 5). He is also wary of the distinction between abstract and concrete. He finds it “vague and capricious” (PP: 156).13 Without a proper definition of ‘abstract’ one would not even know what to reject. It is easy to see that this variety of nominalism is of a different quality from what we have encountered so far as (N1)–(N3). When Goodman wrote about nominalism in A Study of Qualities, he meant something like (N3), as we shall see in Chapter 5. For Goodman, from The Structure of Appearance on, ‘nominalistic’ is a predicate that applies to systems; roughly speaking, it is true of systems that do not incorporate classes or something similar to them. Put in precise terms, it is a matter of the system’s generating relation (Goodman 1958a). A system is nominalistic iff it does not generate more than one entity from exactly the same atoms. The atoms of the system are those entities from which other entities are generated and which themselves do not have any proper parts (see § “Mereology”, p. 92, for a precise characterization of ‘proper part’). A nominalist in the sense of (N4) might well accept abstract individuals as a basis for his system (in fact, Goodman does in The Structure of Appearance).
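The criterion of a nominalistic system can be put schematically: if each generated entity is traced back to the set of atoms it is generated from, a system is nominalistic just in case no two distinct entities share exactly the same atoms. A minimal Python sketch (representing entities by their atom-sets is our illustration, not Goodman's formalism):

```python
def is_nominalistic(entities):
    """entities: maps each generated entity (by name) to the
    frozenset of atoms it is generated from. The system is
    nominalistic iff no two distinct entities share an atom-set."""
    atom_sets = list(entities.values())
    return len(atom_sets) == len(set(atom_sets))

# Mereological fusions of atoms a and b: one whole per set of atoms.
mereology = {"a": frozenset("a"), "b": frozenset("b"),
             "a+b": frozenset("ab")}

# Class theory: {a, b} and {a, {a, b}} are distinct entities
# built from the very same atoms a and b.
classes = dict(mereology, **{"{a,{a,b}}": frozenset("ab")})

print(is_nominalistic(mereology))  # True
print(is_nominalistic(classes))    # False
```

The check is simply one of injectivity: the generating relation must not produce two outputs from one and the same atomic input.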
What he rejects is the “undue multiplication of entities” (PP: 163). (Strictly speaking, this is not quite the same as (N4), even though Goodman seems to take it to be. We return to this later, and for now follow Goodman’s way of presenting the issue.) Goodman finds the generating relation of the theory of classes (or set theory) offensive in this way. As we mentioned above, classes contain individuals, are identified by what objects they contain, and are themselves entities. The class containing the individuals a and b is denoted {a, b}. Since {a, b} is an object itself again, we can stick it in another class, say, together with a again: {a, {a, b}}. This latter class has two members: a and {a, b}, just as {a, b} has two members, a and b. Since the members of the two classes are different, the classes are
different as well, although they are generated from the same individuals. This is what Goodman objects to. Such a multiplication of entities he finds unintelligible, and hence, as he and Quine write, “unsatisfactory as a final philosophy” (PP: 173). What is acceptable to Goodman is the generating relation of the theory of parts and wholes, called ‘mereology’. If we have the two atoms a and b again, there is only one further thing that is generated from those, namely the whole that has as parts a and b.14 Further adding a to it does not make any sense: it is already a part of it. Imagine that the world contains only a banana b, which is composed of two atoms: its interior i and its skin s. Neither i nor s is taken to have any smaller parts that it is made up from, since we presuppose here that they are atoms. ‘Atom’ here is obviously not to be understood in the sense that modern physics ascribes to it, as having protons and neutrons as a nucleus with electrons whizzing around it. ‘Atom’ in our context means nothing more and nothing less than ‘thing that is not composed’ or ‘thing that does not have any proper parts’. (We shall cover mereology in more detail in § “Mereology”, p. 92.) So, if we consider s and i together, this is b. But there is no sense in adding s again to b; it is part of b already. If we form a class of s and i, though, {s, i}, this is different from the class {{s, i}, s}. Even if we wanted to identify {s, i} with b, {s, i} would still be different from {s, b}. If b is identical with {s, i}, then just exchanging the names in ‘{{s, i}, s}’ does not make any difference. And we can go on like this: {{s, i}, s} is different from {{{s, i}, s}, s}, and both are different from {{{{s, i}, s}, s}, s}, and so on. Out of the two atoms we started with, class theory generates infinitely many entities in this way. To be more precise, the atoms are not needed at all to pull off this “trick”. We already talked about the empty class being an entity as well. 
We can denote it with ‘{ }’ or ‘∅’; the latter is the common sign for it. Now, given what we already know, we can form the class that contains the empty class, ‘{∅}’, which is different from the empty class: it has a member (namely the empty class), but the empty class has no members. Then we can put this class into yet another class, either alone, ‘{{∅}}’, or together with the empty class, ‘{{∅}, ∅}’, and continue like this indefinitely. Voilà – we have generated infinitely many entities out of nothing! With only mereological composition, however, the number of entities that are generated from a given number of atoms is comparatively moderate. From three atoms, for example, exactly seven individuals are generated: the three atoms individually, each pair of them taken together,
and all three of them together. More generally, starting out with any number n of atoms, one ends up with 2ⁿ – 1 individuals.15 Finitely many atoms, therefore, only ever give rise to finitely many individuals. Goodman’s worry about what sense it makes to inflate one’s ontology in the way the theory of classes does is what nurtures his nominalism; he finds it incomprehensible. This does not mean, of course, that he does not understand the theory of classes from a technical point of view (after all, he used it in A Study of Qualities); it does not make any sense to him from a philosophical perspective. If philosophy is meant to make sense of, for example, science (see Chapter 3), and clarify and explain things, and classes are unintelligible, then classes must not be used in the philosophical enterprise. Is this nominalism, though? Classical nominalism seems to be something that is much better characterized as (N1) or (N3), but (N4) seems to be something different altogether. As one probably already expects, given Goodman’s approach to philosophy that we got to know in Chapter 3, he is not too worried about this. He defines precisely what he means by his nominalism. He takes his nominalism to be motivated by the classical worry about the “undue multiplication of entities”, that is, by something like Ockham’s razor, and therefore by the position that motivated our (N2). He also takes his nominalism to be a reasonable formulation of this traditional view, and remarks, “I willingly submit this claim to Father Bocheński16 for adjudication. If he rules against me, he deprives me of nothing but a label that incites opposition” (PP: 163). Goodman’s motivation might have something to do with (N2), but (N4) does not follow from it, of course. Quine gives up his nominalism because he takes classes to be required for science, that is, he rejects (N4), but he surely holds (N2), as probably every sensible philosopher does.
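The contrast between the two generating relations can be made vivid with a short sketch, modelling classes as Python frozensets (our representation, not Goodman's):

```python
from itertools import chain, combinations

def fusions(atoms):
    """Mereological individuals generated from the atoms:
    one fusion per non-empty set of atoms."""
    return [set(c) for c in chain.from_iterable(
        combinations(atoms, k) for k in range(1, len(atoms) + 1))]

print(len(fusions({"a", "b", "c"})))  # 7, i.e. 2**3 - 1

# Class theory instead spins entities out of nothing:
# ∅, {∅}, {{∅}}, ... are pairwise distinct.
classes, c = [], frozenset()
for _ in range(5):
    classes.append(c)
    c = frozenset({c})       # wrap the previous class in a new one
print(len(set(classes)))     # 5 distinct entities so far, with no end in sight
```

Three atoms yield exactly seven fusions, confirming the 2ⁿ – 1 count, while the class-theoretic loop never repeats itself: each wrapping produces a new entity from the same (empty) atomic basis.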
Quine steps away from (N4) because he takes mathematics to be indispensable for science (Quine 1960: §55).17 Roughly, his thought is that modern science is the best theory of the physical world we currently have, but it makes heavy use of mathematics and it cannot do without it. Putnam, who is on Quine’s side, states that:

quantification over mathematical entities is indispensable for science, both formal and physical; therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question. This type of argument stems, of course, from Quine, who has for years stressed both the indispensability of quantification over mathematical entities and the intellectual dishonesty of denying the existence of what one daily presupposes. (Putnam 1979b: 347)
The theory of classes, or set theory, comes into play since it is strong enough to represent virtually any part of mathematics within it, and many mathematicians (at least those who care about these questions), as well as some philosophers of mathematics in the twentieth century and today, would take it to be the foundation of mathematics. In the more recent debate, Quine’s and Putnam’s claim that mathematics is indispensable to science has been challenged by the next generation of nominalists. Hartry Field, for example, in 1980 published a book called Science without Numbers. In true Goodmanian spirit, he provides a construction that covers the whole of Newtonian gravitational theory without any use of numbers or classes. Admittedly, to achieve this he had to modify Goodman’s system in ways we do not have space to discuss here. He still, however, sticks to (N4).18 Field’s book, Goodman and Quine’s “Steps Toward a Constructive Nominalism” and Goodman’s The Structure of Appearance have much in common. All are astounding achievements in constructing systems on a very slender basis, in particular obeying (N4), and all three went far beyond what was believed to be achievable with nominalistically acceptable means. Also, all these constructions provoked massive uproar among the platonistically minded majority of the philosophical community. Gideon Rosen and John Burgess, for instance, launched an attack on nominalism after Field published his book. One among the many points that they raised concerned the awkward, unnatural and cumbersome appearance of Field’s system, which they claim is unsuitable for scientific practice and will only hinder scientific research and progress (Burgess 1983: 98; Burgess & Rosen 1997). Having seen only the very first steps of Goodman and Quine’s construction, one might be able to guess where they are coming from.
Field replies that what seems awkward and unnatural largely depends on what one is used to; for someone who was only used to nominalistic techniques, mathematical theories must seem highly awkward and unnatural. He also illustrated his point as follows:

As I once heard Hilary Putnam remark, you need to introduce an awful lot of definitions and an awful lot of background theory before you can formulate the laws of electromagnetism on a T-shirt. Imagine what they’d look like formulated in set theoretic terms, using ‘∈’ as the only mathematical primitive! (Field 1990: 210)

Goodman, responding to the same kind of worry, reminds us that he does not want to force scientists to change what they are doing.
Particulars and parts
The scientist may use platonistic class constructions, complex numbers, divination by inspection of entrails, or any claptrappery that he thinks may help him to get the results he wants. But what he produces then becomes raw material for the philosopher, whose task is to make sense of all this: to clarify, simplify, explain, interpret in understandable terms. (PP: 168)

So, since Goodman takes classes to be incomprehensible and to make no sense from a philosophical point of view, the use of classes stands in the way of this task, and cannot be admitted. In this light, it seems unfair to charge Goodman with “intellectual dishonesty”. Quite the opposite seems to be the case: it is intellectual honesty, his unwillingness to make use of something in his philosophy that he finds incomprehensible, that drives Goodman to “inch along in stubborn austerity” (MM: 51).

Another criticism that is often put forward against nominalism is how little can be achieved using only the limited resources that the nominalist admits. Goodman’s system, of course, is much weaker than full-blown platonistic mathematics, so, in a sense, nominalists restricting themselves to these means will never be able to use anything that equals the luxurious resources of the platonist. But there is no telling how far the nominalist can go. Until Field’s advances, Quine’s contention that science is impossible without mathematics seemed obviously true. Now there is at least a strong doubt.19 Goodman suggests that more nominalistically acceptable techniques might be found, and it might well be that all the physics that the platonist can do today will one day be available to the nominalist as well. At the very least, there is no way of showing that it is impossible. On the flip side, the platonist has no guarantee either that all of future physics can be done with the mathematics available today (e.g.
PP: 167; MM: 51).20

One thing especially puzzles Goodman concerning the platonistic criticism of the nominalist’s efforts: every method used by the nominalist is acceptable by the platonist’s lights; everything the nominalist does, the platonists do as well, or at least they could. The progress is slower (which might partly be because far fewer people are working within a nominalistic framework), but when “the nominalist and the platonist say au revoir, only the nominalist can be counted on to comply with the familiar parting admonition they may exchange: ‘Don’t do anything I wouldn’t do’” (PP: 171). The advantage of a nominalistic system is, as Goodman points out in the original preface to The Structure of Appearance, that any nominalistic system
Nelson Goodman
can straightforwardly be turned into a platonistic one without any problems; the reverse, though, is not the case.21 On purely methodological grounds, Goodman therefore takes nominalistic systems to be preferable to platonistic ones.
Mereology

Around 1930, together with Henry Leonard, Goodman developed the system called the ‘calculus of individuals’. Leonard used some of their results in his PhD thesis, Singular Terms (1930), which was probably also influenced by Whitehead’s views on this matter, Whitehead being Leonard’s supervisor.22 In 1936 Leonard and Goodman read a paper presenting the system before the Association for Symbolic Logic, and it was published in 1940 (Elgin 1997a). The calculus of individuals is a mereological system. In slightly altered (but equivalent) form, Goodman later used it for his constructions in A Study of Qualities and The Structure of Appearance.

The calculus of individuals was originally developed by Leonard and Goodman in order to solve some technical problems dealt with in Leonard’s Singular Terms and Goodman’s A Study of Qualities. It was meant as a supplement to the calculus of classes, rather than as a replacement for it.23 Only later did the calculus become important for Goodman’s nominalist programme and his rejection of classes. It turns out that the calculus of individuals actually suffices to carry out Goodman’s construction in The Structure of Appearance; the calculus of classes is not needed. The constructions using the calculus of individuals also satisfy Goodman’s demands for an acceptable explication, as presented in Chapter 3, whereas the “unintelligible” classes do not. In this chapter we shall discuss Goodman’s later nominalist attitude towards the calculus of individuals. We shall return to the technical advantages of the calculus for the constructions of A Study of Qualities and The Structure of Appearance in Chapter 5.

Leonard and Goodman’s system was not entirely new, however. Quine, who learnt of Leonard and Goodman’s project in 1935, noticed an independent predecessor (Quine 1985: 122). The Polish logician Stanisław Leśniewski (1886–1939) developed the first formal mereological systems, and published his first results in 1916.
Later Tarski, who was Leśniewski’s student, presented mereological systems as well, which were based on modern predicate logic rather than on Leśniewski’s term logic.24
Leśniewski’s system struck Leonard and Goodman as “rather inaccessible”, lacking many definitions that they wanted to make use of, and “set forth in the language of an unfamiliar doctrine” (Elgin 1997a: 130). However, all these systems – Leśniewski’s, Tarski’s, Leonard and Goodman’s, and Goodman’s later system – are equivalent. They can be translated into each other, and the same sentences come out as theorems under the respective translations.25 Leonard and Goodman remarked on this, without proof, in their 1940 publication.

We shall present the basics of Goodman’s calculus of individuals from The Structure of Appearance. Above, where we described the construction of “Steps Toward a Constructive Nominalism”, we noted the enormous generosity of Goodman and Quine concerning predicates: basically, anything of the right grammatical category that applies to individuals will do. In The Structure of Appearance, Goodman is yet more explicit about this. A nominalistic language may even contain “platonistic-sounding” predicates, such as ‘belongs to some class satisfying the function F’. The nominalist does not thereby acknowledge classes and functions by allowing for such a predicate: the predicate is a string of words that cannot be broken down into its components, since the words of the predicate are not separable units of the language. The only restriction on predicates is that they have to be predicates of individuals. The choice of predicates “will be governed not by demands of nominalism, but by such general considerations as clarity and economy” (SA: 27).

For his version of the calculus of individuals, Goodman uses a very economical choice of predicates indeed (i.e. one made according to simplicity considerations): he has only one primitive predicate. An expression is said to be primitive in a system iff it is not defined in that system. This primitive (two-place) predicate of his system is ‘overlaps’.
(In the Leonard–Goodman version of the calculus, incidentally, the system was built on the notion of discreteness, which will be defined below for the calculus of The Structure of Appearance on the basis of overlapping.) Goodman uses the symbol ‘o’ for ‘overlaps’ in the formalism. He informally glosses what the predicate is intended to mean as: “Two individuals overlap if they have some common content, whether or not either is wholly contained in the other” (SA: 34).

The first thing that can be said more formally about overlapping is that in all and only those cases where two individuals x and y overlap, there is an individual z such that whatever overlaps with z also overlaps with x and y:

x o y ≡ ∃z∀w(w o z ⊃ (w o x ∧ w o y))
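This biconditional can be checked in a small toy model. The following sketch is our illustration only, not anything found in The Structure of Appearance: we take the individuals to be the nonempty subsets of a few atoms, and let two individuals overlap iff they share an atom.

```python
from itertools import combinations

# Toy model (our illustration, not Goodman's own construction):
# individuals are the nonempty subsets of a small set of atoms,
# and two individuals overlap iff they share at least one atom.
atoms = [1, 2, 3]
individuals = [frozenset(c)
               for r in range(1, len(atoms) + 1)
               for c in combinations(atoms, r)]

def o(x, y):
    """'x o y': x and y have some common content."""
    return bool(x & y)

def rhs(x, y):
    """There is a z such that whatever overlaps z overlaps both x and y."""
    return any(all(o(w, x) and o(w, y)
                   for w in individuals if o(w, z))
               for z in individuals)

# The biconditional  x o y  iff  Ez Aw (w o z -> (w o x & w o y))
# holds throughout this finite model.
assert all(o(x, y) == rhs(x, y)
           for x in individuals
           for y in individuals)
```

When x and y do overlap, the witness for z is just their common content x ∩ y: anything overlapping that region must overlap both x and y; conversely, every atom of any such witness z must lie in both x and y, so x and y overlap.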
It is easiest to imagine this with a spatial example. If two regions in space overlap, then there is a region that is wholly contained in both of them; namely, the region where the overlap occurs. Whatever overlaps with that region has to overlap the two original ones as well (you can try to draw a two-dimensional picture of this). It is easy to see that this still holds when one of the two regions is wholly contained in the other one (like Scotland in the United Kingdom), or even when the regions are identical.

With the help of ‘overlaps’, ‘is discrete from’ can easily be defined as well. Two individuals are discrete iff they do not overlap. (Had we started with ‘is discrete from’, we could have just run the definition in the other direction.)

x ∫ y =df ¬ x o y

We can also define what a part is, which is vital, of course, since we want to end up with a mereology, that is, a theory of parts and wholes. So, x is a part of y iff everything that overlaps with x also overlaps with y.

x < y =df ∀z(z o x ⊃ z o y)

This does not yet rule out the identity of the individuals. In other words, on this definition of ‘is part of’, everything is a part of itself (‘∀x (x < x)’). It might be useful for some purposes, though, to have a notion of ‘is part of’ available on which, if x is a part of y, then they are not identical. This can easily be defined from ‘