Embodiment, Ego-Space, and Action (Carnegie Mellon Symposia on Cognition)


Embodiment, Ego-Space, and Action

Carnegie Mellon Symposia on Cognition
David Klahr, Series Editor

Anderson • Cognitive Skills and Their Acquisition
Carroll/Payne • Cognition and Social Behavior
Carver/Klahr • Cognition and Instruction: Twenty-Five Years of Progress
Clark/Fiske • Affect and Cognition
Cohen/Schooler • Scientific Approaches to Consciousness
Cole • Perception and Production of Fluent Speech
Farah/Ratcliff • The Neuropsychology of High-Level Vision: Collected Tutorial Essays
Gershkoff-Stowe/Rakison • Building Object Categories in Developmental Time
Granrud • Visual Perception and Cognition in Infancy
Gregg • Knowledge and Cognition
Just/Carpenter • Cognitive Processes in Comprehension
Kimchi/Behrmann/Olson • Perceptual Organization in Vision: Behavioral and Neural Perspectives
Klahr • Cognition and Instruction
Klahr/Kotovsky • Complex Information Processing: The Impact of Herbert A. Simon
Lau/Sears • Political Cognition
Lovett/Shah • Thinking With Data
MacWhinney • The Emergence of Language
MacWhinney • Mechanisms of Language Acquisition
McClelland/Siegler • Mechanisms of Cognitive Development: Behavioral and Neural Perspectives
Reder • Implicit Memory and Metacognition
Siegler • Children's Thinking: What Develops?
Sophian • Origins of Cognitive Skills
Steier/Mitchell • Mind Matters: A Tribute to Allen Newell
VanLehn • Architectures for Intelligence

Embodiment, Ego-Space, and Action Edited by

Roberta L. Klatzky, Brian MacWhinney, and Marlene Behrmann

Psychology Press Taylor & Francis Group 270 Madison Avenue New York, NY 10016

Psychology Press Taylor & Francis Group 27 Church Road Hove, East Sussex BN3 2FA

© 2008 by Taylor & Francis Group, LLC Printed in the United States of America on acid-free paper 10 9 8 7 6 5 4 3 2 1 International Standard Book Number-13: 978-0-8058-6288-1 (0) Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the Psychology Press Web site at http://www.psypress.com

Contents

About the Editors  vii
Contributors  ix
Editors' Preface
Roberta L. Klatzky, Marlene Behrmann, and Brian MacWhinney  xi

1. Measuring Spatial Perception with Spatial Updating and Action
   Jack M. Loomis and John W. Philbeck  1
2. Bodily and Motor Contributions to Action Perception
   Günther Knoblich  45
3. The Social Dance: On-Line Body Perception in the Context of Others
   Catherine L. Reed and Daniel N. McIntosh  79
4. Embodied Motion Perception: Psychophysical Studies of the Factors Defining Visual Sensitivity to Self- and Other-Generated Actions
   Maggie Shiffrar  113
5. The Embodied Actor in Multiple Frames of Reference
   Roberta L. Klatzky and Bing Wu  145
6. An Action-Specific Approach to Spatial Perception
   Dennis R. Proffitt  179
7. The Affordance Competition Hypothesis: A Framework for Embodied Behavior
   Paul Cisek  203
8. fMRI Investigations of Reaching and Ego Space in Human Superior Parieto-Occipital Cortex
   Jody C. Culham, Jason Gallivan, Cristiana Cavina-Pratesi, and Derek J. Quinlan  247
9. The Growing Body in Action: What Infant Locomotion Tells Us About Perceptually Guided Action
   Karen E. Adolph  275
10. Motor Knowledge and Action Understanding: A Developmental Perspective
    Bennett I. Bertenthal and Matthew R. Longo  323
11. How Mental Models Encode Embodied Linguistic Perspectives
    Brian MacWhinney  369

Author Index  411
Subject Index  419

About the Editors

Marlene Behrmann, PhD, is a professor in the Department of Psychology, Carnegie Mellon University, and has appointments in the Center for the Neural Basis of Cognition (Carnegie Mellon University and University of Pittsburgh) and in the Departments of Neuroscience and Communication Disorders at the University of Pittsburgh. Her research focuses on the psychological and neural mechanisms that underlie the ability to recognize visual scenes and objects, represent them internally in visual imagery, and interact with them through eye movements, reaching and grasping, and navigation. One major research approach involves the study of individuals who have sustained brain damage that selectively affects their visual processes, including individuals with lesions to the parietal cortex and to the temporal cortex. This neuropsychological approach is combined with several other methodologies, including behavioral studies with normal subjects, simulating neural breakdown using neural network models, and examining the biological substrate using functional and structural neuroimaging to elucidate the neural mechanisms supporting visual cognition.

Roberta L. Klatzky, PhD, is a professor of psychology at Carnegie Mellon University, where she is also on the faculty of the Center for the Neural Basis of Cognition and the Human–Computer Interaction Institute. She received a BS in mathematics from the University of Michigan and a PhD in experimental psychology from Stanford University. Before coming to Carnegie Mellon, she was a member of the faculty at the University of California, Santa Barbara. Klatzky's research interests are in human perception and cognition, with special emphasis on spatial cognition and haptic perception. She has done extensive research on human haptic and visual object recognition, navigation under visual and nonvisual guidance, and perceptually guided action. Her work has application to navigation aids for the blind, haptic interfaces, exploratory robotics, teleoperation, and virtual environments. Professor Klatzky is the author of over 200 articles and chapters, and she has authored or edited six books.

Brian MacWhinney, PhD, is a professor of psychology at Carnegie Mellon University. He is also on the faculty of Modern Languages and the Language Technologies Institute. His work has examined a variety of issues in first and second language learning and processing. Recently, he has been exploring the role of embodiment in mental imagery as a support for language processing. He proposes that this embodied mental imagery is organized through a system of perspective taking that operates on the levels of direct perception, space/time/aspect, action plans, and social schemas. Grammatical structures, such as pronominalization and relativization, provide methods for signaling perspective switches on each of these levels. He is interested in relating this higher level psycholinguistic account to basic neural and perceptual mechanisms for the construction and projection of the body image.

Contributors

Karen E. Adolph, PhD, Department of Psychology, New York University, New York, New York (USA)
Bennett I. Bertenthal, PhD, Department of Psychology, University of Indiana, Bloomington, Indiana (USA)
Cristiana Cavina-Pratesi, PhD, Department of Psychology, University of Durham, Durham, Great Britain (UK)
Paul Cisek, PhD, Department of Physiology, University of Montreal, Montreal, Quebec (Canada)
Jody C. Culham, PhD, Department of Psychology and Neuroscience Program, University of Western Ontario, London, Ontario (Canada)
Jason Gallivan, PhD, Neuroscience Program, University of Western Ontario, London, Ontario (Canada)
Günther Knoblich, PhD, Rutgers University, Newark, New Jersey (USA), and Center for Interdisciplinary Research, University of Bielefeld, Bielefeld, Eastern Westphalia (Germany)
Matthew R. Longo, PhD, Institute of Cognitive Neuroscience, University College London, London, Great Britain (UK)
Jack M. Loomis, PhD, Department of Psychology, University of California, Santa Barbara, Santa Barbara, California (USA)
Daniel N. McIntosh, PhD, Department of Psychology, University of Denver, Denver, Colorado (USA)
John W. Philbeck, PhD, Department of Psychology, George Washington University, Washington, D.C. (USA)
Dennis R. Proffitt, PhD, Department of Psychology, University of Virginia, Charlottesville, Virginia (USA)
Derek J. Quinlan, PhD, Neuroscience Program, University of Western Ontario, London, Ontario (Canada)
Catherine L. Reed, PhD, Department of Psychology, University of Denver, Denver, Colorado (USA)
Maggie Shiffrar, PhD, Department of Psychology, Rutgers University, Newark, New Jersey (USA)
Bing Wu, PhD, Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania (USA)

Editors’ Preface

This volume is a collection of papers presented at the 34th Carnegie Symposium on Cognition, held at Carnegie Mellon University in Pittsburgh in June 2006. The symposium was motivated by the increasing visibility of a research approach that has come to be called embodiment.

But what, exactly, is embodiment? For an insight into this question, consider a seemingly elementary action: breathing. When we breathe, we engage in a series of inhalations that are largely involuntary. These inhalations are physiologically equivalent to the inhalations we produce voluntarily when we want to sniff a flower. Even so, are the effects of the involuntary inhalations the same as the effects of the voluntary sniffs? Brain imaging shows that there are clear differences in activation, depending on the intention of the sniffer (Zelano et al., 2005). If it is true that even something as basic as breathing is contextualized, we should not be surprised to find that perception and cognition are permeated by the context of the self in the world, where that context is sensory, spatial, temporal, social, and goal directed. That is embodiment.

Our definition of embodiment takes contextual influence as its cornerstone. It is based on the assumption that the way people perceive and act in the world around them is influenced by their ongoing representations of themselves in that world. The overarching goal of the 34th symposium was to further our understanding of embodiment from multiple perspectives: mechanistic (including computational), neurophysiological, and developmental. We intended to identify important phenomena that reflect embodiment, advance theoretical understanding across a broad range of related fields, and provide a foundation for future efforts in this emerging area of research.

The formulation of embodiment underlying the shaping of the symposium was purposefully general, in order to accommodate the variety of approaches we saw as relevant. One particularly clear position has been articulated by Clark (1999). Clark seized on J. J. Gibson's concept of affordance as the central construct of embodiment. Gibson proposed that the environment offers or "affords" to the perceiver/actor direct cues about what actions are possible. As the organism acts on the environment, its own state changes, as do the affordances offered by the world, creating an ongoing dynamic chain. When an organism is able to create an internal simulation of these dynamic events, presumably by invoking its intrinsic perceptual–motor mechanisms, something new happens: A mode of processing emerges that facilitates perception and cognition. It is the embodied mode.

The idea that an embodied processing mode exists has won increasing acceptance, but its scope remains a point of contention. As formulated by Clark (1999), an extreme view of embodied cognitive science is that all thinking is embedded in body-based dynamic simulations; there is no fixed modular structure for the mind, nor do theorists need to postulate abstracted representations. Wilson (2002, p. 626) described as radical the view that, "The information flow between mind and world is so dense and continuous that, for scientists studying the nature of cognitive activity, the mind alone is not a meaningful unit of analysis." We are not such extremists, but we find the range of phenomena that potentially reflect the embodied context to be rich indeed.

Consider some everyday observations that might invoke simulation of the body: A person turns to a friend while walking but maintains her course unerringly. An infant who is just mastering the act of walking accommodates immediately to wearing a heavy coat. A tennis player finds her game improved after watching a professional match. A newborn imitates a face he sees. Grandparents and grandchildren watching a puppet show respond to the dolls' gestures and expressions as if they were people.

Few formalisms have been put forward to describe how the context of the perceiver/actor functions at a mechanistic level and what neural structures support those functions. At this point, we have more phenomena than we have mechanisms. Behavioral research has revealed a number of tantalizing outcomes that point to a role for the representation of the body in basic human function. Embodiment has been theorized to play a role in eye movements, reaching and grasping, locomotion and navigation, infant imitation, spatial and social perspective taking, problem solving, and dysfunctions as diverse as phantom limb pain and autism. Neuroscientists have identified multiple sensorimotor maps of the body within the cortex, specific brain areas devoted to the representation of space and place, and cells that acknowledge the relation between one's own and another's movements. Developmental researchers have studied neonatal behaviors indicating a representation of self and have traced the course of spatially oriented action across the early years. Computational modelers have pointed to sensory-based feed-forward mechanisms in motor control.

In organizing the symposium, we felt that what was needed was a shared effort to merge these perspectives to further our understanding of the forms and functional roles of the embodied representation. As a potentially useful starting point, we suggest three embodied perspectives that might form a context for perceiving, acting, and thinking:

1. The body image is the ongoing internal record of the relative disposition of body parts across time.
2. The body schema is the set of potential body images, where the potential pertains either to the capability of all members of a species or specifically to one's own self. (We acknowledge that these terms have a variety of definitions in the literature; see Paillard, 1999, following Head & Holmes, 1911.)
3. The spatial image is a representation of the current disposition of the body within surrounding space at a given point in time.

How might the body image, body schema, and spatial image play a role in perception and action? Directly opposing the extreme view of embodiment, what we will call the nonembodied approach would postulate that body-based representations are merely elements of information processing that provide necessary data for computations. For example, corollary-discharge theory describes an algorithm that enables the organism to keep a stable spatial image during an eye movement, whereby the updating of eye-position coordinates cancels the flow of visible elements in retinal coordinates. Both coordinate systems exist and provide necessary data, but this mechanism imputes no special status to them. As another example, updating of the spatial image while walking without vision might be performed by sensing proprioceptive signals, deriving estimates of translational and rotational velocity, and feeding those to an internal process that integrates the signals over time. In this example, as in the first, the afferent signals and the derived estimate are merely data; the fact that they represent the body has no special mechanistic implications. This model requires neural mechanisms related to the self solely to support a coherent body experience.

A second model, which we call the mapping model, proposes that the body image and schema function as integrated representations of relatively complex sensorimotor patterns. However, this model further stipulates that body representations play no fundamental role beyond serving as complex elements for purposes of input matching and output generation. On the input side, sensorimotor patterns would resonate to perceptual inputs, hence enabling recognition (conscious or unconscious) of one's own or another's action. On the output side, the body image serves as a token for motor synergies that execute complex actions. Activation of the body image would initiate execution of the corresponding motor program, but beyond this "button-pressing" function, embodiment would play no direct role in motor control. The mapping model requires neural mechanisms to represent complex patterns in perception and action, but does not propose that they are simulated to provide context.

A third approach, which we call the embodied model, assumes that body representations are a unique form of data that are mechanistically involved in a broad set of information-processing capabilities, by virtue of perceptual–motor simulation and the context it provides. The visual input from another's actions might be interpreted by activating the body image, analogously to the analysis-by-synthesis theory of speech perception, which postulates that listeners create or synthesize an ongoing predictive model of the speech that they are hearing. The body schema might allow performers to compare the observed behavior of another to their own habitual actions, enabling them to improve by watching an expert. The spatial image would be used to plan pathways through the immediate environment.

This framing of the embodied model gives rise to fundamental issues, including: How are the body image, body schema, and spatial image implemented, functionally and neurophysiologically? (Some implementation is required, whether one assumes the nonembodied model, where these representations merely support subjective impressions, or the mapping or embodied model, where they function directly in information processing.) How do these entities function in thinking, as well as perceiving and acting? How and for what purposes are diverse body parts integrated into a representation of the self, and how is this representation updated as the person/environment linkage changes through external forces or the person's own actions? What kinds of neural structures support simulated movements of the body that might be used for learning and premovement planning? What are the developmental origins and time course of the body image, spatial image, and body schema?

Authors of this volume bring to bear on these and related questions a broad range of theory and empirical findings. Biological foundations and models are dealt with by Culham and Cisek; the spatial image is the focus of chapters by Klatzky, Loomis, and Proffitt; Reed and Shiffrar consider the body image and body schema; Knoblich, Adolph, and Bertenthal provide the developmental viewpoint; and MacWhinney brings a linguistic perspective. The symposium leading to this book was charged with excitement about the specific research presented and the overall perspective of embodiment, and it is our hope that its publication will enable readers to share in that excitement. We gratefully acknowledge the symposium support provided by the National Science Foundation under Grant No. 0544568.

Roberta Klatzky
Marlene Behrmann
Brian MacWhinney

References

Clark, A. (1999). An embodied cognitive science? Trends in Cognitive Sciences, 3, 345–351.
Head, H., & Holmes, G. (1911). Sensory disturbances from cerebral lesions. Brain, 34, 102–254.
Paillard, J. (1999). Body schema and body image: A double dissociation in deafferented patients. In G. N. Gantchev, S. Mori, & J. Massion (Eds.), Motor control, today and tomorrow (pp. 197–214). Sofia, Bulgaria: Academic.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9, 625–636.
Zelano, C., Bensafi, M., Porter, J., Mainland, J., Johnson, B., Bremner, E., et al. (2005). Attentional modulation in human primary olfactory cortex. Nature Neuroscience, 8, 114–120.

1 Measuring Spatial Perception with Spatial Updating and Action

Jack M. Loomis and John W. Philbeck

Measurement of perceived egocentric distance, whether of visual or auditory targets, is a topic of fundamental importance that is still being actively pursued and debated. Beyond its intrinsic interest to psychologists and philosophers alike, it is important to the understanding of many other topics which involve distance perception. For example, many complex behaviors like driving, piloting of aircraft, sport activities, and dance often involve distance perception. Consequently, understanding when and why errors in distance perception occur will illuminate the reasons for error and disfluency in these behaviors. Also, the understanding of distance perception is important in the current debate about the "two visual systems," one ostensibly concerned with the conscious perception of 3-D space and the other with on-line control of action. Similarly, determining whether nonsensory factors, such as intention to act and energetic state of the observer, influence perceived distance, as has been claimed (e.g., Proffitt, Stefanucci, Banton, & Epstein, 2003; Witt, Proffitt, & Epstein, 2004, 2005), depends critically on the meaning of distance perception and how it is to be measured. Still another topic where measurement of distance perception is critical is spatial updating (the imaginal updating of a target perceived only prior to observer movement) involving observer translation. Being able to measure the accuracy of spatial updating depends upon being able to partial out errors due to misperception of the initial target distance (Böök & Gärling, 1981; Loomis, Klatzky, Philbeck, & Golledge, 1998; Loomis, Lippa, Klatzky, & Golledge, 2002; Philbeck, Loomis, & Beall, 1997). Finally, measurement of distance perception is important for the development of effective visual and auditory displays of 3-D space. Indeed, developing virtual reality systems that exhibit naturally appearing scale has proven an enormous challenge, both for visual virtual reality (Loomis & Knapp, 2003) and for auditory virtual reality (Loomis, Klatzky, & Golledge, 1999), and there has been a spate of recent research articles concerned with understanding the causes for uniform scale compression in many visual virtual environments (e.g., Creem-Regehr, Willemsen, Gooch, & Thompson, 2005; Knapp, 1999; Knapp & Loomis, 2004; Sahm, Creem-Regehr, Thompson, & Willemsen, 2005; Thompson, Willemsen, Gooch, Creem-Regehr, Loomis et al., 2004). Virtual reality systems that successfully create a realistic sense of scale will enjoy even greater aesthetic impact and user acceptance and will prove even more useful in the training of skills, such as safe road crossing behavior by blind and sighted children.

Indirectness of Perception

Naïve realism is the commonsense view that the world we encounter in everyday life is identical with the physical world that we come to know about through our schooling. Following decades of intellectual inquiry, philosophers of mind and scientists have come to an alternate view, referred to as "representative realism," according to which contact with the physical world is indirect and what we experience in everyday life is a representation created by our senses and central nervous system (e.g., Brain, 1951; Koch, 2003; Lehar, 2003; Loomis, 1992; Russell, 1948; Smythies, 1994). Indeed, this representation, generally referred to as the phenomenal world, is so highly consistent and veridical that we routinely make life-depending decisions without ever suspecting that the perceptual information upon which we are relying is once removed from the physical world. The high degree of functionality of the perceptual process accounts for its being self-concealing and for the reason that most laypeople and indeed many scientists think of perception as little more than attention to aspects of the environment.


The representational nature of perceptual experience is easy to appreciate with color vision because the mapping from physical stimulation to perceptual space entails a huge loss of information, from the many dimensions of spectral lights to the three perceptual dimensions of photopic color vision. In order to appreciate the representational nature of perception more generally it is helpful to keep in mind such perceptual phenomena as diplopia, binocular stereopsis elicited by stereograms, geometric visual illusions, and motion illusions; such phenomena point to a physical world beyond the world of appearance. Although experiencing such phenomena momentarily reminds us of the representational nature of perception, we too easily lapse back into naïve realism when driving our cars, engaging in sports activity, and interacting with other people. It is quite an intellectual challenge to appreciate that the very three-dimensional world we experience in day-to-day life is an elaborate perceptual representation. Indeed, many people seem to be naïve realists when it comes to visual space perception, for they think of visual space perception largely as one of judging distance. But visual space perception is so much more than this: it gives rise to our experience of the surrounding visual world, consisting of surfaces and objects lying in depth (e.g., Gogel, 1990; Howard & Rogers, 2002; Loomis, Da Silva, Fujita, & Fukusima, 1992; Marr, 1982; Ooi, Wu, & He, 2006; Wu, Ooi, & He, 2004). Virtual reality makes the representational nature of visual space perception obvious (Loomis, 1992), for the user experiences being immersed within environments which have no physical existence (other than being bits in computer memory). Teleoperator systems are useful for drawing the same conclusion. Consider a visual teleoperator system consisting of a head-mounted binocular display and externally mounted video cameras for driving the display. The user of such a teleoperator system experiences full presence in the physical environment while being intellectually aware that the visual stimulation comes only indirectly by way of the display. Because the added degree of mediation associated with the display pales in comparison with the degree of mediation associated with visual processing, the representational nature of perception when using a teleoperator points to the representational nature of ordinary perception.

How one conceives of perception determines how one goes about measuring perceived distance. For the researcher who accepts naïve realism, perceiving distance is simply a matter of judging distance in "physical space." Under this conception, one can simply ask the observer how far away objects are and then correct for any judgmental biases, such as reporting 1 m as 2 m. In contrast, for researchers who adhere to the representational conception, the measurement of distance perception is a major challenge, inasmuch as one is attempting to measure aspects of an internal representation. Because one starts with behavior of some kind (e.g., verbal report, action) and because there can be distortions associated with the readout from internal representation to behavior, measurement of perception depends on a theory connecting internal representation to behavior, a theory that is best developed using multiple response measures (e.g., Foley, 1977; Philbeck & Loomis, 1997).
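The logic of relying on multiple response measures can be made concrete with a toy simulation. The sketch below is illustrative only: the linear readout functions, bias parameters, and numerical values are assumptions chosen for the example, not estimates from any study cited here.

```python
import numpy as np

# Toy model: a single internal variable (perceived distance D') drives two
# different response measures, each through its own output transform.
# All parameter values below are arbitrary illustrative assumptions.

rng = np.random.default_rng(0)

physical_d = np.linspace(2.0, 20.0, 10)          # target distances (m)
perceived_d = 0.8 * physical_d                   # hypothetical perceptual compression

# Two readouts of the same internal variable:
verbal = 1.1 * perceived_d + rng.normal(0, 0.3, physical_d.size)   # verbal report (biased readout)
walked = 1.0 * perceived_d + rng.normal(0, 0.3, physical_d.size)   # blind walking (calibrated readout)

# If both measures are linear functions of the same internal variable,
# plotting one against the other should itself be well fit by a single line,
# regardless of how perceived distance relates to physical distance.
slope, intercept = np.polyfit(walked, verbal, 1)
print(f"verbal ≈ {slope:.2f} * walked + {intercept:.2f}")
```

A tight linear relation between two such measures is the kind of evidence, discussed later in this chapter, that is taken to indicate a common controlling variable, visually perceived distance.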

Some Methods for Measuring Perceived Distance

Verbal report and magnitude estimation are two traditional methods for measuring perceived distance (Da Silva, 1985). Figure 1.1 gives the results of a number of studies using verbal report for target distances out to 28 m (Andre & Rogers, 2006; Foley, Ribeiro, & Da Silva, 2004; Kelly, Loomis, & Beall, 2004; Knapp & Loomis, 2004; Loomis et al., 1998; Philbeck & Loomis, 1997). The data sets are generally well fit by linear functions with 0 intercepts, but the slopes are generally less than 1.0. The mean slope is 0.80.

Figure 1.1 Summary of verbal reports of distance for visual targets. The data from the different studies have been displaced vertically for purposes of clarity. The dashed line in each case represents correct responding. Sources from top to bottom: Experiment 3 of Philbeck and Loomis (1997), experimental condition of Experiment 2A of Loomis et al. (1998), calibration condition of Experiment 2A of Loomis et al. (1998), results for gymnasts in Experiment 1 of Loomis et al. (1998), results of full field of view condition of Knapp & Loomis (2004), mean data from the control conditions in the 3 experiments of Andre and Rogers (2006), results of Kelly, Loomis, and Beall (2004), and egocentric distance judgments of Foley et al. (2004).

Concerns about the possible intrusion of knowledge and belief into such judgments (Carlson, 1977; Gogel, 1974) have prompted the search for alternative methods. So-called indirect methods make use of other perceptual judgments thought to be less subject to intrusion by cognitive factors and then derive estimates of perceived distance by way of theory. Several of these methods rely on so-called percept-percept couplings. Space perception researchers have long known that perceptual variables often covary with one another (Epstein, 1982; Gogel, 1984; Sedgwick, 1986). In some cases these covariations may be the result of joint determination by common stimulus variables, but in other cases variation in one perceptual variable causes variation in another (Epstein, 1982; Gogel, 1990; Oyama, 1977); such causal covariations are referred to as percept-percept couplings. The best known coupling is that between perceived size and perceived egocentric distance and is referred to as size-distance invariance (Gilinsky, 1951; McCready, 1985; Sedgwick, 1986). Size-distance invariance is the relationship between perceived size (S') and perceived egocentric distance (D') for a visual stimulus of angular size α: S' = 2D' tan(α/2). A special case of size-distance invariance is Emmert's Law: varying the perceived distance of a stimulus causes perceived size to vary proportionally, with angular size held constant. Another coupling of perceptual variables is that between the perceived distance of a target and its perceived motion (Gogel, 1982, 1993). Gogel demonstrated that the perceived motion of an object can be altered by mere changes in its perceived distance while keeping all other variables constant. He developed a quantitative theory for this coupling between perceived distance and perceived motion and applied it in explaining the apparent motion of a variety of stationary objects, such as depth-reversing figures and the inverted facial mask (Gogel, 1990). The existence of percept-percept couplings is methodologically important, for these couplings can be used to measure perceived distance in situations where the researcher wishes observers not to be aware that perceived distance is being measured. Judgments of perceived size and perceived motion have been used to measure perceived distance (e.g., Gogel, Loomis, Newman, & Sharkey, 1985; Loomis & Knapp, 2003) and to demonstrate the effect of an experimental manipulation on perceived distance (e.g., Hutchison & Loomis, 2006a).

Another indirect method of measuring perceived distance involves judgments of collinearity and relies on the perception of exocentric direction. A visible pointer is adjusted by the observer to be aligned with the target stimulus (Wu, Klatzky, Shelton, & Stetten, 2005); these authors used the method to measure the perceived distance of targets within arm's reach under the assumption that the pointer is perceived correctly. Application of the method to the measurement of large perceived distances seems promising, but the method will have to compensate for systematic biases in exocentric direction perception (Cuijpers, Kappers, & Koenderink, 2000; Kelly et al., 2004).

Still other indirect methods rely on judgments of perceived exocentric extent and attempt, by way of theory, to construct scales of perceived distance. The best known example is the work by Gilinsky (1951) and, more recently, Ooi and He (2007). In their experiments, observers constructed a set of equal-appearing intervals on the ground extending directly away from the observer. The more distant intervals had to be made progressively larger in order to appear of constant size. Assuming that perceived egocentric distance over the ground plane to a given point is the concatenation of the equal-appearing intervals up to that point, the derived perceived distance can be associated with the corresponding cumulative physical distance. The derived function of perceived egocentric distance is compressively nonlinear even within 10 m and under full-cue conditions. Because the derived function is noticeably discrepant with other functions to be discussed here and because there are process interpretations for doubting that the derived function is indeed a measure of perceived distance, we do not discuss it further.
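The size-distance invariance relation introduced above is simple enough to compute directly. The following is a minimal numerical sketch of S' = 2D' tan(α/2) and of Emmert's law as its special case; the particular angular size and perceived distances are hypothetical values chosen only for illustration.

```python
import math

def perceived_size(perceived_distance_m: float, angular_size_deg: float) -> float:
    """Size-distance invariance: S' = 2 * D' * tan(alpha / 2)."""
    alpha = math.radians(angular_size_deg)
    return 2.0 * perceived_distance_m * math.tan(alpha / 2.0)

# Emmert's law as a special case: with angular size held constant,
# perceived size varies in proportion to perceived distance.
alpha_deg = 1.0                       # hypothetical constant angular size
for d_prime in (2.0, 4.0, 8.0):       # hypothetical perceived distances (m)
    s_prime = perceived_size(d_prime, alpha_deg)
    print(f"D' = {d_prime:4.1f} m  ->  S' = {s_prime:.3f} m")

# Doubling D' doubles S', which is why a judgment of perceived size can serve,
# via the invariance relation, as an indirect measure of perceived distance.
```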

Methods Based on Action and Spatial Updating

Given the importance of distance perception and the lack of consensus about how to measure it, researchers have occasionally proposed new measurement procedures. Here, we focus on relatively new methods for measuring perceived distance that rely on action, sometimes with the involvement of spatial updating. The typical procedure begins with the stationary observer viewing or listening to a target stimulus. After this period of "preview," further perceptual information about the target is removed by occluding vision and hearing, and the observer attempts to demonstrate knowledge of the target's location by some form of action (e.g., pointing, walking, or throwing a ball). Visually directed pointing was a term coined by Foley and Held (1972) to refer to blind pointing with the finger to the 3-D location of a visual target that had been previously viewed. This type of response has been used in other studies to measure the perceived locations of visual targets within arm's reach (e.g., Bingham, Bradley, Bailey, & Vinner, 2001; Foley, 1977; Loomis, Philbeck, & Zahorik, 2002). For more distant targets, ball or bean bag throwing has been used (Eby & Loomis, 1987; He, Wu, Ooi, Yarbrough, & Wu, 2004; Sahm et al., 2005; Smith & Smith, 1961). Another form of visually directed action, blind walking (sometimes called "open loop walking"), has been used to study the perception of distances of "action space" (distances beyond reaching but within the range of most action planning; Cutting & Vishton, 1995); here the observer typically views a target on the ground and attempts to walk to its location without vision. These various forms of open-loop behavior, along with others to be discussed, are referred to collectively as "perceptually directed action."

Many studies have used blind walking to assess the accuracy of perceiving the distances of targets viewed on the ground under full-cue conditions, for distances up to 28 m (Andre & Rogers, 2006; Corlett, Byblow, & Taylor, 1990; Corlett & Patla, 1987; Creem-Regehr et al., 2005; Elliott, 1987; Elliott, Jones, & Gray, 1990; Knapp & Loomis, 2004; Loomis et al., 1992; Loomis et al., 1998; Messing & Durgin, 2005; Rieser, Ashmead, Talor, & Youngquist, 1990; Steenhuis & Goodale, 1988; Thomson, 1983; Wu et al., 2004). Figure 1.2 shows many of the results, with the data sets shifted vertically for purposes of clarity. Except for two data sets, perceived distance is proportional to physical distance with no evidence of systematic error (slopes of the best fitting linear functions are generally close to 1, and intercepts are near zero). In contrast, when the same task, modified for audition, is used to study distance perception of sound-emitting sources heard out-of-doors, systematic errors are observed over the


Figure 1.2 Summary of blind walking results for vision. The data from the different studies have been displaced vertically for purposes of clarity. The dashed line in each case represents correct responding. Sources f rom top to b ottom: E xperiment 3 of P hilbeck and Loomis (1997), Experiment 1 of W u, Ooi, and He (2004), mean data from the control conditions in the 3 experiments of Andre and Rogers (2006), results of E lliott (1987), average of t wo g roups of ob servers f rom E xperiment 1 of Loomis et al. (1998), Experiment 2b of Loomis et al. (1998), Experiment 1 of Loomis et al. (1992), Thomson (1983), Rieser et al. (1990), Steenhuis and Goodale (1988), and Experiment 2a of Loomis et al. (1998).

same range of distances (Ashmead, DeFord, & Northington, 1995; Loomis et al., 1998; Speigle & Loomis, 1993). Figure 1.3 shows representative results; this time, the data sets have not been shifted vertically. The best linear functions have slopes close to 0.5, indicating response compression relative to the stimulus range, and there is considerable variability in the intercepts. We should mention, however, that in a recent review of these and other results obtained using other response measures including verbal report, Zahorik, Brungart, and Bronkhorst (2005) fit power functions to the data and generally found exponents less than 1.0, the interpretation being that perceived auditory distance is a compressively nonlinear function of source distance. Still, for the range of distances in Figure 1.3, the conclusion that they are linear functions with roughly constant slope but


Figure 1.3 Summary of bl ind walking results for aud ition. These are the actual data and have not been displaced vertically for purposes of clarity. The dashed line represents correct responding. Sources from top to bottom: Ashmead et al. (1995), Speigle and Loomis (1993), Experiment 1 of L oomis et al. (1998), and Experiment 2a of Loomis et al. (1998)

varying intercept is justified. The source of the variation in intercept is a mystery.

With vision, systematic errors do arise. Sinai, Ooi, and He (1998) and He et al. (2004) have found that when the ground surface is interrupted by a gap, visual targets resting on the ground are mislocalized even with full cue viewing. Larger systematic errors occur when visual cues to distance are minimal. Figure 1.4 gives the results of a study by Philbeck and Loomis (1997) in which blind walking responses and verbal report were obtained under two conditions: reduced cues (luminous targets of constant angular size at eye level in the dark) and full cues (the same targets placed on the floor with normal room lighting). When cues were minimal, both types of judgment showed large systematic errors, and when cues were abundant, both types of judgment showed small systematic errors. This study also showed that when the verbal responses were plotted against the walking responses in these and two other conditions, the data were


Figure 1.4 Results of a n e xperiment u sing b oth ve rbal re port a nd v isuallydirected blind walking in reduced-cue and full-cue conditions. Adaptation of Figure 5 f rom Philbeck, J. W. & L oomis, J. M . (1997). Comparison of t wo indicators of v isually p erceived e gocentric d istance u nder f ull-cue a nd reduced-cue conditions. Journal of E xperimental Ps ychology: H uman P erception a nd P erformance, 23, 72–85.

well fit by a single linear function, suggesting that variations in the two response measures are controlled by the same internal variable, visually perceived distance.

A related experiment was concerned with measuring the perceptual errors in visual virtual reality (Sahm et al., 2005). Observers performed blind walking and bean bag throwing to targets in both a real environment and a virtual environment modeled on the real environment. Prior to testing, observers were given feedback only about their throwing performance in the real environment. The results are given in Figure 1.5. The fact that the transition from the real to virtual environment produces the same errors in walking and throwing supports the claim that the two actions, one that involves locomotion and the other that does not, are controlled by the same internal variable, visually perceived distance. In addition, the results provide further support for the growing consensus that current virtual reality systems produce underperception of distance (Knapp, 1999; Thompson et al., 2004).

Triangulation Methods

The similarity of the walking measures and verbal reports above might be taken as evidence of a simple strategy for performing blind


Figure 1.5 Results of an experiment using both visually-directed blind walking and v isually d irected t hrowing i n re al a nd v irtual e nvironments. Re printing of Figure 3 of S ahm, C. S., Creem-Regehr, S. H., Thompson, W. B., & Wi llemsen, P. (2005). Throwing versus walking as indicators of distance perception in similar real and virtual environments. ACM Transactions on Applied Perception, 2, 35–45.

walking—while perceiving the target, estimate its distance in feet or meters, and then, with vision and hearing occluded, walk a distance equal to the estimate. Whereas t he blind walking t ask m ight well be per formed u sing this s imple st rategy, t here a re o ther cl osely r elated t asks t hat c annot. F oremost a re t riangulation t asks t hat r equire t he obs erver t o constantly u pdate t he est imated l ocation o f t he t arget wh ile m oving about i n t he absence of f urther perceptual i nput specifying its location. Figure 1.6 depicts three triangulation tasks that have been used. In “triangulation by pointing,” t he observer v iews (or listens

Figure 1.6 Three triangulation methods (see text for explanation).


to) a target and, then without further perceptual information about its location, walks along a straight path to a new location (specified by auditory or haptic signal) and then points toward the target. The pointing direction is used to triangulate the initially perceived and spatially updated target location. In one variant, the arm orientation was monitored continuously as the observer walked along a straight path (Loomis et al., 1992) after viewing a target on the ground up to 5.7 m away; the average pointing responses were highly accurate. "Triangulation by walking" (also, "triangulated walking") is similar to triangulation by pointing except that after the initial straight path, the observer turns and walks a short distance toward the target. The walking direction after the turn is used to triangulate the perceived and updated target location. Finally, in the "indirect walking" version of triangulation, the observer walks to a turn point (specified by auditory or haptic signal), and then attempts to walk the rest of the way to the updated target location.

Figure 1.7 gives the results of a number of experiments using the different triangulation methods to measure the perceived distance of targets viewed under full-cue conditions (Fukusima, Loomis, & Da Silva, 1997; Knapp, 1999; Loomis et al., 1998; Philbeck et al., 1997; Thompson et al., 2004); as in Figure 1.2, the data sets have been vertically shifted for clarity. Although the data are more variable than with blind walking (Figure 1.2), they indicate overall that perceived distance is proportional to target distance with little systematic error.

A Model of Perceptually Directed Action

Because blind walking and the triangulation methods just mentioned rely on actions that occur after the percept has disappeared, it might be argued that these methods cannot be used to measure perception because the action depends upon postperceptual processes (e.g., Proffitt et al., 2006). However, we maintain that a valid measurement method is one for which the variations in the indicated values (those resulting from the measurement process) are coupled to variations in the variable being measured and for which a calibration between the two has been established (Hutchison & Loomis, 2006b). As with any measurement device (e.g., a thermometer with an electronic display), the indirectness of the mechanism between the variable being measured and the indicator has no bearing on whether the indicated


Figure 1.7 Summary of t riangulation re sults for v ision u sing t riangulation by pointing, triangulation by walking, and indirect walking, all obtained under fullcue conditions. The data from the different studies have been displaced vertically for purposes of clarity. The dashed line in each case represents correct responding. Sources from top to b ottom: results of i ndirect walking by P hilbeck et al. (1997), results of t riangulation by w alking in real environment (Thompson et a l., 2004), outdoor results of Knapp (1999), average of two conditions from Experiment 3 (triangulation by w alking) of Fu kusima et al. (1997), Experiment 3 (direct and indirect walking) of Loomis et al. (1998), average of two conditions from Experiment 4 (triangulation by walking) of Fukusima et al. (1997), and average of two conditions from Experiment 2 (triangulation by pointing) of Fukusima et al. (1997).

values are proper measures of the variable of interest, here perceived distance. In the case of perceptually directed action, what is required is a theory linking the indicated value to perceived distance. Action can be used to measure perception provided that the postperceptual processes introduce no systematic biases or, if they do, that the biases can be co rrected for by w ay of c alibration. O f course, a s w ith a ny measurement device or method, the precision of measurement will ultimately be limited by random noise associated with each of the subsequent processes, even if systematic biases can be eliminated by calibration. Here, we present a model of perceptually directed action that links the perceptual representation to the observed behavior. The model


Figure 1.8 A block diagram of perceptually directed action (see text for explanation).

involves a number of processing stages (Figure 1.8). For similar models, see Böök and Gärling (1981), Loomis et a l. (1992), Medendorp, Van A sselt, a nd Gi elen (1999), a nd R ieser (1989). F irst, t he v isual, auditory, or haptic stimulus gives rise to the percept, which may or may not be coincident with the target location (Figure 1.9). Accompanying t he percept i s a m ore abst ract a nd probably m ore d iffuse “spatial i mage,” wh ich continues to ex ist i n representational spac e even after the percept ceases. There is evidence that the spatial images from different modalities are functionally equivalent, perhaps even amodal in nature (Avraamides, Loomis, Klatzky, & Golledge, 2004; Klatzky, L ippa, L oomis, & G olledge, 2 003; L oomis, L ippa e t a l., 2002). We assume that the spatial image is coincident with the percept, but future research may challenge this assumption; for now, it appears that error in the percept is carried over to the spatial image. When the actor begins moving, sensed changes in position and orientation (path i ntegration) re sult i n s patial up dating of t he s patial image (Böök & G ärling, 1981; L oarer & S avoyant, 1991; L oomis et al., 1992; Loomis, Klatzky, Golledge, & Philbeck, 1999; Rieser, 1989; Thomson, 1983). At any point in the traverse, as depicted in Figures 1.8 and 1.9, the observer may be asked to make some nonlocomotion response, such as pointing at or throwing to the target or verbally reporting the remaining distance. The response processes clearly are different for different types of response. An important assumption, to be discussed later, is that the response is computed in precisely the s ame fa shion wh ether ba sed o n t he co ncurrent per cept o f t he target or on the spatial image of the target (whenever the percept is absent). This assumption is depicted in Figure 1.8 by the convergence


Figure 1.9 From le ft to r ight, d epiction of 3 s uccessive mome nts d uring p erceptually directed action. A. The observer perceives a target closer than its physical distance. Accompanying t he perceived target is a more a bstract and spatially diff use spatial image. B. With the stimulus and its percept no longe r present, the observer move s t hrough s pace, up dating t he e gocentric d istance a nd d irection of t he target. I f path integration is accurate (as depicted here), t he spatial image remains stationary with respect to the physical environment. C. After moving, the observer can make another response to t he updated spatial image, by c ontinuing to move toward it, by pointing at it, by throwing at it, or by making a verbal report of the distance remaining.

of percepts, initial spatial images, and updated spatial images onto the output transforms for different types of responses. Not dep icted i n F igure 1.8 i s a n onperceptual i nput t o t he c reation of a spa tial image. Loomis and his colleagues (Avraamides et al., 2004; Klatzky et al., 2004; Loomis, Lippa et al., 2002) have shown that once a perso n forms a spa tial i mage, whether based on a spa tial percept or based on spatial language, subsequent behaviors (like spatial updating a nd exocentric d irection judgments) appear to be indifferent to the source of the input, suggesting that spatial images based on different inputs might be a modal. The implication is that the spatial image produced by vision, hearing, or touch, can, in principal, be modified by higher-level cognition so as not to be spatially coincident with the percept. Whether this dissociation between percept a nd spa tial i mage e ver oc curs r emains t o be de termined, but the evidence to be reviewed is consistent with the assumption of coincidence.
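The updating stage of the model can be illustrated with a small geometric sketch. The code below is a schematic illustration under stated assumptions, not the authors' implementation: it keeps an egocentric estimate of a remembered target, updates it from sensed translations and rotations (path integration), and reads out a pointing direction and remaining distance; the coordinate conventions and example values are my own.

```python
import math

class SpatialImage:
    """Egocentric record of a remembered target: x is ahead, y is to the left (meters)."""

    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    def update_translation(self, forward_m: float):
        # Walking forward brings the remembered target closer along the heading axis.
        self.x -= forward_m

    def update_rotation(self, turn_left_deg: float):
        # After a left turn, remembered locations rotate toward the right in the
        # egocentric frame (rotate coordinates by the negative of the turn angle).
        a = math.radians(-turn_left_deg)
        self.x, self.y = (self.x * math.cos(a) - self.y * math.sin(a),
                          self.x * math.sin(a) + self.y * math.cos(a))

    def pointing_direction_deg(self) -> float:
        # Direction to the updated target relative to current heading (left positive).
        return math.degrees(math.atan2(self.y, self.x))

    def distance_m(self) -> float:
        return math.hypot(self.x, self.y)

# Example: a target initially perceived 8 m straight ahead (possibly in error).
image = SpatialImage(x=8.0, y=0.0)
image.update_translation(5.0)              # walk 5 m forward without vision
image.update_rotation(90.0)                # turn 90 degrees to the left
print(image.pointing_direction_deg())      # about -90: target now lies to the right
print(image.distance_m())                  # about 3 m remaining
```

Under accurate path integration the updated image stays fixed in the environment, so any error remaining in the response can be attributed to the initially perceived location, which is what licenses using the response as a measure of perceived distance.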


Accurate pa th i ntegration a nd co nsequent ac curate u pdating m ean t hat t he spa tial i mage r emains fi xed w ith r eference t o the physical environment. If path integration is in error, multiple updated targets move with respect to the physical environment, but they move t ogether r igidly. I f only s ensed s peed but not heading during path integration is off by a constant factor, blind walking to a target over very different paths will cause the terminal points to coincide even though the convergence point will not coincide with the i nitially per ceived l ocation. C omparisons o f i ndirect w alking responses w ith d irect w alking r esponses t o t he s ame v isual t argets indicate that walking results in accurate path integration and consequent accurate spatial updating (Loomis et al., 1998, Philbeck et al., 1997). Figure 1.10 gives the average terminal locations for direct and indirect walking responses to visual and auditory targets

Figure 1.10 Stimulus layout and results of an experiment on spatial updating of visual and auditory targets by Loomis et al. (1998). The observer stood at the origin and saw or he ard a t arget (X) lo cated either 3 or 1 0 m d istant at a n azimuth of 80°, -30°, 30°, or 80°. Without further perceptual information about the target, the observer attempted to walk to its location either directly or indirectly. In the latter case, the observer was guided forward 5 m to the turn point and then attempted to walk the rest of the way to the target. The small open circles are the centroids of the direct path stopping points for the 7 observers, and the small closed circles are the centroids for t he indirect path stopping points. Reproduction of Fi gure 5.6 f rom Loomis, J. M., Klatzky, R. L., Golledge, R. G., & Philbeck, J. W. (1999). Human navigation by path integration. In R. G. Golledge (Ed.), Wayfinding: Cognitive mapping and other spatial processes (pp. 125–151). Baltimore: Johns Hopkins.


in one of these studies (Loomis et al., 1998). The congruence of the direct and indirect terminal points and their near coincidence with the v isual t argets dem onstrate t he ac curacy o f u pdating. The fact that the terminal points for audition are further from the auditory targets (along the initial target directions) than the vision signifies the poo rer acc uracy o f auditory d istance perception co mpared t o visual distance perception. Whereas actively controlled walking to targets at normal walking speeds produces accurate path integration and u pdating f or sh ort pa ths (e.g., P hilbeck, K latzky, B ehrmann, Loomis, & Goodridge, 2001), walking at unusual speeds or passive transport by w heelchair or ot her c onveyance ge nerally re sults i n degraded updating performance (Israël, Grasso, Georges-François, Tsuzuku, & Ba thos, 1 997; J uurmaa a nd S uonio, 1 975; Ma rlinsky, 1999a, 1999b; Mittelstaedt & Glasauer, 1991; Mittelstaedt & Mittelstaedt, 2001; Sholl, 1989), especially for older adults (Allen, Kirasic, Rashotte, & Haun, 2004). A more recent v ariant of perceptually d irected action prov ides a means of measuring the 3-D perceived location of a v isual stimulus perceived to lie above the ground plane (Ooi, Wu, & He, 2001, 2006). Figure 1.11 depicts t he procedure a nd t ypical pattern of results for luminous v isual targets placed on t he ground in an otherwise dark room. At the left, the observer views the target. Because of insufficient distance cues, the target is perceived to be closer than it is, resulting in the percept being elevated. The observer, wearing a blindfold, then walks out to the perceived and updated target and gestures by placing the hand at its location. Despite errors in distance (which are like those

Figure 1.11 Procedure and typical results for t he experiments of O oi, Wu, and He (2001, 2005). A glowing target was placed on the ground in an otherwise dark room. At t he le ft, t he ob server v iews t he t arget. B ecause of i nsufficient distance cues, t he target is perceived to b e closer t han it i s, resulting in t he percept being elevated. The observer, wearing a bl indfold, t hen walks out to t he perceived a nd updated target and crouches to place the hand at its location. The angle α is “angular declination” (or “height in the field”), which is an important cue to egocentric distance.


reported by Philbeck and Loomis [1997]; see Figure 1.4), the indicated locations lay in very nearly the same direction as the targets as viewed from the origin. Given the complexity of this “blind-walking–gesturing” r esponse, t his i s a r emarkable r esult. I t i s d ifficult to imagine any interpretation of this result other than one of systematic error in perceiving the target’s distance followed by accurate path integration and spatial updating. Similar evidence supporting this interpretation comes from the aforementioned studies involving both direct and indirect walking (Loomis, Klatzky et al., 1998; Loomis, Lippa et al., 2002; Philbeck et al. 1997). When people walked along indirect paths while updating, they traveled to very nearly the same locations as when traveling along direct paths. Importantly, when the terminal points cl early de viated f rom t he t argets i n ter ms o f d istance, i ndicating distance errors, the directions were nonetheless quite accurate (Loomis, Klatzky et al., 1998; Loomis, Lippa et al., 2002; Philbeck et al. 1997). The conclusion is strong that observers were traveling to the perceived and updated target locations. Still further evidence that perceptually directed action can be used t o m easure per ceived l ocation a nd, t hus, per ceived d istance, comes f rom recent work by Ooi et a l. (2006). The y performed two experiments, one involving t he blind-walking—gesturing response to luminous point targets in the dark—and the other involving judgments of the shapes of luminous figures, also viewed in the dark. The indicated locations and judged slants were consistent with the targets in the two tasks being located on an implicit surface, extending from near the observer’s feet and moving outward while curving upward; the a uthors h ypothesize t hat t he i mplicit su rface r eflects intrinsic biases of the visual system, like the specific distance tendency (Gogel & Tietz, 1979). It is significant that two such very different responses, one involving action and other involving judgments of shape, can be understood in terms of a unitary perceptual process. The flexibility of perceptually directed action is indicated by studies demonstrating on-line modification of the response. In a h ighly influential paper, Thomson (1983) reported an experiment that prevented t he obs erver f rom ex ecuting a p replanned r esponse. On a given trial, the observer viewed a target on the ground some distance ahead. After the observer began blind walking toward the target, the experimenter gave a signal to stop and throw a beanbag the remaining d istance. A ccuracy w as h igh e ven t hough obs ervers d id n ot know at wh ich point t hey would be c ued to t hrow, demonstrating a flexible response combining two forms of action. Another exam-


Another example of on-line modification comes from one of the experiments on direct and indirect walking to targets (Philbeck et al., 1997). Here observers viewed a target, after which their vision and hearing were occluded. On cue from the experimenter, the observers walked to the target along one of three paths. Because they were not cued as to which path to take until after vision was occluded, the excellent updating performance indicates that observers could not have been preprogramming the response. Given these results, it appears that action directed toward a goal is extremely flexible. Presumably, once a goal has been established, any combination of actions, including walking, sidestepping, crawling, and throwing, can be assembled "on the fly" to indicate the location of an initially perceived target. For other evidence of on-line adjustment of perceptually directed action, see the paper by Farrell and Thomson (1999).

Despite the involvement of cognitive and locomotor processes in perceptually directed action, the results of a number of experiments demonstrate that this method provides a pure, albeit indirect, measure of perceived distance (and direction). They do so by demonstrating that cognitive and motor processes contribute little to the systematic error of task performance. Especially compelling are the results demonstrating the absence of systematic error in path integration, spatial updating, and response execution through the congruence of direct and indirect walking paths (Loomis, Klatzky et al., 1998; Loomis, Lippa et al., 2002; Philbeck et al., 1997; see Figure 1.10) and the result of Ooi et al. (2001, 2006) showing that the terminal location of a complex spatial response is consistent with the initial target direction. In addition, the close coupling of action and verbal responses in both reduced-cue and full-cue conditions (Philbeck & Loomis, 1997), the close coupling of blind walking and throwing responses in both real and virtual environments (Sahm et al., 2005), and the close coupling of blind walking/gesturing and shape judgments (Ooi et al., 2006; Wu et al., 2004) provide further support that the action-based method measures distance perception. A later section, showing that spatial updating can be used to correct for biases in verbal report, provides still further evidence.
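The logic of the direct–indirect congruence argument can be made concrete with a toy computation. In this sketch (Python, with made-up numbers rather than data from the studies cited), the target's distance is misperceived but path integration and updating are assumed to be error-free; the terminal point of an indirect path then coincides with the perceived location, so the response direction matches the target direction while the distance error mirrors the perceptual error.

```python
import numpy as np

def polar(deg: float, dist: float) -> np.ndarray:
    """Location at a given direction (deg, clockwise from straight ahead) and distance (m)."""
    a = np.radians(deg)
    return np.array([dist * np.sin(a), dist * np.cos(a)])

# Target 20 deg to the right at 10 m; its distance is misperceived (compressed) to 7 m.
true_target = polar(20, 10.0)
perceived_target = polar(20, 7.0)

# Indirect path: walk 3 m straight ahead first, then walk to the updated location.
waypoint = np.array([0.0, 3.0])
# Error-free path integration keeps the spatial image fixed in space, so the
# terminal point is the perceived location regardless of the path taken.
terminal = waypoint + (perceived_target - waypoint)

direction_deg = np.degrees(np.arctan2(terminal[0], terminal[1]))
print(direction_deg)             # 20.0 -> direction from the origin matches the target direction
print(np.linalg.norm(terminal))  # 7.0  -> the distance error mirrors the perceptual error
```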

Role of Calibration in Perceptually Directed Action

Although we believe that cognitive and motor processes do not contribute appreciably to systematic errors in perceptually directed action, such processes are clearly required to control and execute the response.


In light of this, and given the very high accuracy of visually directed action under full-cue conditions, there is good reason to believe that perceptually directed action depends on some adaptation process that keeps the action system in calibration. Indeed, several forms of adaptation of perceptually directed action have been demonstrated (Durgin, Fox, & Kim, 2003; Durgin, Gigone, & Scott, 2005; Durgin, Pelah, Fox, Lewis, Kane et al., 2005; Ellard & Shaughnessy, 2003; Mohler, Creem-Regehr, & Thompson, 2006; Ooi et al., 2001; Philbeck, O'Leary, & Lew, 2004; Richardson & Waller, 2005; Rieser, Pick, Ashmead, & Garing, 1995; Witt et al., 2004). Does adaptation call into question the claim that perceptually directed action can be used to measure perception?

Because perception, path integration/spatial updating, and response execution all contribute to perceptually directed action, adaptation of any of these processes or of the couplings between them can be expected to alter task performance. Adaptation that alters perceived distance will influence all measures of distance perception (verbal report, size-based measures, and all action-based measures, including walking and throwing) for whatever sensory modality has been adapted. In the aforementioned study, Ooi et al. (2001) used prism adaptation during a prior period of walking with vision to alter the effective "angular declination," one of the cues to target distance (represented by α in Figure 1.11). The results showed that perceived distance was indeed altered, because walking and throwing responses were similarly affected, even though only walking was used during adaptation.

Adaptation of the path integration process alters the gain of sensed self-motion relative to actual physical motion and is expected to have a uniform influence on walked distance. That is, if the gain is halved, observers ought to walk twice as far in order to arrive at the updated targets, and this effect should apply to all targets regardless of initial distance and the path taken. If the adaptation of walking speed is specific to walking direction, then the effect of adaptation on path integration will depend upon the walking direction. Rieser et al. (1995) had observers walk on a treadmill while being pulled by a tractor, thereby altering the normal relationship between vision and the proprioceptive cues of walking. Adaptation to this altered relationship produced reliable changes in the walked distance to previewed targets but did not affect throwing to targets, a result that rules out perceptual adaptation.


The magnitude of recalibration depended upon walking direction. A likely interpretation is that sensed self-motion was altered by the adaptation process. Durgin and his colleagues have additional evidence of recalibration of sensed self-motion (Durgin, Fox et al., 2003; Durgin, Gigone et al., 2005; Durgin, Pelah et al., 2005), although their results and interpretation of the cause of recalibration differ somewhat from those of Rieser et al. (1995). It might be thought that recalibration involves a comparison of sensed self-motion signaled by idiothetic (proprioceptive and inertial) cues and the overall pattern of optic flow, but recent results by Thompson, Mohler, and Creem-Regehr (2005) show that the recalibration depends upon the perceived scale of the environment, with optic flow held constant. This means that recalibration is determined by the comparison between sensed self-motion signaled by idiothetic cues and the sensed self-motion based on visual perception of the environment, which depends upon distance perception.

The accuracy of visually directed action under full cues (Figures 1.2 and 1.7) clearly relies on calibration of the gain of walking relative to visual perception. This calibration is likely to be induced by sensing of just the near visible environment (e.g., Thompson et al., 2005). An interesting implication is that adaptation of visually sensed self-motion ought to affect any form of action based on spatial updating, regardless of the modality with which the target is perceived; thus, the indicated distance to a target based on perceptually directed action will be altered by the same amount by visual recalibration whether the targets are visual, auditory, or haptic, provided that the initially perceived distances are the same for the different modalities.

An experiment comparing the effects of feedback on blind walking and verbal reports (Mohler et al., 2006) showed evidence of a form of recalibration not confined to the action system. To induce recalibration, the authors took advantage of the systematic underperception of distance in virtual reality. Observers gave verbal reports and performed blind walking to targets seen in the virtual environment before and after getting feedback about their errors. During the feedback phase, observers blind walked to the estimated locations of the targets and were given feedback about their errors. Both walking responses and verbal reports showed considerable improvement with the feedback. Because verbal reports were affected, the recalibration cannot be confined to a change in sensed self-motion. Although the common recalibration is consistent with a modification of perceived distance, the authors conclude that it is more likely a result of a cognitive rule that influences both types of responses.


If true, this might be the result of a cognitive alteration in the spatial image so that it does not coincide with the perceived target. If so, triangulation responses ought to be similarly affected.

Still another form of adaptation has been demonstrated by Ellard and Shaughnessy (2003). In their experiment, observers viewed targets at varying distances on different trials and blind walked to them. For two of the targets, observers were given false feedback about the accuracy of their responses. Telling observers that they had undershot the target resulted in overshooting on subsequent trials. This form of adaptation was specific to the targets for which false feedback was given.

The result of Ellard and Shaughnessy (2003) raises the possibility of a form of adaptation that might undermine the claim that perceptually directed action measures perception. In particular, this type of adaptation could potentially explain the linearity (proportionality) between responded distance and visual target distance under full cues even if perceived distance were in fact a compressively nonlinear function of target distance, as claimed, for example, by Gilinsky (1951). It would have to be a type of adaptation that modifies neither perception (which affects all responses, whether action-based or not) nor path integration (which affects all walked distances by the same scale factor). In addition, because of the aforementioned triangulation results, it would have to affect the coupling between perceived distance and sensed displacements from the origin, regardless of the path taken, and do so in a way that varies nonlinearly with distance from the origin (so as to compensate for the putative nonlinearity between target and perceived distances).

There are at least three lines of evidence against the hypothesis that this type of distance-specific adaptation undermines the measurement of perceived distance. The first is that people rarely view distant targets and then walk to them without vision. Error feedback following such blind walking would be needed to "calibrate" perceptually directed action based on the putative nonlinearity between target distance and perceived distance.

The second line of evidence is concerned with whether adaptation following visual feedback about open-loop walking errors generalizes to other forms of perceptually directed action. Richardson and Waller (2005; Experiment 2) had observers perform blind walking to targets in virtual reality.


At the outset, their observers walked to locations only about half of the simulated target distances, along both direct and indirect paths, indicating the underperception of distance in virtual reality found by others using a variety of methods (Knapp, 1999; Sahm et al., 2005; Thompson et al., 2004). Observers were then given a period of training involving open-loop walking to the targets along direct paths; after arriving at the estimated position of the target on each trial, the observer was given explicit feedback about the undershoot error. After training, observers were once again tested using direct and indirect walking. The training eliminated 79% of the undershoot error for direct walking but only 27% of the undershoot error for indirect walking. The large difference in amounts of recalibration argues that even if people were to receive explicit feedback about their blind walking in the real world, allowing walking to compensate for a putative nonlinearity in perceived distance, this type of recalibration still would not explain the high accuracy with which people perform triangulation tasks.

In a follow-up study, Richardson and Waller (2007) found that observers who were allowed to walk around in immersive virtual environments while continuously interacting with visual targets exhibited a more general form of recalibration. Contrasting with the results obtained with explicit feedback in their earlier study, the results of this study showed that the implicit feedback accompanying interaction with the environment during the training phase did allow for an equal amount of recalibration when walking open-loop along direct and indirect paths during the testing phase. Prima facie, this result appears to support the hypothesis that recalibration accounts for accurate visually directed action despite nonlinear functions for egocentric distance. However, their result is also consistent with two other hypotheses: recalibration of visual perception and recalibration of sensed self-motion. Further experiments not relying on updating (e.g., verbal report and ball throwing) are needed to distinguish among the three alternative hypotheses.

The third line of evidence against the hypothesis is made possible by comparing the accurate responses to visual targets with the systematically compressed responses to auditory targets. Figure 1.12 gives the results of two experiments (Loomis et al., 1998) from 12 observers who made both verbal and blind walking responses to visual targets and 12 observers who made both types of responses to auditory targets; some observers were given targets at 4, 10, and 16 m, and others were given targets at 4, 8, 12, and 16 m.


Figure 1.12 Results of two experiments on visual and auditory distance perception (Experiments 1 and 2a from Loomis et al. [1998]). The same observers made verbal and blind walking responses to both visual and auditory targets in a large open field. Seven observers responded to targets at 4, 10, and 16 m, and 5 observers responded to targets at 4, 8, 12, and 16 m. The best-fitting linear functions are plotted as well. In the right panel, arrows indicate how, for a given visual target distance, the corresponding "equivalent" auditory distance was determined, this being the distance of the auditory target which produced the same walked distance.

The best-fitting linear functions (with 0 intercepts) for the combined data sets are plotted as well.

If the accurate responses to visual targets reflect some sort of calibration process acting on the visually based action process, presumably the same calibration does not apply to auditorily based action, given the very large systematic errors. Thus, the two action processes must involve different calibration functions. This means that if a visual target distance and an auditory target distance produce the same value of blind walking, the corresponding values of visually perceived and auditorily perceived distance must be different. Thus, because the process of making a verbal report is common to both modalities, we would expect the verbal reports to differ for visual and auditory target distances that produce the same action response, assuming that the visually based action responses have been calibrated through experience.

To test this idea, we have used the best-fitting linear functions to the blind walking (motor) responses (Figure 1.12) to find, for each visual target distance, the corresponding auditory target distance that produced the same walking response (see arrows in the right panel of Figure 1.12). The visual target distance and "equivalent" auditory target distances were then used to compute the corresponding verbal reports from the best-fitting linear functions.
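The equivalence analysis just described can be summarized in a few lines of code. The zero-intercept slopes below are illustrative assumptions (chosen only so that the ratio of verbal to walked responses is the same for the two modalities, which is what the data indicate), not the values fitted by Loomis et al. (1998); the point is that, under that condition, the verbal reports predicted for a visual distance and for its "equivalent" auditory distance coincide, which is why the relating function comes out as the identity.

```python
# Illustrative zero-intercept slopes (response = slope * target distance); these numbers
# are assumptions for the sketch, not the values fitted by Loomis et al. (1998).
walk_slope = {"vision": 0.95, "audition": 0.65}
verbal_slope = {"vision": 0.80, "audition": 0.55}

def equivalent_auditory_distance(d_visual: float) -> float:
    """Auditory target distance that yields the same blind-walking response as d_visual."""
    return walk_slope["vision"] * d_visual / walk_slope["audition"]

for d in (4, 8, 12, 16):
    d_aud = equivalent_auditory_distance(d)
    report_vis = verbal_slope["vision"] * d
    report_aud = verbal_slope["audition"] * d_aud
    print(f"visual {d:2d} m -> equivalent auditory {d_aud:5.1f} m; "
          f"verbal reports {report_vis:4.1f} m (vision) vs {report_aud:4.1f} m (audition)")
```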


The function relating the verbal reports for vision to the verbal reports for audition is very nearly the identity function (a linear function with slope 1.04 and intercept -0.22 m). This means that the visually based and auditorily based action processes are essentially identical. At least for these data, there is no evidence of a calibration of blind walking to compensate for a putative compressively nonlinear perceptual function.

On the basis of the three lines of evidence, we conclude that perceptually directed action does provide a comparatively pure measure of perception when action is properly calibrated to near surrounding space. Also, based on both the perceptually directed action results of Figures 1.2 and 1.7 and the verbal report results of Figure 1.1, we conclude that visual distance perception is a linear function of target distance out to at least 25 m on the ground plane when distance cues are abundant.

Using Spatial Updating to Correct for Bias in Verbal Reports of Egocentric Distance

Figures 1.1, 1.2, and 1.7 show that visually perceived egocentric distance is a linear function of physical distance out to 25 m and that verbal reports are generally about 80% of the distances indicated by action. This systematic difference in response values does not, by itself, indicate that different internal representations of distance control the two types of responses, for Foley (1977) and later Philbeck and Loomis (1997) presented a model in which the same internal representation of distance, ostensibly perceived distance, acts through different output transforms to determine the indicated responses (see the right part of Figure 1.8). Philbeck and Loomis (1997) showed that verbal reports and blind walking to targets, while systematically different, were related to each other by a fixed mapping when switching from reduced-cue to full-cue viewing (see also the above analysis in connection with Figure 1.12). This is consistent with there being just a single internal representation of distance acting through different output transforms. However, it is possible, even likely, that the output transforms for action and verbal report are sometimes affected differently by experimental manipulations, such that there is no fixed mapping between the two types of responses. For example, an observer can view a photograph and make reliable judgments of the distance of depicted objects, but asking observers to perform open-loop walking to the same depicted objects is likely to be met with reluctance followed by very noisy performance.


Andre and Rogers (2006) have found that experimental manipulations, including viewing targets through base-up and base-down prisms, can differentially affect blind walking and verbal report. They interpret their findings in terms of different internal representations of distance, but it is possible that the differential effects are produced at the level of response production and execution.

In connection with the hypothesis that action and verbal report involve the same internal representation but different output transforms, a special case can be identified in which verbal reports are subject to a systematic underreporting bias (b) that is a constant proportion of perceived egocentric distance.

Table 8.1

Reference (technique) | Contrast | Talairach coordinates (X, Y, Z) | Source in reference for Talairach coordinates | Source figure in reference for foci in our Figure 8.5

Dechent & Frahm, 2003 (fMRI) | Luminance flicker > Pattern flicker | N/A, -60, 2 (V6); N/A, -70, 15 (V6A) | Table 2 average (V6, V6A) | Figures 4, 5, 6

Pointing preparation
Astafiev et al., 2003 (fMRI) | Delayed pointing > Delayed saccade | -7, -79, 42 | Supplementary material table (Pcu) | Figure 1E
Connolly et al., 2003 (fMRI) | Delay activity for effector and location > Delay activity for effector only | -1, -74, 38 | Results section | Figure 3

Reaching preparation
Beurze et al., 2007 (fMRI) | Cue for target location > Fixation | -24, -67, 31 | Table 2 | Figure 2
Beurze et al., 2007 (fMRI) | Cue for effector > Fixation | -21, -70, 37 | Table 3 | Figure 3

Reaching
Prado et al., 2005 (fMRI) | Reach to nonfoveated targets > Reach to foveated targets | -10, -90, 36 | Table 2 (POJ) | Figure 3a
Pellijeff et al., 2006 (fMRI) | Reaching to novel position > Reaching to repeated position | -21, -58, 42 | Table 1 average (Pcu) | Figure 1
de Jong et al., 2001 (PET) | Reach to variable targets > Reach to the same target | -22, -82, 29 | Table 1 average (PCu, Cu, and POS) | Figure 1
Experiment 1 (fMRI) | Reach-to-touch > Touch AND Reach-to-grasp > Grasp | -7, -82, 30 | Average (upper and lower POS) | Figure 1

V6 retinotopy
Pitzalis et al., 2006 (fMRI) | Wide-field retinotopic map | -11, -72, 46 | Results section | Figure 10

Near preference
Experiment 2 (fMRI) | Passive viewing within reach > Passive viewing outside reach | 1, -75, 29 | | Figure 2
Experiment 3b (fMRI) | Vergence near the head > Vergence far from the head | -8, -86, 28 | | Figure 3

The fixation point was located midway between the two objects. The room remained dark except for a 2 s period on each trial in which the object was illuminated and the action was executed. Prior to each trial, the experimenter placed a new object on the platform and the subject received an auditory cue via headphones to "reach," "grasp," or "look" on the upcoming trial. At the beginning of each trial, a bright LED mounted on the ceiling of the magnet was illuminated for 2 s, prompting the subjects to perform the cued action (and then return the hand to the starting location) or to passively view the object. After each trial, the subject rested in darkness for a 12 s intertrial interval.


We first identified brain areas involved in the grip component by performing a random-effects contrast between grasping objects at the reachable location vs. touching objects at the reachable location, consistent with previous studies (Binkofski et al., 1998; Culham et al., 2003; Frey, Vinton, Norlund, & Grafton, 2005). As expected, this contrast produced activation in the anterior intraparietal (AIP) cortex, specifically at the junction of the IPS and the postcentral sulcus (PCS; see Figure 8.2b; Talairach coordinates in Table 8.1). AIP also showed higher activation for grasping vs. reaching at the adjacent location (Figure 8.2c). We then identified brain areas involved in the transport component by performing a contrast between touching objects at the reachable location vs. touching objects at the adjacent location. This contrast produced activation in SPOC (see Figure 8.2d), which also showed higher activation for grasping objects at the reachable vs. adjacent location (Figure 8.2e). The SPOC activation for the two passive viewing conditions was identical (Figure 8.2e), suggesting that stimulus confounds (such as retinal location) could not account for the activation difference attributed to the transport component.

Implications

These results demonstrate that the transport and grip components of a reach-to-grasp task rely on different brain structures. While AIP is activated by the computation of grip aperture regardless of whether a reach is required to acquire the object, SPOC is much more active when actions are executed toward an object requiring arm extension. A functional dissociation between the two components does not imply that they work separately from one another. Indeed, the two components take place simultaneously, and behavioral experiments have shown that they are closely choreographed. In the future, functional connectivity studies would be valuable for investigating the nature of the crosstalk between SPOC and AIP.
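For readers unfamiliar with how such contrasts are computed, the following sketch shows the general form of a random-effects analysis on region-of-interest data. The beta values are fabricated placeholders and the condition names are ours; this is not the authors' analysis pipeline, only an illustration of the two contrasts described above (grip: grasp vs. touch at the reachable location; transport: touch at the reachable vs. the adjacent location).

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject beta estimates (percent signal change) for one region;
# rows = subjects, columns follow `conditions`. Values are made up for illustration.
conditions = ["grasp_reachable", "touch_reachable", "touch_adjacent"]
betas = np.array([
    [1.2, 0.9, 0.4],
    [1.0, 0.8, 0.5],
    [1.4, 1.1, 0.6],
    [0.9, 0.7, 0.3],
    [1.1, 0.9, 0.5],
])

def paired_contrast(a: str, b: str):
    """Random-effects (across-subject) paired t-test for condition a > condition b."""
    diff = betas[:, conditions.index(a)] - betas[:, conditions.index(b)]
    t, p = stats.ttest_1samp(diff, popmean=0.0)
    return diff.mean(), t, p

# Grip component: grasping vs. touching at the same (reachable) location.
print(paired_contrast("grasp_reachable", "touch_reachable"))
# Transport component: touching at the reachable vs. the adjacent location.
print(paired_contrast("touch_reachable", "touch_adjacent"))
```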


Experiment 2: A Preference for Objects within Arm's Length in Superior Parieto-Occipital Cortex

Rationale

We reasoned that if SPOC is involved in reaching movements, it may show a preferential response to objects within reachable space. Given past research from our lab (Cavina-Pratesi et al., 2007) showing that human AIP and SPOC are activated by the visual presentation of an object within reachable space even without any overt action, we investigated whether such passive viewing responses would be modulated by whether objects were within vs. out of reach.

Methods and Results

Within the same sessions as Experiment 1, and using the same setup and the same 10 subjects, we ran Experiment 2 to examine whether the response in transport- and grip-related areas would be modulated by object distance. Once again, we presented objects in the adjacent and reachable locations; however, we also included an additional location that was beyond reach (see Figure 8.3a). Subjects maintained fixation on a central point throughout all trials. On some trials, subjects were instructed to reach-to-touch or reach-to-grasp objects at the reachable location (actions were never performed to the other two locations). On other trials, subjects simply passively viewed an object placed at any of the three locations (adjacent, reachable, or unreachable).

We performed a conjunction analysis to identify regions that were more activated during passive viewing for objects within reach than outside of reach ([adjacent > unreachable] AND [reachable > unreachable]). As shown in Figure 8.3b, this contrast produced activation in SPOC (Talairach coordinates in Table 8.1). As expected given the contrast used to identify the area, there was higher activation for passive viewing of the adjacent and reachable locations than the unreachable location; in addition, the area responded more strongly to grasping and reaching (at the reachable location) than to passive viewing (Figure 8.3b). The activation partially overlapped with the transport-related region identified in Experiment 1.
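One common way to implement a conjunction of this kind is the minimum-statistic criterion: a voxel (or region) is counted only if both component contrasts exceed threshold. The sketch below illustrates that logic with made-up t values and an assumed threshold; the chapter does not specify the exact criterion used.

```python
import numpy as np

# Hypothetical voxelwise t-maps for the two component contrasts (values made up).
t_adjacent_vs_unreachable  = np.array([3.4, 1.2, 4.1, 0.8, 2.9])
t_reachable_vs_unreachable = np.array([2.8, 3.5, 3.9, 0.5, 3.1])

t_threshold = 2.78  # assumed per-contrast threshold, chosen arbitrarily for the sketch

# Minimum-statistic conjunction: a voxel survives only if BOTH contrasts exceed threshold,
# i.e., the smaller of the two t values is above threshold.
conjunction = np.minimum(t_adjacent_vs_unreachable, t_reachable_vs_unreachable) > t_threshold
print(conjunction)  # [ True False  True False  True]
```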


Figure 8.3 Methods, statistical maps, and fMRI activation for Experiment 2, investigating responses to reachable vs. unreachable objects. a) Schematic representation of the three possible locations at which objects were presented during passive viewing trials. The arc highlights the area corresponding to the moveable range of the arm. The cross represents the location of the fixation point. In addition to these three conditions, two other conditions, not shown, were included: grasping an object at the reachable location and touching an object at the reachable location. b) Group activation showing the region of SPOC that was activated by a conjunction analysis of ([adjacent > unreachable] AND [reachable > unreachable]). c) Bar graphs display the magnitude of peak activation (%BSC) in all conditions for the region circled in b. A full-color version of the figure is available online at http://psychology.uwo.ca/culhamlab/PDFs/Culham_etal_CMUchapter8_ColorFigs.pdf

Implications

These results are consistent with earlier suggestions that peripersonal space may have a particular relevance within the dorsal stream. Specifically, they suggest that neurons within SPOC show a preferential response to objects within reachable space, even when no explicit action is required. These findings are consistent with the suggestion that an object can automatically evoke affordances, potential actions that can be performed on that particular object (Gibson, 1979). Moreover, they suggest that such affordances may have neural correlates within brain areas responsible for particular types of actions.

We have additional control experiments underway to ensure that these results are not due to possible stimulus confounds such as object size or position within the visual field; however, we think such confounds are unlikely to account for our data. In our experiments, the objects had the same physical size, but naturally the further objects subtended a smaller retinal image size than the closer objects.


Although some brain areas within the ventral stream have been found to be modulated by retinal image size (Hasson, Harel, Levy, & Malach, 2003; Malach, Levy, & Hasson, 2002), our activation was found within the dorsal stream, where one would expect real-world size to be more relevant than retinal size.

Another possible concern is the difference in retinal position of the objects. The placement of the objects was restricted by the reachable space, which was limited to an arc-shaped zone with the fulcrum at the right elbow. Thus, the retinal location of the objects in the three positions could not be held constant. Based on the geometry of the setup: (1) all three objects were in the lower visual field, with the near object being more peripheral and the far object appearing closer to the fovea; (2) the adjacent and unreachable objects were in the left visual field, while the reachable object was in the right field; and (3) the fixation point was midway in depth between the adjacent and reachable objects (as in Experiment 1). We do not believe that these factors contributed to our findings because: (1) there were no activation differences in SPOC between the adjacent and reachable objects, suggesting that retinal eccentricity is unlikely to play a role; (2) if visual hemifield were a critical factor, we would predict greater left-hemisphere activation for objects in the reachable location compared to the adjacent and unreachable locations (with the converse pattern in the right hemisphere), but no such pattern was observed; and (3) given that SPOC lies within the dorsal stream and is sensitive only to low spatial frequencies, it is unlikely to be sensitive to the image blurring that would be strongest for the furthest object.

Given that the reach-selective SPOC appears to be more activated by objects in reachable space than beyond, a future line of research will investigate whether this effect can be modulated by extending peripersonal space by providing the subject with a tool. Growing evidence suggests that tools can extend the range of action space and that this can affect neural and behavioral responses. A seminal study by Iriki and colleagues (1996) demonstrated that when a macaque monkey learns to use a tool, the receptive fields of reach-selective neurons in the intraparietal cortex expand to encompass the space that becomes reachable with the tool. Human neuropsychological studies have also found that peripersonal space is modified by the availability of a tool. For example, a patient with left neglect in peripersonal space showed an extension of that neglect to far space during line bisection tasks when using a stick but not when using a laser pointer, suggesting that the stick was treated as an extension of the body but the laser pointer was not (Berti & Frassinetti, 2000).


Although these human neuropsychological studies suggest that the human brain, like the monkey brain, contains neurons tuned to action space, the large extent of the lesions makes it difficult to determine which areas contain such neurons. We expect that SPOC is one such region and that its response to objects during passive viewing should be modulated by the availability of a tool to extend reachable space.

Experiment 3: A Preference for Near Gaze in Superior Parieto-Occipital Cortex

Rationale

Experiment 3 from our lab (Quinlan & Culham, 2007) also suggests that the human SPOC may be particularly responsive to near space. Specifically, we found that SPOC activation was modulated by gaze distance, with stronger responses when subjects were fixating upon a near point than a far point.

This research arose from an earlier experiment that had originally been intended to examine the possibility of a preference for near space in a human area that has been proposed as the human functional equivalent of the macaque ventral intraparietal (VIP) area (Bremmer et al., 2001; Goltz et al., 2001; see also Sereno & Huang, 2006). Electrophysiological studies have shown that a subset of neurons within macaque VIP respond more strongly to motion in ultra-near space (very close to the face) than at further distances (Colby, Duhamel, & Goldberg, 1993), so we investigated whether putative human VIP demonstrated a similar near preference for motion. In an initial experiment, we had presented subjects with patterned objects that loomed toward the face and receded. The objects could be presented at one of three distance ranges: near the face, above the hand, or above the feet. Stimuli were carefully equated for low-level visual properties such as visual angle, velocity, and so forth. Although we did not observe a preference for objects moving in near vs. far space within the putative human VIP, we did observe activation in SPOC. In our initial experiments, we had instructed subjects to follow the looming-and-receding targets with their eyes.


Thus, one factor that may have led to activation in the superior occipital cortex was the distance at which gaze was directed. We conducted an experiment to determine whether simply having the eyes gaze at a near vs. a far point could induce activation in the superior parieto-occipital cortex.

When the eyes are directed to close targets, a near response is invoked that consists of three components called the near triad. First, when looking at near targets, the eyes rotate inward to maintain fixation on the object with each eye (vergence). Second, the lens of the eye thickens to keep the object in focus (accommodation). Third, the pupil constricts to increase the depth of field. Although these components have sometimes been studied in isolation (Hasebe et al., 1999; Richter, Costello, Sponheim, Lee, & Pardo, 2004; Richter, Lee, & Pardo, 2000), in the real world they co-occur. Therefore, we simply asked the subjects to look at each point, such that vergence, accommodation, and pupil size all provided cues to the depth of the fixation point.
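The vergence component can be quantified directly. A minimal sketch, assuming an interpupillary distance of 6.3 cm (a typical value, not reported in the chapter) and treating each LED as lying on the midline: the total convergence angle is 2·atan((IPD/2)/d), which differs markedly across the three fixation distances used in the experiment described next.

```python
import math

IPD_CM = 6.3  # assumed interpupillary distance; not reported in the chapter

def vergence_angle_deg(fixation_distance_cm: float) -> float:
    """Total vergence angle (between the two lines of sight) for a midline fixation point."""
    return 2 * math.degrees(math.atan((IPD_CM / 2) / fixation_distance_cm))

for d in (15, 26, 84):  # the three LED distances used in Experiment 3
    print(f"{d:2d} cm -> {vergence_angle_deg(d):4.1f} deg of convergence")
```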

Methods and Results

We gave eight right-handed subjects the simple task of gazing at small (0.7°) stationary lights (LEDs) at one of three distances along the natural line of sight: 15, 26, or 84 cm from the eye (see Figure 8.4a). The LEDs were viewed in an otherwise completely dark scanner and were calibrated to have the same luminance and visual angle. Only one LED was illuminated at a time, and subjects were instructed to maintain fixation on whichever LED was currently illuminated. When one LED was extinguished and another was illuminated, the subject made a simple vergence shift (without any saccadic components) from the first LED to the second. LEDs were illuminated for 16 s at a time in pseudo-random order. Subjects lay supine within the magnet and viewed the LEDs through a mirror tilted at approximately 45°. A surface coil was used to provide high signal-to-noise within the occipital and parietal cortices.

A contrast of near vs. far viewing produced robust activation just posterior to the superior parieto-occipital sulcus in all eight subjects (Figures 8.4b and 8.4c; Talairach coordinates in Table 8.1). The time courses from this region within SPOC showed that, following an initial transient response to a change in gaze distance, there was a sustained response that scaled with the distance of the fixation point (highest for the near point, lowest for the far point).


Figure 8.4 Methods, statistical maps, and fMRI activation for Experiment 3, investigating responses to near vs. far vergence. a) Schematic representation of the eye positions used in the distance fixation experiment. The eyeballs and the vergence angle are shown from above. Subjects fixated one of three illuminated light-emitting diodes (LEDs) that were positioned at 15, 26, and 84 cm. Fixation was held for 16 seconds, at which time the LED was extinguished and a new LED was illuminated. b) Activation map resulting from a comparison of near vs. far fixations. c) Bar graph displays the magnitude of sustained activation in SPOC (%BSC) for each fixation distance, averaged across subjects. A full-color version of the figure is available online at http://psychology.uwo.ca/culhamlab/PDFs/Culham_etal_CMUchapter8_ColorFigs.pdf

At lower thresholds, we observed activation sites elsewhere in the occipital lobe, though these were less consistent between subjects and less robust than the SPOC focus. Eye tracking outside the scanner indicated that the activation differences were not due to differences in the stability of gaze across the three distances.

Implications

These results suggest that SPOC activation is modulated by gaze distance, which may provide the dorsal stream with information about object distance for action. In order to compute real-world distance, the visual system needs information about where the eyes are currently directed (based on visual signals, proprioceptive signals from the eye muscles, and/or efference copy signals generated with the command to move the eyes) as well as information about the location of the target with respect to gaze (based on retinal location and binocular disparity).
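A schematic of the kind of computation being described: combine an eye-position signal (gaze direction plus the vergence-specified fixation distance) with retinal information (eccentricity and relative disparity) to obtain a head-centered target location. The formulae and numbers below are illustrative textbook approximations, not the authors' model; the interpupillary distance is an assumption.

```python
import math

IPD_CM = 6.3  # assumed interpupillary distance (not given in the chapter)

def head_centered_target(gaze_azimuth_deg: float,
                         fixation_distance_cm: float,
                         retinal_eccentricity_deg: float,
                         relative_disparity_deg: float):
    """Combine an eye-position signal with retinal information to localize a target.

    Direction: gaze direction plus the target's retinal eccentricity.
    Distance: fixation distance (signalled by vergence) plus the depth offset implied by
    relative disparity, using the small-angle approximation
    delta_depth ~= disparity * distance**2 / IPD.
    """
    direction_deg = gaze_azimuth_deg + retinal_eccentricity_deg
    disparity_rad = math.radians(relative_disparity_deg)
    depth_offset = disparity_rad * fixation_distance_cm ** 2 / IPD_CM
    return direction_deg, fixation_distance_cm + depth_offset

# Eyes 10 deg left of the head's midline, converged at 40 cm; the target's image is
# 5 deg right of the fovea with 0.2 deg of uncrossed (far) relative disparity.
print(head_centered_target(-10.0, 40.0, 5.0, 0.2))  # (-5.0, ~40.9 cm)
```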


We propose that the modulation of SPOC activity by gaze distance provides the first key component necessary for computing target locations for action. Both single neurons of the macaque PRR (Cohen & Andersen, 2002) and a reach-related region of the human brain, in the anteromedial IPS (DeSouza et al., 2000), have responses that can be modulated by directing eye gaze leftward vs. rightward. Such eye-position-dependent modulation properties, sometimes referred to as gain fields, are thought to play an important role in the conversion of information from retinotopic to egocentric (e.g., head-centered) coordinate frames. Our results suggest that gain fields may also exist in the third dimension, depth, to provide signals which could also be useful for the computation of physical distance, which is particularly important for the accurate control of actions. Indeed, behavioral studies suggest that eye position and vergence play an important role in the accuracy of reaching movements (Bock, 1986; Henriques & Crawford, 2000; Henriques, Klier, Smith, Lowy, & Crawford, 1998; Henriques, Medendorp, Gielen, & Crawford, 2003; Neggers & Bekkering, 1999; van Donkelaar & Staub, 2000).

Because we allowed all three components of the near response (vergence, accommodation, and changes in pupil size) to co-occur, we cannot definitively state whether any one of these three components is the driving force in the near-selective response in SPOC. However, past research has suggested that vergence provides a much stronger cue to distance than the other two components (e.g., Foley, 1980).

General Discussion

To summarize, we have reported three studies that highlight the importance of the human SPOC in transporting the arm during reaching movements and in encoding peripersonal space. Spatial encoding of peripersonal space appears to be based on modulation of activation both by object position (with gaze fixed) and by gaze distance (when no object is present). Although the exact relationships between the activation foci in our three experiments are yet to be determined, these results taken together suggest that the SPOC region in general may be a key node within the dorsal stream for the computation of object distance, as needed to guide actions such as reaching.


Taken together, the results of the three experiments suggest that multiple factors affect responses within SPOC. Gaze distance alone may suffice to modulate responses in SPOC (Experiment 3). However, even when gaze is held constant, the SPOC response to objects during passive viewing depends on whether or not they are in reachable space (Experiment 2). Furthermore, the SPOC response depends not only on absolute distance, but also on the actions performed toward objects: the response to further, but still reachable, objects can be higher than the response to adjacent objects when actions are performed on the objects (Experiment 1). At first this may seem contrary to the findings of Experiments 2 and 3 of a near preference in SPOC; however, the computations for guidance of the arm to an object are more complex when the object is further from the hand, and this may recruit SPOC to a greater degree.

In addition, our data suggest that eye position may be another critical component in the relationship between space and the hand. That is, tonic signals about current gaze distance (perhaps vergence in particular) may provide useful signals for enhancing the response to stimuli in near space and for computing the egocentric target location to guide arm movements. Other research has also suggested that SPOC may encode eye position information. First, the region is part of a network for eye movements (Paus et al., 1997). Second, SPOC is modulated by saccadic eye movements, even in the dark (Law, Svarer, Rostrup, & Paulson, 1998), supporting our finding that eye position signals are important in the area, even in the absence of other visual stimulation or task demands.

There is growing evidence from past studies, as well as from the three new studies presented here, to suggest that SPOC plays an important role in actions such as reaching and pointing; however, it remains to be determined whether SPOC comprises different subregions. Preliminary within-subject comparisons suggested some overlap between the transport-selective activation in the lower POS in Experiment 1 and the reachable-selective activation in Experiment 2; however, no such intrasubject comparisons were possible between Experiments 1 and 2, on the one hand, and Experiment 3, on the other. Figure 8.5 presents a schematic of the activation foci from numerous studies that have reported SPOC activation. Our loose definition of SPOC includes the superior end of the parieto-occipital sulcus, as well as the regions immediately posterior (in the cuneus) and anterior (in the precuneus) to the sulcus. Several characteristics of the SPOC region can be noted in Figure 8.5.


Figure 8.5 Summary of activation foci within superior parieto-occipital cortex in nine past studies and the three present studies. Activation foci are shown on the medial surface of one representative subject's left hemisphere. The cortical surface was defined at the gray-white matter border and has been partially inflated to reveal regions within the sulci (concavities, in dark gray) and on the gyri (convexities, in light gray). Foci are schematically represented based on their sizes and anatomical locations relative to the parieto-occipital, calcarine, and cingulate sulci, as depicted in figures from the original studies, as specified in Table 8.1. A full-color version of the figure is available online at http://psychology.uwo.ca/culhamlab/PDFs/Culham_etal_CMUchapter8_ColorFigs.pdf

First, the response properties in the region strongly suggest that it belongs within the dorsal stream. Using human magnetoencephalography (MEG), Hari and colleagues have reported a focus in the dorsal parieto-occipital sulcus with dorsal-stream properties: fast latencies, sensitivity to luminance rather than pattern changes, and motion selectivity (Hari & Salmelin, 1997; Portin et al., 1998; Vanni, Tanskanen, Seppa, Uutela, & Hari, 2001). Human fMRI has found somewhat more inferior foci for luminance (vs. pattern) changes (Dechent & Frahm, 2003) and blinking (Bristow, Frith, & Rees, 2005). Second, SPOC has been commonly activated by the preparation and execution of pointing and reaching movements, with some studies reporting activation anterior to the superior POS in the precuneus (Astafiev et al., 2003; Connolly et al., 2003; Pellijeff et al., 2006; Prado et al., 2005), and some studies also reporting activation in the POS or behind it in the cuneus (Beurze et al., 2007; Connolly et al., 2003; de Jong et al., 2001).


Third, the recent human fMRI work of one group with experience in the neurophysiology of reach-related areas (Galletti et al., 2003) has led to the proposal that the human equivalent of V6 lies posterior to the superior POS, while the human equivalent of V6A is more anterior, on the parietal side of the superior POS. Putative human V6 contains a similar retinotopic map to macaque V6 (Pitzalis, Galletti et al., 2006b), whereas putative human V6A, like macaque V6A, has only weak eccentricity mapping and shows reach-related responses (Pitzalis, Galletti et al., 2006a). In sum, recent evidence from other labs and from the three experiments summarized here suggests that the human SPOC is a dorsal stream area involved in planning actions to locations in near space based on information such as current gaze angle.

Acknowledgments

This research was funded by grants to JCC from the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (of Canada), the Canadian Foundation for Innovation, and the (Ontario) Premier's Research Excellence Award. CCP was funded by a CIHR grant to the Group on Action and Perception. We thank Claudio Galletti and Patrizia Fattori for explaining the relationship between the parietal reach region and area MIP. We also thank Marlene Behrmann and John Zettel for comments on an earlier draft.

References

Andersen, R. A., & Buneo, C. A. (2002). Intentional maps in posterior parietal cortex. Annual Review of Neuroscience, 25, 189–220.
Astafiev, S. V., Shulman, G. L., Stanley, C. M., Snyder, A. Z., Van Essen, D. C., & Corbetta, M. (2003). Functional organization of human intraparietal and frontal cortex for attending, looking, and pointing. Journal of Neuroscience, 23(11), 4689–4699.
Berti, A., & Frassinetti, F. (2000). When far becomes near: Remapping of space by tool use. Journal of Cognitive Neuroscience, 12(3), 415–420.
Beurze, S. M., De Lange, F. P., Toni, I., & Medendorp, W. P. (2007). Integration of target and effector information in the human brain during reach planning. Journal of Neurophysiology, 97(1), 188–199.


Binkofski, F., Dohle, C., Posse, S., Stephan, K. M., Hefter, H., Seitz, R. J., et al. (1998). Human anterior intraparietal area subserves prehension: A combined lesion and functional MRI activation study. Neurology, 50(5), 1253–1259.
Bock, O. (1986). Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Experimental Brain Research, 64(3), 476–482.
Bremmer, F., Schlack, A., Shah, N. J., Zafiris, O., Kubischik, M., Hoffman, K.-P., et al. (2001). Polymodal motion processing in posterior parietal and premotor cortex: A human fMRI study strongly implies equivalencies between humans and monkeys. Neuron, 29(1), 287–296.
Bristow, D., Frith, C., & Rees, G. (2005). Two distinct neural effects of blinking on human visual processing. Neuroimage, 27(1), 136–145.
Calton, J. L., Dickinson, A. R., & Snyder, L. H. (2002). Non-spatial, motor-specific activation in posterior parietal cortex. Nature Neuroscience, 5(6), 580–588.
Cavina-Pratesi, C., Goodale, M. A., & Culham, J. C. (2007). fMRI reveals a dissociation between grasping and perceiving the size of real 3D objects. Public Library of Science (PLOS) One, 2(5), e424.
Cohen, Y. E., & Andersen, R. A. (2002). A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience, 3(7), 553–562.
Colby, C. L., Duhamel, J.-R., & Goldberg, M. E. (1993). Ventral intraparietal area of the macaque: Anatomic location and visual response properties. Journal of Neurophysiology, 6(3), 902–914.
Colby, C. L., Gattass, R., Olson, C. R., & Gross, C. G. (1988). Topographical organization of cortical afferents to extrastriate visual area PO in the macaque: A dual tracer study. Journal of Comparative Neurology, 269, 392–413.
Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Review of Neuroscience, 22, 319–349.
Connolly, J. D., Andersen, R. A., & Goodale, M. A. (2003). fMRI evidence for a "parietal reach region" in the human brain. Experimental Brain Research, 153(2), 140–145.
Connolly, J. D., Goodale, M. A., Desouza, J. F., Menon, R. S., & Vilis, T. (2000). A comparison of frontoparietal fMRI activation during anti-saccades and anti-pointing. Journal of Neurophysiology, 84(3), 1645–1655.
Cooke, D. F., Taylor, C. S., Moore, T., & Graziano, M. S. (2003). Complex movements evoked by microstimulation of the ventral intraparietal area. Proceedings of the National Academy of Sciences of the United States of America, 100(10), 6163–6168.
Culham, J. C. (2004). Human brain imaging reveals a parietal area specialized for grasping. In N. Kanwisher & J. Duncan (Eds.), Attention and performance: Vol. 10. Functional brain imaging of human cognition (pp. 417–438). Oxford: Oxford University Press.


Culham, J. C. (2006). Functional neuroimaging: Experimental design and analysis. In R. Cabeza & A. Kingstone (Eds.), Handbook of functional neuroimaging of cognition (2nd ed., pp. 53–82). Cambridge, MA: MIT Press.
Culham, J. C., Cavina-Pratesi, C., & Singhal, A. (2006). The role of parietal cortex in visuomotor control: What have we learned from neuroimaging? Neuropsychologia, 44(13), 2668–2684.
Culham, J. C., Danckert, S. L., DeSouza, J. F., Gati, J. S., Menon, R. S., & Goodale, M. A. (2003). Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Experimental Brain Research, 153(2), 180–189.
Culham, J. C., & Kanwisher, N. G. (2001). Neuroimaging of cognitive functions in human parietal cortex. Current Opinion in Neurobiology, 11(2), 157–163.
Culham, J. C., & Valyear, K. F. (2006). Human parietal cortex in action. Current Opinion in Neurobiology, 16(2), 205–212.
de Jong, B. M., van der Graaf, F. H., & Paans, A. M. (2001). Brain activation related to the representations of external space and body scheme in visuomotor control. Neuroimage, 14(5), 1128–1135.
Dechent, P., & Frahm, J. (2003). Characterization of the human visual V6 complex by functional magnetic resonance imaging. European Journal of Neuroscience, 17(10), 2201–2211.
DeSouza, J. F., Dukelow, S. P., Gati, J. S., Menon, R. S., Andersen, R. A., & Vilis, T. (2000). Eye position signal modulates a human parietal pointing region during memory-guided movements. Journal of Neuroscience, 20(15), 5835–5840.
DiFranco, D., Muir, D. W., & Dodwell, P. C. (1978). Reaching in very young infants. Perception, 7, 385–392.
diPelligrino, G., Ladavas, E., & Farne, A. (1997). Seeing where your hands are. Nature, 388, 730.
Eskandar, E. N., & Assad, J. A. (1999). Dissociation of visual, motor and predictive signals in parietal cortex during visual guidance. Nature Neuroscience, 2(1), 88–93.
Foley, J. M. (1980). Binocular distance perception. Psychological Review, 87(5), 411–434.
Frak, V., Paulignan, Y., Jeannerod, M., Michel, F., & Cohen, H. (2006). Prehension movements in a patient (AC) with posterior parietal cortex damage and posterior callosal section. Brain and Cognition, 60(1), 43–48.
Frey, S. H., Vinton, D., Norlund, R., & Grafton, S. T. (2005). Cortical topography of human anterior intraparietal cortex active during visually guided grasping. Brain Research, Cognitive Brain Research, 23(2–3), 397–405.


Gail, A., & Andersen, R. A. (2006). Neural dynamics in monkey parietal reach region reflect context-specific sensorimotor transformations. Journal of Neuroscience, 26(37), 9376–9384.
Galletti, C., Kutz, D. F., Gamberini, M., Breveglieri, R., & Fattori, P. (2003). Role of the medial parieto-occipital cortex in the control of reaching and grasping movements. Experimental Brain Research, 153(2), 158–170.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Goltz, H. C., Dukelow, S. P., De Souza, J. F. X., Culham, J. C., van den Berg, A. V., Goosens, H. H. L., et al. (2001). A putative homologue of monkey area VIP in humans. Paper presented at the Society for Neuroscience, San Diego, CA.
Goodale, M. A., & Jakobson, L. S. (1992). Action systems in the posterior parietal cortex. Behavioral and Brain Sciences, 15(4), 747.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25.
Grafton, S. T., Fagg, A. H., Woods, R. P., & Arbib, M. A. (1996). Functional anatomy of pointing and grasping in humans. Cerebral Cortex, 6(2), 226–237.
Grefkes, C., & Fink, G. R. (2005). The functional organization of the intraparietal sulcus in humans and monkeys. Journal of Anatomy, 207(1), 3–17.
Grefkes, C., Ritzl, A., Zilles, K., & Fink, G. R. (2004). Human medial intraparietal cortex subserves visuomotor coordinate transformation. Neuroimage, 23(4), 1494–1506.
Halligan, P. W., & Marshall, J. C. (1991). Left neglect for near but not far space in man. Nature, 350(6318), 498–500.
Halverson, H. M. (1931). An experimental study of prehension in infants by means of systematic cinema records. Genetic Psychology Monographs, 10, 110–286.
Hari, R., & Salmelin, R. (1997). Human cortical oscillations: A neuromagnetic view through the skull. Trends in Neurosciences, 20(1), 44–49.
Hasebe, H., Oyamada, H., Kinomura, S., Kawashima, R., Ouchi, Y., Nobezawa, S., et al. (1999). Human cortical areas activated in relation to vergence eye movements: A PET study. Neuroimage, 10(2), 200–208.
Hasson, U., Harel, M., Levy, I., & Malach, R. (2003). Large-scale mirror-symmetry organization of human occipito-temporal object areas. Neuron, 37(6), 1027–1041.


Henriques, D. Y., & Crawford, J. D. (2000). Direction-dependent distortions of retinocentric space in the visuomotor transformation for pointing. Experimental Brain Research, 132(2), 179–194.
Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18(4), 1583–1594.
Henriques, D. Y., Medendorp, W. P., Gielen, C. C., & Crawford, J. D. (2003). Geometric computations underlying eye-hand coordination: Orientations of the two eyes and the head. Experimental Brain Research, 152(1), 70–78.
Iriki, A., Tanaka, M., & Iwamura, Y. (1996). Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport, 7(14), 2325–2330.
Jeannerod, M. (1981). Intersegmental coordination during reaching at natural visual objects. In J. Long & A. Baddeley (Eds.), Attention and performance (Vol. 60, pp. 153–168). Hillsdale, NJ: Erlbaum.
Jeannerod, M. (1984). The timing of natural prehension movements. Journal of Motor Behavior, 16(3), 235–254.
Jeannerod, M. (1986). Mechanisms of visuomotor coordination: A study in normal and brain-damaged subjects. Neuropsychologia, 24(1), 41–78.
Jeannerod, M., Decety, J., & Michel, F. (1994). Impairment of grasping movements following a bilateral posterior parietal lesion. Neuropsychologia, 32(4), 369–380.
Karnath, H. O., & Perenin, M. T. (2005). Cortical control of visually guided reaching: Evidence from patients with optic ataxia. Cerebral Cortex, 15(10), 1561–1569.
Kawashima, R., Naitoh, E., Matsumura, M., Itoh, H., Ono, S., Satoh, K., et al. (1996). Topographic representation in human intraparietal sulcus of reaching and saccade. Neuroreport, 7, 1253–1256.
Kertzman, C., Schwarz, U., Zeffiro, T. A., & Hallett, M. (1997). The role of posterior parietal cortex in visually guided reaching movements in humans. Experimental Brain Research, 114(1), 170–183.
Ladavas, E. (2002). Functional and dynamic properties of visual peripersonal space. Trends in Cognitive Sciences, 6(1), 17–22.
Ladavas, E., diPellegrino, G., Farne, A., & Zeloni, G. (1998). Neuropsychological evidence of an integrated visuotactile representation of peripersonal space in humans. Journal of Cognitive Neuroscience, 10(5), 581–589.
Ladavas, E., Farne, A., Zeloni, G., & di Pellegrino, G. (2000). Seeing or not seeing where your hands are. Experimental Brain Research, 131(4), 458–467.


Ladavas, E., Zeloni, G., & Farne, A. (1998). Visual peripersonal space centred on the face in humans. Brain, 121(Pt. 12), 2317–2326.
Law, I., Svarer, C., Rostrup, E., & Paulson, O. B. (1998). Parieto-occipital cortex activation during self-generated eye movements in the dark. Brain, 121(Pt. 11), 2189–2200.
Malach, R., Levy, I., & Hasson, U. (2002). The topography of high-order human object areas. Trends in Cognitive Sciences, 6(4), 176–184.
Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cognitive Sciences, 8(2), 79–86.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford: Oxford University Press.
Neggers, S. F., & Bekkering, H. (1999). Integration of visual and somatosensory target information in goal-directed eye and arm movements. Experimental Brain Research, 125(1), 97–107.
Paus, T., Jech, R., Thompson, C. J., Comeau, R., Peters, T., & Evans, A. C. (1997). Transcranial magnetic stimulation during positron emission tomography: A new method for studying connectivity of the human cerebral cortex. Journal of Neuroscience, 17(9), 3178–3184.
Pellijeff, A., Bonilha, L., Morgan, P. S., McKenzie, K., & Jackson, S. R. (2006). Parietal updating of limb posture: An event-related fMRI study. Neuropsychologia, 44(13), 2685–2690.
Pitzalis, S., Galletti, C., Huang, R. S., Patria, F., Committeri, G., Galati, G., et al. (2006a). Visuotopic properties of the putative human homologue of the macaque V6A. Paper presented at the Organization for Human Brain Mapping, Florence, Italy.
Pitzalis, S., Galletti, C., Huang, R. S., Patria, F., Committeri, G., Galati, G., et al. (2006b). Wide-field retinotopy defines human cortical visual area V6. Journal of Neuroscience, 26(30), 7962–7973.
Pitzalis, S., Sereno, M., Committeri, G., Galati, G., Fattori, P., & Galletti, C. (2006). A possible human homologue of the macaque V6A. Journal of Vision, 6(6), 536a.
Portin, K., Salenius, S., Salmelin, R., & Hari, R. (1998). Activation of the human occipital and parietal cortex by pattern and luminance stimuli: Neuromagnetic measurements. Cerebral Cortex, 8(3), 253–260.
Prado, J., Clavagnier, S., Otzenberger, H., Scheiber, C., Kennedy, H., & Perenin, M. T. (2005). Two cortical systems for reaching in central and peripheral vision. Neuron, 48(5), 849–858.
Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.
Quinlan, D. J., & Culham, J. C. (2007). fMRI reveals a preference for near viewing in the human superior parieto-occipital cortex. Neuroimage, 36(1), 167–187.


Richter, H. O., Costello, P., Sponheim, S. R., Lee, J. T., & Pardo, J. V. (2004). Functional neuroanatomy of the human near/far response to blur cues: Eye-lens accommodation/vergence to point targets varying in depth. European Journal of Neuroscience, 20(10), 2722–2732.
Richter, H. O., Lee, J. T., & Pardo, J. V. (2000). Neuroanatomical correlates of the near response: Voluntary modulation of accommodation/vergence in the human visual system. European Journal of Neuroscience, 12(1), 311–321.
Sereno, M. I., & Huang, R. S. (2006). A human parietal face area contains aligned head-centered visual and tactile maps. Nature Neuroscience, 9(10), 1337–1343.
Singhal, A., Kaufman, L., Valyear, K., & Culham, J. C. (2006). fMRI reactivation of the human lateral occipital complex during delayed actions to remembered objects. Visual Cognition, 14(1), 122–125.
Smeets, J. B., & Brenner, E. (1999). A new view on grasping. Motor Control, 3(3), 237–271.
Snyder, L. H., Batista, A. P., & Andersen, R. A. (2000). Intention-related activity in the posterior parietal cortex: A review. Vision Research, 40(10–12), 1433–1441.
van Donkelaar, P., & Staub, J. (2000). Eye-hand coordination to visual versus remembered targets. Experimental Brain Research, 133(3), 414–418.
Vanni, S., Tanskanen, T., Seppa, M., Uutela, K., & Hari, R. (2001). Coinciding early activation of the human primary visual cortex and anteromedial cuneus. Proceedings of the National Academy of Sciences, U.S.A., 98(5), 2776–2780.
von Hofsten, C. (1979). Development of visually-directed reaching: The approach phase. Journal of Human Movement Studies, 5, 160–178.
von Hofsten, C. (1982). Eye-hand coordination in the newborn. Developmental Psychology, 18, 450–461.
Weiss, P. H., Marshall, J. C., Wunderlich, G., Tellmann, L., Halligan, P. W., Freund, H. J., et al. (2000). Neural consequences of acting in near versus far space: A physiological basis for clinical dissociations. Brain, 123(Pt. 12), 2531–2541.

9 The Growing Body in Action: What Infant Locomotion Tells Us About Perceptually Guided Action

Karen E. Adolph

A Changeable Body in a Variable World

Twenty years ago, Eleanor Gibson (1987) asked, "What does infant perception tell us about theories of perception?" (p. 515). Her answer was that theories of perception, typically built on adults' behavior in esoteric recognition and discrimination tasks, must take into account the perceptual accomplishments of young infants. Although infants cannot recognize letters, follow researchers' instructions, or provide verbal judgments about their percepts, they can generate perceptual information through spontaneous exploratory movements and use it to guide motor action. In Gibson's (1987) words, "Present-day theories of perception are going to have to give an account of how perception gives rise to and guides action" (p. 518). Perhaps the fact that infants make such difficult subjects in traditional perception paradigms encouraged researchers to recognize the links between perception and motor action. Infants create visual, tactile, vestibular, and muscle-joint information by moving their eyes, heads, and bodies.


The consequent perceptual information can provide the basis for selecting and modifying future movements because it is both exteroceptive, specifying events and objects in the environment, and proprioceptive, specifying the current status of the body and its participation in ongoing events.

Here, I pose Gibson's question once again, but now in the context of a new literature on the perceptual guidance of motor action. The notion that a central function of perception is to guide motor action is no longer new. But in the midst of a new generation of theorizing based on adults' behavior in esoteric motor decision tasks, my answer will be that perception-action studies with infants tell us that theories of perception-action coupling will have to take learning and development into account. Issues of learning and development are accentuated in research with infants because changes in infants' bodies and motor skills are especially dramatic, and encounters with novel features of the environment are especially pronounced compared with later periods of life. However, learning and development are not limited to infancy. Throughout the life span, bodily propensities change due to gains and losses in weight, muscle stiffness, and strength. New motor skills are acquired and old ones are lost. The environment still holds some surprises. At any age, the central problem for understanding perceptually guided action is how observers cope with a changeable body in a variable world.

Embodied and Embedded Action

Motor actions are always embodied and embedded (Bernstein, 1996; Clark, 1997). The functional outcome of motor actions is inextricably bound to the biomechanical status of the body (the size, shape, mass, compliance, strength, flexibility, and coordination of the various body parts) and the physical properties of the surrounding environment (the surfaces and media that support the body, the objects toward which movements are directed, and the effects of gravity acting on the various body parts). A few haphazard feather-kicks of a 12-week-old fetus can propel it somersaulting, nearly weightlessly, through the buoyant amniotic fluid, whereas powerful muscle forces, precisely timed to exploit inertia, are required for an expert gymnast to launch a somersault at the end of a tumbling run.


From fetus to skilled athlete, motor actions are always constrained or facilitated by the body's current dimensions and propensities in the context of an immediate environment with particular physical supports and hindrances.

Depending on the current constellation of body and environmental factors, the same functional outcome can require very different muscle actions and, reciprocally, the same muscle actions can result in very different functional outcomes (Bernstein, 1996). For example, for fetuses to bring their hands to their mouths in the first weeks of gestation, they must flex their arms at the shoulder because their arm buds are so short (Moore & Persaud, 1998). To perform the same hand-to-mouth behavior several weeks later, fetuses must bend their arms at the elbow to take their longer limbs into account (Robinson & Kleven, 2005). The same leg kicks that somersault the fetus through the amniotic fluid early in gestation fail to extend the legs toward the end of gestation, when the growing fetus is pressed against the uterine wall (de Vries, Visser, & Prechtl, 1982). After birth, vigorous leg kicks flex and extend the legs, but without the buoyancy of the amniotic fluid, gravity keeps the infant rooted in place (Thelen & Fisher, 1983).

The changing constraints of the body and environment are not limited to developmental changes (such as the lengthening and differentiation of the fetal arm and the decrease in space within the uterus). At any point in the lifespan, the facts of embodiment can vary due to seemingly insignificant factors in the course of everyday activity (Adolph, 2002; Reed, 1989). Carrying an object can alter the body's functional dimensions. Variations in clothing and footgear can affect the ability to create resistive forces. Leaning forward, lifting an arm, turning the head, or even drawing a deep breath can create moment-to-moment changes in the location of the body's center of mass and, as a consequence, pose continually changing demands for maintaining balance. Similarly, variability and novelty in the environmental context are the rule, not the exception. Actions are typically performed in a world that is cluttered with potential obstacles and opportunities. Objects and surfaces have dimensions, material properties, and locations that can change from one encounter to the next (think of chairs, doors, and the condition of the sidewalk due to weather and debris). Many objects move and participate in events: people, animals, balls, and cars. As the Greek philosopher Heraclitus put it, "No man ever steps in the same river twice, for it is not the same river and he is not the same man."


Affordances

The Gibsons' concept of affordances captures the functional significance of embodied and embedded action (E. J. Gibson, 1982; E. J. Gibson & Pick, 2000; J. J. Gibson, 1979). Affordances are possibilities for motor action. The probability of performing an action successfully depends on the fit between the behaviorally relevant properties of the body and those of the surrounding environment (Adolph & Berger, 2006; Warren, 1984). As such, affordances reflect the objective state of affairs. Actions have a certain probability of success, regardless of whether actors perceive, misperceive, or take advantage of the possibilities. Thus, the notion of affordances is distinct from claims about how affordances might be perceived or exploited, and the description of an affordance involves only the relationship between the relevant physical features of the body and environment.

Because affordances are relational, the facts of embodiment must be taken with reference to the properties of the environment in which the body is embedded, and vice versa. For example, walking over open ground is possible only when walkers have sufficient strength, postural control, and endurance relative to the length of the path and the slant, rigidity, and texture of the ground surface. Walking between two obstacles is possible only when walkers' largest body dimensions are smaller than the size of the opening (Warren & Whang, 1987). In fact, bodily propensities and environmental properties are so intimately connected for supporting motor actions that changes in a single factor on either side of the affordance relationship alter the probability of successful performance. For example, Figure 9.1A shows the affordance function for a typical 14-month-old toddler walking down an adjustable sloping ramp. On shallower slopes, from 0° to 20°, the probability of walking successfully is close to 1.0. On slopes between 20° and 32°, tiny changes in the degree of slant cause the probability of success to drop from 1.0 to 0. The affordance threshold, defined here as the slope where the probability of success is 0.50, is approximately 26°. Figure 9.1B shows the affordance function for an adult woman walking through an adjustable doorway (Franchak & Adolph, 2007). For wider doorways, larger than 31 cm, the probability of walking successfully is 1.0. With small changes in doorway width, the probability of success drops from 1.0 to 0. The estimated affordance threshold is 30 cm.


Figure 9.1 Affordance thresholds. (A) Affordance function for one 14-month-old infant walking down slopes. The function is calculated from the ratio of successful to failed attempts to walk. The affordance threshold is the estimated slope where the probability of walking successfully is 0.50. (B) Affordance function and threshold for one woman walking through doorways. (C) Developmental changes in affordance thresholds for one infant walking down slopes. Figure adapted from R. Kail (Ed.), Advances in Child Development & Behavior, by K. E. Adolph, "Learning to keep balance," 2002, with permission from Elsevier Science. (D) Developmental changes in affordance thresholds for one pregnant woman walking through doorways.
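The threshold definition in the caption (the slope at which the probability of success is 0.50) can be made concrete with a short sketch. This is only an illustration: the trial outcomes below are invented, and the logistic fit is one generic way to estimate a 50% point, not the psychophysical procedure the chapter actually used.

```python
import numpy as np

# Hypothetical trial data: slope of the walkway (degrees) and whether the
# infant's attempt to walk down succeeded (1) or ended in a fall (0).
slopes  = np.array([4, 8, 12, 16, 20, 22, 24, 26, 28, 30, 32, 36])
success = np.array([1, 1, 1,  1,  1,  1,  0,  1,  0,  0,  0,  0])

def logistic(x, midpoint, width):
    """Probability of success as a decreasing S-shaped function of slope."""
    return 1.0 / (1.0 + np.exp((x - midpoint) / width))

def neg_log_likelihood(params):
    """Bernoulli negative log-likelihood of the observed outcomes."""
    p = np.clip(logistic(slopes, *params), 1e-6, 1 - 1e-6)
    return -np.sum(success * np.log(p) + (1 - success) * np.log(1 - p))

# Coarse grid search over plausible parameter values (no extra libraries needed).
midpoints = np.linspace(5, 40, 141)
widths    = np.linspace(0.5, 10, 96)
best = min(((m, w) for m in midpoints for w in widths), key=neg_log_likelihood)

# For this parameterization, the slope where P(success) = 0.50 is the midpoint.
print(f"Estimated affordance threshold: {best[0]:.1f} degrees")
```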

Reciprocally, changes in body dimensions and propensities (both structural and dynamic aspects of the body) affect the probability of performing these same actions successfully. The changes may be temporary: For example, affordance thresholds for infants walking down slopes decreased by 4.4°, on average, while the infants were loaded with lead weights on their shoulders (Adolph & Avolio, 2000). Similarly, affordance thresholds for navigating between two obstacles would increase while carrying a pizza box or wearing bulky clothing. Or changes may be more permanent, as the result of injury, aging, growth, weight gain or loss, increase or decrease in motor proficiency, and so on. Figure 9.1C shows the change in affordance thresholds for walking down slopes for one infant over weeks of testing (Adolph, 1997). With changes in the infant's body dimensions and walking skill, he could manage steeper slopes each week than the last.


Figure 9.1D shows the change in affordance thresholds for passing through doorways for one woman over the course of her pregnancy (Franchak & Adolph, 2007). Increase in her body dimensions and decrease in her ability to contort and compress her abdomen caused a corresponding increase in affordance thresholds over weeks of testing.

How Development Changes Affordances for Action

The examples of infants walking down slopes and pregnant women walking through doorways illustrate how rapidly and dramatically affordances can change with development. A passable doorway at one month becomes impassable a month later. An impossibly steep slope this week becomes navigable a week later. In essence, when possibilities for motor action are plotted against a continuously changing feature of the environment (as in Figures 9.1A-B), developmental changes in body dimensions and skills shift the affordance function back and forth along the x-axis, causing a corresponding shift in the value of the threshold. Developmental changes in the body, its propensities, and the surrounding environment are especially striking in the beginning of life.

Body Growth

The rate of fetal body growth, for example, is far more dramatic than the external view of mothers' bulging abdomens might suggest. At 4 weeks postconception, the average embryo is 4 mm long, about the size of a pea (Moore & Persaud, 1998). The head comprises one half of the body length. From pea-sized embryo to term newborn, fetuses increase their height by approximately 8,000% and their weight by 42,500%. By the end of gestation at 38 to 40 weeks, the newborn is 57 cm long, and the head comprises one fourth of the body length (Ounsted & Moar, 1986). By comparison, adults' head length is one eighth of their total body height (C. E. Palmer, 1944).

Rapid and dramatic body growth continues during infancy. Like fetal growth, the rates of change during infancy differ for different parts of the body. Gains in infants' body length are faster than gains in weight and head circumference, so that overall body proportions undergo a general slimming. In essence, infants begin to grow into their large heads.


Their top-heavy bodies become increasingly cylindrical, and their center of mass moves from the bottom of the ribcage to below the belly button. After 2 years of age, the rate of body growth slows until the adolescent growth spurt.

Continuous body growth, as represented by the smooth curves on a standard growth chart, is actually a misrepresentation. Children's body growth, from fetus to adolescent, is episodic, not continuous (Johnson, Veldhuis, & Lampl, 1996; Lampl, 1993; Lampl, Ashizawa, Kawabata, & Johnson, 1998; Lampl & Jeanty, 2003; Lampl & Johnson, 1993; Lampl, Johnson, & Frongillo, 2001). That is, brief, 24-hr periods of extremely rapid growth are interspersed with long periods of stasis during which no growth occurs for days or weeks on end. Figure 9.2A shows the smooth growth functions on a standard growth chart for height, and Figure 9.2B shows the actual episodic increases in height for one infant. Episodic growth is characteristic of changes in height, weight, head circumference, and leg bone growth. For example, daily measurements of infants' height show increases of 0.5 to 1.65 cm punctuated by 2- to 28-day periods of no growth (Lampl, Veldhuis, & Johnson, 1992). The timing of growth spurts, the amplitude of the changes, and the length of the plateaus when no growth occurs show large intra- and intersubject variability. The long plateaus with no growth are not related to stress or illness. Rather, normal, healthy children grow in fits and starts. Even within a 24-hr period, growth is episodic. Children grow more at night while lying down than during the day while standing and walking, especially in their weight-bearing extremities (Lampl, 1992; Noonan et al., 2004).

Motor Proficiency and New Perception-Action Systems

During the same time period that infants' body size and body proportions are changing, so are their abilities to perform various motor skills (for reviews, see Adolph & Berger, 2005, 2006; Bertenthal & Clifton, 1998). For example, between 5 and 8 months, most infants begin learning to sit (Bly, 1994; Frankenburg & Dodds, 1967). At first, balance is so precarious that they must prop themselves upright in a tripod position by supporting their body weight on their arms between their outstretched legs. Their hamstrings are so loose that they lose balance in a 360° arc, including falling nose-to-knees. Gradually, only one arm must be used for balance.


Figure 9.2 Growth curves for height (length in cm plotted against age; years in panel A, months in panel B). (A) Standard growth curves from birth to 18 years. Data are mathematically smoothed and averaged over children. Dashed line represents boys; solid line represents girls. Adapted from growth charts developed by the National Center for Health Statistics in collaboration with the National Center for Chronic Disease Prevention and Health Promotion (2000). (B) Microgenetic episodic growth curve for one exemplar infant. Each vertical line represents daily replicate observations. Reprinted from Science, 258, by M. Lampl, J. D. Veldhuis, and M. L. Johnson, "Saltation and stasis: A model of human growth," pp. 801–803, 1992, with permission from the American Association for the Advancement of Science.
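The saltation-and-stasis pattern in Figure 9.2B can be illustrated with a minimal sketch that flags days on which measured length jumps by more than a noise criterion. All values and the 0.5 cm criterion are hypothetical; this is only a toy illustration, not the statistical model used by Lampl and colleagues.

```python
import numpy as np

# Hypothetical daily length measurements (cm) for one infant over three weeks.
# Long flat stretches (stasis) are punctuated by brief jumps (saltations).
length = np.array([52.0, 52.0, 52.1, 52.0, 52.0, 52.9, 52.9, 53.0, 52.9,
                   52.9, 52.9, 53.0, 52.9, 53.9, 54.0, 53.9, 54.0, 54.0,
                   54.0, 54.0, 54.9])

# Day-to-day change; an arbitrary 0.5 cm criterion separates putative growth
# saltations from measurement noise around a plateau.
daily_change = np.diff(length)
saltation_days = np.flatnonzero(daily_change >= 0.5) + 1

print("Putative saltation days:", saltation_days.tolist())
print("Day-to-day intervals without measurable growth:",
      len(daily_change) - len(saltation_days))
```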


Then the hands become freed from supporting functions, first momentarily and later for extended periods. Finally, infants can turn their heads, lean in various directions, and hold and manipulate objects without losing balance.

Most infants' first success at forward locomotion is crawling (others may initially "log roll," "bum shuffle," or "cruise"). Many crawlers first master prone progression by belly crawling, where their abdomens rest on the ground during part of each crawling cycle (Adolph, Vereijken, & Denny, 1998). They use their arms, legs, and bellies in various, idiosyncratic combinations, sometimes pushing with only one limb in a girdle and dragging the lame arm or leg behind, sometimes pushing with first the knee and then the foot on one leg, and sometimes launching themselves from knees or feet onto the belly during each cycle. They may move arms and legs on alternate sides of the body together like a trot, move ipsilateral limbs together like a lumbering bear, lift front and then back limbs into the air like a bunny hop, and display irregular patterns of interlimb timing; the possibilities are immense because balance is so unconstrained with the abdomen resting on the floor. Figure 9.3A shows one infant's pattern of interlimb timing over a series of cycles (Adolph et al., 1998). First, he moved his left arm forward. Then, he pushed with both legs and lifted his abdomen off the floor. Next he moved his right arm forward. Use of the right leg was especially variable, and for a long period the infant maintained the leg aloft. Despite tremendous intra- and intersubject variability in crawling movements, proficiency at belly crawling increases with each week of practice, as indicated by improvements in crawling speed (Figure 9.4A) and step length (Adolph et al., 1998).

Eventually (typically, at about 8 months of age), infants acquire sufficient strength and balance control to crawl with their abdomens off the floor. In contrast to the variability endemic in belly crawling, the timing in interlimb coordination of hands-and-knees gaits is nearly uniform (Adolph et al., 1998; Freedland & Bertenthal, 1994). Within the first week or two after achieving hands-and-knees crawling, infants move their arms and legs in an alternating near-trot. Figure 9.3B shows the pattern of interlimb timing in one infant that is typical of hands-and-knees gaits. She moved her right arm and left leg together and her left arm and right leg together, with the arms slightly preceding the legs in each case. Again, proficiency at crawling shows dramatic improvements over the course of a few months (Figure 9.4B), with increases in the amplitude and speed of crawling movements (Adolph et al., 1998).


Figure 9.3 Patterns of interlimb timing in (A) belly crawling ("Belly Crawler") and (B) hands-and-knees crawling ("Hands/Knees Crawler"); each panel plots limb against time in seconds. Shaded regions represent time when the limb (or belly) was supporting the body in stance. Open regions represent time when the limb (or belly) was moving forward in swing.
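As a rough illustration of the interlimb-timing patterns plotted in Figure 9.3, the sketch below compares diagonal and ipsilateral phase lags for one hypothetical crawl cycle and labels the gait accordingly. The phase values are invented, and the classification rule is only a toy version of how such records might be summarized, not the coding scheme used in the cited studies.

```python
# Hypothetical stance-onset phases (fraction of one crawl cycle, 0-1) for each
# limb, loosely in the spirit of Figure 9.3; the values are made up.
phases = {"left_arm": 0.00, "right_leg": 0.05, "right_arm": 0.50, "left_leg": 0.55}

def phase_lag(a, b):
    """Circular difference between two phases, in cycle fractions (0 to 0.5)."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

# Diagonal limb pairs move nearly together in a trot-like, alternating gait;
# ipsilateral pairs move together in a lumbering, bear-like gait.
diagonal    = (phase_lag(phases["left_arm"], phases["right_leg"]) +
               phase_lag(phases["right_arm"], phases["left_leg"])) / 2
ipsilateral = (phase_lag(phases["left_arm"], phases["left_leg"]) +
               phase_lag(phases["right_arm"], phases["right_leg"])) / 2

label = "alternating (diagonal couplets)" if diagonal < ipsilateral else "ipsilateral"
print(f"diagonal lag = {diagonal:.2f}, ipsilateral lag = {ipsilateral:.2f} -> {label}")
```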

Typically, coincident with crawling at about 10 months of age (Frankenburg & Dodds, 1967), infants begin to master upright postures. They pull to a stand against furniture and later cruise, that is, move sideways in an upright position while holding onto furniture for support. Initially, cruisers' arms do most of the work of supporting body weight, maintaining balance, and steering; the legs hold only part of the body weight (many infants begin cruising up on their toes) and often become crossed and entangled.


Figure 9.4 Developmental changes in velocity during (A) belly crawling, (B) hands-and-knees crawling, and (C) walking. Crawling data were adapted from Child Development, 69, by K. E. Adolph, B. Vereijken, and M. Denny, "Learning to crawl," pp. 1299–1312, 1998, with permission from Blackwell Publishers.

As the legs take on more of a supporting function, infants put less weight on their arms and the coordination between arm and leg movements improves (Vereijken & Adolph, 1999; Vereijken & Waardenburg, 1996).

Walking is the most heralded of infants' motor skills, probably because it is such an obvious step toward adultlike behavior. On average, infants take their first toddling steps toward the end of their first year (Frankenburg & Dodds, 1967), but like most motor skills, the range in the age of walking onset is extremely wide (9 to 16 months of age). Like crawling, improvements in walking are most rapid and dramatic in the first few months after onset, and thereafter begin to asymptote and reflect more subtle types of changes (Adolph, Vereijken, & Shrout, 2003; Bril & Breniere, 1992; Bril & Ledebt, 1998). Asymptotic performance curves are typical of motor performance, including measures of errors (e.g., missteps), variability (e.g., coefficient of variation), speed, amplitude, and accuracy (Schmidt & Lee, 1999).
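The performance measures just mentioned (step length, velocity, and the coefficient of variation) can be computed from footfall records. The sketch below uses made-up footfall positions and times purely to show the arithmetic; it is not the measurement protocol of the cited studies.

```python
import numpy as np

# Hypothetical positions (cm) of successive footfalls and the time (s) of each,
# for a newly walking infant; used only to illustrate the measures named above.
footfall_pos  = np.array([0.0, 11.0, 20.0, 33.0, 41.0, 55.0, 63.0])
footfall_time = np.array([0.0, 0.6,  1.3,  1.9,  2.6,  3.2,  3.9])

step_lengths = np.diff(footfall_pos)   # cm traveled per step
velocity = (footfall_pos[-1] - footfall_pos[0]) / (footfall_time[-1] - footfall_time[0])

# Coefficient of variation: within-infant step-length variability relative to the mean.
cv = step_lengths.std(ddof=1) / step_lengths.mean()

print(f"mean step length = {step_lengths.mean():.1f} cm")
print(f"velocity = {velocity:.1f} cm/s")
print(f"coefficient of variation = {cv:.2f}")
```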


Infants' step lengths increase, the lateral distance between their legs and out-toeing decrease, both intra- and interlimb coordination become less variable and more efficient, and, as shown in Figure 9.4C (Adolph, Garciaguirre, & Badaly, 2007a; Garciaguirre, Adolph, & Shrout, 2007), infants walk faster. Note that novice walking infants move twice as fast in their first weeks of walking as experienced crawlers do in their last weeks of crawling. Once infants can string a series of consecutive walking steps together, the ability to walk provides them with a more time-efficient mode of travel.

The developmental changes in sitting, crawling, cruising, and walking reflect more than the acquisition of motor proficiency. The transitions between each of these postural milestones reflect the acquisition of new perception-action systems (Adolph, 2002, 2005). Whereas changes in motor proficiency shift the affordance function back and forth along the x-axis (as illustrated in Figure 9.1), transitions to new perception-action systems create new affordance functions. Each posture represents a different problem space defined by a unique set of parameters for maintaining balance. Each has a different key pivot around which the body rotates (the hips for sitting, the wrists for crawling, the shoulders for cruising, and the ankles for walking) and a different region of permissible postural sway within which the body can rotate before falling. Infants use different muscle groups for keeping the body upright and for propelling it forward. There are different vantage points for viewing the ground ahead, correlations between visual and vestibular information, access to mechanical information from touching the ground, and so on. In Campos and colleagues' (2000) words:

    The mapping between vision and posture that results from crawling experience will need to be remapped as the infant acquires new motor skills such as standing and walking. ... In fact, remapping is likely to occur with the acquisition of every new motor skill in a continuously coevolving perception-action cycle. (p. 174)

Each posture requires moving different body parts, and balance is controlled by different sources of visual, vestibular, and mechanical information. After the infancy period, it is hard to imagine comparable novelty in perception-action systems, but bicycling, swimming, and swinging arm over arm along monkey bars are likely candidates because these skills involve such different constraints on maintaining balance.


In bicycling, for example, the key pivot is at the bottom of the front wheel, and the region of permissible postural sway depends on the angles of the wheels as well as the body's position. While swinging on monkey bars, the body rotates around the shoulders, and the arms must support the entire body weight. Rather than representing the acquisition of new perception-action systems, most of the locomotor skills acquired during childhood and adulthood (driving a car, motorcycling, rock climbing, ice skating, skiing, surfboarding, etc.) may reflect the growth and adaptation of existing perception-action systems for sitting, crawling, cruising, and walking.

Environment

Developmental changes in infants' bodies and skills bring about corresponding changes in the environment. New postures and vantage points and new and improved forms of mobility allow infants to gain access to new places and surfaces. Instead of looking at the legs of the coffee table, they can peer over the top. Instead of looking at an object from across the room, they can go to retrieve it. Rather than waiting for caregivers to transport them, they can go to see things for themselves. Features of the environment that adults take for granted, such as sloping ground and narrow openings between obstacles, are novel for newly mobile infants.

Moreover, developmental expansion in the environment is not solely dependent on developmental changes in mobility. A leaner, more mature-looking body and better performance of various locomotor skills may inspire caregivers to provide infants with greater access to the environment. Now parents may put infants on the floor with greater frequency, remove the gate blocking the stairs, and allow infants to travel into an adjoining room on their own.

Summary: Changing Affordances

Changes in infants' bodies, propensities, and environments occur concurrently, but along different developmental trajectories. Singly, or in combination, these factors can change the constraints on action. Episodic body growth means that infants begin a new day with a new body size and a new set of body proportions.


In particular, less top-heavy body proportions and increased muscle mass relative to body fat introduce new possibilities for keeping balance in stance and locomotion. Rapid improvements in motor proficiency mean that affordances for balance and locomotion change from week to week, and these changes will be most pronounced in the first months after infants acquire a new perception-action system. Moreover, because the acquisition of sitting, crawling, cruising, and walking postures is staggered over several months, at each point in development infants are experts in an earlier developing posture and novices in a later developing one. Thus, a situation that affords balance in an experienced sitting posture may be impossible in a novice crawling posture. Developmental changes in infants' environments mean that infants are likely to encounter novel features of the environment that afford or preclude the use of their newly acquired perception-action systems.

Infants' Perception of Affordances

The critical question for understanding the adaptive control of action, of course, is whether affordances are perceived, that is, whether children and adults gear their motor decisions to the actual possibilities for action (for reviews, see Adolph, 1997; Adolph & Berger, 2005, 2006; Adolph, Eppler, & Gibson, 1993). Given the constant flux of developmental changes, the challenge for infants is to detect the new constraints on action. Infants must continually update their assessment of affordances to take their new bodies, skills, and environments into account. I begin with the problem for perception and then describe three case studies (avoiding a precipice, navigating slopes, and crossing bridges) that highlight the importance of learning and development in the perceptual guidance of locomotion.

The Perceptual Problem

Perceiving affordances, like any perceptual problem, begins with a description of what there is to be perceived (J. J. Gibson, 1979). As illustrated by the individual data in Figures 9.1A-B, like many psychophysical functions, the curves that characterize the transition from possible to impossible actions are typically steep, S-shaped functions with long extended tails.


Although affordance thresholds for walking down slopes and passing through doorways vary widely between individuals, the functions across individuals are similarly steep. More generally, regardless of the location of the affordance function along the x-axis, most actions are either possible or impossible for a wide range of situations that lie along the tails of the function, and have a shifting probability of success for a narrow range of situations that lie along the inflection of the function.

Consider, for example, the entire range of slopes, from 0° to 90°, and the entire range of doorways (or, more generally, openings between obstacles), from 0 cm to infinitely wide. Regardless of the particulars of the current situation, the inflection of the affordance function is likely to occupy only a small section of the range of possibilities. The consistent shape of the affordance function simplifies the problem for perception: In most cases, perceivers must determine only which tail of the function best describes the current situation (e.g., the slope is perfectly safe or impossibly steep; the doorway is completely passable or absolutely impassable).

However, because of the particulars of the current situation (the slope is slippery, the walker is carrying a load, walking proficiency has improved, etc.), the location of the inflection of the function along the x-axis (whether the person can manage steep slopes or only shallow ones, narrow doorways or only wider ones) can vary widely from one moment to the next. In fact, an unlimited number of variables, including factors such as slant, friction, load, and walking proficiency, create an n-dimensional axis along which the affordance function slides from moment to moment. The changing location of the affordance function complicates the problem for perception: Observers must determine where along the x-axis (or, more generally, the n-dimensional axis) the transition from possible to impossible occurs, that is, the region where their threshold currently lies. Figures 9.1C and 9.1D show dramatic developmental changes in the location of the affordance threshold for individual participants. To illustrate further, the range in affordance thresholds for walking down slopes for 14-month-olds (± 1 week of age) is quite large, from 4° to 28° (Adolph, 1995; Adolph & Avolio, 2000). The wide range in affordance thresholds given the narrow spread in chronological age reflects the variable timing and amplitude of developmental changes.


In addition, in the event that the affordance lies along the inflection of the curve (e.g., the probability of walking safely down the slope or through the doorway is between 0 and 1.0), or perceivers are unsure about the precise location of the transition from possible to impossible, they must weigh the probability of success against the penalties for under- and overestimation errors. Thus, motor decisions, like any perceptual judgment, reflect both observers' sensitivity to the perceptual information and a response criterion. For both infants and adults, some outcomes, such as falling downward (down a slope or over the brink of a cliff), constitute a highly aversive penalty, whereas other outcomes, such as entrapment (e.g., becoming wedged in an overly narrow doorway), constitute a relatively innocuous penalty (Adolph, 1997; Joh, Adolph, Narayanan, & Dietz, 2007; C. F. Palmer, 1987; Warren & Whang, 1987). Factors such as motivation and fatigue may also play an important role in setting a response criterion.

What kind of data would constitute evidence that infants (or animals of any age) can solve the perceptual problem of detecting affordances? One source of evidence is infants' motor decisions, that is, whether infants match their attempts to perform a target action to the conditional probability of success. Obtaining such data requires test paradigms where the challenges are novel, the probability of success varies from trial to trial, and infants are sufficiently motivated to produce a large number of trials while staying on task. If infants' response criterion is too liberal, they are likely to respond indiscriminately, precluding assessment of their sensitivity to the affordances. If their response criterion is too conservative, they are likely to refuse to participate in the experiment. Fortunately, infants love to practice their newly developing motor skills, and with a bit of incentive (praise from their caregivers and pieces of dry cereal) they happily produce dozens of trials over a lengthy session. When the penalty for error is falling downward, infants' response criterion is sufficiently conservative to assess their perception of affordances for balance and locomotion.
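One simple way to read the sensitivity-plus-criterion idea is as an expected-value rule: attempt the action only if the perceived probability of success, weighted by the benefit, outweighs the probability of failure, weighted by the penalty. The sketch below illustrates that logic with arbitrary numbers; it is an illustration of the reasoning, not a model proposed in the chapter.

```python
def should_attempt(p_success, benefit=1.0, penalty=1.0):
    """Attempt the action only if its expected value is positive.

    p_success : perceived probability that the action will succeed
    benefit   : value of reaching the goal (e.g., a lure or caregiver's praise)
    penalty   : cost of failure; large for falling, small for mere entrapment
    """
    expected_value = p_success * benefit - (1.0 - p_success) * penalty
    return expected_value > 0.0

# With a mild penalty (entrapment), a risky attempt may still look worthwhile;
# with a severe penalty (falling), the same probability leads to refusal.
print(should_attempt(p_success=0.4, penalty=0.2))   # True  -> liberal criterion
print(should_attempt(p_success=0.4, penalty=5.0))   # False -> conservative criterion
```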


Case Study 1: Avoiding Cliffs and Gaps

An obvious test case to study infants' perception of affordances is a falling-off place (E. J. Gibson, 1991). A basic requirement for balance and locomotion is a continuous floor to support the body. A ground surface that terminates in a large drop-off is a cliff. A ground surface that is interrupted by a wide gap is a crevasse. These should be avoided.

The Visual Cliff

Most researchers have studied infants' perception of affordances at the edge of a drop-off on a "visual cliff," rather than a real one. First devised by Gibson and Walk (E. J. Gibson & Walk, 1960; Walk & Gibson, 1961), the apparatus is a large glass table with wooden sides, divided in half by a narrow board. On the "deep" side, a patterned surface lies on the floor far below the glass, creating the illusion of an abrupt drop-off. On the "shallow" side, the patterned surface is placed directly beneath the glass, providing visual information for a continuous ground surface. The glass serves to ensure infants' safety if they venture onto the deep side, and the wooden sides prevent them from falling off the table onto the floor. Human infants are placed on the centerboard, and caregivers encourage them from first one side and then the other (Figure 9.5). Other animals are placed on the centerboard and given several minutes to descend to the side of their choice.

Dozens of investigations have yielded intriguing but conflicting findings regarding the role of locomotor experience in avoiding the deep side of the visual cliff. Precocial animals such as infant goats and chicks, who walk moments after birth, avoid the deep side on their first exposure (E. J. Gibson & Walk, 1960; Walk & Gibson, 1961). Similarly, some altricial animals such as rats, whose locomotor skills develop slowly, do not require visual experience with locomotion to avoid the apparent drop-off (Walk, Gibson, & Tighe, 1957). However, other altricial species, such as kittens, rabbits, and infant humans, crawl right over the edge of the deep side when they first become mobile. The problem is not lack of depth perception, because human infants can see the drop-off months before they begin crawling. A frequently cited study with human infants showed that crawling experience predicts avoidance of the apparent drop-off when testing age is controlled: After only two weeks of crawling experience, 65% of infants crawled over the deep side of the visual cliff, but after six weeks of crawling experience, the percentage dropped to 35% of infants (Bertenthal, Campos, & Barrett, 1984).


Figure 9.5 Crawling infant on the visual cliff. On the deep side, safety glass covered a large drop-off. Caregivers beckoned to infants from the far side of the obstacle. Adapted from Psychological Monographs: General and Applied, 75(15, Whole No. 519), by R. D. Walk & E. J. Gibson, "A comparative and analytical study of visual depth perception," 1961, with permission from the American Psychological Association.

However, other cross-sectional studies found the opposite result: the duration of crawling experience, controlling for age, was positively related to crossing over the deep side (Richards & Rader, 1981, 1983). Longitudinal data are inconclusive because infants learn from repeated testing that the safety glass provides support for locomotion, and they become more likely to cross over (Campos, Hiatt, Ramsay, Henderson, & Svejda, 1978; Eppler, Satterwhite, Wendt, & Bruce, 1997; Titzer, 1995). Some evidence suggests that locomotor experience is posture-specific: The same crawlers who avoided the drop-off when tested on their hands and knees crossed over the cliff when tested moments later in an upright posture in a wheeled baby-walker (Rader, Bausano, & Richards, 1980). Other evidence suggests that locomotor experience generalizes from an earlier developing perception-action system to a later developing one: 12-month-old walking infants avoided the apparent drop-off after only two weeks of walking experience appended to their several weeks of crawling experience (Witherington, Campos, Anderson, Lejeune, & Seah, 2005).

Although it is the most famous test paradigm, the visual cliff is not optimal.


Discrepant findings may result from methodological problems stemming from the design of the apparatus. The safety glass presents mixed messages: The visual cliff looks dangerous but feels safe. In fact, because human infants quickly learn that the apparatus is perfectly safe, avoidance attenuates and they can only be tested with one trial on the deep side. In addition, the safety glass may lead to underestimation of infants' errors. Sometimes infants lean forward onto the glass with their hands or start onto the glass and then retreat (e.g., Campos et al., 1978); if the glass were not there, they would have fallen. Moreover, the dimensions of the visual cliff are fixed, so that researchers cannot test the accuracy of infants' responses or ask whether infants scale their locomotor decisions to the size of the challenge. The heights of the shallow and deep sides lie far on the tails of the affordance function, rather than near the inflection of the curve.

Adjustable Gaps: A New Psychophysical Approach

Circumventing the methodological problems on the visual cliff required a new approach. Thus, we devised a new "gaps" paradigm (Adolph, 2000). Rather than a glass table with wooden sides, 9.5-month-old infants were observed as they approached a deep crevasse between two platforms (Figure 9.6A-B). We removed the safety glass so that visual and haptic information would be in agreement rather than in conflict, and perceptual errors would lead to the real consequence of falling. To ensure infants' safety, a highly trained experimenter followed alongside infants to provide rescue if they began to fall. Fortunately, "spotting" infants did not produce the same problem as the safety glass: Infants did not simply learn to rely on the experimenter to catch them, and avoidance did not attenuate over trials. The dimensions of the apparatus were adjustable, rather than fixed, so that we could assess the accuracy of infants' motor decisions. Moving one platform along a calibrated track varied the gap width from 0 to 90 cm in 2 cm increments. Because the depth of the crevasse between the two platforms was always the same, the penalty for falling was identical across all risky gap sizes. The largest gap width had approximately the same dimensions as the deep side of the standard visual cliff. Most critically, to determine the role of experience and the specificity of learning across developmental transitions in perception-action systems, each infant was tested in an earlier developing sitting posture (M sitting experience = 15 weeks) and a later developing crawling posture (M crawling experience = 6 weeks).


Figure 9.6 Infants in (A) sitting and (B) crawling postures at the edge of an adjustable gap in the surface of support. Caregivers (not shown) offered lures from the far side of the gap. An experimenter (shown) followed alongside infants to ensure their safety. Reprinted from Psychological Science, by K. E. Adolph, "Specificity of learning: Why infants fall over a veritable cliff," pp. 290–295, 2000, with permission from Blackwell Publishers.

(Note that the average duration of experience in the less familiar crawling posture was similar to the duration of crawling experience in the more experienced group in Bertenthal and colleagues' (1984) study of infants on the visual cliff.)


As shown in Figures 9.6A-B, the task for the infants was the same in both postures: to lean forward over the gap to retrieve an attractive lure. Caregivers stood at the far side of the landing platform and encouraged infants' efforts. Rather than testing infants with only one trial, we observed infants over dozens of trials. We used a modified psychophysical staircase procedure to estimate each infant's affordance threshold, and then assessed infants' motor decisions by presenting safe and risky gaps relative to the threshold increment. Motor decisions were determined based on an attempt rate: the number of successful plus failed attempts to span the gap divided by the sum of successful attempts, failed attempts, and refusals to attempt the action. (The inverse avoidance rate yields the same information.) Adaptive motor decisions would be evidenced by high attempt rates on gaps smaller than the affordance threshold and low attempt rates on gaps larger than the threshold.

Two experiments confirmed the role of experience with balance and locomotion in detecting affordances for crossing a crevasse (Adolph, 2000). When tested in their experienced sitting posture, infants gauged precisely how far forward they could lean without falling into the hole. They scaled their motor decisions to their body size and sitting proficiency so that their attempt rates matched the conditional probability of success. None of the infants attempted the largest 90-cm gap. Like the visual cliff, the gaps paradigm was relatively novel: Typically, infants are not encouraged to cross a deep precipice on their own. Thus, adaptive motor decisions in the sitting posture attest to transfer of learning from everyday experiences to a novel environmental challenge.

However, when facing the crevasse in their less familiar crawling posture, the same infants grossly overestimated their abilities and fell into impossibly wide gaps on trial after trial. Infants showed a higher proportion of errors at each risky gap size in the less experienced crawling posture compared with the more experienced sitting posture. For example, in the crawling posture, 33% of infants in each of the two experiments fell over the brink of the largest 90-cm gap on every trial, but none of the infants attempted the 90-cm gap in the sitting posture.
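The attempt-rate measure defined above is simple to compute. The sketch below restates it directly; the trial counts are hypothetical and are included only to show how the measure separates risky from safe increments.

```python
def attempt_rate(successes, failures, refusals):
    """Attempt rate as defined above: attempts (successful + failed) divided by
    all trials at that increment (successful + failed + refused)."""
    attempts = successes + failures
    total = attempts + refusals
    return attempts / total if total else float("nan")

# Hypothetical trial counts at one risky gap increment and one safe increment.
print(attempt_rate(successes=0, failures=1, refusals=5))   # risky gap -> about 0.17
print(attempt_rate(successes=6, failures=0, refusals=0))   # safe gap  -> 1.0
```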


It is unlikely that the difference between sitting and crawling reflects different response criteria. Conditions were blocked and counterbalanced, and there were no differences between infants who experienced repeated rescues by the experimenter when tested first in the crawling posture and infants who were tested first in the sitting posture. It is also unlikely that the difference between sitting and crawling reflected differences in visual or mechanical access to the gap. In both postures, we confirmed from videotape that infants made visual contact with the gap at the start of each trial; indeed, the apparatus and lure were arranged so that the gap was directly in infants' line of sight. In addition, infants spontaneously explored the gap in both sitting and crawling positions by leaning forward while extending and then retracting their arm into the gap. Rather, the disparity between attempt rates in the sitting and crawling postures indicates that learning does not transfer across developmental transitions in perception-action systems. Apparently, what infants know about keeping balance while sitting does not help them to gauge affordances in the same situation when learning to crawl.

The new methodological approach of testing infants with multiple trials at a range of safe and risky increments also revealed something that could not be assessed with one trial on each side of the visual cliff: the adaptiveness of infants' motor decisions relative to their individual affordance thresholds and over variations in the perceptual information. The proportion of infants (33%) who attempted to span the 90-cm gap width in a crawling posture replicated the results from Bertenthal and colleagues' (1984) study of infants on the visual cliff, where 35% of infants in the six-week-experience group crawled over the deep side. However, trials at smaller but equally risky gap increments (the depth of the crevasse was always the same) showed that six weeks of crawling experience was not sufficient to ensure even a 33% error rate when faced with a deep crevasse. On smaller, risky gap increments, attempt rates increased sharply in the crawling posture, indicating that infants' motor decisions were more adaptive farther out on the tails of the affordance function and less adaptive on risky gap increments closer to the threshold. The same pattern held for the experienced sitting posture, but the attempt ratio was lower at each risky gap width.

Similarly, other researchers have found specificity of learning between sitting and crawling postures when testing infants with barriers in their path. In a longitudinal study, infants reached around a barrier to retrieve a target object while tested in a sitting posture several weeks before they demonstrated the ability to retrieve the object while tested in a crawling posture (Lockman, 1984). In a follow-up cross-sectional study, 10- and 12-month-olds were more successful at retrieving objects from behind a barrier when they were sitting than when they had to execute the detour by crawling (Lockman & Adams, 2001).


In a variant of the gaps paradigm, infants also showed posture-specific learning across the developmental transition between two upright postures, cruising and walking (Adolph, 2005; Leo, Chiu, & Adolph, 2000). Infants were tested at 11 months of age, when they averaged eight weeks of experience cruising sideways along furniture but had not yet begun to walk independently. Using the psychophysical procedure to estimate affordance thresholds and assess motor decisions, infants were tested in two conditions. A "handrail" condition was relevant to infants' experience maintaining balance with the arms in cruising: There was an adjustable gap (0 to 90 cm) in the handrail they held for support and a continuous floor beneath their feet (Figure 9.7A). A "floor" condition was relevant for maintaining balance with the legs in walking: The floor had an adjustable gap (0–90 cm) and the handrail was continuous (Figure 9.7B). Because the handrail may have blocked infants' view of the floor near their feet, an experimenter called infants' attention to the gap in both conditions at the start of each trial to ensure that they saw the size of the obstacle.

In the handrail condition, infants correctly gauged how far they could stretch their arms to cruise over the gap in the handrail. Attempt rates were scaled to the actual affordance function: Attempt rates were high on safe gaps smaller than their thresholds and decreased sharply on risky gaps larger than their thresholds. However, in the floor condition, the same infants showed grossly inaccurate motor decisions. They attempted safe and risky gaps alike, despite viewing the gap in the floor at the start of each trial. Every infant showed higher error rates on risky gap increments in the floor condition compared with the handrail condition, and 41% of infants attempted to cruise into the 90-cm gap in the floor. Newly walking 11-month-olds erred in both conditions, as if they did not know how many steps they could manage between gaps in the handrail and did not realize that they needed a solid floor to support their bodies. In summary, although cruising and walking share a common upright posture, practice cruising does not appear to teach infants how to detect affordances for walking.

Case Study 2: Navigating Slopes

A series of longitudinal and cross-sectional experiments with infants on slopes provides further evidence that infants must learn to detect affordances for balance and locomotion, and that learning occurs in the context of dramatic developmental change (for reviews, see Adolph, 2002, 2005; Adolph & Berger, 2006; Adolph & Eppler, 2002).


Figure 9.7 Cruising infants with (A) an adjustable gap in the handrail used for manual support and a continuous floor beneath the feet, and (B) an adjustable gap in the floor and a continuous handrail to hold for support. Caregivers (not shown) encouraged infants from the end of the landing platform. An experimenter (shown) followed alongside infants to ensure their safety. Reprinted from A. Woodward & A. Needham (Eds.), Learning and the infant mind, by K. E. Adolph & A. S. Joh, "Multiple learning mechanisms in the development of action," in press, with permission from Oxford University Press.

Like cliffs and gaps, navigating over slopes is relatively novel for infants. By parents' reports, most infants have never crawled or walked over steep slopes or used a playground slide on their own.


Compared with cliffs and gaps, slopes have a continuous visible and tangible surface between the brink and the edge of the obstacle rather than an abrupt discontinuity. Thus, sloping ground provides a unique test case for assessing transfer of learning from everyday experience to a novel environmental challenge.

We devised an adjustable sloping walkway by connecting a middle sloping ramp to two flat platforms with piano hinges (Figure 9.8). One platform was stationary. Raising and lowering the second platform allowed the degree of slant to be adjusted from 0° to 90° in 2° increments. The psychophysical method was used to determine an affordance threshold for each infant for crawling or walking. Safe and risky slopes were presented relative to the threshold to assess infants' motor decisions. Caregivers stood at the top or bottom of the walkway and encouraged their infants to come up or down. An experimenter followed alongside infants to ensure their safety.
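The chapter refers to a modified psychophysical staircase for homing in on each infant's threshold, but does not spell out that protocol here. The sketch below therefore shows only a generic one-up/one-down staircase over slope increments, with hypothetical outcomes, to convey the general idea of adjusting difficulty trial by trial; it is not the procedure actually used.

```python
def next_increment(current, outcome, step=2, lo=0, hi=90):
    """Simple one-up/one-down rule: make the task harder after a success and
    easier after a failure or refusal, staying within the walkway's range."""
    proposed = current + step if outcome == "success" else current - step
    return max(lo, min(hi, proposed))

# Example sequence of outcomes on a sloping walkway (degrees are hypothetical).
slope, history = 10, []
for outcome in ["success", "success", "success", "fail", "success", "fail"]:
    history.append(slope)
    slope = next_increment(slope, outcome)
print(history)   # [10, 12, 14, 16, 14, 16] -- hovers near the threshold region
```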

Figure 9.8 Infant descending an adjustable slope. Caregivers (not shown) encouraged infants from the end of t he landing platform. An experimenter (shown) followed a longside i nfants to e nsure t heir s afety. Re printed f rom t he Monographs of th e S ociety fo r R esearch in C hild D evelopment, 62 (3, S erial N o. 251), by K . E . Adolph, “Learning in the development of i nfant locomotion,” 1997, with permission from Blackwell Publishers.

300

Embodiment, Ego-Space, and Action

as t hey t umbled downward he adfirst or on t heir back s a s i nfants’ feet slid out from under themselves. In contrast, falling while crawling or walking uphill is less aversive. When infants fell, they could lean forward and safely catch themselves with their hands.
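The chapter repeatedly refers to a psychophysical procedure for locating each infant's affordance threshold without spelling out the trial-placement rules. Purely as a hedged illustration, the sketch below implements a simple adaptive rule that steepens the slope after a success and shallows it after a failure, converging near the steepest increment the infant can manage; the starting slant, step size, and stopping rule are assumptions, not the authors' protocol.

```python
# Illustrative adaptive staircase (assumed rules, not the authors' protocol).
# try_slope is a callback that runs one trial at the given slant (degrees)
# and returns True if the infant crawled or walked down successfully.

def estimate_affordance_threshold(try_slope, start=10, step=2, max_trials=30):
    """Adjust slant trial by trial and return the steepest slant passed."""
    slant = start
    steepest_success = None
    for _ in range(max_trials):
        if try_slope(slant):
            steepest_success = max(steepest_success or 0, slant)
            slant = min(slant + step, 90)   # success: make it harder
        else:
            slant = max(slant - step, 0)    # failure: make it easier
    return steepest_success

# Toy stand-in for an infant whose true threshold is 24 degrees.
def toy_infant(slant, true_threshold=24):
    return slant <= true_threshold

print(estimate_affordance_threshold(toy_infant))   # converges near 24
```

In the studies described here, safe and risky increments for the motor-decision trials were then chosen relative to the estimated threshold.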

Crawling and Walking Down Slopes

In a longitudinal study (Adolph, 1997), infants were observed going up and down slopes every three weeks, from their first week of crawling until several months after they began walking. Infants' performance on uphill slopes reflected their indifference to the penalty for errors. Attempt rates were uniformly high on risky slopes at each week of crawling and walking. Infants launched themselves at impossibly steep slopes and made heroic efforts to reach the top platform, but failed repeatedly over dozens of trials.

Infants' performance on downhill slopes was a very different story. They adopted more conservative response criteria, and it was possible to track weekly changes in their motor decisions. In their first week of crawling, infants plunged headfirst down impossibly risky slopes and required rescue by the experimenter. They fell down 75% of risky slopes. With each week of locomotor experience, motor decisions gradually zeroed in on infants' actual ability until attempts to descend closely matched the probability of success. After 22 weeks of crawling, attempt rates had decreased to 0.10 on risky slopes. Clearly, infants were not simply learning that the experimenter would catch them: they became more reticent to attempt risky slopes, not more reckless. The steady decrease in errors points to impressive transfer because infants' affordance thresholds, crawling proficiency on flat ground, and body dimensions changed from week to week. A risky slope one week was perfectly safe the next week when crawling skill had improved. Safe slopes for belly crawling were impossibly risky a week or so later when infants began crawling on hands and knees.

Despite months of testing and hundreds of trials descending slopes, infants showed no evidence of transfer from crawling to walking. Errors were just as high in infants' first week of walking as in their first week of crawling, and learning was no faster the second time around.


Learning was so posture-specific that new walkers showed dissociations in their motor decisions between consecutive trials, tested alternately in their old, familiar crawling posture and in their unfamiliar upright posture. When placed prone at the top of a 36° slope, new walkers behaved like experienced crawlers and avoided descent. Moments later, when placed upright at the top of the same slope, they walked straight over the brink and fell. When placed prone again, they avoided, and so on. As in crawling, errors were always highest on risky slopes closest to infants' affordance thresholds and lowest on slopes farthest out on the tail of the affordance function. Moreover, infants in a control group who were tested only in their first and tenth weeks of crawling and in their first week of walking were indistinguishable from infants who had experienced hundreds of trials on slopes. Apparently, learning transferred from everyday experience on flat ground to detecting affordances of slopes. Slope experience was not required.

Cross-sectional data replicated the findings from the longitudinal observations. Eight- to 9-month-old infants with 6.5 weeks of crawling experience, on average, attempted to crawl down slopes far beyond their abilities (Adolph et al., 1993). Eleven-month-old infants, averaging 13 weeks of crawling experience, matched their attempts to crawl to the probability of success (Mondschein, Adolph, & Tamis-LeMonda, 2000). Likewise, 12-month-old crawlers showed highly adaptive motor decisions for descending slopes. In contrast, 12-month-olds who had just begun walking attempted impossibly steep slopes on repeated trials. Their attempt rates were 0.73 on 50° slopes (Adolph, Joh, Ishak, Lobo, & Berger, 2005). By 14 months of age, when infants averaged 11 weeks of walking experience, attempts to walk were geared to infants' actual abilities (Adolph, 1995). By 18 months of age, infants were highly experienced walkers, and their motor decisions were even more finely attuned to the affordances for walking down slopes (Adolph et al., 2005). At every age, infants showed more adaptive motor decisions for going down slopes than for going up, indicating that they adopted different response criteria for ascent and descent. Moreover, at every age and for both uphill and downhill, infants showed lower attempt ratios on slopes farthest out on the tail of the affordance function and higher attempt ratios on slopes closest to their affordance threshold. These data suggest that perceptual discrimination of affordances is most difficult in the region bordering the transition from possible to impossible actions. Across ages, perceptual learning reflects a process of gradually gearing motor decisions to the affordance threshold.


Walking with Weights

By the time infants are tested in the laboratory, they may have had several days to adjust to naturally occurring changes in their body dimensions and motor proficiency. However, temporary changes in bodily propensities, such as changes in the location of the center of mass while carrying a toy, require instant recalibration to a new affordance threshold. The addition of the toy alters infants' functional body dimensions.

Experimental manipulation of infants' body dimensions showed that experienced 14-month-old walking infants (averaging 9 weeks of walking experience) can update their assessment of their own abilities on the fly (Adolph & Avolio, 2000). Infants wore a tightly fitted Velcro vest with removable shoulder packs filled with either lead weights (25% of their body weight) or feather-weight polyfil (Figure 9.9). The lead-weight load made infants' bodies more top-heavy and immaturely proportioned, a hindrance especially while walking down slopes. While carrying the lead-weight loads, infants' affordance thresholds were several degrees shallower than while carrying the feather-weight loads.

Figure 9.9 Infant wearing fitted Velcro vest with removable shoulder packs. Lead-weight or feather-weight loads could be fitted into the shoulder packs at various percentages of infants' body weight. Reprinted from J. Lockman, J. Reiser, & C. A. Nelson (Eds.), Action as an organizer of perception and cognition during learning and development: Symposium on Child Development (Vol. 33), by K. E. Adolph, "Learning to learn in the development of action," pp. 91–122, 2005, with permission from Lawrence Erlbaum Associates.

The load condition changed randomly from trial to trial, meaning that infants would have to detect the different affordance thresholds for the lead- and feather-weight loads at the start of each trial. Indeed, infants recalibrated their judgments of risky slopes to their new, more precarious balance constraints. They correctly treated the same degrees of slope as risky while wearing the lead-weight shoulder packs but as safe while wearing the feather-weight shoulder packs. In both conditions, attempt rates were lowest for slopes farthest out on the tail of the affordance function and highest for slopes closest to the threshold.

Recalibration to the weights attests to impressive transfer of learning on several counts. The slopes were relatively novel: few of the infants had walked over steep slopes prior to participation. Carrying a load attached to the body was novel: infants did not carry backpacks or purses. And the loads decreased infants' walking proficiency and required adjustments in gait patterns just to stay upright and move forward. In fact, walking with smaller loads (15% of body weight) over flat ground causes 14-month-olds to take smaller, slower steps, keep both feet on the ground for longer periods of time, and hold one foot in the air for shorter periods of time (Garciaguirre et al., 2007).

Case Study 3: Spanning Bridges

One of the consequences of developmental changes in infants' motor skills is that aspects of the environment can take on new functional significance. A handrail, for example, is a necessary support for balance and locomotion for cruising infants. Without the handrail (or edge of a piece of furniture), cruisers cannot move in an upright position. During the cruising period of development, the handrail has the same functional status as the floor. After infants begin walking, however, a handrail is unnecessary under normal conditions. Children and adults typically use handrails only to augment their natural abilities when they are tired or when the conditions are treacherous. The handrail becomes supplemental and functions as a tool.

Using a Handrail

At 16 months of age, most infants are relatively experienced (18 weeks, on average) and proficient walkers.


On flat ground, they walk unsupported, without holding a handrail or a caregiver's hand to augment their balance. Only a few infants have experience using a handrail to climb and descend stairs (Berger, Theuring, & Adolph, 2007). Thus, a novel situation was devised where the use of a handrail would be warranted (Berger & Adolph, 2003). Wooden bridges of various widths (12 cm to 72 cm) spanned a 76-cm-deep precipice over a 74-cm-long gap. The bottom of the precipice was always clearly visible on either side of the bridge, and the length of the bridge required infants to take several steps at a minimum in order to cross. On some trials, a wooden handrail was available for infants to hold on to, and on other trials, the handrail was absent (Figure 9.10). Without the handrail, the narrowest bridges were impossible: Bridges were narrower than the infants' widest dimensions and their dynamic base of support (infants walk with their legs splayed apart and their bodies oscillate from side to side). Walking sideways was not an option because infants cannot keep balance while edging along sideways unsupported. Parents stood at the far side of the precipice offering toys as a lure. An experimenter followed alongside infants to ensure their safety.

Figure 9.10 Infant crossing an adjustable bridge. Caregivers (not shown) encouraged infants from the end of the landing platform. An experimenter (shown) followed alongside infants to ensure their safety. Reprinted from Developmental Psychology, 39(3), by S. E. Berger & K. E. Adolph, "Infants use handrails as tools in a locomotor task," pp. 594–605, 2003, with permission from the American Psychological Association.

As expected, infants perceived different possibilities for walking based on bridge width. More important, infants' perception of affordances was also related to the presence of the handrail. Infants ran straight over the widest bridges, ignoring the handrail when it was available. However, on narrow 12- to 24-cm bridges, infants attempted to walk when the handrail was available and avoided walking when the handrail was removed. To use the handrails, infants turned their bodies sideways and modified their walking gait by inching along with their trailing leg following in the footstep of their leading leg. Falling was rare in both handrail conditions: Only 6% of infants' attempts ended in a fall. Apparently, infants recognized that the handrail offered additional possibilities for upright locomotion by augmenting their balance on narrow bridges.

Wobbly and Wooden Handrails

A follow-up study showed that experienced walking infants take the material substance of the handrail into account (Berger, Adolph, & Lobo, 2005). A pair of wobbly handrails was constructed from rubber and foam. When infants pressed their weight onto the wobbly handrails, the handrails deformed and sagged downward (Figure 9.11).

Figure 9.11 Infant testing a wobbly handrail used for support. Caregivers (not shown) encouraged infants from the end of the landing platform. An experimenter (shown) followed alongside infants to ensure their safety. Reprinted from Child Development, 76, by S. E. Berger, K. E. Adolph, and S. A. Lobo, "Out of the toolbox: Toddlers differentiate wobbly and wooden handrails," pp. 1294–1307, 2005, with permission from Blackwell Publishers.

Parents standing on the far side of the landing platform encouraged their 16-month-old walking infants to cross 10- to 40-cm-wide bridges. A handrail was available on every trial. On some trials, the handrail was built of sturdy wood as in the earlier experiment, and on other trials, the handrail was wobbly rubber or foam. All three handrails looked solid. Thus, to make the distinction between sturdy and wobbly handrails, infants needed to test the handrail before stepping out onto the bridge to determine whether it would give beneath their weight. On the widest bridges, infants walked straight across regardless of handrail type. But on the 10- to 20-cm bridges, infants were more likely to walk when the handrail was built of sturdy wood than when it was wobbly. Thus, infants did not perceive the handrail as a support for augmenting balance merely because it was a surface stretching from here to there. Rather, the rigidity of the surface was an important source of information for affordances. Infants generated haptic information about the material substance of the handrail by pushing, tapping, squeezing, rubbing, and mouthing the surface before leaving the starting platform.

Learning in Development

What do these case studies of infant locomotion tell us about perceptually guided action? As in research on affordances with adult participants (Kinsella-Shaw, Shaw, & Turvey, 1992; Mark, Jiang, King, & Paasche, 1999; Warren, 1984; Warren & Whang, 1987), the data from research with infants indicate that experienced observers can readily and accurately detect possibilities for motor action. Perceiving whether it is possible to walk through an aperture, step over a gap, climb up a stair, walk over a slope, and so on requires that perceptual information about the environment (aperture width, gap size, stair height, slant) be related to perceptual information about the self (current body dimensions, level of balance control, and so on), and some sources of information may simultaneously specify relevant aspects of both environment and self (e.g., optic flow). Exploratory movements close the perception-action loop by providing information about what to do next based on information for affordances in the current moment (Joh et al., in press; Mark, Baillet, Craver, Douglas, & Fox, 1990). Visual information about upcoming obstacles can elicit more focused information gathering about possibilities for locomotion through touch, postural sway, and testing various strategies (Adolph, Eppler, Marin, Weise, & Clearfield, 2000).
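To make the idea of body-scaled information concrete, the sketch below expresses an aperture's passability as a ratio of aperture width to shoulder width, in the spirit of Warren and Whang (1987). The critical ratio of 1.3 and the specific widths are illustrative assumptions, not values reported in this chapter.

```python
# Illustrative body-scaled affordance check (assumed numbers, not from the text):
# an aperture affords frontal walking when its width exceeds the walker's
# shoulder width by some critical ratio; otherwise the walker must rotate.

CRITICAL_RATIO = 1.3  # assumed critical aperture/shoulder ratio

def affords_frontal_passage(aperture_width_cm, shoulder_width_cm,
                            critical_ratio=CRITICAL_RATIO):
    """Return True if the aperture is passable without turning the shoulders."""
    return aperture_width_cm / shoulder_width_cm >= critical_ratio

# The same environment affords different actions for different bodies:
print(affords_frontal_passage(55, 38))  # True  for a narrow-shouldered walker
print(affords_frontal_passage(55, 46))  # False for a broader-shouldered walker
```

The point of body scaling is visible in the output: the same opening affords frontal passage for one body but not another, which is why affordance thresholds are measured relative to each observer rather than in fixed environmental units.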


Studies with infants tell us something new. The evidence indicates that the ability to detect affordances requires learning (e.g., Adolph, 2005; Campos et al., 2000; E. J. Gibson & Pick, 2000). Exploiting the full range of possibilities for action, and perhaps more important, curtailing impossible actions in risky situations, does not come automatically with the acquisition of balance and locomotion. Affordances (or the lack of them) that seem blatantly obvious to us as adults ("Don't fall over the cliff!") are not obvious to newly mobile infants. Instead, perceptually guided action improves as infants gain experience with their newly acquired perception-action systems. Of course, infants become older as they acquire more experience, so that age-related and experience-related changes are normally confounded. However, the independent effects of age and experience can be assessed statistically and experimentally (in longitudinal studies with experience held constant, and in cross-sectional studies with age held constant), and experience is the stronger predictor (for reviews, see Adolph, 1997; Adolph & Berger, 2006).

A related issue concerns the specificity of learning. Optimally, detecting affordances should be maximally flexible so that learning transfers to novel variations in both the body and environment. Flexibility is critical at every stage of life because local conditions are continually in flux. However, the infancy period provides an especially useful window into learning and transfer, given the rapid, large-scale developmental changes in infants' body dimensions, motor proficiency, and environments. Moreover, the acquisition of new postural control systems over the first two years of life allows researchers to ask about developmental constraints on learning, that is, whether learning transfers from one perception-action system to another.

The evidence points to broad transfer of learning within perception-action systems and narrow specificity of learning between perception-action systems. Experienced infants detected novel affordances for sitting, crawling, cruising, and walking. Their motor decisions reflected the changing relationship between body and environment. They adapted to variations in environmental constraints (gap width, slope, bridge width, and so on), and to naturally occurring and experimentally induced changes in their body dimensions and skills (e.g., lead-weight shoulder packs). In contrast, novice infants showed poorly adapted motor decisions. They attempted impossibly steep slopes and wide gaps and deep cliffs, and appeared to ignore information about the limits of their physical abilities.


Novices showed no evidence that learning transfers from an earlier developing perception-action system to a later developing one. Over weeks of experience, infants' motor decisions gradually homed in on the current constraints on action so that infants' attempts matched the actual affordances for action. Experience with particular types of challenges (such as practice descending slopes) was not required. What infants needed was 10 to 20 weeks of everyday experience with balance and locomotion.

Learning Sets

A candidate learning mechanism that could support flexible transfer within perception-action systems and specificity across perception-action systems is a learning set (Adolph, 2002, 2005; Adolph & Eppler, 2002). The term was coined by Harlow (1949, 1959; Harlow & Kuenne, 1949) to refer to a set of exploratory procedures and strategies that provide the means for generating solutions to novel problems of a particular type. Acquiring the ability to solve novel problems, or "learning to learn," as Harlow put it, is more effective for a broader range of situations than learning particular solutions for familiar problems (Stevenson, 1972). In fact, in a world of continually varying and novel affordances, learning simple facts, cue-consequence associations, and stimulus generalizations would be maladaptive. Yesterday's facts and consequences may no longer hold. Features of the environment may be truly novel. With a learning set, the scope of transfer is limited only by the boundaries of the particular problem space.

Learning sets have three important characteristics. First, learning sets support broad transfer of learning to novel problems. For example, in Harlow's (1949, 1959) model system, adult monkeys learned to solve discrimination problems (e.g., find which of two objects hides a raisin) or oddity problems (which of three objects hides a raisin). Object features varied from one trial block to the next. When the monkeys had acquired a learning set, they demonstrated perfect performance with pairs or trios of completely new shapes. They could figure out in one trial which object features were relevant for that trial block. For discrimination problems, they used a win-stay/lose-shift rule: If the first object they explored covered the raisin, they tracked that object over the trial block; if not, they tracked the other one.


For oddity problems, they used an odd-man-out strategy: the target object was the one that differed from the other two. Although the monkeys' learning sets involved only simple strategies for operating within a tiny problem space, the learning sets represented something far more powerful than stimulus generalization. Monkeys were solving truly novel instances of a particular type of problem. They had acquired a means for coping with novelty.

The second characteristic of learning sets is that transfer is limited to the size of the problem space. For example, monkeys who had acquired a learning set for discrimination problems could solve new instances of discrimination problems on the first presentation of a new pair of objects. However, experts at discrimination problems were no better than novices when challenged with oddity problems. Similarly, experts on oddity problems behaved like novices on discrimination problems. Discrimination and oddity problems each occupied a different problem space.

The third characteristic of learning sets is that acquisition is extremely slow and difficult. It entails recognition of the problem space (e.g., the pairs of objects in a discrimination problem), identification of the relevant parameters for operating within it (object color or shape rather than spatial position), acquisition of the appropriate exploratory procedures for generating the requisite information (visual and haptic object search), and abstraction of general strategies to solve the problem at hand (the win-stay/lose-shift strategy to track the food). Harlow's monkeys required hundreds of trial blocks with different instances of the particular problem type presented over multiple sessions, thousands of trials in total, to acquire a learning set for the small and circumscribed arena of discrimination problems or oddity problems. At first, monkeys searched haphazardly or searched under objects in a particular position. They had to unlearn overly narrow strategies for searching with the particular pair of objects in the current trial block (e.g., green cylinders hide raisins) before abstracting general strategies that would enable them to find the raisin with novel objects in the next trial block.

How might the learning set framework be applied to perceptual guidance of action? On the learning set account, each new postural control system that arises in the course of motor development operates as a distinct problem space. Compared with the simple discrimination and oddity problems in Harlow's model system, the problem space for postural control is extremely broad. The range of problems is enormous.
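The win-stay/lose-shift rule described above is simple enough to state as code. The sketch below is only an illustration of the strategy; the trial representation and function name are assumed for the example.

```python
# Illustrative win-stay/lose-shift strategy for a two-object discrimination
# block: keep choosing the object that hid the raisin, switch after a miss.

def win_stay_lose_shift(first_choice, outcomes):
    """Return the sequence of choices over a trial block.

    first_choice: 0 or 1, the object searched on trial 1.
    outcomes: list of booleans, True if the chosen object hid the raisin.
    """
    choices = [first_choice]
    for rewarded in outcomes[:-1]:          # outcome of trial t fixes choice t+1
        last = choices[-1]
        choices.append(last if rewarded else 1 - last)
    return choices

# Suppose object 1 hides the raisin throughout the block and the monkey
# happens to explore object 0 first: one error, then correct thereafter.
print(win_stay_lose_shift(first_choice=0, outcomes=[False, True, True, True]))
# -> [0, 1, 1, 1]
```

After at most one lose-shift, the rule locks onto the rewarded object for the rest of the block, which is why a monkey with a learning set can solve a brand-new object pair within a single trial.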


Every movement on every surface constitutes a different problem for postural control. Every change in infants' body growth and skill level creates different biomechanical constraints on balance and locomotion. Whereas Harlow's monkeys required thousands of trials in a daily regimen to acquire learning sets for simple discrimination and oddity problems, human infants might require hundreds of thousands or millions of "trials" over many weeks or months, epochs of experience, to acquire a learning set for coping with balance and locomotion.

Flexibility of learning arises as infants begin to consolidate a learning set. Experience using a newly developed perception-action system provides infants with the necessary repertoire of exploratory procedures and strategies for solving problems on-line. Information-generating behaviors provide the basis for identifying the critical parameters for the particular problem space and for calibrating the settings of those parameters under changing conditions. Exploratory movements generate the requisite information about the current status of the body relative to the environment and vice versa: visual exploration as infants notice the obstacle, swaying movements as they make their approach, haptic exploration as they probe the obstacle with a limb, and means-ends exploration as they test various alternatives (Adolph & Eppler, 2002; Adolph et al., 2000; Adolph & Joh, in press). Thus, within the boundaries of each problem space, infants can cope with novel and variable changes in affordances for action.

Specificity of learning emerges because different perception-action systems are defined by different sets of critical parameters. Sitting, crawling, cruising, and walking, for example, involve different parameters for maintaining balance: different regions of permissible postural sway, muscle groups for balance and propulsion, vantage points for viewing the ground, sources of perceptual information about the body's movements, correlations between visual and vestibular input, and so on. Because each problem space has unique parameters, learning does not transfer between perception-action systems. What infants learn about balance and locomotion in a crawling posture, for instance, does not help them to solve the problem of balance and locomotion in a newly acquired walking posture.


Everyday Experience A q uestion t hat ari ses a bout l earning m echanisms t hat r equire particular t ypes of practice is whether t he actual opportunities for learning a re co nsistent w ith t he p urported r egimen: A re l earning sets p lausible? I n t his c ase, i nfants w ould n eed i mmense a mounts of practice with varied instances of each problem type to acquire a learning set for each perception-action system. The available laboratory d ata support such a p roposition. For example, a fter six weeks of everyday experience with balance and locomotion, infants avoid a drop-off on the deep side of the visual cliff and a 90-cm-gap; after 15 w eeks o f ex perience, t hey ma tch t heir m otor dec isions t o t he physical d imensions o f t he d rop-off ( Adolph, 2 000; B ertenthal e t al., 1984). Infants require 10 weeks of locomotor experience before errors dec rease to 5 0% on r isky slopes a nd 2 0 weeks before er rors decrease to 10% (Adolph, 1997). More convincing evidence that everyday experience could support infants’ acquisition of learning sets is provided by naturalistic d ata ob tained f rom d aily ch ecklist d iaries, “ step-counters” i n infants’ shoes, and video recordings of infants crawling and walking in everyday environments (Adolph, 2002; Adolph, Robinson, Young, & Gill-Alvarez, in press; Chan, Biancaniello, Adolph, & Marin, 2000; Chan, L u, Ma rin, & A dolph, 1 999; G arciaguirre & A dolph, 2 006; Robinson, A dolph, & Y oung, 2 004). I n fac t, i f r esearchers co uld design a p ractice r egimen m ost co nducive t o acq uiring a l earning set, i t w ould r esemble i nfants’ e veryday ex periences w ith ba lance and locomotion. Infants’ locomotor experience is immense—on the order of practice r egimens u sed b y O lympic a thletes a nd co ncert p ianists t o achieve expert performance, rather than the experimental training sessions administered to monkeys in Harlow’s original studies. In 15 minutes of f ree play, for ex ample, t he average 14-month-old w alking i nfant t akes 550 steps , t raveling ha lf t he d istance up t he E iffel Tower (Garciaguirre & A dolph, 2 006). I n t he co urse o f a d ay, t he average toddler takes more than 13,000 steps, traveling the length of 39 American football fields. Crawling experience is less intense, but equally remarkable. At t he end of t he d ay, t he average crawler ha s taken more than 3,000 steps (Adolph, 2002). Practice w ith b alance an d l ocomotion i s di stributed o ver tim e, rather than massed, so that infants have time to recover from fatigue,


Practice with balance and locomotion is distributed over time, rather than massed, so that infants have time to recover from fatigue, renew their motivation to continue, and consolidate what they have learned. Infants must work to keep their bodies in balance during all of their waking hours. However, they are actually on the floor engaged in stance or locomotion for only 5 to 6 hours per day, and most of that time (approximately 80%), they are in stance (Adolph, 2002; Garciaguirre & Adolph, 2006). Thus, locomotion occurs in short bouts of activity, interspersed with longer rest periods. Practice is also distributed across days. On some days, infants sit, crawl, cruise, or walk, but on other days, they do not (Adolph, Robinson et al., 2007; Robinson et al., in press). Distributed practice across days is most noticeable when infants are first acquiring a new perception-action system.

Falling is commonplace. Toddlers fall, on average, 15 times per hour (Garciaguirre & Adolph, 2006). Most falls are not serious enough to make infants cry or to warrant parents' attention. Surprisingly, infants fall most frequently due to unexpected shifts in their center of mass; they lift their arm or turn their head and tip over. Although infants trip and slip when the elevation and friction of the ground surface is variable, most indoor floor coverings are uniform in elevation and provide sufficient traction. Thus, like Harlow's monkeys who learned to ignore local contingencies between object features in a particular trial block, infants may learn to ignore the color and visual texture of floor coverings and focus on changes in elevation and layout that specify varying affordances.

Finally, infants' practice is variable, rather than blocked, so that infants experience balance and locomotion in various physical and social contexts, typically changing after a few minutes or so. Infants travel through all of the open rooms in their homes, averaging 6 to 12 different ground surfaces per day (Adolph, 2002; Chan, Biancaniello et al., 2000; Chan, Lu et al., 1999). Of course, the size of infants' homes, their layout, the furniture and floor coverings, and so on, vary tremendously across infants and geographic regions. Infants reared in Manhattan, for example, are unlikely to have regular access to stairs, and unlikely to crawl and walk over grass, sand, and concrete (Berger et al., 2007). Infants reared in the suburbs have regular exposure to all of those surfaces. In sum, infants' everyday locomotor experiences resemble a type of practice regimen that would be highly conducive to acquiring a learning set: immense amounts of variable and distributed practice (Gentile, 2000; Schmidt & Lee, 1999).


After Infancy?

Walking is not the endpoint in locomotor development. After infancy, children and adults continue to master new locomotor skills. Unfortunately, developmental research on perceptually guided locomotion is largely limited to infancy, and research with adults has focused on skills acquired during infancy that involve sitting and upright locomotion (e.g., Mark et al., 1990; Warren, 1984; Warren & Whang, 1987). Thus, the developmental story of perceiving affordances after the infancy period is largely speculative.

Most forms of locomotion acquired after infancy are likely to be outgrowths of the perception-action systems acquired during infancy. Like toddlers' use of a handrail for crossing bridges, specialized skills such as ice skating, skiing, surfboarding, and dancing on pointe are likely to represent enlargement of the problem space for upright locomotion. Like standing and walking, the key pivots for maintaining balance are at the ankles and hips. Driving and kayaking may be enlargements of the problem space for sitting; the key pivot is at the hips. Rock climbing may be an outgrowth of crawling because the body is supported over all four limbs.

Some specialized skills, however, may constitute new perception-action systems, as different from the skills acquired during infancy as crawling is from walking. Skills such as brachiating hand-over-hand along monkey bars, bicycling, and swimming require special environments (water), opportunities (access to water), devices (bicycles, monkey bars), and social supports (supportive parents and peer models) for acquisition and performance. Like the postural milestones in infant development, bicycling, swimming, and brachiating involve new critical parameters for controlling balance and locomotion: new regions of permissible postural sway, key pivots about which the body rotates, sources of perceptual information, vantage points for viewing the support surface, and so on.

As with infants' postural milestones, flexibility is imperative for perception-action systems acquired later in life. Children and adults must adapt actions to the changing affordances offered by variations in the environment and in their own bodies and skills; motor decisions should take penalties for error into account. Thus, specialized skills may require acquisition of new learning sets for each new problem space.


Ten- to 12-year-old children, for example, showed evidence of flexible motor decisions for bicycling through traffic intersections in an immersive, interactive, virtual environment (Plumert, Kearney, & Cremer, 2004). While seated on stationary bikes, children varied the likelihood of crossing the simulated road and adjusted their rate of pedaling in accordance with the time gaps presented by faster and slower moving "cars."

Despite important similarities, the perception-action systems acquired during infancy and those acquired later in life have two important differences. First, age-related changes in cognition, motivation, and social skills may affect motor decisions about affordances. Infants, for example, are not embarrassed to fall, but older children and adults may be. Infants may not explicitly consider consequences, but older children and adults do. Infants cannot use explicit rules (e.g., "look both ways before crossing the street"), but older children and adults can. Moreover, children and adults have years of experience making motor decisions for the perception-action systems acquired during infancy. Possibly, perception-action systems acquired later in life can bootstrap from earlier acquired systems by incorporating heuristic strategies and general information-generating behaviors.

A second important difference is that infants' perception-action systems are acquired under intense practice regimens. Infants practice sitting, crawling, and walking, for example, during most of their waking hours. Practice is variable (hundreds of "trials" keeping balance on different support surfaces in different physical and social contexts) and distributed (short bouts of activity interspersed with rest periods), gearing infants for broad transfer of learning. In contrast, because the specialized skills acquired after infancy require special opportunities for performance, children accumulate smaller quantities of experience under more limited practice schedules.

Conclusions

I conclude where I began, with how infant locomotion speaks to the larger literature on perception-action coupling. Studies of infant locomotion raise several issues that any theory of perceptually guided action should address. First, motor decisions must be geared to affordances. For motor action to be adaptive, it must take the constraints and propensities of the body and the environment into account.


Second, the central problem in detecting and exploiting affordances for action is the means for coping with novelty and variability. Because the body and environment are always changing, behavioral flexibility is paramount. Third, human infants must learn to cope with novelty and variability. Infants develop new motor abilities before they use their abilities adaptively. With sufficient experience, infants' motor decisions are as accurate as those of adults. Fourth, situations that call for immense behavioral flexibility require a learning mechanism that supports extremely broad transfer. Learning sets are a likely candidate for such a mechanism. What infants learn when they acquire a learning set are exploratory procedures and strategies for on-line problem solving. As Harlow put it, they are learning to learn. Finally, learning is always nested in the context of developmental change. Changes in infants' bodies, skills, and environments are rapid and dramatic, but changing affordances are not limited to infancy, and learning surely does not cease after a few months of walking experience. Animals of every age continually learn new ways of detecting possibilities for action at the same time that those possibilities are changing.

References

Adolph, K. E. (1995). A psychophysical assessment of toddlers' ability to cope with slopes. Journal of Experimental Psychology: Human Perception and Performance, 21, 734–750.
Adolph, K. E. (1997). Learning in the development of infant locomotion. Monographs of the Society for Research in Child Development, 62(3, Serial No. 251).
Adolph, K. E. (2000). Specificity of learning: Why infants fall over a veritable cliff. Psychological Science, 11, 290–295.
Adolph, K. E. (2002). Learning to keep balance. In R. Kail (Ed.), Advances in child development and behavior (Vol. 30, pp. 1–30). Amsterdam: Elsevier Science.
Adolph, K. E. (2005). Learning to learn in the development of action. In J. Lockman & J. Reiser (Eds.), Action as an organizer of learning and development: The 32nd Minnesota Symposium on Child Development (pp. 91–122). Hillsdale, NJ: Erlbaum.
Adolph, K. E., & Avolio, A. M. (2000). Walking infants adapt locomotion to changing body dimensions. Journal of Experimental Psychology: Human Perception and Performance, 26, 1148–1166.


Adolph, K. E., & Berger, S. E. (2005). Physical and motor development. In M. H. Bornstein & M. E. Lamb (Eds.), Developmental science: An advanced textbook (5th ed., pp. 223–281). Mahwah, NJ: Erlbaum.
Adolph, K. E., & Berger, S. E. (2006). Motor development. In D. Kuhn & R. S. Siegler (Eds.), Handbook of child psychology: Vol. 2. Cognition, perception, and language (6th ed., pp. 161–213). New York: Wiley.
Adolph, K. E., & Eppler, M. A. (2002). Flexibility and specificity in infant motor skill acquisition. In J. W. Fagen & H. Hayne (Eds.), Progress in infancy research (Vol. 2, pp. 121–167). Mahwah, NJ: Erlbaum.
Adolph, K. E., Eppler, M. A., & Gibson, E. J. (1993). Development of perception of affordances. In C. K. Rovee-Collier & L. P. Lipsitt (Eds.), Advances in infancy research (Vol. 8, pp. 51–98). Norwood, NJ: Ablex.
Adolph, K. E., Eppler, M. A., Marin, L., Weise, I. B., & Clearfield, M. W. (2000). Exploration in the service of prospective control. Infant Behavior and Development, 23, 441–460.
Adolph, K. E., & Joh, A. S. (in press). Multiple learning mechanisms in the development of action. In A. Woodward & A. Needham (Eds.), Learning and the infant mind. New York: Oxford University Press.
Adolph, K. E., Joh, A. S., Ishak, S., Lobo, S. A., & Berger, S. E. (2005, October). Specificity of infants' knowledge for action. Paper presented at the Cognitive Development Society, San Diego, CA.
Adolph, K. E., Robinson, S. R., Young, J. W., & Gill-Alvarez, F. (in press). What is the shape of developmental change? Psychological Review.
Adolph, K. E., Vereijken, B., & Denny, M. A. (1998). Learning to crawl. Child Development, 69, 1299–1312.
Adolph, K. E., Vereijken, B., & Shrout, P. E. (2003). What changes in infant walking and why. Child Development, 74, 474–497.
Berger, S. E., & Adolph, K. E. (2003). Infants use handrails as tools in a locomotor task. Developmental Psychology, 39, 594–605.
Berger, S. E., Adolph, K. E., & Lobo, S. A. (2005). Out of the toolbox: Toddlers differentiate wobbly and wooden handrails. Child Development, 76, 1294–1307.
Berger, S. E., Theuring, C. F., & Adolph, K. E. (2007). How and when infants learn to climb stairs. Infant Behavior and Development, 30, 36–49.
Bernstein, N. (1996). On dexterity and its development. In M. L. Latash & M. T. Turvey (Eds.), Dexterity and its development (pp. 3–244). Mahwah, NJ: Erlbaum.
Bertenthal, B. I., Campos, J. J., & Barrett, K. C. (1984). Self-produced locomotion: An organizer of emotional, cognitive, and social development in infancy. In R. N. Emde & R. J. Harmon (Eds.), Continuities and discontinuities in development (pp. 175–210). New York: Plenum Press.


Bertenthal, B. I., & Clifton, R. K. (1998). Perception and action. In D. Kuhn & R. S. Siegler (Eds.), Handbook of child psychology: Vol. 2. Cognition, perception, and language (5th ed., pp. 51–102). New York: Wiley.
Bly, L. (1994). Motor skill acquisition in the first year. San Antonio, TX: Therapy Skill Builders.
Bril, B., & Breniere, Y. (1992). Postural requirements and progression velocity in young walkers. Journal of Motor Behavior, 24, 105–116.
Bril, B., & Ledebt, A. (1998). Head coordination as a means to assist sensory integration in learning to walk. Neuroscience and Biobehavioral Reviews, 22, 555–563.
Campos, J. J., Anderson, D. I., Barbu-Roth, M. A., Hubbard, E. M., Hertenstein, M. J., & Witherington, D. C. (2000). Travel broadens the mind. Infancy, 1, 149–219.
Campos, J. J., Hiatt, S., Ramsay, D., Henderson, C., & Svejda, M. (1978). The emergence of fear on the visual cliff. In M. Lewis & L. Rosenblum (Eds.), The development of affect (pp. 149–182). New York: Plenum.
Chan, M. Y., Biancaniello, R., Adolph, K. E., & Marin, L. (2000, July). Tracking infants' locomotor experience: The telephone diary. Poster presented to the International Conference on Infant Studies, Brighton, England.
Chan, M. Y., Lu, Y., Marin, L., & Adolph, K. E. (1999). A baby's day: Capturing crawling experience. In M. A. Grealy & J. A. Thompson (Eds.), Studies in perception and action (Vol. 5, pp. 245–249). Mahwah, NJ: Erlbaum.
Clark, A. (1997). Being there: Putting brain, body, and world together again. Cambridge, MA: MIT Press.
de Vries, J. I. P., Visser, G. H. A., & Prechtl, H. F. R. (1982). The emergence of fetal behaviour. I. Qualitative aspects. Early Human Development, 7, 301–322.
Eppler, M. A., Satterwhite, T., Wendt, J., & Bruce, K. (1997). Infants' responses to a visual cliff and other ground surfaces. In M. A. Schmuckler & J. M. Kennedy (Eds.), Studies in perception and action (Vol. 4, pp. 219–222). Mahwah, NJ: Erlbaum.
Franchak, J. M., & Adolph, K. E. (2007, May). Perceiving changing affordances for action: Pregnant women walking through doorways. Paper presented at the Meeting of the Vision Sciences Society, Sarasota, FL.
Frankenburg, W. K., & Dodds, J. B. (1967). The Denver Developmental Screening Test. Journal of Pediatrics, 71, 181–191.
Freedland, R. L., & Bertenthal, B. I. (1994). Developmental changes in interlimb coordination: Transition to hands-and-knees crawling. Psychological Science, 5, 26–32.


Garciaguirre, J. S., & Adolph, K. E. (2006, June). Infants' everyday locomotor experience: A walking and falling marathon. Paper presented to the International Conference on Infant Studies, Kyoto, Japan.
Garciaguirre, J. S., Adolph, K. E., & Shrout, P. E. (2007). Baby carriage: Infants walking with loads. Child Development, 78, 664–680.
Gentile, A. M. (2000). Skill acquisition: Action, movement, and neuromotor processes. In J. Carr & R. Shepard (Eds.), Movement science: Foundations for physical therapy in rehabilitation (2nd ed., pp. 111–187). New York: Aspen Press.
Gibson, E. J. (1982). The concept of affordances in development: The renascence of functionalism. In W. A. Collins (Ed.), The concept of development: The Minnesota symposia on child psychology (Vol. 15, pp. 55–81). Mahwah, NJ: Erlbaum.
Gibson, E. J. (1987). Introductory essay: What does infant perception tell us about theories of perception? Journal of Experimental Psychology: Human Perception & Performance, 13, 515–523.
Gibson, E. J. (1991). An odyssey in learning and perception. Cambridge, MA: MIT Press.
Gibson, E. J., & Pick, A. D. (2000). An ecological approach to perceptual learning and development. New York: Oxford University Press.
Gibson, E. J., & Walk, R. D. (1960). The "visual cliff." Scientific American, 202, 64–71.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Harlow, H. F. (1949). The formation of learning sets. Psychological Review, 56, 51–65.
Harlow, H. F. (1959). Learning set and error factor theory. In S. Koch (Ed.), Psychology: A study of a science (pp. 492–533). New York: McGraw-Hill.
Harlow, H. F., & Kuenne, M. (1949). Learning to think. Scientific American, 181, 3–6.
Joh, A. S., Adolph, K. E., Narayanan, P. J., & Dietz, V. A. (2007). Gauging possibilities for action based on friction underfoot. Journal of Experimental Psychology: Human Perception and Performance, 33, 1145–1157.
Johnson, M. L., Veldhuis, J. D., & Lampl, M. (1996). Is growth saltatory? The usefulness and limitations of frequency distributions in analyzing pulsatile data. Endocrinology, 137, 5197–5204.
Kinsella-Shaw, J. M., Shaw, R., & Turvey, M. T. (1992). Perceiving "walk-on-able" slopes. Ecological Psychology, 4, 223–239.
Lampl, M. (1992). Further observations on diurnal variation in standing height. Annals of Human Biology, 19, 87–90.
Lampl, M. (1993). Evidence of saltatory growth in infancy. American Journal of Human Biology, 5, 641–652.


Lampl, M., Ashizawa, K., Kawabata, M., & Johnson, M. L. (1998). An example of variation and pattern in saltation and stasis growth dynamics. Annals of Human Biology, 25, 203–219.
Lampl, M., & Jeanty, P. (2003). Timing is everything: A reconsideration of fetal growth velocity patterns identifies the importance of individual and sex differences. American Journal of Human Biology, 15, 667–680.
Lampl, M., & Johnson, M. L. (1993). A case study in daily growth during adolescence: A single spurt or changes in the dynamics of saltatory growth? Annals of Human Biology, 20, 595–603.
Lampl, M., Johnson, M. L., & Frongillo, E. A. (2001). Mixed distribution analysis identifies saltation and stasis growth. Annals of Human Biology, 28, 403–411.
Lampl, M., Veldhuis, J. D., & Johnson, M. L. (1992). Saltation and stasis: A model of human growth. Science, 258, 801–803.
Leo, A. J., Chiu, J., & Adolph, K. E. (2000, July). Temporal and functional relationships of crawling, cruising, and walking. Poster presented at the International Conference on Infant Studies, Brighton, UK.
Lockman, J. J. (1984). The development of detour ability during infancy. Child Development, 55, 482–491.
Lockman, J. J., & Adams, C. D. (2001). Going around transparent and grid-like barriers: Detour ability as a perception-action skill. Developmental Science, 4, 463–471.
Mark, L. S., Baillet, J. A., Craver, K. D., Douglas, S. D., & Fox, T. (1990). What an actor must do in order to perceive the affordance for sitting. Ecological Psychology, 2, 325–366.
Mark, L. S., Jiang, Y., King, S. S., & Paasche, J. (1999). The impact of visual exploration on judgments of whether a gap is crossable. Journal of Experimental Psychology: Human Perception and Performance, 25, 287–295.
Mondschein, E. R., Adolph, K. E., & Tamis-LeMonda, C. S. (2000). Gender bias in mothers' expectations about infant crawling. Journal of Experimental Child Psychology, 77, 304–316.
Moore, K. L., & Persaud, T. V. N. (1998). The developing human: Clinically oriented embryology (6th ed.). Philadelphia: W. B. Saunders.
Noonan, K. J., Farnum, C. E., Leiferman, E. M., Lampl, M., Markel, M. D., & Wilsman, N. J. (2004). Growing pains: Are they due to increased growth during recumbency as documented in a lamb model? Journal of Pediatric Orthopedics, 24, 726–731.
Ounsted, M., & Moar, V. A. (1986). Proportionality changes in the first year of life: The influence of weight for gestational age at birth. Acta Paediatrica Scandinavia, 75, 811–818.
Palmer, C. E. (1944). Studies of the center of gravity in the human body. Child Development, 15, 99–163.


Palmer, C. F. (1987, April). Between a rock and a hard place: Babies in tight spaces. Poster presented at the meeting of the Society for Research in Child Development, Baltimore, MD.
Plumert, J. M., Kearney, J. K., & Cremer, J. F. (2004). Children's perception of gap affordances: Bicycling across traffic-filled intersections in an immersive virtual environment. Child Development, 75, 1243–1253.
Rader, N., Bausano, M., & Richards, J. E. (1980). On the nature of the visual-cliff-avoidance response in human infants. Child Development, 51, 61–68.
Reed, E. S. (1989). Changing theories of postural development. In M. H. Woollacott & A. Shumway-Cook (Eds.), Development of posture and gait across the lifespan (pp. 3–24). Columbia, SC: University of South Carolina Press.
Richards, J. E., & Rader, N. (1981). Crawling-onset age predicts visual cliff avoidance in infants. Journal of Experimental Psychology: Human Perception and Performance, 7, 382–387.
Richards, J. E., & Rader, N. (1983). Affective, behavioral, and avoidance responses on the visual cliff: Effects of crawling onset age, crawling experience, and testing age. Psychophysiology, 20, 633–642.
Robinson, S. R., Adolph, K. E., & Young, J. W. (2004, May). Continuity vs. discontinuity: How different time scales of behavioral measurement affect the pattern of developmental change. Poster presented at the meeting of the International Conference on Infant Studies, Chicago, IL.
Robinson, S. R., & Kleven, G. A. (2005). Learning to move before birth. In B. Hopkins & S. Johnson (Eds.), Advances in infancy research: Vol. 2. Prenatal development of postnatal functions (pp. 131–175). Westport, CT: Praeger.
Schmidt, R. A., & Lee, T. D. (1999). Motor control and learning: A behavioral emphasis (3rd ed.). Champaign, IL: Human Kinetics.
Stevenson, H. W. (1972). Children's learning. New York: Appleton-Century-Crofts.
Thelen, E., & Fisher, D. M. (1983). The organization of spontaneous leg movements in newborn infants. Journal of Motor Behavior, 15, 353–377.
Titzer, R. (1995, March). The developmental dynamics of understanding transparency. Paper presented at the meeting of the Society for Research in Child Development, Indianapolis, IN.
Vereijken, B., & Adolph, K. E. (1999). Transitions in the development of locomotion. In G. J. P. Savelsbergh, H. L. J. van der Maas, & P. C. L. van Geert (Eds.), Non-linear analyses of developmental processes (pp. 137–149). Amsterdam: Elsevier.


Vereijken, B., & Waardenburg, M. (1996, April). Changing patterns of interlimb coordination from supported to independent walking. Poster presented at the meeting of the International Conference on Infant Studies, Providence, RI.
Walk, R. D., & Gibson, E. J. (1961). A comparative and analytical study of visual depth perception. Psychological Monographs, 75(15, Whole No. 519).
Walk, R. D., Gibson, E. J., & Tighe, T. J. (1957). Behavior of light- and dark-reared rats on a visual cliff. Science, 126, 80–81.
Warren, W. H. (1984). Perceiving affordances: Visual guidance of stair climbing. Journal of Experimental Psychology: Human Perception and Performance, 10, 683–703.
Warren, W. H., & Whang, S. (1987). Visual guidance of walking through apertures: Body-scaled information for affordances. Journal of Experimental Psychology: Human Perception and Performance, 13, 371–383.
Witherington, D. C., Campos, J. J., Anderson, D. I., Lejeune, L., & Seah, E. (2005). Avoidance of heights on the visual cliff in newly walking infants. Infancy, 7, 3.

10 Motor Knowledge and Action Understanding: A Developmental Perspective

Bennett I. Bertenthal and Matthew R. Longo

The prediction of where and how people are going to move has obvious relevance for social interaction. As adults, we are extremely adept at predicting at least some of these behaviors automatically in real time. If, for example, we observe someone reaching in the direction of a half-filled glass on a table, we can predict with relative certainty that the reaching action is directed toward the glass. Often, we can also predict whether the actor intends to drink from the glass or intends to remove the glass, depending on the state of the actor as well as the context surrounding the action.

How do we detect the goals, intentions, and states of others so rapidly, with little if any awareness of these implicit inferences? According to a growing number of social neuroscientists, there are specialized mechanisms in the brain for understanding actions and responding to them. Evidence from neuroimaging studies and neuropsychological studies of normal and brain-damaged patients offers considerable support for this claim (Decety & Sommerville, 2004; Frith & Frith, 2006; Grèzes, Frith, & Passingham, 2004; Pelphrey, Morris, & McCarthy, 2005; Saxe, Xiao, Kovacs, Perrett, & Kanwisher, 2004). The availability of specialized processes suggests that the brain may be intrinsically prepared for this information and, thus, that action understanding should be evident early in development.


Indeed, even neonates show evidence of responding to human behaviors, such as speech, gaze, and touch (Lacerda, von Hofsten, & Heimann, 2001; von Hofsten, 2003). It is not entirely clear, however, that these responses are contingent on perceived actions as opposed to movements. In order to avoid unnecessary confusion, it behooves us to begin by distinguishing between movements and actions. Human actions comprise a broad range of limb, head, and facial movements, but not all body movements are actions. We reserve this latter description for goal-directed movements. These are movements that are planned relative to an intrinsic or extrinsic goal prior to their execution. For example, a prototypical goal-directed action might involve extending an arm to grasp a ball resting on a table. By contrast, waving an arm and accidentally hitting a ball does not represent an action. Recent findings suggest that infants understand the goal-directed nature of actions by the second half of the first year and perhaps even earlier (e.g., Csibra, Gergely, Biro, Koos, & Brockbank, 1999; Kiraly, Jovanovic, Prinz, Aschersleben, & Gergely, 2003; Luo & Baillargeon, 2002; Woodward, 1998, 1999; Woodward & Sommerville, 2000).

Two Modes for Understanding Actions

What are the mechanisms that underlie action understanding? By action understanding, we mean the capacity to achieve an internal description or representation of a perceived action and to use it to organize and predict appropriate future behavior. Recent neurophysiologically motivated theories (Jeannerod, 1997, 2001; Rizzolatti, Fogassi, & Gallese, 2001) suggest that there are two mechanisms that might explain how action understanding occurs. The more conventional mechanism involves some form of visual analysis followed by categorization and inference. This type of analysis can be thought of as progressing through stages of processing comparable to those proposed by Marr (1982). These processes are mediated via the ventral visual pathway of the brain and are independent of the motor system. For example, when we observe a hand grasping a glass, the visual system parses this scene into an object (i.e., the glass) and a moving hand that eventually contacts and grasps the glass. This visual input is recognized and associated with other information about the glass and the actor in order to understand the observed action.

Whereas this first mechanism applies to all visual information, the second mechanism is unique to the processing of actions (and, perhaps, object affordances) and has been referred to as a direct matching or observation-execution matching system (Rizzolatti et al., 2001). With this mechanism, visual representations of observed actions are mapped directly onto our motor representation of the same action; an action is understood when its observation leads to simulation (i.e., representing the responses of others by covertly generating similar subthreshold responses in oneself) by the motor system. Thus, when we observe a hand grasping a glass, the same neural circuit that plans or executes this goal-directed action becomes active in the observer's motor areas (Blakemore & Decety, 2001; Iacoboni, Molnar-Szakacs, Gallese, Buccino, Mazziotta, & Rizzolatti, 2005; Rizzolatti et al., 2001). It is the "motor knowledge" of the observer that is used to understand the observed goal-directed action via covert imitation. For this reason, knowledge of the action will depend in part on the observer's specific motor experience with the same action (Calvo-Merino, Glaser, Grèzes, Passingham, & Haggard, 2005; Longo, Kosobud, & Bertenthal, in press). In contrast to the first visual mechanism, the flow of information between perception and action by direct matching enables more than an appreciation of the surface properties of the perceived actions. Simulation enables an appreciation of the means (i.e., how the body parts are arranged to move) by which the action is executed as well as an appreciation of the goal or the effects of the action. This implies that the observer is able to covertly imitate as well as predict the outcome of an observed action.

Simulation of Actions

Although this latter hypothesis for explaining how we understand actions dates back to the ideomotor theory of James (1890) and Greenwald (1970), direct evidence supporting this view emerged only recently with the discovery of mirror neurons in the ventral premotor cortex of the monkey's brain. These neurons discharge when the monkey performs a goal-directed action as well as when the monkey observes a human or conspecific perform the same or a similar action (Gallese, Fadiga, Fogassi, & Rizzolatti, 1996; Rizzolatti, Fadiga, Gallese, & Fogassi, 1996).

Thus, these neurons provide a common internal representation for executing and observing goal-directed action. More recently, mirror neurons were also observed in the inferior parietal lobule, which shares direct connections with the premotor cortex (Fogassi et al., 2005). This latter finding is important for explaining how visually perceived face and body movements are represented by mirror neurons in the premotor cortex. These movements are coded by the superior temporal sulcus, which does not project directly to the mirror neurons in the ventral premotor cortex. Instead, the superior temporal sulcus projects to the inferior parietal lobule, which is connected to the ventral premotor cortex (Rizzolatti & Craighero, 2004). Thus, mirror neurons are innervated by a fronto-parietal circuit in the motor system that also receives visual inputs from the superior temporal sulcus. Human neuroimaging and transcranial magnetic stimulation studies have shown activation of a homologous fronto-parietal circuit during both the observation and the imitation of actions (Brass, Zysset, & von Cramon, 2001; Buccino et al., 2001; Decety, Chaminade, Grèzes, & Meltzoff, 2002; Fadiga, Fogassi, Pavesi, & Rizzolatti, 1995; Grèzes, Armony, Rowe, & Passingham, 2003; Iacoboni, Woods et al., 1999; Koski, Iacoboni, Dubeau, Woods, & Mazziotta, 2003; for a review, see Rizzolatti & Craighero, 2004).

This neurophysiological evidence is complemented by recent behavioral evidence showing that the observation of actions facilitates or primes responses involving similar actions. For example, visuomotor priming is observed when grasping a bar in a horizontal or vertical orientation is preceded by a picture of a hand or the observation of an action congruent with the required response (Castiello, Lusher, Mari, Edwards, & Humphreys, 2002; Edwards, Humphreys, & Castiello, 2003; Vogt, Taylor, & Hopkins, 2003). Similarly, response facilitation is observed when the task involves responding to a tapping or lifting finger or to an opening or closing hand that is preceded by a congruent stimulus (e.g., the index finger responds to observation of a tapping index finger) as opposed to an incongruent stimulus (e.g., the index finger responds to observation of a tapping middle finger) (Bertenthal, Longo, & Kosobud, 2006; Brass, Bekkering, & Prinz, 2001; Brass, Bekkering, Wohlschlager, & Prinz, 2000; Heyes, Bird, Johnson, & Haggard, 2005; Longo et al., in press).

Prediction of the Effects of Actions

While the preceding evidence suggests that action observation is accompanied by covert imitation of the observed action, additional evidence suggests that action observation also leads to the prediction of the effects or outcome of the action. In a study by Kandel, Orliaguet, and Viviani (2000), participants were shown a point-light tracing of the first letter of a two-letter sequence handwritten in cursive, and they were able to predict the second letter from the observation of the preceding movements. Neuroimaging evidence revealed that a fronto-parietal circuit (associated with the human mirror system) was activated when predicting the next letter from a point-light tracing, but this same circuit was not activated when predicting the terminus of a point-light tracing of a spring-driven ball after it bounced (Chaminade, Meary, Orliaguet, & Decety, 2001). The finding that the fronto-parietal circuit was only activated when predicting the outcome of a human action and not when predicting the outcome of a mechanical event is consistent with other research suggesting that the mirror system is restricted to biological movements (e.g., Kilner, Paulignan, & Blakemore, 2003; Tai, Scherfler, Brooks, Sawamoto, & Castiello, 2004).

This last finding implies that the direct matching system is sensitive to the perceived similarity between observed and executed responses; thus, observers should show an improved ability to predict the effects or outcome of an action based on their own as opposed to others' movements, because the match between the observed and executed actions should be greatest when the same person is responsible for both (e.g., Knoblich, 2002; Knoblich, Elsner, Aschersleben, & Metzinger, 2003). A few recent studies manipulating the "authorship" of the movements find that predictability is indeed greater when predicting the outcomes of self-produced as opposed to other-produced movements (Beardsworth & Buckner, 1981; Flach, Knoblich, & Prinz, 2003; Knoblich & Flach, 2001; Repp & Knoblich, 2004).

Why should covert imitation or simulation of movements contribute to predicting their effects? The answer is related to the inertial lags and neural conduction delays that accompany limb movements in the human body. As a consequence of these delays, it is insufficient for movements to be guided by sensory feedback, because such movements would be performed in a jerky and staccato fashion as opposed to being smooth and fluid (Wolpert, Doya, & Kawato, 2003). Thus, the execution of most movements requires prospective control or planning. Based on computational studies, it has been proposed that this planning or control involves an internal model, dubbed a forward model, which predicts the sensory consequences of a motor command (Jordan, 1995; Kawato, Furukawa, & Suzuki, 1987; Miall & Wolpert, 1996; Wolpert & Flanagan, 2001). Presumably, the simulation of a goal-directed action includes the activation of these forward models, enabling prediction as well as covert imitation of the behavior. As motor behaviors are practiced and learned, these forward models become better specified and enable more precise prediction of the ensuing motor commands. According to Wolpert et al. (2003), similar forward models could be used to predict social behaviors, such as facial expression, gaze direction, or posture.
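To make the forward-model idea concrete, here is a minimal sketch in Python, not drawn from any of the studies cited above, of an internal model that predicts the sensory consequences of each motor command immediately, while actual sensory feedback arrives only after a delay. The one-dimensional limb dynamics, time step, and 100 ms delay are illustrative assumptions.

DT = 0.01          # simulation time step (s); illustrative value
DELAY_STEPS = 10   # sensory feedback lags roughly 100 ms behind the movement

def plant(state, command):
    """'True' limb dynamics: position and velocity driven by a force command."""
    pos, vel = state
    acc = command - 2.0 * vel              # simple viscous damping
    return (pos + vel * DT, vel + acc * DT)

def forward_model(state_estimate, command):
    """Internal copy of the dynamics used to predict the next sensory state."""
    # Here the internal model is exact; in practice it is learned and approximate.
    return plant(state_estimate, command)

true_state = (0.0, 0.0)
predicted = (0.0, 0.0)
feedback_buffer = []                       # holds delayed sensory measurements

for t in range(100):
    command = 1.0 if t < 50 else 0.0       # push for half a second, then stop
    predicted = forward_model(predicted, command)   # available immediately
    true_state = plant(true_state, command)
    feedback_buffer.append(true_state)
    if len(feedback_buffer) > DELAY_STEPS:
        sensed = feedback_buffer.pop(0)    # what delayed feedback reports now
        # 'predicted' runs ahead of 'sensed'; this lead is what allows smooth
        # prospective control and, by hypothesis, prediction of observed actions.

Because the prediction is available before the feedback, a controller built on such a model can correct movements prospectively rather than waiting for lagged sensory signals, which is the point the computational work cited above makes.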

Perception of the Structure of Human Movements

One additional source of evidence suggesting that an observation-execution matching system is functional in human observers derives from their showing greater sensitivity to biological motions that are consistent as opposed to inconsistent with the causal structure of an action. A common technique for studying the perception of human movements involves the depiction of these movements with point-light displays. These displays are created by filming a person in the dark with small lights attached to his or her major joints and head. (An example of six sequential frames from a point-light display is presented in Figure 10.1.) It is also possible to synthesize these nested pendular motions, which is the technique that we have used in much of our previous research (see Bertenthal, 1993, for a review). Johansson (1973) was the first to systematically study the perception of these displays. He reported that adult observers perceive the human form and identify different actions (e.g., push-ups, jumping jacks, etc.) in displays lasting less than 200 ms, corresponding to about five frames of a film sequence. This finding is very impressive because these displays are devoid of all featural information, such as clothing, skin, or hair. It thus appears that recognition of actions can result exclusively from the extraction of a unique structure from motion-carried information.
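As a concrete illustration of what synthesizing these nested pendular motions involves, the sketch below (Python) generates side-view point-light positions for two legs whose thigh and shank angles oscillate sinusoidally, with the legs in antiphase. The segment lengths, amplitudes, and phase values are invented for illustration; they are not the parameters used in the research described in this chapter.

import math

THIGH, SHANK = 0.45, 0.45          # segment lengths (arbitrary units)
HIP_SWING, KNEE_SWING = 0.5, 0.7   # joint-angle amplitudes (radians)

def leg_points(phase):
    """Return (hip, knee, ankle) 2-D positions for one leg at a given gait phase."""
    hip = (0.0, 1.0)                              # hip treated as a fixed pivot
    thigh_angle = HIP_SWING * math.sin(phase)     # pendular swing about the hip
    knee = (hip[0] + THIGH * math.sin(thigh_angle),
            hip[1] - THIGH * math.cos(thigh_angle))
    # The shank is a second pendulum nested on the first, flexing during part of the cycle.
    shank_angle = thigh_angle + KNEE_SWING * max(0.0, math.sin(phase - 0.6))
    ankle = (knee[0] + SHANK * math.sin(shank_angle),
             knee[1] - SHANK * math.cos(shank_angle))
    return hip, knee, ankle

def walker_frame(t, period=1.2):
    """Point-light positions for both legs at time t; the legs are in antiphase."""
    phase = 2.0 * math.pi * t / period
    return leg_points(phase) + leg_points(phase + math.pi)

frames = [walker_frame(i * 0.05) for i in range(24)]   # roughly one gait cycle

Plotting the dots in each frame, with no connecting lines, yields the kind of sparse but compelling display that Johansson described; a full walker would add arm, head, and trunk points defined in the same way.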

Figure 10.1 Six sequentially sampled frames from a moving point-light display depicting a person walking.

In spite of these impressive findings, the recognition of upside-down biological motion displays as depicting a person or the person's direction of gait is significantly impaired (Bertenthal & Pinto, 1994; Pavlova & Sokolov, 2000; Sumi, 1984; Verfaillie, 1993). If recognition were based only on the perception of structure from motion, then the orientation of the human form should not matter. It thus appears that some additional processes involving the causal structure of human movement contribute to the perception of biological motions. For similar reasons, point-light trajectories obeying the kinematics of human movement are perceived as moving at a uniform speed, even though the variations in speed can exceed 200%. By contrast, observers are much more sensitive to speed differences in point-light trajectories that do not conform to the kinematics of human movements (Viviani, Baud-Bovy, & Redolfi, 1997; Viviani & Mounoud, 1990; Viviani & Stucchi, 1992). These latter findings thus suggest that curvilinear movements are not perceived in terms of the actual changes that occur in velocity, but rather in terms of the biological movements that appear smooth and uniform when executed by one or more articulators of the human body.

Although the preceding evidence is consistent with motor knowledge contributing to action understanding, it is difficult to rule out perceptual learning as the principal reason for observers showing differential sensitivity to familiar and unfamiliar biological motions. Indeed, this same confound is present when interpreting much of the previously reviewed evidence suggesting that motor knowledge contributes to the prediction of human actions. This confound also adds to the challenge of testing whether motor knowledge contributes to infants' understanding of others' actions, which is why it is necessary to consider the contributions of visual attention and experience when studying the functionality of the observation-execution matching system during infancy.

Developmental Evidence for an Observation–Execution Matching System

For those unfamiliar with the preceding evidence for an observation–execution matching system, it will be useful to recap how a direct matching system contributes to action understanding. The traditional view is that others' actions are understood via the same perceptual and conceptual processes as are all other visual events. If, however, the perception of actions also activates the motor system (i.e., direct matching), then the specific motor knowledge associated with the perceived actions will contribute to understanding others' actions. The specific mechanism for this understanding is covert imitation or simulation of the observed action. Although the motor response is not overtly executed, the planning for specific movements (i.e., means) as well as the effects or perceptual consequences (i.e., the goal) are automatically activated in the motor cortex. This activation imparts to the observer embodied knowledge of the perceived action (i.e., motor knowledge) without the need to rely on visual experience or logical inferences based on the perceived information. The implication is that we can then understand others' actions, such as lifting a cup, playing the piano, or displaying a disposition, such as displeasure, based on the internalized motor programs available to us for performing the same actions.

In the remainder of this chapter, we will review a series of experiments designed to investigate whether motor knowledge contributes to action understanding by infants. Although it is certainly possible for infants to achieve this understanding from visual experience alone, there exists a developmental advantage for direct matching, because such a mechanism does not necessitate specific conceptual or symbolic knowledge of actions, which demands the development of higher-level cognitive processes. Thus, the functioning of this system may offer an explanation for the precocious development of young infants' understanding of actions, and social development more generally (Tomasello, 1999).

The evidence is divided into three sections. First, we will review a series of experiments showing that infants demonstrate perseverative search errors following observation of someone else's actions. This evidence will be used to support the claim that an observation-execution matching system is functional in infants and that action observation elicits covert imitation.

Second, we will review recent experiments showing that infants visually orient in response to deictic gestures, and that this response is specifically a function of motor knowledge enabling prediction of the effect of an observed action. Finally, we will review evidence showing that infants perceive biological motions depicting familiar but not unfamiliar actions, and that the development of this perceptual skill is at least correlated with the development of their motor skills.

Perseverative Errors in Searching for Hidden Objects

The Piagetian A-not-B error, observed in 8- to 12-month-old infants, is among the most consistently replicable findings in developmental psychology. In this task, infants first search correctly for an object they see hidden in one location (A) on one or more trials, but then continue to search at the A location after the object has been hidden in a new location (B). A number of researchers attribute this search error to the formation of a prepotent response. For example, Smith, Thelen, and colleagues (Smith, Thelen, Titzer, & McLin, 1999; Thelen, Schoner, Scheier, & Smith, 2001) claim that the error arises from the task dynamics of reaching, which causes the motor memory of one reach to persist and influence subsequent reaches. Diamond (1985) argues that one cause of the error is the inability to inhibit a previously rewarded motor response. Zelazo and colleagues (Marcovitch & Zelazo, 1999; Zelazo, Reznick, & Spinazzola, 1998) account for perseverative responses in young children in terms of the relative dominance of a response-based system "activated by motor experience" over a conscious representational system (Marcovitch & Zelazo, 1999, p. 1308). In each of these accounts, a history of reaches to the A location is a crucial aspect of the perseverative response.

If an observation-execution matching system is functional in young infants, then simply observing someone else reach to the same location repeatedly may be sufficient for eliciting this error. This prediction follows from the claim that observing an action will lead to covert imitation of that same action, and thus will be functionally similar to executing the action oneself. We (Longo & Bertenthal, 2006) tested this prediction by administering the standard A-not-B hiding task to a sample of 9-month-old infants. Twenty 9-month-old infants were tested with the canonical reaching task, and twenty were tested in a condition in which they watched an experimenter reach, but did not reach themselves.

Infants were seated on their mother's lap in front of a table covered with black felt and allowed to play with a toy (a rattle or a plastic Big Bird) for several seconds. Four pretraining trials were administered using procedures similar to those used by Smith et al. (1999). On the first pretraining trial, the toy was placed on top of a covered well. On the second trial, the toy was placed in the well but with one end sticking out of the well. On the third trial, the toy was placed completely in the well but left uncovered. On the final trial, the toy was placed completely in the well and covered.

The experimental trials used a two-well apparatus and consisted of three A trials and one B trial (see Figure 10.2). Infants in the reaching condition were allowed to search on all trials. Infants in the looking condition were only allowed to search on the B trial and observed the experimenter recover the object on the A trials. On each trial, the toy was waved and the infant's name was called to attract his or her attention. The experimenter removed the lid with one hand and placed the toy in the well with the other hand. The location (right or left) of the A trials and the experimenter's arm (right or left; coded as which arm the experimenter used to hide the toy) were counterbalanced between infants.

Figure 10.2 Infant searching for toy in one of the two hiding wells (from Longo & Bertenthal, 2006).

Thus, the experimenter's reaches were ipsilateral half of the time (right-handed reach to the location on the right or left-handed reach to the location on the left) and contralateral half of the time (right-handed reach to the location on the left or left-handed reach to the location on the right). For A trials in the reaching condition, the apparatus was slid forward to within the baby's reach following a 3 s delay. If, after 10 s, the infant had not retrieved the toy from the A location, the experimenter uncovered the well and encouraged the infant to retrieve the toy. For A trials in the looking condition, the experimenter did not slide the apparatus toward the infants and recovered the toy following a 3 s delay. The experimenter used the same arm to retrieve the toy as was used to hide the toy. On B trials, in both conditions, the experimenter hid the toy (using the same hand as on the A trials) and then the apparatus was moved to within the infant's reach following a 3 s delay. The dependent measure was whether the infant searched for the hidden toy at the correct B location or reverted to search at the A location where the toy had been previously found.

The results revealed that infants in the reaching condition were significantly more likely to make an error on B trials (15 of 20) than on any of the A trials, the canonical A-not-B error. Infants also made significantly more errors on looking B trials (12 of 20) than infants in the reaching condition made on A trials (see Figure 10.3). These results demonstrate that overt reaching to the A location by the infant is not necessary to elicit the A-not-B error.

Figure 10.3 Percentage of infants searching incorrectly on the first, second, and third reaching A trials, the reaching B trial, and the looking B trial (from Longo & Bertenthal, 2006).

During training, infants in the looking condition had reached four times to a central location using the single-well apparatus, but had never reached to the A location. Still, they were found to "perseverate" on their very first reach using the two-well apparatus. The likelihood of making an error did not differ between the two B conditions.

In sum, these data suggest that looking A trials influenced performance on the B trials. Nonetheless, in order to establish that these responses are truly perseverative, as opposed to simply random, it is necessary to demonstrate errors on significantly more than 50% of the trials. Binomial tests revealed greater than chance perseveration on reaching B trials, but not on looking B trials. Thus, these data are consistent with findings from previous studies showing that infants perseverate after reaching A trials, but the evidence of perseveration following looking A trials was somewhat equivocal.

In the second experiment, we sought to provide more definitive evidence for perseverative search in the looking condition. Marcovitch, Zelazo, and Schmuckler (2002) found that the likelihood of a perseverative search error increased as the number of A trials increased, at least within the range of one to six. If the A-not-B error is indeed a function of similar mechanisms inducing perseverative search in both the reaching and looking conditions, then we would expect search errors in the looking condition to become more robust as the number of A trials increases. Thus, in order to increase the likelihood of finding perseveration at greater than chance levels, the following experiment included six looking A trials instead of three.

Thirty 9-month-old infants were tested following the procedures described for the first experiment, except that all infants were tested in the looking condition and there were six instead of three A trials. The results from this experiment revealed that 70% (21 of 30) of the infants made the A-not-B error, significantly more than half of the sample. This finding suggests that observation of a reaching action is sufficient to elicit perseverative search. As such, these findings are consistent with infants covertly imitating the observed action of searching for the toy in the covered well.

Can these results be explained by other mechanisms? Some researchers suggest that the crucial factor leading to search errors at the B location is not a history of reaching to the A location, but rather a history of visually attending to or planning to reach to the A location (e.g., Diedrich, Highlands, Spahr, Thelen, & Smith, 2001; Munakata, 1997; Ruffman & Langman, 2002).
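The binomial logic behind these comparisons can be illustrated directly from the counts reported above (15 of 20, 12 of 20, and 21 of 30). The short calculation below is only a reconstruction of an exact test against chance responding of .5; it is not the analysis script used in the original studies.

from math import comb

def p_at_least(k, n, p=0.5):
    """Exact probability of k or more errors out of n under chance responding."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for label, k, n in [("reaching B, Exp. 1", 15, 20),
                    ("looking B, Exp. 1", 12, 20),
                    ("looking B, Exp. 2", 21, 30)]:
    print(f"{label}: {k}/{n}, one-tailed p = {p_at_least(k, n):.3f}")

The three one-tailed probabilities come out at roughly .02, .25, and .02, which mirrors the pattern described in the text: reliable perseveration on reaching B trials and on looking B trials after six A trials, but equivocal perseveration on looking B trials after only three A trials.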

In the current experiments, greater attention to one location than to the other will likely covary with the history of simulated reaching to that location, and thus it is difficult to disambiguate these two interpretations. Although this attentional confound is often a problem when testing the contribution of motor knowledge to performance, converging analyses assessing infants' own reaching behavior were helpful in showing that an attentional interpretation was not sufficient for explaining their search errors.

It is well documented in the literature that infants show an ipsilateral bias in their reaching. Bruner (1969), for example, referred to the apparent inability of young infants to reach across the body midline as the "mysterious midline barrier," arguing that contralateral reaches do not occur before 7 months of age. Contralateral reaching becomes more frequent with age, both on reaching tasks during infancy (van Hof, van der Kamp, & Savelsbergh, 2002) and in "hand, eye, and ear" tasks in later childhood (Bekkering, Wohlschläger, & Gattis, 2000; Schofield, 1976; Wapner & Cirillo, 1968). Nevertheless, a clear preference for ipsilateral reaches is consistently observed in early development.

In Experiment 1, infants showed an ipsilateral bias in their reaching. On looking B trials, 90% (18 of 20) of the infants made ipsilateral reaches, which was significantly more than would be expected by chance. Similar ipsilateral biases were observed on the three reaching A trials (81.7%) and the reaching B trials (73.7%, 14 of 19 one-handed reaches). Intriguingly, infants' simulation of the experimenter's actions mirrored their motor bias, as infants in the looking condition were significantly more likely to reach to location A than to location B when the experimenter had reached ipsilaterally (8 of 10), rather than contralaterally (4 of 10) (see Figure 10.4). This result suggests that infants' responses following observation of the experimenter's reaches were not random, at least for ipsilateral reaches. In Experiment 2, an ipsilateral bias in infants' reaching was again observed, with 85% (23 of 27) of one-handed reaches scored as ipsilateral. Furthermore, this ipsilateral bias was again mirrored by infants' representation of the experimenter's reaching. Perseveration was observed more often than predicted by chance when the experimenter had reached ipsilaterally on the A trials (13 of 15 infants made the error), but not when the experimenter had reached contralaterally (8 of 15 made the error). This difference between conditions was significant (see Figure 10.4).

Figure 10.4 Percentage of infants searching incorrectly following observation of ipsilateral and contralateral reaches by the experimenter in the looking condition of Experiments 1 and 2 (from Longo & Bertenthal, 2006).

It is very possible that the infants' ipsilateral bias may have influenced their likelihood of covertly imitating the experimenter. A number of recent studies suggest that simulation by the mirror system is significantly stronger when observed actions are within the motor repertoire of the observer (e.g., Calvo-Merino et al., 2005; Longo et al., in press). If an observer does not possess the motor skill to precisely and reliably perform an action, then he or she cannot simulate it with the same level of specificity as a skilled performer. Since infants have difficulty reaching contralaterally, simulation of observed contralateral reaches should be weaker than that for ipsilateral reaches, or perhaps absent entirely. Thus, if an observation-execution matching mechanism is operative, then infants should perseverate more often following observation of ipsi- rather than contralateral reaches to the A location by the experimenter, as we found.

Although the preceding findings are not meant to discount the relative contributions of attention to response perseveration, at the very least the current evidence appears to challenge the sufficiency of an attentional explanation focused on spatial coding of the hidden object. In particular, it is not at all apparent how such an account would explain why infants showed greater perseveration after observing the experimenter reach ipsilaterally than contralaterally.

Other potential explanations involving, for example, object representations (e.g., Munakata, 1998) have similar difficulty accounting for this effect. By contrast, a direct matching interpretation accounts for this effect in terms of infants' own difficulties with contralateral reaching, which should lead to weaker or absent motor simulation following observed contralateral, compared with ipsilateral, reaches, and consequently less perseveration.

The final experiment in this series was designed to probe whether a simulative response to an observed action is initiated only when the action is performed by a human agent or also when it is performed by a robot or some other mechanical agent. As previously discussed, research with adults reveals that the execution of an action is often facilitated when observing that same action, and impaired when observing a different action (Brass et al., 2001; Bertenthal et al., 2006). It appears, however, that this conclusion applies only when observing actions performed by human agents. For example, Kilner et al. (2003) instructed participants to make vertical or horizontal arm movements while observing either a human or a robot making the same or the opposite arm movements. The results showed that observation of incongruent arm movements interfered significantly with the performance by the observer, but this effect was limited to observation of the human agent. When observing the robot, there was no evidence of an interference effect on the performance of the observer.

In order to test this same question with infants, we modified the testing situation so that the human experimenter would be hidden behind a curtain, but would be able to manipulate the covers and the toy with two large mechanical claws that were held vertically in front of the curtain. Thus, the infant was only able to observe the mechanical claws, and even the experimenter's hands that gripped the mechanical claws were not visible (see Figure 10.5). A total of 30 infants were tested, and the procedure was similar to that used in the preceding experiment. After the training trials, there were six A trials followed by one B trial. From the infants' perspective, the hiding and finding of the toy was identical to the previous two experiments, except that the experimenter was not visible and two mechanical claws appeared in her place. Unlike the results from the previous experiments, the perseverative error was made by only 40% (12 of 30) of the infants, which was significantly less than occurred in the previous two experiments.

Moreover, the likelihood of the error was essentially the same for ipsilateral and contralateral searches by the claw. Thus, the substitution of a mechanical agent for a human agent reduced the frequency of the perseverative error.

Our interpretation for this finding is that infants' tendency to simulate observed actions is less likely when the action is not performed by a human. We realize, however, that the mechanical agent is less familiar than the human agent, and thus familiarity, per se, may be responsible for these differences in the likelihood of a perseverative search error. In future research, we plan to manipulate whether the mechanical claw is observed as an independent agent or as a tool used by the human agent. If perseverative performance is greater when the mechanical claw is perceived as a tool, the importance of familiarity for explaining perseverative performance will be diminished, because familiarity will remain constant in both the tool and agent conditions.

Although support for this prediction awaits an empirical test, a recent study by Hofer, Hauf, and Aschersleben (2005) suggests that infants distinguish between tools and mechanical agents. In this experiment, 9-month-old infants did not interpret an action performed by a mechanical claw as goal-directed, but did interpret the action as goal-directed when the mechanical claw was perceived as a tool. Presumably the tool is interpreted as goal-directed at an earlier age than the mechanical agent because infants perceive it as an extension of the human arm, and thus are better able to simulate and understand its effects.

Figure 10.5 Infant observing the toy being hidden by two mechanical claws.

Until recently, this finding may have seemed at odds with the evidence for mirror neurons in the monkey's brain. When these neurons were first discovered, it was reported that they discharge to goal-directed actions performed by conspecifics or humans, but not to actions performed by tools, such as a pair of pliers (Rizzolatti et al., 2001). Recently, however, Ferrari, Rozzi, and Fogassi (2005) reported identifying a population of neurons in the monkey's ventral premotor cortex that specifically discharges to goal-directed actions executed by tools. Taken together, this evidence suggests that an action performed by a tool perceived as an extension of a human agent will be more likely to induce motor simulation than an action performed by the same tool perceived as a mechanical agent.

Visual Orienting in Response to Deictic Gestures

Joint attention to objects and events in the world is a necessary prerequisite for sharing experiences with others and negotiating shared meanings. As Baldwin (1995, p. 132) puts it: "joint attention simply means the simultaneous engagement of two or more individuals in mental focus on one and the same external thing." A critical component in establishing joint attention involves following the direction of someone else's gaze or pointing gesture (Deak, Flom, & Pick, 2000). Both of these behaviors require that the deictic gesture be interpreted not as the goal itself, but rather as the means to the goal. When responding to a redirection of gaze or to the appearance of a pointing gesture, the observer does not fixate on the eyes or the hand but rather focuses on the referent of these gestures (Woodward & Guajardo, 2002). In this case the behavior is communicative and the goal is some distal object or event ("there's something over there"). Thus, it is necessary for the observer to predict the referent or the goal of the action from observing its execution by someone else.

Until recently, the empirical evidence suggested that infants were unable to follow the direction of a gaze or a point until approximately 9 to 12 months of age (e.g., Corkum & Moore, 1998; Leung & Rheingold, 1981; Scaife & Bruner, 1975).

If, however, these behaviors are mediated by an observation-execution matching system, then gaze-following should precede following a pointing gesture, because control of eye movements and saccadic localization appear at birth or soon thereafter (von Hofsten, 2003), whereas the extension of the arm and index finger to form a pointing gesture does not appear until approximately 9 months of age (Butterworth, 2003). Indeed, it should be possible to show evidence of gaze-following long before 9 months of age.

This prediction has now been validated. Hood, Willen, and Driver (1998) adapted a method popularized by Posner (1978, 1980) for studying spatial orienting. In a prototypical study, adult subjects are instructed to detect visual targets, which may appear on either side of a fixation point. Their attention can be cued to one side or the other before the target appears (e.g., by a brief but uninformative flash on that side). The consistent finding from this paradigm is that target detection is more rapid on the cued side, because attention is oriented in that direction. It is relevant to note that the attentional cueing preceding the orienting response is covert and does not involve an overt eye movement. As such, this attentional cueing is consistent with a simulation of an eye movement that enables prediction before the visual target appears.

Hood et al. tested 4-month-old infants to determine if they would shift their visual attention in the direction toward which an adult's eyes turn. The direction of perceived gaze was manipulated in a digitized adult face. After infants oriented to blinking eyes focused straight ahead, the eyes shifted to the right or to the left. A key innovation of this paradigm was that the central face disappeared after the eyes were averted, to avoid difficulties with infants disengaging their fixation of the face. An attentional shift was measured by the latency and direction of infants' orienting to peripheral probes presented after the face disappeared. Infants oriented faster and made fewer errors when presented with a probe congruent with the direction of gaze than when presented with an incongruent probe.

These findings suggest that young infants interpret the direction of gaze as a cue to shift attention in a specified direction. It is significant to note that the attentional cue was not in the location of the probe, but simply pointed to that location. Thus, faster responding to the spatially congruent cue required that infants understand, at some level, the meaning of the change in gaze direction in order to predict the future location of the probe. More recent research reveals that gaze following was restricted to a face in which infants observed a dynamic shift in gaze direction (Farroni, Johnson, Brockbank, & Simion, 2000).

When the averted gaze was presented statically, there was no evidence of infants following the gaze shift. Intriguingly, a new report by Farroni and colleagues (Farroni, Massaccesi, Pividori, & Johnson, 2004) suggests that these findings are replicable with newborn infants.

With Katharina Rohlfing, we sought to extend this paradigm to study whether infants younger than 9 months of age would also reorient their attention in response to the direction that another person is pointing with their hand. Interestingly, Amano, Kezuka, and Yamamoto (2004) conducted an observational study showing that 4-month-old infants responded differently to the pointing done by an experimenter and their mothers. In order to redirect attention, infants more often needed the combination of eye gaze and pointing while interacting with the experimenter than while interacting with their mothers. When interacting with their mothers, infants were able to follow the pointing gesture alone while the mothers maintained eye contact. One interpretation for these findings is that infants are more familiar with their mothers' gestures and thus are more likely to correctly interpret them. A different interpretation is that infants are more likely to follow the pointing gesture of their mothers because their mother's face is more familiar and thus it is easier for them to disengage from the face.

By adapting the same paradigm used by Hood et al. (1998) to study gaze-following, we were able to eliminate some of the possible confounds present in previous studies of pointing, because infants did not have to disengage from a central stimulus of a face. Infants between 4.5 and 6.5 months old were tested. They were shown a series of computerized stimuli on a projection screen while sitting on their parents' laps. Each trial consisted of the following sequence of events (see Figure 10.6):

1. The hand appeared on the screen with fingers oriented upward. The fingers waved up and down and were accompanied by a voice saying "look baby, look!" to recruit the baby's attention. This segment was played until the infant fixated the hand.
2. Once the finger waving ended, the hand was seen transforming into a canonical pointing gesture that moved a short distance in the direction of the pointing finger. This segment lasted 1000 ms.
3. After the hand disappeared, a digitized picture of a toy appeared randomly on the left or right side of the screen. On half of the trials, this probe was congruent with the direction of the point, and on the other half of the trials it was incongruent with the pointing direction. The probe remained visible for 3 s and was accompanied by a voice saying "wow!" Two different toys (a clown and an Ernie puppet) were presented in a randomized order (Figure 10.6).

Figure 10.6 Stimulus sequence for each trial. 1. Fingers wave up and down. 2. Index finger points toward the left or right side of the screen. 3. Probe appears on the left or right side. (This sequence corresponds to an incongruent trial.)

Based on the videotapes of the infants' behavior, we measured the response time to shift attention in the direction of the peripheral probe. The probe that appeared in the direction cued by the pointing finger is referred to as the congruent probe, and the probe that appeared in the opposite direction is referred to as the incongruent probe. A total of 20 infants completed an average of 26 trials (SD = 6.2, range: 10–32). As can be seen in Figure 10.7 (dynamic condition), infants oriented toward the congruent probe significantly faster than they did toward the incongruent probe. These results suggest that infants as young as 4.5 months of age respond to a dynamic pointing gesture by shifting their visual attention to a shared referent.
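As a purely illustrative sketch of how such orienting latencies are summarized, the snippet below splits trial-level response times by probe congruency and computes the resulting cueing effect. The trial records and latency values are invented and are not data from this experiment.

trials = [
    {"congruent": True,  "latency_ms": 720},
    {"congruent": True,  "latency_ms": 805},
    {"congruent": False, "latency_ms": 910},
    {"congruent": False, "latency_ms": 860},
]

def mean_latency(records, congruent):
    values = [t["latency_ms"] for t in records if t["congruent"] == congruent]
    return sum(values) / len(values)

# Faster orienting to congruent probes (a positive difference here) is the
# signature of attention having shifted in the direction of the point.
cueing_effect_ms = mean_latency(trials, False) - mean_latency(trials, True)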

Figure 10.7 Mean response times for infants to orient toward the congruent and incongruent probes. In the dynamic condition, infants were shown a pointing finger that moved a short distance in the same direction that the finger was pointing. In the static condition, infants were shown a pointing finger that did not move.

The second experiment tested whether the movement of the pointing finger was necessary to elicit the shift in attention or whether a static pointing gesture would be sufficient. A new sample of 14 infants between 4.5 and 6.5 months of age was tested with the same procedure, except that the pointing finger remained stationary when it appeared on the screen for 1000 ms. Infants completed an average of 22 trials (SD = 7.7, range: 10–32). Unlike the responses to the dynamic pointing finger, infants showed no difference in responding to the congruent and incongruent probes (see Figure 10.7, static condition). It thus appears that observation of an action, and not just its final state, is necessary for young infants to follow a point, analogous to the findings of infants following eye gaze (Farroni et al., 2000).

Two possible interpretations of these results are that the movement associated with the gesture increased the salience of the stimulus, or that it increased the likelihood of visual tracking in the direction of the moving finger. Either of these two possibilities would bias infants to shift their attention in the direction of the pointing gesture. If the results were exclusively a function of following the principal direction of the moving finger, then we would expect a reversal in the reaction time results when the finger moved backwards rather than forwards.

The next experiment tested this interpretation. A total of 25 infants between 4.5 and 6.5 months of age were tested in two conditions.

Figure 10.8 Mean response times for infants to orient toward the congruent and incongruent probes. In the forward condition, the finger moved in the same direction it was pointing. In the backward condition, the finger moved in the direction opposite to the one in which it was pointing.

In the forward condition, the direction of the pointing finger and the movement of the finger were compatible, whereas in the backward condition, the direction of the pointing finger and the movement of the finger were incompatible. Half of the trials in each condition involved a congruent probe and half of the trials involved an incongruent probe. Infants completed an average of 25 trials (SD = 5.68, range: 14–31). In the forward condition, infants showed the same advantage for responding to the congruent vs. incongruent probe that they showed in the first experiment (see Figure 10.8). In the backward condition, infants were expected to show the opposite pattern of responses if they were responding only to the direction of movement and not to the direction of the pointing finger. As can be observed in Figure 10.8, this pattern of results was not obtained: infants responded just as fast to the congruent as to the incongruent probe. These results thus suggest that infants as young as 4.5 months of age do not respond to a shared referent simply by following the direction of movement irrespective of the direction of the point.

A third possibility for why infants were able to follow the direction of a dynamic point is that the perceived point is mapped onto infants' own motor representations for pointing. This hypothesis is supported by at least two sources of evidence.

In the case of pointing, the goal of the observed action is to reorient the attention of another person so that an object becomes the shared focus of attention (Woodward & Guajardo, 2002). Research with adults reveals that action simulation facilitates the ability of observers to predict the effect of an action (e.g., Chaminade et al., 2001; Knoblich & Flach, 2001; Louis-Dam, Orliaguet, & Coello, 1999; Orliaguet, Kandel, & Boe, 1997). In addition, recent research suggests that slightly older infants, 6 to 12 months of age, also understand the goals or effects of an action (e.g., Gergely, Nádasdy, Csibra, & Biró, 2003; Luo & Baillargeon, 2006; Woodward, 1998).

If visual attention is the principal factor responsible for the preceding results, then it should not matter whether the action is carried out by a human or a mechanical agent as long as both agents are equally salient. If, however, the preceding results are a function of action simulation, then the distinction between a human and a mechanical agent should make a difference. Recall that simulation depends on the perceived similarity between the observed action and the specific motor responses available to the observer. In the last experiment, we put this hypothesis to the test by repeating the previous experiment with a stick moving to the left or to the right in place of a human hand. The stick initially appeared to be pointing toward the infant and waved up and down in a manner similar to the fingers waving up and down. After the infant fixated the stick, it was rotated so that it pointed toward the left or the right side of the screen, and then moved a short distance either in the direction it was pointing or in the opposite direction. This movement lasted 1000 ms. Similar to the previous experiments, the stick then disappeared and was replaced by a toy probe that appeared on the left or right side of the screen.

A new sample of 18 infants between 4.5 and 6.5 months of age was tested. Infants completed an average of 22 trials (SD = 5.3, range: 12–30). In both the forward and backward conditions, infants did not show a significant response time advantage for either the congruent direction of pointing or the congruent direction of movement (i.e., forward movement: congruent condition, or backward movement: incongruent condition) (see Figure 10.9). It is possible that this result is attributable to the decreased familiarity or salience of the stick, because, on average, response times were higher. However, this factor, by itself, is unlikely to be sufficient to explain the null results, because the critical finding is not the absolute difference in response times, but rather the relative difference in response times.

Figure 10.9 Stick experiment. Mean response times for infants to orient toward the congruent and incongruent probes. In the forward condition, the stick moved in the same direction it was pointing. In the backward condition, the stick moved in the direction opposite to the one in which it was pointing.

Accordingly, we conclude that infants, like adults, are more likely to predict the effect of an action when it is performed by a human effector, such as a hand, as opposed to a mechanical effector with little resemblance to the morphology or movements of the human action.

In sum, these results on following the direction of a point suggest that infants as young as 4.5 months of age are capable of shifting their attention in response to this action as long as the action is performed by a human agent. This finding thus appears consistent with infants predicting the goal of the deictic gesture, but there is a problem with attributing this prediction to simulating the observed action. A number of studies agree that canonical pointing emerges on average at 11 months of age, although some babies as young as 8.5 months of age have been observed to point (Butterworth & Morissette, 1996). If action simulation requires that the observed action is already in the motor repertoire of the observer, then the interpretation of simulation as the basis for predicting the referent cannot be correct.

There is, however, another way to interpret the basis for simulation which is more compatible with the current findings.

Infants as young as 4.5 months of age are not able to differentiate their hand and fingers so that only the index finger is extended in the direction of the arm, but it is entirely possible that the extension of the arm and hand is sometimes performed for the same purpose as a pointing gesture at an older age. Consistent with this hypothesis, Leung and Rheingold (1981) directly compared 10.5- to 16.5-month-old infants' arm extensions with open or closed hands and arm extensions with the index finger extended toward objects located at a distance from where they were sitting. The authors report that at the younger ages the majority of responses were reaches rather than pointing gestures to the distal objects. Although reaches are typically associated with an instrumental response, they were interpreted in this context as serving a social communicative function because they accompanied looking at and vocalizing to the mother.

The preceding evidence thus suggests that two actions executed by infants can share the same goal even though the means for achieving that goal differ. Interestingly, most of the evidence for an observation-execution matching system is based on shared goals and not on shared means to achieve these goals. Consider, for example, the previously discussed neurophysiological findings on mirror neurons. It was shown that these neurons discharge when observing or executing the same goal-directed action regardless of whether or not the specific movements matched (Rizzolatti et al., 1996). Likewise, behavioral research with human adults shows that response priming following observation of an action depends primarily on the observation of the goal and not on the specific means to the goal. For example, Longo et al. (in press) tested human adults in a choice reaction time task involving imitation of an index or middle finger tapping downwards. The results showed equivalent levels of response priming following the observation of a biomechanically possible or impossible finger tapping movement. In this example, the movements were different, but the goal of tapping downwards was the same in both conditions. If the movements associated with the observation of a goal-directed action need not be identical in order to simulate an observed action, then it may be sufficient that the two representations share some similar features. Indeed, this is the basis for the theory of event coding as presented by Hommel, Müsseler, Aschersleben, and Prinz (2001).

It is well established that infants are capable of predictive reaching for moving objects by 4.5 months of age (Bertenthal & von Hofsten, 1998; Rose & Bertenthal, 1995). The motor representation for predictive reaching may be sufficient to make contact with the goal of specifying a distal referent, allowing young infants to index and predict the goal of the point. Clearly, more research is needed to fully evaluate this hypothesis, but the possibility that a common code underlies the observation and execution of a manual gesture for specifying a distal referent by 4.5 months of age is certainly consistent with the available evidence.

Perception of Point-Light Displays of Biological Motion

The first step in understanding human actions is to perceptually organize the constituent movements in a manner consistent with the causal structure of the action. We have relied on moving point-light displays depicting biological motions to study this question, because observers are then forced to perceptually organize the stimuli in terms of the movements of the limbs without any contextual cues specified by featural information. In spite of the apparent ambiguity in these displays, adult observers are quite adept at extracting a coherent and unique structure from the moving point-lights (Bertenthal & Pinto, 1994; Cutting, Moore, & Morrison, 1988; Johansson, 1973; Proffitt, Bertenthal, & Roberts, 1984).

This conclusion is true even when the stimulus displays are masked by a large number of additional point-lights that share the same absolute motion vectors with the point-lights comprising the biological motion display. In one experiment (Bertenthal & Pinto, 1994), observers were instructed to judge whether the biological motion display depicted a person walking to the right or to the left. The target was comprised of 11 point-lights that moved in a manner consistent with the spatiotemporal patterning of a person walking, but was masked by an additional 66 moving point-lights that preserved the absolute motions and temporal phase relations of the stimulus display (see Figure 10.10). In spite of the similarity between the point-lights comprising the target and those comprising the distracters, adult observers displayed very high recognition rates for judging the correct direction of the gait. This judgment could not be attributed to the perception of individual point-lights, because recognition performance declined to chance levels when the stimuli were rotated 180°. If performance was based on the movements of individual point-lights, then the orientation of the display should not have mattered. Apparently, observers were detecting an orientation-specific spatiotemporal structure of the moving point-lights because it matched their internal representation, which was limited by ecological constraints to a person walking upright.
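The masking manipulation lends itself to a simple algorithmic description: each mask dot copies the frame-to-frame displacements of one walker dot but starts from a random position, preserving the absolute motions while destroying the global configuration. The sketch below is an illustrative reconstruction under that description; the dot counts, offsets, and data format are assumptions rather than the original stimulus-generation code.

import random

def scrambled_mask(walker_frames, n_mask_dots, area=2.0, seed=0):
    """Mask dots that each copy one walker dot's motion from a random start position.

    walker_frames: list of frames, each a list of (x, y) dot positions.
    """
    rng = random.Random(seed)
    n_walker_dots = len(walker_frames[0])
    mask_frames = [[] for _ in walker_frames]
    for m in range(n_mask_dots):
        src = m % n_walker_dots                       # walker dot whose motion is copied
        dx, dy = rng.uniform(-area, area), rng.uniform(-area, area)
        for t, frame in enumerate(walker_frames):
            x, y = frame[src]
            mask_frames[t].append((x + dx, y + dy))   # same displacements, displaced start
    return mask_frames

# Toy input: two frames of a two-dot "walker"; an actual display would use the
# eleven tracked or synthesized joint positions described in the text.
walker = [[(0.00, 1.00), (0.10, 0.60)],
          [(0.02, 1.00), (0.12, 0.58)]]
mask = scrambled_mask(walker, n_mask_dots=6)

Superimposing the mask dots on the walker dots produces a display in which every local motion is also present in the mask, so the walker can be found only by recovering its global spatiotemporal structure.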

Figure 10.10 Left panel depicts a point-light walker display appearing to walk to the right. (Outline of the human form is drawn for illustrative purposes only.) Right panel depicts a point-light walker display masked by moving point-lights preserving the absolute motions and temporal phase relations of the target stimulus (from Bertenthal & Pinto, 1994).

This orientation specificity appears to be the norm with regard to the perception of biological motions (e.g., Pavlova & Sokolov, 2000; Sumi, 1984). One intriguing interpretation of this repeated finding is that motor experience contributes to the perception of biological motions. Of course, this finding is also consistent with visual experience contributing to the perception of biological motions in a point-light display. Although there is still insufficient evidence to reach any firm conclusions, some recent experiments by Shiffrar and colleagues (Jacobs, Pinto, & Shiffrar, 2005; Loula, Prasad, Harber, & Shiffrar, 2005; Shiffrar & Pinto, 2002; Stevens, Fonlupt, Shiffrar, & Decety, 2000) have suggested that motor experience provides a unique contribution to the perception of biological motions by adults. In the remainder of this section, we will explore whether the same conclusion holds for infants' perception of biological motions.


For example, 3- and 5-month-old infants are able to discriminate a canonical point-light walker display from one in which the spatial arrangement of the point-lights is scrambled or the temporal phase relations of the point-lights are perturbed (Bertenthal, Proffitt et al., 1987a; Bertenthal, Proffitt, Spetner et al., 1987b). Similar to adults, infants show an orientation-specific response by 5 months of age (Bertenthal, 1993; Pinto & Bertenthal, 1993; Pinto, Shrager, & Bertenthal, 1994). At 3 months of age, infants discriminate a canonical from a perturbed point-light walker display when the displays are presented upright or upside down (Pinto et al., 1994). By 5 months of age, infants only discriminate these displays when they are presented upright (Pinto et al., 1994). Our interpretation for this developmental shift is that infants are responding to local configural differences at 3 months of age, but they are responding to global configural differences at 5 months of age (Bertenthal, 1993). In essence, the local configural differences can be detected independent of orientation.
Converging evidence for this interpretation comes from two experiments showing that infants do not discriminate point-light walker displays requiring a global percept until 5 months of age. In the first of these experiments (Pinto, 1997), 3- and 5-month-old infants were tested for discrimination of two point-light walker displays with a habituation paradigm. In this paradigm, infants are presented with one of the two stimulus displays for a series of trials until their visual attention to the stimulus declines significantly, and then they are presented with a novel stimulus display for two trials. Increased visual attention to the novel display is interpreted as discrimination. Infants were presented with a point-light walker display translating across the screen during the habituation phase of the experiment. Following habituation, infants were shown a translating point-light walker display in which the point-lights corresponding to the upper portion of the body were spatially shifted relative to the point-lights corresponding to the lower portion of the body. According to adult observers, this spatial displacement resulted in the perception of two point-light walkers in which one appeared to have its legs occluded and the other appeared to have its torso, arms, and head occluded. If infants did not perceive a point-light walker display as a global percept, then they would be less likely to detect this perturbation because both the spatially aligned and shifted displays would be perceived as a number of subgroupings of point-lights. If, however, infants did perceive the habituation stimulus display as a unitary object, then the spatially shifted translating walker would be discriminated from the preceding translating point-light walker. The results revealed that 3-month-old infants did not discriminate these two displays, but 5-month-old infants did.
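The logic of the habituation measure can be summarized as a simple decision rule. The sketch below is a generic, hypothetical version of an infant-controlled habituation criterion; the window size, the 50% threshold, and the looking times are illustrative assumptions, not the parameters used in the studies described here.

```python
from typing import List

def habituated(looking_times: List[float], window: int = 3,
               criterion: float = 0.5) -> bool:
    """Illustrative habituation rule: habituation is reached when mean looking
    over the last `window` trials falls below `criterion` times the mean of the
    first `window` trials."""
    if len(looking_times) < 2 * window:
        return False
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * baseline

def discriminated(habituation: List[float], test: List[float]) -> bool:
    """Discrimination is inferred when looking recovers on the novel test
    trials relative to the end of habituation."""
    end_of_habituation = sum(habituation[-3:]) / 3
    return sum(test) / len(test) > end_of_habituation

# Example: looking time (s) declines across habituation, then recovers at test.
habit = [42.0, 38.5, 40.0, 31.0, 22.0, 17.5, 15.0, 12.0]
print(habituated(habit))                    # True
print(discriminated(habit, [28.0, 24.0]))   # True -> interpreted as discrimination
```

In group comparisons such as those reported below, the question is whether looking recovers reliably more for one age group or display pairing than another, not whether a single infant crosses the threshold.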


In the second study, Amy Booth, Jeannine Pinto, and I conducted two experiments testing 3- and 5-month-old infants' sensitivity to the symmetrical patterning of human gait (Booth, Pinto, & Bertenthal, 2002). In this case, sensitivity to the patterning of the limbs implies that discrimination between displays could not occur on the basis of the perceived structure of any individual limb. If infants were primarily sensitive to this patterning, then we predicted that they would not discriminate between a point-light display depicting a person walking and a person running, because both displays share the same symmetrical gait pattern (even though they differ on many additional dimensions). By contrast, infants should discriminate between two displays in which the symmetrical phase relations of the limbs were perturbed. A habituation paradigm was again used to test infants' discrimination of the point-light displays. In Experiment 1, infants were presented with a point-light display depicting a person running and a second display depicting a person walking (see Figure 10.11). Unlike previous experiments in which the stimulus displays were synthesized with a computational algorithm, the stimulus displays in this study were created with a motion analysis system that tracked and stored the three-dimensional coordinates of discrete markers on the major joints of a person walking or running on a treadmill.

Figure 10.11 Four frames of the walker (in gray) superimposed over the runner (in black). Point-lights are connected for ease of comparison (from Booth et al., 2002).



Figure 10.12 (a) Top panel: Walker vs. Runner. Mean looking times on the last three habituation trials and on the two test trials as a function of age. The stimuli included a point-light walker and a point-light runner, each of which served as the habituation stimulus for half of the infants. (b) Bottom panel: Walker vs. Phase-Shifted Runner. Mean looking times on the last three habituation trials and on the two test trials as a function of age. The stimuli included a point-light walker and a phase-shifted point-light runner, each of which served as the habituation stimulus for half of the infants (from Booth et al., 2002).


Infants' discrimination performance revealed that 3-month-old infants discriminated the walker and the runner, whereas 5-month-old infants did not (see Figure 10.12a). These results are consistent with the possibility that 3-month-old infants were responsive to the differences in the speed and joint angles of the two displays, but that 5-month-old infants were more sensitive to the symmetrical patterning of both displays and therefore were less attentive to lower-level differences.
In order to confirm this interpretation, a second experiment was conducted to assess whether 3- and 5-month-old infants were sensitive to differences in the symmetrical patterning of gait. Infants were tested for discrimination of a canonical point-light walker and a point-light runner in which the point-lights corresponding to the right leg and the left arm were temporally phase shifted by 90°. The effect of this manipulation was to create a display in which one pair of diagonally opposite limbs reversed direction at 90° and 270° of the gait cycle, whereas the other pair of limbs reversed direction at 0° and 180° of the gait cycle (see Figure 10.13). The results from this experiment revealed that both 3- and 5-month-old infants discriminated the two point-light displays (see Figure 10.12b). Presumably, the younger infants discriminated these displays for the same reason that they discriminated the two displays in the first experiment, although strictly speaking we cannot rule out the possibility that they also detected the change in the symmetrical patterning of the displays. By contrast, the older infants had not discriminated the two displays in the first experiment, and thus their discrimination performance can only be explained in terms of detecting the changes in the gait pattern.
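The geometry of this manipulation can be checked with a few lines of code: if each limb's excursion is approximated as a sinusoid over the gait cycle, a limb reverses direction wherever its velocity (a cosine) crosses zero, so an unshifted limb reverses at 90° and 270° while a limb advanced by 90° reverses at 0° and 180°. The sketch below is only a back-of-the-envelope check of that arithmetic under a sinusoidal approximation; it does not reproduce the motion-capture trajectories used in the actual displays.

```python
import numpy as np

def reversal_points(phase_deg, step=0.1):
    """Gait-cycle positions (in degrees) where a limb modeled as
    sin(cycle + phase) reverses direction, i.e., where its velocity
    (the cosine) changes sign."""
    cycle = np.arange(0.0, 360.0, step)
    velocity = np.cos(np.radians(cycle + phase_deg))
    sign_change = np.where(np.diff(np.sign(velocity)) != 0)[0]
    return np.round(cycle[sign_change + 1]).astype(int)

# Canonical gait: the two diagonal limb pairs are 180 degrees apart,
# and both pairs reverse direction at the same points of the cycle.
print(reversal_points(0))        # reverses at 90 and 270
print(reversal_points(180))      # also reverses at 90 and 270

# Perturbed display: one diagonal pair (right leg, left arm) advanced by 90 degrees.
print(reversal_points(180 + 90)) # reverses at 0 and 180, out of step with the other pair
```

Under this approximation the perturbation leaves each limb's own trajectory unchanged in form; what changes is only the relative timing of the reversals across limbs, which is exactly the higher-order property the second experiment probes.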

Figure 10.13 Four frames of the walker (in gray) superimposed over the phase-shifted runner (in black). Point-lights are connected for ease of comparison (from Booth et al., 2002).


Taken together, the results from these last two experiments are illuminating for a number of reasons. First, they confirm that by 5 months of age, infants respond perceptually to the global structure of a moving point-light display. Second, the results from the second study suggest that infants by 5 months of age are sensitive to a fairly subtle higher-level property of the displays: the symmetrical patterning of gait. One interpretation for these findings is that the visual system becomes more spatially integrative with development and also becomes more sensitive to the temporal properties of moving point-light displays. A second interpretation is that infants' sensitivity to a point-light walker display as a unitary object, or as a hierarchical nesting of pendular motions with dynamic symmetry, is specifically related to this stimulus depicting a familiar event. We have previously argued that the orientation specificity of our findings suggests that visual experience is an important factor in the perceptual organization of these displays (Bertenthal, 1993). The third and final interpretation is that a globally coherent and hierarchically nested moving object with bilateral symmetry corresponds to an internal representation that is relevant not only to the perception of human gait, but to the production of this behavior as well.
Previous research reveals that infants capable of stepping on a split treadmill are biased to maintain a 180° phase relation between their two legs even when both sides of the treadmill are moving at different speeds (Thelen, Ulrich, & Niles, 1987). In a longitudinal investigation of this phenomenon, Thelen and Ulrich (1991) report that infants' performance shows rapid improvement between 3 and 6 months of age (see Figure 10.14). Interestingly, this is the same period of development during which infants show increased perceptual sensitivity to the global coherence of biological motions, especially as defined by bilateral symmetry or 0° and 180° phase modes. This correspondence in age gives further credence to the suggestion of a shared representation between the perception and production of biological motions. As previously discussed by Bertenthal and Pinto (1993), the perception and production of human movements share similar processing constraints relating to the phase relations of the limb movements; thus the development of one skill should facilitate the development of the other skill and vice versa.


Figure 10.14 Mean number of alternating steps by trial and age, pooled across all infants. Trials 1 and 9 are baseline trials (from Thelen & Ulrich, 1991).

The evidence for a shared representation is also supported by the results from a neuroimaging study conducted by Grèzes, Fonlupt, et al. (2001) showing that perception of point-light walker displays by adults activates an area in the occipital-temporal junction as well as an area in the intraparietal sulcus. Whereas the former cortical area is associated with the perception of objects, the latter area is part of the neural circuit involved in the planning and execution of actions. Converging evidence supporting this finding has also been reported by Saygin, Wilson, Hagler, Bates, and Sereno (2004). It is thus possible that the perception of these point-light displays by infants also activates the motor system, which confers via simulation an appreciation of the differences between a canonical and an unnatural gait pattern.
One final source of evidence to support this conjecture comes from an experiment testing infants' discrimination of a canonical point-light cat and a phase-shifted point-light cat (Bertenthal, 1993). Similar to the findings on discrimination of inverted point-light displays, 3-month-old infants discriminated these two point-light cat displays, but neither 5- nor 8-month-old infants discriminated these displays. Presumably, 3-month-old infants discriminated these displays based on local differences that were not specific to the identity of the stimulus.


Although it is reasonable to suggest that older infants did not discriminate these displays at a global level because they lacked sufficient visual experience, correlative evidence is inconsistent with this hypothesis (Pinto et al., 1996). When infants were divided into two groups based on whether one or more cats or dogs lived in their home, the results revealed absolutely no difference in discrimination performance as a function of whether or not infants had daily visual experience with a cat or dog. Thus, it appears that infants' insensitivity to the spatiotemporal perturbations in these displays may not have been attributable to their limited visual experience, but rather to their limited motor knowledge of quadrupedal gait. Interestingly, infants begin crawling on hands and knees between 7 and 9 months of age, which suggests that crawling experience as opposed to visual experience may have been a better predictor of their discrimination performance.

Conclusions

The findings summarized in this chapter provide some of the first evidence to suggest that an observation-execution matching system is functional during infancy and contributes to action understanding through: (1) online simulation of observed actions; (2) prediction of the effects of observed actions; and (3) perceptual organization of component movements comprising an action. In order to avoid any confusion, we want to emphasize that suggesting a possible contribution of the motor system to the perception and understanding of actions is not meant to exclude the very important contributions of more traditional mechanisms (e.g., visual attention and visual experience) for understanding actions. Furthermore, the evidence presented is by no means definitive, but it is buttressed by recent neurophysiological, neuroimaging, and behavioral findings suggesting a shared representation for the observation and execution of actions by primates and human adults (e.g., Bertenthal et al., 2006; Decety & Grèzes, 1999; Rizzolatti et al., 2001; Rizzolatti & Craighero, 2004).
The specialized circuitry and automatic activation following observation of an action suggest that the neural mechanisms mediating these functions may be part of the intrinsic organization of the brain. Indeed, this hypothesis is supported by evidence showing neonatal imitation (Meltzoff & Moore, 1994).


The best known example of imitation in young infants is the evidence for oro-facial gestures (mouth opening and tongue protrusion) by infants who have never seen their own face (Meltzoff & Moore, 1977). Unlike true imitation (cf. Tomasello & Call, 1997), only actions already in the motor repertoire can be facilitated. Still, visual information about the perceived action must somehow be mapped onto the infants' own motor representations (Meltzoff & Moore, 1994). In essence, this is the function of an observation-execution matching system.
Corollary evidence on the prenatal and early postnatal development of these oro-facial gestures is consistent with this conjecture. It is well established that fetuses perform mouth opening and closing and tongue protrusion while in utero (Prechtl, 1986). Thus, these gestures are already part of the neonate's behavioral repertoire at birth. The evidence also suggests that neonates are more likely to match the modeled gesture after it has been presented for some period of time (~40 s), rather than immediately (Anisfeld, 1991). This finding is consistent with a motor priming explanation in which activation would be expected to build up gradually as the gesture is modeled, as opposed to an explanation claiming the availability of higher-level processes from birth (cf. Meltzoff & Moore, 1994). Finally, the empirical evidence suggests that the likelihood of automatic imitation increases until around 2 months of age, and then declines and virtually disappears by 5 months of age (Fontaine, 1984; Maratos, 1982). It is during this same window of time, approximately 2 to 6 months of age, that neonatal reflexes are gradually inhibited (McGraw, 1943), suggesting that similar cortical inhibitory processes may serve to suppress neonatal imitation.
Although the spontaneous elicitation of these overt facial gestures becomes gradually inhibited with age, the gestures do not disappear entirely. Instead they become subject to volitional control such that the infant determines when and how they are elicited: imitation is no longer automatic, and the observation of a facial gesture will not lead to its execution by the infant. Thus, rather than reflecting a precocial social ability of the infant, as suggested by Meltzoff and Moore (1994), neonatal imitation may reflect a striking inability of the infant to inhibit activation of the motor system by direct matching mechanisms. (See Nakagawa, Sukigara, and Benga (2003) for some preliminary evidence supporting this interpretation.) Similar compulsive imitation is observed in adults after lesions of areas of the frontal lobe involved in inhibitory control (Lhermitte, Pillon, & Serdaru, 1986), and even in healthy adults when attention is diverted (Stengel, 1947).


Although overt imitation of facial gestures ceases with the development of inhibition, covert imitation continues and provides specific knowledge about these gestures when observed in others. We suggest that this same developmental process is played out at different ages for many other important behaviors (e.g., direction of gaze, visually directed reaching and grasping, vocalizations of sounds). As these behaviors are practiced, the infant develops greater control of their execution as well as knowledge of their effects or outcomes. The development of these motor schemas enables infants to covertly simulate and predict the effects of similar actions performed by others. This reliance on the developing control of self-produced actions explains why action understanding continues to develop throughout the lifespan.
Finally, the findings reviewed in this chapter are relevant to the current debate in the literature regarding the development of action understanding. The early development of representing actions as goal-directed has been studied from two different theoretical perspectives: (1) action understanding is reciprocally coupled to the capacity to produce goal-directed actions (Hofer, Hauf, & Aschersleben, 2005; Longo & Bertenthal, 2006; Sommerville, Woodward, & Needham, 2005), or (2) recognizing, interpreting, and predicting goal-directed actions is an innately based, abstract, and domain-specific representational system, specialized for identifying intentional agents or for representing and interpreting actions as goal-directed (e.g., Baron-Cohen, 1994; Csibra & Gergely, 1998; Gergely et al., 1995; Premack, 1990). The first perspective is consonant with the views discussed in this chapter. The second perspective suggests that infants are innately sensitive to abstract behavioral cues (such as self-propulsion, direction of movement, or eye gaze) that indicate agency, intentionality, or goal-directedness, irrespective of previous experience with the types of agents or actions that exhibit these cues. Infants are presumed sensitive to unfamiliar actions of humans or unfamiliar agents with no human features from very early in development as long as the actions are consistent with one or more of the proposed abstract cues.
Although the findings presented here cannot resolve this controversy, at the very least they cast doubt on the assertion that understanding goal-directed actions is based on an abstract representation that is independent of whether the agents are human or mechanical.


In both object search and follow-the-point studies, infants showed different levels of responding to human and mechanical agents. Moreover, the evidence reviewed on infants' sensitivities to biological motions suggests that the perceived structure of a point-light display is significantly greater when the display depicts a human action as opposed to a familiar or unfamiliar quadrupedal action. As we and our colleagues continue to investigate the evidence for common coding of observed and executed actions by infants, we hope to develop a more finely nuanced theory that will reveal what specific knowledge about actions is innate and what knowledge develops with age and experience.

References

Anisfeld, M. (1991). Neonatal imitation. Developmental Review, 11, 60–97.
Amano, S., Kezuka, E., & Yamamoto, A. (2004). Infant shifting attention from an adult's face to an adult's hand: A precursor of joint attention. Infant Behavior and Development, 27, 64–80.
Baldwin, D. A. (1995). Understanding the link between joint attention and language. In C. Moore & P. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 131–158). Hillsdale, NJ: Erlbaum.
Baron-Cohen, S. (1994). How to build a baby that can read minds: Cognitive mechanisms in mindreading. Cahiers de Psychologie Cognitive/Current Psychology of Cognition, 13, 1–40.
Beardsworth, T., & Buckner, T. (1981). The ability to recognize oneself from a video recording of one's movements without seeing one's body. Bulletin of the Psychonomic Society, 18, 19–22.
Bekkering, H., Wohlschlager, A., & Gattis, M. (2000). Imitation of gestures in children is goal-directed. Quarterly Journal of Experimental Psychology, 53A, 153–164.
Bertenthal, B. I. (1993). Perception of biomechanical motions by infants: Intrinsic image and knowledge-based constraints. In C. E. Granrud (Ed.), Carnegie symposium on cognition: Visual perception and cognition (pp. 175–214). Hillsdale, NJ: Erlbaum.
Bertenthal, B. I., Longo, M. R., & Kosobud, A. (2006). Imitative response tendencies following observation of intransitive actions. Journal of Experimental Psychology: Human Perception and Performance, 32, 210–225.
Bertenthal, B. I., & Pinto, J. (1993). Complementary processes in the perception and production of human movements. In L. B. Smith & E. Thelen (Eds.), A dynamic systems approach to development: Applications (pp. 209–239). Cambridge, MA: MIT Press.


Bertenthal, B. I., & Pinto, J. (1994). Global processing of biological motions. Psychological Science, 5, 221–225.
Bertenthal, B. I., Proffitt, D. R., & Cutting, J. E. (1984). Infant sensitivity to figural coherence in biomechanical motions. Journal of Experimental Child Psychology, 37, 213–230.
Bertenthal, B. I., Proffitt, D. R., & Kramer, S. J. (1987). Perception of biomechanical motions by infants: Implementation of various processing constraints. Journal of Experimental Psychology: Human Perception and Performance, 13, 577–585.
Bertenthal, B. I., Proffitt, D. R., Kramer, S. J., & Spetner, N. B. (1987). Infants' encoding of kinetic displays varying in relative coherence. Developmental Psychology, 23, 171–178.
Bertenthal, B. I., Proffitt, D. R., Spetner, N. B., & Thomas, M. A. (1985). The development of infant sensitivity to biomechanical motions. Child Development, 56, 264–298.
Bertenthal, B. I., & Von Hofsten, C. (1998). Eye, head and trunk control: The foundation for manual development. Neuroscience and Biobehavioral Reviews, 22, 515–520.
Blakemore, S. J., & Decety, J. (2001). From the perception of action to the understanding of intention. Nature Reviews Neuroscience, 2, 561–567.
Booth, A., Pinto, J., & Bertenthal, B. I. (2002). Perception of the symmetrical patterning of human gait by infants. Developmental Psychology, 38, 554–563.
Brass, M., Bekkering, H., & Prinz, W. (2001). Movement observation affects movement execution in a simple response task. Acta Psychologica, 106, 3–22.
Brass, M., Bekkering, H., Wohlschlager, A., & Prinz, W. (2000). Compatibility between observed and executed finger movements: Comparing symbolic, spatial, and imitative cues. Brain and Cognition, 44, 124–143.
Brass, M., Zysset, S., & von Cramon, D. Y. (2001). The inhibition of imitative response tendencies. NeuroImage, 14, 1416–1423.
Bruner, J. S. (1969). Eye, hand, and mind. In D. Elkind & J. H. Flavell (Eds.), Studies in cognitive development: Essays in honor of Jean Piaget (pp. 223–235). Oxford: Oxford University Press.
Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience, 13, 400–404.
Butterworth, G. (2003). Pointing is the royal road to language for babies. In S. Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 61–83). Mahwah, NJ: Erlbaum.


Butterworth, G., & Morisette, P. (1996). Onset of pointing and the acquisition of language in infancy. Journal of Reproductive and Infant Psychology, 14, 219–231.
Calvo-Merino, B., Glaser, D. E., Grezes, J., Passingham, R. E., & Haggard, P. (2005). Action observation and acquired motor skills: An fMRI study with expert dancers. Cerebral Cortex, 15, 1243–1249.
Castiello, U., Lusher, D., Mari, M., Edwards, M., & Humphreys, G. W. (2002). Observing a human or a robotic hand grasping an object: Differential motor priming effects. In W. Prinz & B. Hommel (Eds.), Common mechanisms in perception and action: Attention and performance (Vol. 19, pp. 315–333). Oxford: Oxford University Press.
Chaminade, T., Meary, D., Orliaguet, J.-P., & Decety, J. (2001). Is perceptual anticipation a motor simulation? A PET study. NeuroReport, 12, 3669–3674.
Corkum, V., & Moore, C. (1998). The origins of visual attention in infants. Developmental Psychology, 34, 28–38.
Csibra, G., & Gergely, G. (1998). The teleological origins of mentalistic action explanations: A developmental hypothesis. Developmental Science, 1, 255–259.
Csibra, G., Gergely, G., Biro, S., Koos, O., & Brockbank, M. (1999). Goal attribution without agency cues: The perception of "pure reason" in infancy. Cognition, 72, 237–267.
Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44, 339–347.
Deak, G. O., Flom, R. A., & Pick, A. D. (2000). Effects of gesture and target on 12- and 18-month-olds' joint visual attention to objects in front of or behind them. Developmental Psychology, 36, 511–523.
Decety, J., Chaminade, T., Grezes, J., & Meltzoff, A. N. (2002). A PET exploration of the neural mechanisms involved in reciprocal imitation. NeuroImage, 15, 265–272.
Decety, J., & Grèzes, J. (1999). Neural mechanisms subserving the perception of human actions. Trends in Cognitive Sciences, 3, 172–178.
Decety, J., & Sommerville, J. A. (2004). Shared representations between self and other: A social cognitive neuroscience view. Trends in Cognitive Sciences, 7, 527–533.
Diamond, A. (1985). The development of the ability to use recall to guide action, as indicated by infants' performance on A-not-B. Child Development, 56, 868–883.
Diedrich, F. J., Highlands, T. M., Spahr, K. A., Thelen, E., & Smith, L. B. (2001). The role of target distinctiveness in infant perseverative reaching. Journal of Experimental Child Psychology, 78, 263–290.
Edwards, M. G., Humphreys, G. W., & Castiello, U. (2003). Motor facilitation following action observation: A behavioral study in prehensile action. Brain and Cognition, 53, 495–502.


Fadiga, L., Fogassi, L., Pavesi, G., & Rizzolatti, G. (1995). Motor facilitation during action observation: A magnetic stimulation study. Journal of Neurophysiology, 73, 2608–2611.
Farroni, T., Johnson, M. H., Brockbank, M., & Simion, F. (2000). Infants' use of gaze direction to cue attention: The importance of perceived motion. Visual Cognition, 7, 705–718.
Farroni, T., Massaccesi, S., Pividori, D., & Johnson, M. H. (2004). Gaze following in newborns. Infancy, 5, 39–60.
Ferrari, F., Rozzi, S., & Fogassi, L. (2005). Mirror neurons responding to observation of actions made with tools in monkeys ventral premotor cortex. Journal of Cognitive Neuroscience, 17, 221–226.
Flach, R., Knoblich, G., & Prinz, W. (2003). Off-line authorship effects in action perception. Brain and Cognition, 53, 503–513.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G. (2005). Parietal lobe: From action organization to intention understanding. Science, 308, 662–667.
Fontaine, R. (1984). Imitative skills between birth and six months. Infant Behavior and Development, 7, 323–333.
Frith, C., & Frith, U. (2006). How we predict what other people are going to do. Brain Research, 1079, 36–46.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119, 593–609.
Gergely, G., Nádasdy, Z., Csibra, G., & Bíró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165–193.
Greenwald, A. G. (1970). Sensory feedback mechanisms in performance control: With special reference to the ideo-motor mechanism. Psychological Review, 77, 73–99.
Grèzes, J., Armony, J. L., Rowe, J., & Passingham, R. E. (2003). Activations related to "mirror" and "canonical" neurones in the human brain: An fMRI study. NeuroImage, 18, 928–937.
Grèzes, J., Fonlupt, P., Bertenthal, B. I., Delon-Martin, C., Segebarth, C., & Decety, J. (2001). Does perception of biological motion rely on specific brain regions? NeuroImage, 13, 775–785.
Grèzes, J., Frith, C., & Passingham, R. E. (2004). Inferring false beliefs from the actions of oneself and others: An fMRI study. NeuroImage, 21, 744–750.
Heyes, C., Bird, G., Johnson, H., & Haggard, P. (2005). Experience modulates automatic imitation. Cognitive Brain Research, 22, 233–240.
Hofer, T., Hauf, P., & Aschersleben, G. (2005). Infant's perception of goal-directed actions performed by a mechanical claw device. Infant Behavior and Development, 28, 466–480.
Hommel, B., Musseler, J., Aschersleben, G., & Prinz, W. (2001). The theory of event coding (TEC): A framework for perception and action planning. Behavioral and Brain Sciences, 24, 849–937.


Hood, B. M., Willen, J. D., & Driver, J. (1998). Adult's eyes trigger shifts of visual attention in human infants. Psychological Science, 9, 131–134.
Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J. C., & Rizzolatti, G. (2005). Grasping the intentions of others with one's own mirror neuron system. PLoS Biology, 3, 529–535.
Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C., & Rizzolatti, G. (1999). Cortical mechanisms of human imitation. Science, 286, 2526–2528.
Jacobs, A., Pinto, J., & Shiffrar, M. (2004). Experience, context, and the visual perception of human movement. Journal of Experimental Psychology: Human Perception and Performance, 30, 822–835.
James, W. (1890). The principles of psychology (Vol. 1). New York: Dover.
Jeannerod, M. (1997). The cognitive neuroscience of action. Cambridge, MA: Blackwell.
Jeannerod, M. (2001). Neural simulation of action: A unifying mechanism for motor cognition. NeuroImage, 14, S103–S109.
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception and Psychophysics, 14, 201–211.
Jordan, M. I. (1996). Computational aspects of motor control and motor learning. In H. Heuer & S. Keele (Eds.), Handbook of perception and action, Vol. 2: Motor skills (pp. 71–120). New York: Academic Press.
Kandel, S., Orliaguet, J. P., & Viviani, P. (2000). Perceptual anticipation in handwriting: The role of implicit motor competence. Perception and Psychophysics, 62, 706–716.
Kawato, M., Furawaka, K., & Suzuki, R. (1987). A hierarchical neural network model for the control and learning of voluntary movements. Biological Cybernetics, 56, 1–17.
Kilner, J. M., Paulignan, Y., & Blakemore, S.-J. (2003). An interference effect of observed biological movement on action. Current Biology, 13, 522–525.
Kiraly, I., Jovanovic, B., Prinz, W., Aschersleben, G., & Gergely, G. (2003). The early origins of goal attribution in infancy. Consciousness and Cognition, 12, 752–769.
Knoblich, G. (2002). Self-recognition: Body and action. Trends in Cognitive Sciences, 6, 447–449.
Knoblich, G., Elsner, B., Aschersleben, G., & Metzinger, T. (2003). Grounding the self in action. Consciousness and Cognition, 12, 487–494.
Knoblich, G., & Flach, R. (2001). Predicting the effects of actions: Interactions of perception and action. Psychological Science, 12, 467–472.
Koski, L., Iacoboni, M., Dubeau, M.-C., Woods, R. P., & Mazziotta, J. C. (2003). Modulation of cortical activity during different imitative behaviors. Journal of Neurophysiology, 89, 460–471.


Lacerda, F., von Hofsten, C., & Heimann, M. (Eds.). (2001). Emerging cognitive abilities in early infancy. Hillsdale, NJ: Erlbaum.
Leung, E. H., & Rheingold, H. L. (1981). Development of pointing as a social gesture. Developmental Psychology, 17, 215–220.
Lhermitte, F., Pillon, B., & Serdaru, M. (1986). Human autonomy and the frontal lobes. Part I: Imitation and utilization behavior: A neuropsychological study of 75 patients. Annals of Neurology, 19, 326–334.
Longo, M. R., & Bertenthal, B. I. (2006). Common coding of observation and execution of action in 9-month-old infants. Infancy, 10, 43–59.
Longo, M. R., Kosobud, A., & Bertenthal, B. I. (in press). Automatic imitation of biomechanically impossible actions: Effects of priming movements vs. goals. Journal of Experimental Psychology: Human Perception and Performance.
Louis-Dam, A., Orliaguet, J.-P., & Coello, Y. (1999). Perceptual anticipation in grasping movement: When does it become possible? In M. A. Grealy & J. A. Thomson (Eds.), Studies in perception and action V (pp. 135–139). Mahwah, NJ: Erlbaum.
Loula, F., Prasad, S., Harber, K., & Shiffrar, M. (2005). Recognizing people from their movement. Journal of Experimental Psychology: Human Perception and Performance, 31, 210–220.
Luo, Y., & Baillargeon, R. (2005). Can a self-propelled box have a goal? Psychological Science, 16, 601–608.
Maratos, O. (1982). Trends in the development of imitation in early infancy. In T. G. Bever (Ed.), Regressions in mental development: Basic phenomena and theories (pp. 81–102). Hillsdale, NJ: Erlbaum.
Marcovitch, S., & Zelazo, P. D. (1999). The A-not-B error: Results from a logistic meta-analysis. Child Development, 70, 1297–1313.
Marcovitch, S., Zelazo, P. D., & Schmuckler, M. A. (2002). The effect of the number of A trials on performance on the A-not-B task. Infancy, 3, 519–529.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of information. New York: W. H. Freeman.
McGraw, M. B. (1943). Neuromuscular maturation of the human infant. New York: Columbia University Press.
Meltzoff, A. N., & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198, 75–78.
Meltzoff, A. N., & Moore, M. K. (1994). Imitation, memory, and the representation of persons. Infant Behavior and Development, 17, 83–99.
Miall, R. C., & Wolpert, D. M. (1996). Forward models for physiological motor control. Neural Networks, 9, 1265–1279.
Munakata, Y. (1997). Perseverative reaching in infancy: The roles of hidden toys and motor history in the AB task. Infant Behavior and Development, 20, 405–415.


Munakata, Y. (1998). Infant perseveration and implications for object permanence theories: A PDP model of the AB task. Developmental Science, 1, 161–184.
Nakagawa, A., Sukigara, M., & Benga, O. (2003). The temporal relationship between reduction of early imitative responses and the development of attention mechanisms. BMC Neuroscience, 4, http://www.biomedcentral.com/1471-2202/4/33.
Orliaguet, J.-P., Kandel, S., & Boe, L. J. (1997). Visual perception of motor anticipation in cursive handwriting: Influence of spatial and movement information on the prediction of forthcoming letters. Perception, 26, 905–912.
Pavlova, M., & Sokolov, M. (2000). Orientation specificity in biological motion perception. Perception and Psychophysics, 62, 889–899.
Pelphrey, K. A., Morris, J. P., & McCarthy, G. (2005). Neural basis of eye gaze processing deficits in autism. Brain, 128, 1038–1048.
Pinto, J. (1997). Developmental changes of infants' perceptions of point-light displays of human gait. Dissertation Abstracts International: Section B: The Sciences and Engineering, 57(8-B), 5365.
Pinto, J., & Bertenthal, B. I. (1992). Effects of phase relations on the perception of biomechanical motions. Investigative Ophthalmology and Visual Science, 33(Suppl.), 1139.
Pinto, J., & Bertenthal, B. I. (1993). Infants' perception of unity in point-light displays of human gait. Investigative Ophthalmology and Visual Science, 34(Suppl.), 1084.
Pinto, J., Bertenthal, B. I., & Booth, A. (1996). Developmental changes in infants' responses to biological motion displays. Investigative Ophthalmology and Visual Science, 37(Suppl.), S63.
Pinto, J., Shrager, J., & Bertenthal, B. I. (1993). Developmental changes in infants' perceptual processing of biomechanical motions. In Proceedings of the Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum.
Posner, M. I. (1978). Chronometric explorations of mind. Hillsdale, NJ: Erlbaum.
Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32A, 3–25.
Prechtl, H. F. R. (1986). New perspectives in early human development. European Journal of Obstetrics, Gynecology and Reproductive Biology, 21, 347–355.
Premack, D. (1990). The infant's theory of self-propelled objects. Cognition, 36, 1–16.
Proffitt, D. R., Bertenthal, B. I., & Roberts, R. J. (1984). The role of occlusion in reducing multistability in moving point-light displays. Perception and Psychophysics, 35, 315–323.


Repp, B., & Knoblich, G. (2004). Perceiving action identity: How pianists recognize their own performances. Psychological Science, 15, 604–609.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131–141.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2, 661–670.
Rose, J. L., & Bertenthal, B. I. (1995). A longitudinal study of the visual control of posture in infancy. In R. J. Bootsma & Y. Guiard (Eds.), Studies in perception and action (pp. 251–253). Mahwah, NJ: Erlbaum.
Ruffman, T., & Langman, L. (2002). Infants' reaching in a multi-well A not B task. Infant Behavior and Development, 25, 237–246.
Saxe, R., Xiao, D.-K., Kovacs, G., Perrett, D. I., & Kanwisher, N. (2004). A region of right posterior superior temporal sulcus responds to observed intentional actions. Neuropsychologia, 42, 1435–1446.
Saygin, A. P., Wilson, S. M., Hagler, D. J., Bates, E., & Sereno, M. I. (2004). Point-light biological motion perception activates human premotor cortex. Journal of Neuroscience, 24, 6181–6188.
Scaife, M., & Bruner, J. (1975). The capacity for joint attention in the infant. Nature, 253, 265–266.
Schofield, W. N. (1976). Hand movements which cross the body midline: Findings relating age differences to handedness. Perceptual and Motor Skills, 42, 643–646.
Shiffrar, M., & Pinto, J. (2002). The visual analysis of bodily motion. In W. Prinz & B. Hommel (Eds.), Common mechanisms in perception and action: Attention and performance (Vol. 19, pp. 381–399). Oxford: Oxford University Press.
Smith, L. B., Thelen, E., Titzer, R., & McLin, D. (1999). Knowing in the context of acting: The task dynamics of the A-not-B error. Psychological Review, 106, 235–260.
Sommerville, J. A., Woodward, A. L., & Needham, A. (2005). Action experience alters 3-month-old infants' perception of others' actions. Cognition, 96, B1–B11.
Stengel, E. (1947). A clinical and psychological study of echo-reactions. Journal of Mental Science, 93, 598–612.
Stevens, J. A., Fonlupt, P., Shiffrar, M., & Decety, J. (2000). New aspects of motion perception: Selective neural encoding of apparent human movements. NeuroReport, 11, 109–115.
Sumi, S. (1984). Upside-down presentation of the Johansson moving light-spot pattern. Perception, 13, 283–286.


Tai, Y. F., Scherfler, C., Brooks, D. J., Sawamoto, N., & Castiello, U. (2004). The human premotor cortex is 'mirror' only for biological actions. Current Biology, 14, 117–120.
Thelen, E., Schoner, G., Scheier, C., & Smith, L. B. (2001). The dynamics of embodiment: A field theory of infant perseverative reaching. Behavioral and Brain Sciences, 24, 1–86.
Thelen, E., & Ulrich, B. (1991). Hidden skills. Monographs of the Society for Research in Child Development, 56(1, Serial No. 223).
Thelen, E., Ulrich, B. D., & Niles, D. (1987). Bilateral coordination in human infants: Stepping on a split-belt treadmill. Journal of Experimental Psychology: Human Perception and Performance, 13, 405–410.
Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA: Harvard University Press.
Tomasello, M., & Call, J. (1997). Primate cognition. Oxford: Oxford University Press.
van Hof, P., van der Kamp, J., & Savelsbergh, G. (2002). The relation of unimanual and bimanual reaching to crossing the midline. Child Development, 73, 1353–1362.
Verfaillie, K. (1993). Orientation-dependent priming effects in the perception of biological motion. Journal of Experimental Psychology: Human Perception and Performance, 19, 992–1013.
Viviani, P., Baud-Bovy, G., & Redolfi, M. (1997). Perceiving and tracking kinesthetic stimuli: Further evidence of motor-perceptual interactions. Journal of Experimental Psychology: Human Perception and Performance, 23, 1232–1252.
Viviani, P., & Mounoud, P. (1990). Perceptual-motor compatibility in pursuit tracking of two-dimensional movements. Journal of Motor Behavior, 22, 407–443.
Viviani, P., & Stucchi, N. (1992). Biological movements look uniform: Evidence of motor-perceptual interactions. Journal of Experimental Psychology: Human Perception and Performance, 18, 603–623.
Vogt, S., Taylor, P., & Hopkins, B. (2003). Visuomotor priming by pictures of hand postures: Perspective matters. Neuropsychologia, 41, 941–951.
von Hofsten, C. (2003). On the development of perception and action. In J. Valsiner & K. J. Connolly (Eds.), Handbook of developmental psychology (pp. 114–171). London: Sage.
Wapner, S., & Cirillo, L. (1968). Imitation of a model's hand movements: Age changes in transposition of left-right relations. Child Development, 39, 887–894.
Wolpert, D. M., Doya, K., & Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society of London B, 358, 593–602.
Wolpert, D. M., & Flanagan, J. R. (2001). Motor prediction. Current Biology, 11, R729–R732.


Woodward, A. L. (1998). Infants selectively encode the goal object of an actor's reach. Cognition, 69, 1–34.
Woodward, A. L. (1999). Infants' ability to distinguish between purposeful and nonpurposeful behaviors. Infant Behavior and Development, 22, 145–160.
Woodward, A. L., & Guajardo, J. J. (2002). Infants' understanding of the point gesture as an object-directed action. Cognitive Development, 17, 1061–1084.
Woodward, A. L., & Sommerville, J. A. (2000). Twelve-month-old infants interpret action in context. Psychological Science, 11, 73–77.
Zelazo, P. D., Reznick, J. S., & Spinazzola, J. (1998). Representational flexibility and response control in a multistep multilocation search task. Developmental Psychology, 34, 203–214.

11 How Mental Models Encode Embodied Linguistic Perspectives

Brian MacWhinney

Humans demonstrate a remarkable ability to take other people's perspectives. When we watch movies, we find ourselves identifying with the actors, sensing their joys, hopes, fears, and sorrows. As viewers, we can be moved to exhilaration as we watch our heroes overcome obstacles; or we can be moved to tears when they suffer losses and defeats. This process of identification does not always have to be linked to intense emotional involvement. At a soccer match, we can follow the movements of a player moving in to shoot for a goal. We can identify with the player's position, stance, and maneuvers against the challenges offered by the defenders. We can track the actions, as the player drives toward the goal and kicks the ball into the net. This ability to take the perspective of another person is very general. Just as we follow the movements of dancers, actors, and athletes, we can also follow the thoughts and emotions expressed by others in language. In this paper, we will explore the ways in which language builds upon our basic system for projecting the body image to support a rich system of perspective tracking and mental model construction.


Projection

It is useful to think of projection as relying on the interaction of four systems, each of which is fundamental to psychological functioning. These four systems involve body image, localization, empathy, and perspective tracking.

Body Image Matching

In order to assume the perspective of another actor, one must first be able to construct a full image for one's own body. This image expresses not only the positions of our organs and limbs, but also their movements and configurations. For most of us, the notion of a body schema is something natural and inescapable. However, there are various neural disorders and injuries (Ramachandran, 2000; Ramachandran & Hubbard, 2001) that can lead to the disruption of the body image. The construction of the body image is based on processing in a wide variety of sites including the cerebellum (Middleton & Strick, 1998), medial prefrontal cortex (Macrae, Heatherton, & Kelley, 2004), primary motor cortex (Kakei, Hoffman, & Strick, 1999), and insula (Adolphs, 2003).
In order to achieve perspective taking, we need a system that can project our body image to other agents. The first step in this process must involve body part mapping. We have to identify specific organs or body parts on others that we then map onto parallel parts in our own body image. Meltzoff (1995) and others have traced the roots of body part matching back to infancy, when the baby can already demonstrate matching by imitating actions such as tongue protrusion or head motions. We have learned that, when a subject in an fMRI experiment tracks the movements of a particular limb or organ, there is corresponding activation of the same neural pathways the subject would use to produce those actions. For example, when we imagine performing bicep curls, there are discharges to the biceps (Jeannerod, 1997). When a trained marksman imagines shooting a gun, the discharges to the muscles mimic those found in real target practice. When we imagine eating, there is an increase in salivation. Neuroimaging studies by Parsons et al. (1995), Martin, Wiggs, Ungerleider, and Haxby (1996), and Cohen et al. (1996) have shown that, when subjects are asked to engage in mental imagery, they use modality-specific sensorimotor cortical systems.


For example, in the study by Martin et al., the naming of tool words specifically activated the areas of the left premotor cortex that control hand movements. Similarly, when we observe pain in a particular finger in another person, transcranial magnetic stimulation shows activation of the precise motor area that controls the muscle for that same finger (Avenanti, Bueti, Galati, & Aglioti, 2005). The general conclusion from this research is that, once we have achieved body part matching, we can perceive the actions of others by projecting them onto the mechanisms we use for creating and perceiving these actions ourselves. In effect, we run a cognitive simulation (Bailey, Chang, Feldman, & Narayanan, 1998) and we match the products of this cognitive simulation to our perception of the actions of others. This ability to compute a cognitive simulation is at the heart of both projection and imagery.
From recent single-cell work with monkeys, we know that perceptual-motor matching is a very general aspect of the primate brain. Apart from the mirror neurons (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996) that were originally identified in the monkey counterpart to Broca's area, there are also perceptual-motor matching systems in other areas of both frontal and parietal cortex. The facility with which we are able to project perspective onto others suggests that the body matching system is probably quite extensive, but the exact extent of this system is not yet known.

Localization

In order to assume the perspective of another actor, one must also position that actor correctly in mental space. To do this we rely on the same system that allows us to locate our own bodies in space. This system must be robust against eye movements and turns of the head (Ballard, Hayhoe, Pook, & Rao, 1997); it should allow us to maintain facing and update our position vis-à-vis landmarks and other spatial configurations (Klatzky & Wu; Proffitt, this volume). Once we have succeeded in projecting our body image to the other, we can next begin to position this projected image in the distal physical location (Loomis & Philbeck, this volume). One difficult part of this projection is the requirement that we rotate our body image 180° if the other person is facing us.
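As a concrete illustration of that last requirement, a 180° rotation about the vertical axis is enough to map locations described in the other person's egocentric frame into the observer's frame, so that the other person's "right" lands on the observer's left. The few lines below are purely illustrative; the coordinate conventions and numbers are assumptions made for this sketch, not anything specified in the chapter.

```python
import numpy as np

def to_observer_frame(point_other, other_position_xy):
    """Map a point given in the other person's egocentric frame
    (x = their right, y = their forward) into the observer's frame,
    assuming the two people stand upright facing each other, so the
    frames differ by a 180-degree rotation about the vertical axis."""
    theta = np.pi  # 180 degrees
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])   # numerically [[-1, 0], [0, -1]]
    return other_position_xy + rot @ point_other

other_position = np.array([0.0, 2.0])    # the other person faces me, 2 m straight ahead
their_right_hand = np.array([0.3, 0.0])  # 30 cm to *their* right

print(to_observer_frame(their_right_hand, other_position))
# approximately [-0.3, 2.0] -> 30 cm to *my* left, as expected when facing someone
```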


Empathy

In order to assume the emotional and social perspective of another, we must first be able to have access to our own emotions. We can then project or transfer these feelings to others. Of course, there is no doubt that humans have a well-organized system of emotions that maps into physiological expression through the face, body, and autonomic nervous system. Once we have achieved projection of body image and spatial location, we can begin to identify with the emotions of the other person. Studies of patients with lesions (Adolphs & Spezio, 2006) have pointed to a role for the amygdala in the processing of emotional expression. Studies with normals using fMRI (Meltzoff & Decety, 2003) have shown activation in the amygdala and temporal areas (Pelphrey et al., 2003; Pelphrey, Morris, & McCarthy, 2005; Pelphrey, Viola, & McCarthy, 2004) for the processing of social expressions and actions.

Perspective Tracking

To obtain smooth projection, we must be able to constantly update, shift, and track the other actor on each of these dimensions (body image, spatial location, empathy). To do this, we have to treat the other body as obeying the same kinematic principles that govern our own body (Knoblich, this volume). At the same time, we must be able to distinguish the projected or imagined image from our own body image. This means that the tracking system must allow us to treat the projected image as "fictive" or "simulated" and our own body image as real. Maintaining this type of clear separation between simulation and reality would seem to require special neuronal adaptations. These adaptations include mirror neurons as one component, but they also require additional support from frontal areas for attention and regulation. This additional support is required to allow us to shift perspective between multiple external agents. For this system to function smoothly, we must store a coherent fictive representation of the other person. In order to switch smoothly between perspectives, it is crucial that the projected agent be easily accessed and constructed as a clearly separate fictive representation.


Researchers such as Donald (1991) and Givon (2002) have proposed that, two million years ago, Homo erectus relied on a system of gestural and mimetic communication to further social goals. Adaptations that supported this tracking of visual perspectives would have formed an evolutionary substrate for the later elaboration of perspective tracking in language (MacWhinney, 2005). There is still a close temporal and conceptual linkage between gesture and language (McNeill, 1992), suggesting that both systems continue to rely on a core underlying set of abilities to track perspective.
Subjective evidence from drama, movies, and literature suggests that identification is a dynamic and integrated process. This process produces a flowing match between our own motoric and emotional functions and the motoric and emotional functions of others. However, neurological evidence regarding the dynamic functioning of this system is still missing. We know that there is neuronal support for each of the four projection systems. However, we cannot yet follow perspective tracking in real time. In large part, this gap in our knowledge is a function of the limitations of measurements taken through methods such as fMRI and ERP.

The Perspective Hypothesis

Fortunately, there is evidence of a very different type that can help us understand the nature of ongoing projection and perspective taking. This is the evidence that is encoded in the structure of grammatical systems. Close analysis shows how grammar reflects the cognitive operations underlying perspective tracking. I refer to this general view of mental projection as the perspective hypothesis. We can decompose the overall perspective hypothesis into six more specific assumptions or claims:
1. Language functions by promoting the sharing of mental models between the speaker and the listener.
2. To build up complex mental models, referents and actions must be connected dynamically through temporal, spatial, and causal linkages.
3. The links in mental models are structured by perspective tracking.
4. Perspective tracking is supported by specific neuronal projection systems that keep these cognitive simulations separate from direct perception and action.

5. The primary function of grammatical devices is to mark perspective tracking, as it emerges in conversation and narrative.
6. Perspective tracking, as realized in language, facilitates the linking of mental systems and the cultural transmission of linked structures.

For psychologists and cognitive linguists, the first two assumptions are largely uncontroversial. Since the beginning of the Cognitive Revolution, researchers have assumed that comprehension involves the construction of linkages in complex mental models. In regard to these first two claims, the perspective hypothesis is simply building on traditional, well-supported assumptions.
The traditional form of assumption 3 is that discourse linkages involve coreference between nodes in semantic structure. For example, we could have a discourse composed of two simple propositions in a sentence such as: the boy kicked the ball, and the ball rolled into the gutter. In the mental model or semantic memory constructed from this sentence, the ball of the first clause is linked through coreference to the ball of the second clause. In classic models of sentence interpretation (Budiu & Anderson, 2004; Kintsch & Van Dijk, 1978), the main mechanism for discourse linkage was coreference. Even models that decomposed semantic structure (Miller & Johnson-Laird, 1976) still maintained a reliance on coreference as the method for linking propositions in mental models.
The perspective hypothesis introduces a fundamental shift in our understanding of the construction of mental models. In this new view, which is encoded as assumption 3, propositions are embodied representations constructed from a specific perspective, which is initially the perspective of the sentential subject. We can refer to the combined interaction of perspective taking and perspective shifting as perspective tracking. Although coreference plays a secondary role in licensing linkage between propositions, mental models use perspective tracking as their primary integrating mechanism. In the case of our simple example sentence, this means that the perspective of the boy in the first sentence is shifted to that of the ball in the second clause, and we then track the motion of the ball as it rolls into the gutter. In other words, we construct mental models by taking and shifting perspectives. Within this larger process of perspective tracking, deixis, or verbal pointing, plays the role of bringing new referents to our attention or locating referents in either working memory or long-term memory. However, the structuring of propositions into mental models depends primarily on taking the perspective of the entities discussed in a discourse and tracking this flow of perspective between the various discourse participants.
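One way to see the difference between the two linking mechanisms is to sketch them as data structures. The toy representation below is written purely for this discussion and is not MacWhinney's formal proposal: each clause is stored together with the perspective it is construed from, and the model records the shift from the boy's perspective to the ball's perspective as the integrating link between the two clauses, with coreference of "ball" playing only a licensing role.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Proposition:
    perspective: str          # entity whose viewpoint the clause is built from
    predicate: str
    arguments: List[str]

@dataclass
class MentalModel:
    propositions: List[Proposition] = field(default_factory=list)
    shifts: List[Tuple[str, str]] = field(default_factory=list)  # (from_entity, to_entity)

    def add(self, prop: Proposition) -> None:
        if self.propositions and self.propositions[-1].perspective != prop.perspective:
            # Record a perspective shift rather than relying only on coreference.
            self.shifts.append((self.propositions[-1].perspective, prop.perspective))
        self.propositions.append(prop)

model = MentalModel()
model.add(Proposition(perspective="boy",  predicate="kick",      arguments=["boy", "ball"]))
model.add(Proposition(perspective="ball", predicate="roll_into", arguments=["ball", "gutter"]))

print(model.shifts)   # [('boy', 'ball')] -> the shift that integrates the two clauses
```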


Assumption 4 is grounded on the growing evidence from cognitive neuroscience regarding specific neuronal systems that manage the projection of body image, spatial location, and emotion. Without this assumption, it would be very difficult to see how language processing could rely intimately and continually on neuronal support for perspective tracking.

Assumption 5 is the centerpiece of the perspective hypothesis. When this assumption is linked to assumption 3, it takes on a particularly strong form. The combination of these two assumptions represents a novel position in cognitive science. Researchers who emphasize the formal determination of linguistic structure (Chomsky, 1975) have repeatedly rejected links between grammatical structure and "pragmatic" factors such as perspective taking. Although cognitive linguistics provides a role for perspective in the theory of subjectivisation (Stein & Wright, 1995), there is little acceptance of assumption 3. Despite the importance of embodied cognition to cognitive linguistics, there are no current models in this tradition that rely on perspective tracking as the major force integrating mental models. Within experimental psychology, recent work has focused on demonstrating the embodied nature of mental models. For example, Stanfield and Zwaan (2001) found that, when given sentences such as John pounded the nail into the floor, subjects are faster to name pictures of nails pointing downward than nails pointing sideways. This indicates that they construct interpretations with a nail pointing downward. Results of this type, summarized in the chapters in Pecher and Zwaan (2005), provide clear and important evidence for the embodied nature of mental models, but they tell us little about perspective tracking. To examine the course of perspective tracking experimentally, we will need to use online measures of interpretation, such as cross-modal naming, which will allow us to probe ongoing changes in mental models as they are being constructed.

Assumption 6 focuses on the consequences of perspectival mental models for promoting conceptual integration and cultural transmission. Perspectival mental models provide a general rubric for knitting together all of cognition. Consider a sentence such as Last night, my sister's friend reminded me I had dropped my keys under the table behind the garage.


Here, we see how a single utterance integrates information about time (last night), social relations (sister's friend), mental acts (remind), space (under, behind), objects (keys, table, garage), and events (drop). The sentence produces an integrated tracking across the perspectives of the sister, the friend, the speaker, and the various locations. Although this information may be initially activated in different regions of the brain, language achieves an integration of this information across all of these domains.

The primary focus of this paper is on assumption 5. We will conduct this exploration in three parts. We will begin with a psycholinguistic account of how perspectives are shifted through sentence structures. Second, we will conduct a linguistic examination of a wide range of grammatical constructions to understand how they mark perspective tracking. Finally, we will consider the consequences of this analysis for theories of cognition and development in accord with assumption 6.

Perspective Tracking

Modern psycholinguistic research has tended to focus on the process of sentence comprehension or interpretation, rather than sentence production. It is much easier to achieve experimental control over comprehension than over production. Because our models of sentence interpretation are more detailed, it is easiest to explain the function of perspective tracking first for comprehension and to extend this account later to perspective marking in production. On the comprehension side, there are five important principles that have a direct bearing on perspective tracking.

1. Incremental Interpretation. The first principle, which has been widely supported in recent research, is the principle of incrementalism. According to this principle, the listener attempts to go from words to mental models as soon as material is unambiguously recognized. In processing terms, incrementalism is equivalent to the notion of cascading (McClelland, 1979), in which processes feed into each other as soon as they reach a point where data can be passed.
2. Load Reduction. In some cases, processing may involve placing phrases into working memory without yet committing to their grammatical role or without attaching them to other phrases. However, if the listener can attach material to a fully interpreted mental model, then this will reduce processing load.
3. Starting Points. This principle of load reduction through attachment (Gibson, 1998) interacts with a third principle that governs the centrality of the starting point (MacWhinney, 1977) or the Advantage of First Mention (Gernsbacher, 1990). When we begin a sentence, we use the first nominal phrase as the basis for structure building (Gernsbacher, 1990). As we move through the sentence from left to right incrementally, we add to this structure through attachments. When a phrase or word cannot be attached, it increases the load. So, we are motivated to attach phrases as soon as possible to reduce these costs (O'Grady, 2005).
4. Role Slots. The process of phrasal attachment is driven by role slot assignments. In different models, this principle has very different names, varying from thematic role assignment to theta-binding. The basic idea (MacWhinney, 1987a) is that predicates (verbs, adjectives, prepositions) expect to attach to nominal arguments that fill various thematic or case roles, such as agent, object, or recipient. For languages that rely on SVO, SOV, or VSO orders, the first noun is placed tentatively into the role of the perspective. This perspective then actively searches for a verb that will allow it to assume a dynamic perspective. For example, in the sentence the runner fell, we begin with the perspective of the runner as the starting point. We then move on incrementally to the verb fall. At that point, the linkage of runner to the role slot for a perspective for fell allows us to build a mental model in which the runner engages in the action of falling.
5. Competition and Cues. Role slot filling is a competitive process (MacWhinney, 1987a). In comprehension, several nominal phrases or "slot fillers" may compete for assignment to a given slot and role. Only one of the fillers can win this competition, and the losers must then compete for other slots. The outcome of the competition is determined by the presence of grammatical, lexical, and semantic cues that favor one or the other competitor. The process of cue summation obeys basic Bayesian principles (a toy illustration follows this list).
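The following Python sketch (not from the chapter; the cue inventory and strengths are invented for illustration, not measured cue validities) shows one simple way that cue summation for a role slot could work: each candidate filler accumulates the weights of the cues that favor it, and the normalized totals decide the competition.

```python
# Toy cue-summation competition for the perspective/agent slot.
# Cue names and strengths are illustrative assumptions only.

CUE_STRENGTH = {
    "preverbal_position": 3.0,
    "agreement_with_verb": 2.0,
    "animate": 1.5,
}

def compete(candidates):
    """candidates: dict mapping a noun phrase to the set of cues favoring it.
    Returns the winner and the normalized support for each candidate."""
    raw = {np: sum(CUE_STRENGTH[c] for c in cues) for np, cues in candidates.items()}
    total = sum(raw.values()) or 1.0
    support = {np: score / total for np, score in raw.items()}
    winner = max(support, key=support.get)
    return winner, support

# Competition between two plausible fillers: word order favors one,
# verb agreement favors the other.
winner, support = compete({
    "the dog":     {"preverbal_position", "animate"},
    "the farmers": {"agreement_with_verb", "animate"},
})
print(winner, support)   # the dog {'the dog': 0.5625, 'the farmers': 0.4375}
```

The design point is only that the decision is graded: no single cue is decisive, and the same machinery can be rerun whenever later material adds or removes cues.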

At this point, it would perhaps be useful to think about how these five principles lay the groundwork for the Perspective Hypothesis. In a sense, all of these principles are integrated by the basic need during comprehension to construct a mental model. The principles of incremental interpretation and load reduction are direct responses to the fact that there is a cost associated with maintaining unattached verbal chunks in short-term memory. In order to reduce this cost, we adopt the initial hypothesis that the first nominal is the perspective. Because the construction of mental models is perspectival (assumption 3 above), we are able to take this starting point as the foundation for a larger mental model.


The status of the starting point as the perspective is further supported if it can compete successfully for the perspective slot on the verb. In some marked constructions and word orders, it may lose out in this competition. But to understand how this happens, we need to look more closely at the dynamics of perspective tracking through grammatical constructions. Some of the forces that work to either maintain or shift this initial perspective include:

1. Shift. If the verb takes multiple thematic roles, then it may shift perspective from the starting point to secondary perspectives. This can occur for transitives, passives, clefts, and a variety of other constructions. In most cases, this shift is not complete, and the perspective of the starting point is at least partially maintained.
2. Modification. Individual referents may be further modified or elaborated by attached phrases and clauses. This occurs in relativization, complex NP formation, appositives, and other constructions.
3. Maintenance. Perspective can be maintained across clauses by devices (or cues) such as anaphoric pronouns, gerunds, resumptives, and conjunctions.

Structure vs. Function

Having now listed the general assumptions of the perspective hypothesis, we are in a position to explore the application of this hypothesis to a wide variety of syntactic constructions. However, before beginning that exploration, we need to consider a competing approach that accounts for some, but not all, of the phenomena to be discussed. This approach, developed within generative linguistics over the last 50 years (Chomsky, 1957), attempts to account for syntactic patterns in terms of structural relationships. Of the various structural relationships explored during this half century, perhaps the most prominent is the relation called c-command. We will therefore begin our exploration by comparing two different accounts of coreference: one based on the constraints imposed by c-command and another based on the constraints imposed by perspective tracking. This analysis is not intended to reject a possible role for c-command in linguistic description. Rather, the goal here is to illustrate the impact of perspective tracking on grammatical constructions.


Coreference

We will begin our explorations with a consideration of a few selected aspects of the grammar of coreference. Consider sentences 1 and 2. Coreference between he and Bill is possible in 2, but blocked in 1.

1. *Hei says Billi came early.
2. Billi says hei came early.

The coreferential reading of these sentences is marked by the presence of subscripts on the nominals. Without these subscripts and the coreference they require, the pronoun he in 1 can refer to someone mentioned outside of the sentence, such as Tom. What is specifically blocked in 1 is coreference between he and Bill, as indicated by their subscripts.

The perspective hypothesis accounts for the ungrammaticality of 1 by invoking the principle of referential commitment. When we hear he in 1, we need to take it as the starting point for the construction of a mental model for the clause. This means that we must make a referential commitment. We cannot wait until later to identify he. In 2, on the other hand, Bill is already available for coreference and so it is easy to link he to Bill.

The theory of government and binding (Chomsky, 1982; Grodzinsky & Reinhart, 1993; Reinhart, 1981) seeks to explain this phenomenon (and many others) in terms of structural relations in a phrase-marker tree. The backbone of this account is a relation known as c-command. Each element in the tree is said to "c-command" its siblings and their descendants. Principle C of the binding theory requires that lexical NPs such as Bill or the man be "free"; that is, not coindexed with a c-commanding element. This principle excludes a coreferential reading for 1, in which Bill is coindexed with the c-commanding pronoun, but not for 2, in which the intended antecedent c-commands the pronoun rather than vice versa. Grammatical subjects are always in a syntactic position that allows them to c-command other elements in the same clause. This means that there is a very close parallel between the patterns expressed in c-command and the principle of referential commitment from the perspective hypothesis. Because initial nominals serve as the bases for structure building and perspective propagation, it is initially difficult to distinguish the structural account from the functionalist account. However, if we look closely, we will see that there are a variety of phenomena that can be understood better in terms of mental model construction than in terms of c-command.


Noncentrality

One interesting contrast is the relative increase in acceptability of coreference that occurs as one moves from central to peripheral arguments. C-command blocks coreference in 3 and 4. This prediction of c-command is correct for 3. However, for many speakers, 4 is possible, although c-command disallows it.

3. *Hei said Billi was crazy.
4. *John told himi that Billi was crazy.
5. John said to himi that Billi was crazy.
6. John told hisi mother that Billi was crazy.

Although c-command makes the wrong predictions for 4, it correctly allows for coreference in 5 and 6, since the pronouns here do not c-command Bill. The perspective hypothesis views this contrast in a very different way. As a further corollary of assumption 3 and its corollary of referential commitment, we have the following:

Principle of Noncentrality: If an element is not central to the building of a mental model, then it need not be referentially committed and is therefore open for backward anaphora. The less centrally involved the element, the more it is open for backward anaphora.

The most central argument is the subject (he in 3), followed by the object (him in 4), the oblique or prepositional object (him in 5), and finally a possessor in a complex NP (his in 6). As these roles become less and less central to the process of structure building, they become more and more open to backward anaphora.

Delaying Commitment

C-command provides no account for the grammaticality of sentences such as (7-9). This is because pronouns in subordinate clauses are too low in the structure to have NPs in other clauses as siblings.

7. After Lesteri drank the third vodka, hei dropped his cup.
8. After hei drank the third vodka, Lesteri dropped his cup.
9. *Hei drank the third vodka, and Lesteri dropped his cup.


Although both 8 and 9 have a pronoun that precedes the target noun Lester, the coreferential reading of 8 is easier to get than the coreferential reading of 9. Although c-command provides no account for this contrast, it can be explained by reference to the perspective hypothesis. To do this, we can rely on the following corollary derived from assumption 5.

Principle of Cue Marking: The default process of perspective tracking can be modified by grammatical cues that signal delays in referential commitment, clausal backgrounding, or shifts in perspective.

In the case of 8, the crucial grammatical cue is the subordinating conjunction after, which signals the beginning of a background clause. In the construction of a mental model, background material is placed "on hold" for later attachment to foreground material. However, the storage of the backgrounded initial clause of 8 does not incur a large processing cost, since its component pieces are fully structured. Moreover, as long as it remains in the background, the pronoun can be involved in backward coreference. Because there is no cue in 9 to protect the pronoun, it must be committed referentially and backward anaphora is blocked. As a further illustration of the effect of grammatical cues in the delaying of commitment, consider the contrast between 10 and 11.

10. *Shei jumped inside the car, when Debrai saw a large man lurking in the shadows.
11. Shei was jumping inside the car, when Debrai saw a large man lurking in the shadows.

Here the presence of progressive aspect in 11 places the information in the main clause "on hold" in the mental model, because this information is being judged as relevant to the interpretation of the subsequent clause. In a series of experiments, Harris and Bates (2002) show that progressive marking leads to greater acceptance of backward anaphora. Thus, the progressive functions like the subordinating conjunction as a cue to delaying of commitment. A similar effect is illustrated by 12 and 13 from Reinhart (1983).

12. In Carter'si hometown, hei is still considered a genius.
13. In Carter'si hometown, hei is considered a genius.


Here, it is easier to get a coreferential reading for 12 than for 13. This is because still serves as a cue that forces perspective promotion in the preposed prepositional phrase.

The opposite side of this coin involves the way in which indefinite reference blocks forward anaphora (i.e., the establishment of coreference between a full nominal and a following pronoun). It is somewhat easier to achieve forward anaphora in 14 than in 15.

14. While Ruth argued with the mani, hei cooked dinner.
15. *While Ruth argued with a mani, hei cooked dinner.

In the case of 14, once we shift perspective from Ruth to the man, we now have a definite person in mind and it makes good sense to continue that perspective with he. In 15, on the other hand, we have no clear commitment to the identity of a man and using this unclear referent as the binder of he seems strange.

Reflexivization

The c-command relation is also used to account for patterns of grammaticality in the use of reflexive pronouns, such as herself or myself. The most common use of these pronouns is to mark coreference to a "clausemate," which is often the subject of the current clause, as in 16 and 17.

16. *Maryi pinched heri.
17. Maryi pinched herselfi.

The perspective of the reflexive is a rather remarkable one, since it forces the actor to look back on herself as both the cause of the action and the recipient of the action at the same time. When both referents are central arguments, reflexivization is mandatory. Sentences like 16 are impossible if the two nominals are coreferential. However, if one of the nominals is central to the process of sentence building, and if the other material in the sentence serves to shift perspective away from the starting point, then a clausemate coreferent can use a nonreflexive pronoun. Sentences 18 and 19 illustrate this.


18. Phili hid the book behind himi/himselfi.
19. Phili ignored the oil on himi/*himselfi.

In 18, nonreflexive coreference and reflexive coreference are both possible. In 19, only anaphoric coreference is possible. This is because the act of hiding tends to maintain the causal perspective of "Phil" more than the act of ignoring. When Phil hides the book, it is still very much in his mind and so its position vis-à-vis his body still triggers self-reference. However, when Phil ignores the oil, it is no longer in his mind. At this point, the observation of the oil is dependent on an outside viewer and no longer subject to reflexivity.

Nouns such as story or picture can also trigger perspective shifting within clauses. In 20 and 21, reflexives are required if there is coreference, because there is no intervening material that shifts perspective away from either John or Mary. In 22, however, the nonreflexive is possible, since the story sets up a new perspective from which John is viewed as an actor in the story and not a listener to the story. In 23, the action of telling involves Max so deeply in the story itself that the full perspective shift is impossible and the nonreflexive cannot be used.

20. John talked to Maryi about *heri/herselfi.
21. Johni talked to Mary about *himi/himselfi.
22. Johni heard a story about himi/himselfi.
23. Maxi told a story about *himi/himselfi.

The presence of intervening perspectives facilitates the use of short-distance pronouns that would otherwise be blocked by reflexives. Consider these examples:

24. Johni saw a snake near himi/himselfi.
25. Jessiei stole a photo of heri/herselfi out of the archives.

The material that detracts from the reflexive perspective may also follow the pronoun, as in these examples from Tenny and Speas (2002).

26. Johni signaled behind himi/himselfi to the pedestrians.
27. Billi pointed next to himi/himselfi at the mildew on the roses.
28. Luciei talked about the operation on heri/herselfi that Dr. Edward performed.


In these sentences, use of the nonreflexive prepares the listener for a shift of perspective following the pronoun. Without that additional material, the nonreflexive would be strange. Finally, perspective shift can also be induced by evaluative adjectives such as beloved or silly, as in these examples from Tenny and Speas (2002):

29. Jessiei stole a photo of *heri/herselfi out of the archives.
30. Jessiei stole a silly photo of heri/herselfi out of the archives.

In all of these cases, creation of an additional perspective can serve to shift attention away from the core reflexive relation, licensing use of a nonreflexive pronoun.

Ambiguity

Syntactic ambiguities and garden paths are typically described in terms of the construction of alternative structural trees. However, we can also view ambiguities as arising from the competition (MacDonald, Pearlmutter, & Seidenberg, 1994; MacWhinney, 1987b) between alternative perspectives. Moreover, if we look closely at the processing of these ambiguities, there is evidence for perspective tracking effects that go beyond simple structural competition. Consider the examples in sentences 31 to 34.

31. Visiting relatives can be a nuisance.
32. Crying babies can be a nuisance.
33. Teasing babies is unfair.
34. If they arrive in the middle of a workday, visiting relatives can be a nuisance.

Looking at each of these ambiguities in terms of the principle of incremental interpretation, we can see how alternative perspectives organize alternative interpretations. In each case, there is a competition between the overtly expressed noun following the participle and an unexpressed subject, and the participle is looking to fill the subject/perspective role. In example 31, it is plausible that relatives could be visiting. With relatives filling the role of the perspective, the interpretation is that "if relatives visit you, they can be a nuisance." At the same time, we are also able to imagine that some unspecified person serves as the omitted perspective of visit.


In this case, the relatives fill the role of the object, yielding the interpretation that "it can be a nuisance to pay a visit to one's relatives." In example 32, on the other hand, the verb is intransitive. If we were to associate the perspective with an unexpressed subject, then we would have no role for babies. So, here, only one interpretation is possible, and it involves the babies as the initial perspective. That initial perspective is eventually shifted at the word nuisance, since we have to take the perspective of the person being annoyed to understand how the babies become a nuisance. In 33, the babies are unlikely to be doing the teasing and they serve as good objects, so the unexpressed subject wins the role of perspective. In 34, the foregrounding of they in the first clause prepares the way for perspective continuation to the second clause, which promotes relatives as the subject of visiting. Although we can still find an ambiguity in 34, we are less likely to notice it than in 31. As we trace through these various competitions, we see that the demands for incremental construction of a perspectival mental model work to shape the extent to which we can maintain alternative ambiguous perspectives.

Perspectival ambiguity can also arise from the competition between alternative phrasal attachments. In example 35, the initial perspective resides with Brendan. Although the verb fly would prefer to have a preverbal noun serve as its perspective, the implausibility of seeing a canyon fly through the air tends to force us away from this syntactically preferred option.

35. Brendan saw the Grand Canyon flying to New York.
36. Brendan saw the dogs running to the beach.
37. The women discussed the dogs on the beach.
38. The women discussed the dogs chasing the cats.

However, the shift to the perspective of the dogs is easier in 36, although again we can maintain the perspective of Brendan if we wish. In cases of prepositional phrase attachment competitions, such as 37, we can maintain the perspective of the starting point or shift to the direct object. If we identify with the women, then we have to use the beach as the location of their discussion. If we shift perspective to the dogs, then we can imagine the women looking out their kitchen window and talking about the dogs as they run around on the beach. In 38, on the other hand, we have a harder time imagining that the women, instead of the dogs, are chasing the cats.


Sentences such as 37 and 38 have motivated a variety of formal accounts of sentence processing within the framework of the Garden Path account (Frazier, 1987). For these sentences, the perspective hypothesis provides an account that focuses on conceptual competitions, rather than recovery from nonconceptual parsing decisions.

It is possible to shift perspective abruptly between clauses by treating the verb of the first clause as intransitive and the following noun as a new subject. A shift of this type is most likely to occur with a verb like jog that is biased toward an intransitive reading, although it can also function as a transitive. Examples 39 to 41 illustrate this effect:

39. Although John frequently jogs, a mile is a long distance for him.
40. Although John frequently jogs a mile, the marathon is too much for him.
41. Although John frequently smokes, a mile is a short distance for him.

Detailed self-paced reading and eye-movement studies of sentences like 39, with the comma removed, show that subjects often slow down just after reading a mile. This slowdown has been taken as evidence for the garden-path theory of sentence processing (Mitchell, 1994). However, it can also be interpreted as reflecting what happens during the time spent in shifting to a new perspective when the cues preparing the processor for the shift are weak. Examples of this type show that perspectival shifting is an integral part of online, incremental sentence processing (Marslen-Wilson & Tyler, 1980).

This description of the processing of these ambiguities has relied on the six assumptions stated earlier. In particular, these perspective shifts are triggered by alternative activations of role fillers and phrasal attachments, as specified in greater detail in descriptive accounts such as MacWhinney (1987a) or O'Grady (2005) or in computationally explicit models such as Kempen and Hoenkamp (1987) or Hausser (1999). These competitions play out as we add incrementally to the ongoing perspectival mental models we are creating. Syntactic processes play out their role through the competitive operation of role filling and attachment, but the actual shift of perspective occurs within the mental model that is being constructed in a simulated perspectival space.


Scope

Ambiguities in quantifier scope provide a very different way of understanding the operation of perspective tracking. To understand what is at issue here, consider this example:

42. Every boy climbed a tree.

For sentences that begin with quantified nominals like every boy, the construction of an initial perspective is more complex than in the case of sentences that begin with a simple nominal like my dog or Bill. For simple nominals, we only have to create a single unified imagined agent in perspectival space. For quantified nominals, we have to create a perspective that allows for multiple agents. Moreover, right from the beginning, we have to take into account the nature of the quantifier. We can think of quantified perspectives in terms of Venn diagrams (Johnson-Laird, 1983). For the phrase every boy, we set up a Venn diagram that includes several nodes characterized as boys. We do not need to actually count these nodes in our imagination. Instead, we use an automated procedure for perspective activation that sets up enough imagined nodes to satisfy us that there are several boys. We then link every to climb by duplicating the acts of climbing across the multiple perspectives. We do not have to actually make each boy engage in climbing in our mind. Instead, we can rely on an automated procedure that makes one boy climb and then assumes that the others will "do the same." The ambiguity in sentence 42 arises when we come to a tree. At this point, we have the option of imagining climbing either a single tree or multiple trees (O'Grady, 2006). We can think of this ambiguity as involving a single unified multiple perspective with many boys and one tree versus a divided multiple perspective with many boys and many trees. If we imagine that the initial perspective constitutes a unified group, we are relatively less likely to imagine multiple trees. In fact, the nature of the verb determines the extent to which we keep a unified or divided multiple perspective. Consider these examples from O'Grady (2006):

43. Everyone gathered at a restaurant.
44. Everyone surrounded a dog.


Here, the activity of gathering or surrounding requires the individuals in the initial multiple perspective to act in concert. When they act this way, the components of the multiple perspective are more likely to focus on a single object, rather than multiple objects. Thus, the shift to a unified single representation for the object is determined by the embodied representation of the subject's perspective as it combines with the activity of the verb.

Scope ambiguities also display interesting interactions with grammatical constructions that shift the order of sentence elements. In example 45, we can imagine either that the students are all reading the same books or that each student is reading a different set of three books. This contrast is much like the contrast in 42 between a multiple perspective that remains unified when processing the object and a multiple perspective that divides when processing the object.

45. Two students read three books.
46. Three books are read by two students.

In 46, on the other hand, it is difficult to imagine more than one set of three books. As a result, 46 does not permit the ambiguity that we find in 45. This effect illustrates the operation of assumption 3 regarding the conceptual centrality of the starting point. According to assumption 3, starting points constitute the foundation stone for the construction of the rest of the edifice of the sentence. Because the remaining edifice rests on this foundation stone, we need to make as full a commitment as we possibly can to the referential clarity of the starting point. We can express this particular corollary of assumption 3 in these terms:

Principle of Referential Commitment: Nominals that are being used as starting points should be linked to unambiguous referents in mental models. If full definite coreference cannot be achieved, then nominal starting points are assumed to be uniquely identifiable new referents.

In the case of 46, this means that, once three books are postulated in mental space, they cannot be multiplied into two sets of three books as in 45. This same principle is involved in the construction of interpretations for sentences like 47 to 50:

47. The devoted environmentalist tracked every mountain goat.
48. A devoted environmentalist tracked every mountain goat.


49. The boy ate every peanut.
50. A boy ate every peanut.

The contrast in these sentences is between starting points that are fully referential, as in 47 and 49, and those that are indefinite, as in 48 and 50. When the starting point is fully referential, it cannot later be divided by backwards multiplication of perspectives. As a result, 47 and 49 are not ambiguous. However, in 48 we can imagine each mountain goat being tracked by a different environmentalist. To do this, we return to the perspective of the starting point and multiply that perspective. This multiplication of perspectives is possible for 48, because indefinite perspectives are not as fully committed referentially as definite perspectives. In 50, we can imagine a similar ambiguity, although the idea of many boys each eating only one peanut is perhaps a bit silly.

Perspective tracking theory also explains why 51 and 52 are acceptable, whereas 53 is questionable. In 51 the perspective of every farmer is distributed so that each of the farmers ends up owning a well-fed donkey. In this perspective, there are many donkeys being fed. This means that we can continue in 52 by asking whether or not all of these donkeys will grow. Sentence 53, on the other hand, forces us to break this distributive scoping and to think suddenly in terms of a single donkey, which violates the mental model set up in the main clause.

51. Every farmer who owns a donkey feeds it.
52. Every farmer who owns a donkey feeds it, but will they grow?
53. Every farmer who owns a donkey feeds it, but will it grow?
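Before leaving scope, the unified versus divided readings introduced for example 42 can be made concrete with a small Python sketch (mine, not the chapter's; representing referents as strings is an illustrative assumption). It builds the two candidate mental models and simply counts how many tree referents each reading postulates.

```python
# Two toy mental models for "Every boy climbed a tree".

def unified_reading(n_boys):
    """One shared tree: the multiple perspective stays unified at the object."""
    tree = "tree-1"
    return [(f"boy-{i}", "climbed", tree) for i in range(1, n_boys + 1)]

def divided_reading(n_boys):
    """One tree per boy: the multiple perspective divides at the object."""
    return [(f"boy-{i}", "climbed", f"tree-{i}") for i in range(1, n_boys + 1)]

for label, model in [("unified", unified_reading(3)), ("divided", divided_reading(3))]:
    trees = {obj for _, _, obj in model}
    print(label, model, "-> distinct trees:", len(trees))
# unified: all three propositions share tree-1 -> distinct trees: 1
# divided: boy-1/tree-1, boy-2/tree-2, ...     -> distinct trees: 3
```

On the chapter's account, a collective verb such as gather or a fully referential starting point pushes the interpreter toward the first structure, while an indefinite or distributed perspective licenses the second.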

Relativization

Restrictive relative clauses provide further evidence of the impact of perspective shifting on sentence processing difficulty. Processing these structures can require us to compute multiple shifts of perspective. Consider these four types of restrictive relative clauses:

54. SS: The dog that chased the cat kicked the horse. (0 switches)
55. OS: The dog chased the cat that kicked the horse. (1- switch)
56. OO: The dog chased the cat the horse kicked. (1+ switch)
57. SO: The dog the cat chased kicked the horse. (2 switches)


In the SS type, the perspective of the main clause is also the perspective of the relative clause. This means that there are no true perspective switches in the SS relative type. In the OS type, perspective flows from the main clause subject (dog) to the main clause object (cat) in accord with the general principle of partial shift of perspective to the object. At the word that, perspective then flows further to the cat as the subject of the relative clause. This perspective shift is made less abrupt by the fact that cat had already received secondary focus before the shift was made. In the OO type, perspective also switches once. However, in this case, it switches more abruptly to the subject of the relative clause. In the SO relative clause type, there is a double perspective shift. Perspective begins with the main clause subject (dog). When the next noun (cat) is encountered, perspective shifts once. However, at the second verb (kicked), perspective has to shift back to the initial perspective (dog) to complete the construction of the interpretation.

Sentences with multiple center embeddings have even more switches. Consider an example like 58, which has four difficult perspective switches (dog -> cat -> boy -> cat -> dog).

Sentences that have as much perspective shift ing as 58, without additional lexical or pragmatic support, are incomprehensible, at least at first hearing. But note that the mere stacking of nouns by itself is not enough to t rigger perspec tive sh ift overload. C onsider example 59. In that example, we do not really succeed in taking each perspective and s witching to t he next. I nstead, we just a llow ourselves to sk ip over e ach perspec tive a nd la nd o n t he la st o ne m entioned. I n t he end, we just know that someone’s friend had a heart attack and fail to track the relation of that friend to the mother’s brother. In the terms of a p rocessing model, we can say t hat we continue to push words onto a stack and end up losing track, in terms of our mental model, of the items that were pushed down to the bottom of the stack. In the case of 59, we still end up with an interpretable sentence. In the case of 58, the result makes little sense. Researchers have often t ried to account for these processing difficulties in structural terms. However, fMRI work (Booth et al., 2001;

How Mental Models Encode Embodied Linguistic Perspectives

391

Just, Carpenter, Keller, Eddy, & Thulborn, 1996) has contrasted the processing of object relative sentences like 57 with the processing of subject r elatives l ike 5 4. These st udies ha ve sh own t hat 57 p roduces g reater ac tivation i n a w ide variety of left-hemisphere areas. Models such as that proposed by Grodzinsky and Amunts (2006) have attempted to link structural complexity to processing in Broca’s area. But the f MRI results show that complexity leads to activation across a far wider area. This wider profile of activation is consistent with t he i dea t hat t he c omplexity in volved i s n ot s tructural, b ut rather involves the fuller construction of switched perspectives in a full mental model. Perhaps n o s entence ha s figured m ore h eavily i n d iscussions o f sentence processing than example 60. 6 0. The horse raced past the barn fell.

The co mpetition m odel ac count o f p rocessing i n r educed r elative sentences of this t ype emphasizes the dual morphological f unction of verbal suffi x -ed. This suffi x can mark either the past tense or the past pa rticiple. W hen -ed ma rks t he pa st tense, t he verb raced is a simple intransitive with horse as its subject or perspective. However, when -ed marks the participle, then the verb raced allows for a nonexpressed subject and takes the horse as the object, much as in the shift of sentence 32 above with the visiting relatives. Because the resting activation of the past tense interpretation of the suffi x is higher than that of the participle, listeners may not pick up the participle interpretation until they realize that the sentence will not parse with the pa st tens e i nterpretation. I n t his c ase, w e s ense a g arden pa th because we only ac tivate t he weak perspective configuration when the strong configuration fails. However, similar configurations will behave very differently. For example, in 61, we sense no garden path because there is no noun following kept that would allow the transitive r eading a nd w e t herefore ha ve t o r ely o n t he r educed r elative reading. In 62, there is no ambiguity, because the irregular participle cannot be confused with the past tense. 6 1. The bird kept in the cage sang. 6 2. The car driven past the barn honked.

392

Embodiment, Ego-Space, and Action

Clitic Assimilation As a further, detailed example of the impact of perspective taking on grammar, let us consider the process of clitic assimilation. In English, the infinitive to often fuses with a preceding modal verb to produce contractions such as wanna from want to in cases such as 64. However, this assimilation is blocked in environments like the one in 65, making 66 unacceptable. 63. 64. 65. 66.

Who do you want to see? Who do you wanna see? Who do you want to go? *Who do you wanna go?

The perspective hypothesis views the reduced infinitive in 64 as a cue that marks perspective maintenance. In 65, this reduction is impossible, because there is a forced processing shift from who to you and then b ack t o who(m). I nfinitive r eduction a lso ma rks perspec tive maintenance in examples 67 to 69. 67. I get ta go. (Privilege) 68. I got ta go. (Privilege, past tense) 69. I gotta go. (Obligation)

In 67 and 68 , t he privilege of going is due presumably to the intercession o f a n o utside pa rty. The perspec tive o f t his o utside pa rty interrupts perspec tive ma intenance. I n 6 9, o n t he o ther ha nd, t he obligation is i nternal to t he speaker a nd perspec tive is ma intained across the reduced infinitive.

Implicit Causality Much of our a nalysis here ha s focused on perspec tive ma rking by highly grammaticalized forms like pronouns, participles, gerundives, relativizers, a nd i nfinitives. However, perspec tive ma rking ex tends far beyond g rammatical forms, appearing w idely i nside adjectives, verbs, nouns, and prepositions. Individual lexical items can characterize complex social roles and mental acts. Items like libel, Internet, or solidarity, encode social scenarios organized about t he perspective of social actors. Let us take the noun libel as an example. When

How Mental Models Encode Embodied Linguistic Perspectives

393

we spe ak o f so me co mmunication a s bei ng “ libelous,” w e a re t aking t he perspec tive o f a n “ accused” perso n wh o decla res t o so me general audience t hat t he (purported) l ibeler ha s a sserted t hat t he accused has engaged in some illegal or immoral activity. Moreover, the accused wishes to convince the general audience that the libeler’s claims are false and designed to make the audience think poorly of the accused in ways t hat influence his or her ability to f unction in public life. This single word conveys a complex set of interacting and shifting social perspectives. To evaluate whether or not a st atement is libelous, we have to assume the perspective of the accused, the purported l ibeler, a nd t he audience t o e valuate t he v arious cla ims and possible counterclaims. All of this requires continual integration and shifting of social roles and mental acts. Verbs l ike promise, forgive, admire, a nd persuade al so e ncode multiple relations of expectation, benefit, evaluation, and prediction between soc ial ac tors. To e valuate t he u ses of t hese verbs requires flexible perspec tive t aking a nd coo rdination. W ithin t his la rger group of me ntal s tate ve rbs, one d imension of c ontrast i s k nown as “ implicit causality.” Sentence 70 illustrates the implicit causality configuration of t he ex periencer-stimulus verb admire. The causal configuration is revealed in the second clause where the subject (she) is t he c ause o f t he ad miration. I n s entence 7 1, w ith t he st imulusexperiencer verb apologize, causality remains with the subject of the first clause (John). 70. John admired Mary i, because shei was calm under stress. 71 . Johni apologized to Mary, because hei had cracked under stress.

According to the perspective hypothesis, shifts in causality should lead to s hifts i n perspec tive. To t rack t hese sh ifts experimentally, McDonald and MacWhinney (1995) asked subjects to listen to sentences like 70 and 71, while making a cross-modal probe recognition judgment. Probe targets included old nouns (John, Mary) new nouns (Frank, Jill), old verbs (admire, apologize), and new verbs (criticize, resemble). The probes were placed at various points before and after the pronoun (he and she). The task was to j udge whether the probe was old or new. McDonald and MacWhinney found that stimulusexperiencer verbs like apologize in 71 tend to p reserve the reaction time advantage for the first noun (John) as a p robe throughout the sentence. I n ter ms o f t he perspec tive h ypothesis, t his m eans t hat

394

Embodiment, Ego-Space, and Action

perspective is not shifted away from the starting point in these sentences. However, experiencer-stimulus verbs like admired in 69 tend to force a shift in perspective away from the starting point (John) to the stimulus (Mary) right at the pronoun. This leads to a per iod of time a round t he pr onoun du ring w hich Mary ha s r elatively fa ster probe recognition times. However, by the end of the sentence in 70, the advantage of the first noun reappears. The fact that these shifts are bei ng processed i mmediately on-line i s e vidence i n support of the perspective hypothesis. In addition to encoding implicit causality, verbs can also encode information regarding implicit source of knowledge. 72. 73. 74. 75.

Minnie told Dorothy that she knew Superman. Minnie asked Dorothy if she knew Superman. Minnie reminded Dorothy that she knew Superman. Minnie told Dorothy that she made Superman cry.

In 72, we assume that Minnie has access to knowledge about herself which she provides to Dorothy. In 73, on the other hand, we assume that Dorothy must be t he source of t he i nformation, since M innie would certainly have access to her own knowledge. A lthough both 74 and 75 could be read ambiguously, the most probable reading in each case is one that maintains the perspective of the starting point. Adults are able to maintain the viewpoint of the initial subject even in t he co mplement cla use. A t t he s ame t ime, t hey u se fac ts abo ut the verb ask to sh ift perspec tive i n 73. However, be tween 5 a nd 8 , children (Franks & Connell, 1996) are more likely to shift to the perspective of Dorothy in all of these sentences. This tendency to shift perspective arises from a general preference for local attachment evident at this age. It is possible that children are not able to coordinate the dual perspectives of the main and subordinate clause efficiently for these structures during this age range (Huttenlocher & Presson, 1973).

Perspectival Overlays In t he p receding s ections, w e ha ve f ocused o n t he w ays i n wh ich grammatical devices provide cues that help listeners track shifts o f perspective. These shifts have i nvolved t he ac tions a nd motions of agents, as they operate on other objects. This system of causal action

How Mental Models Encode Embodied Linguistic Perspectives

395

marking i s at t he core of perspec tive t racking a nd it i s t he s ystem that is marked most overtly through grammatical constructions and forms. However, t here are at least five other systems of perspective shifting that function as linguistic overlays on this basic system of perspective tracking. These include the systems for marking perspective in space, time, empathy, evidentiality, and metaphor. Although these s ystems a re of g reat i mportance for mental model construction, t hey h ave r elatively li ttle im pact o n gr ammatical s tructure, relying instead on marking through individual lexical items.

Space Spatial l ocalization i s f undamental t o perspec tive t racking. The linguistic ma rking o f spac e r elies o n p repositions, m otion v erbs, and names for landmarks. Sometimes t he marking of location can involve perspe ctival a mbiguity. C onsider t his c lassic i llustration from Cantrall (1974): 7 6. The adults in the picture are facing away from us, with the children behind them.

In this example, we see a competition between alternative reference points. If we take the perspective of the adults as our reference point, then t he children are located between t he adults and t he v iewer of the picture. If we take the perspective of the viewers of the picture as the reference point, the children are located on the other side of the adults, farther away from us. Ambiguities of this type are reminiscent of the shifts in reference point in sentences like 35 where we can imagine either imagine ourselves flying to New York or else see the Grand Canyon flying to New York. However, on a structural level, 76 is not ambiguous, whereas 35 is. In other words, the ambiguities we find in spatial perspective taking are not reflected in the grammar. However, they are clearly reflected in our mental models. This is why we can consider perspective taking in space as an overlay on grammar. Perspectival competitions can arise even from what seems to be a single reference point. For example, i f we a re ly ing down on our backs in a h ospital bed, we might refer to the area beyond our feet as in f ront o f me , e ven t hough t he a rea be yond t he f eet i s u sually

396

Embodiment, Ego-Space, and Action

referred to as under me. To do this, we may even imagine raising our head a b it to correct t he reference field, so t hat at least our head is still upright. Because spatial reference is so prone to ambiguity of this type, we have developed many linguistic devices for reducing such ambiguities. One way of reducing ambiguity is to use a third position as the reference point. For example, i f we describe a pos ition a s bei ng 50 yards behind the school, it is not clear whether we are taking our own position as the reference point for behind or whether we are using the facing of the school as the reference point. To avoid this problem we can describe the position as 50 yards toward the mountain from the school. In this case, we are taking the perspective of the mountain, rather than that of the speaker or the school. We then construct a temporary Cartesian grid based on the mountain and perform allocentric projection to the school. Then we compute a d istance of 50 yards from the school in the direction of the mountain. Languages such as Guugu Yimithirr (Haviland, 1993) and Mayan take this solution yet one step fa rther by setting up per manent maplike coordinates against which all locations can be pinpointed. Time Perspective t aking i n t ime i s closely a nalogous t o perspec tive t aking in space. Like space, time has an extent through which we track events in terms of their relation to reference moments. Just as spatial objects have positions and extents, events have locations in time and durations. Just as we tend to view events as occurring in front of us, rather than behind us, we also tend to view time as moving forwards from past to future. As a result, it is easier to process sentences like 77 with an iconic temporal order than ones like 78 with a reversed order. However, sentences like 79 which require no foreshadowing of an upcoming event are the most natural of all. 77 . After we ate our dinner, we went to the movie. 78. Before we went to the movie, we ate our dinner. 79. We ate our dinner and then we went to the movie.

Temporal r eference i n na rrative a ssumes a st rict i conic r elation between the flow of the discourse and the flow of time. Processing of

How Mental Models Encode Embodied Linguistic Perspectives

397

sequences that violate temporal iconicity by placing the consequent before the antecedent is relatively more difficult (Zwaan, 1996). However, in practice, it is difficult to describe events in a fully linear fashion a nd we need to ma rk flashbacks a nd ot her d iversions t hrough tense, aspect, and temporal adverbials. Empathy The third system of perspectival overlays involves marking for empathy. Here, Tenny and Speas (2002) have conducted a useful survey of devices used by various languages. Among the most marked of these devices a re e valuative adjectives such a s beloved or damned. C onsider these examples: 80. John was looking for Sarah’s beloved cat. 81. John was looking for Sarah’s damned cat.

In 80, the cat is beloved from Sarah’s point of view. In 81, however, the cat is damned from either John’s point of view or the speaker’s point of view. Language is laden with evaluative perspectives of this type. However, much of the evaluation we convey in everyday interaction is encoded equally well through intonation and gesture—areas that lie outside the scope of our current exploration.

Evidentiality A fourth area of perspectival overlay involves the marking of the evidence sources a nd t ypes. For ex ample, we may k now some t hings because we saw them directly with our own eyes, whereas we know others t hings bec ause w e ha ve h eard t hem f rom t rusted so urces. Often, w e s imply ma ke a ssertions w ithout p roviding i nformation regarding e vidence. In other c ases, we may a sk questions, i ndicate doubt, express belief, and so on. An example of the role of evidential perspective is t he contrast between statements and questions. Consider the contrast between 82 and 83: 82 . The bicyclist appears to have escaped injury. 83. Did the bicyclist appear to have escaped injury?


84. The reporter said that the bicyclist appeared to have escaped injury.
85. The reporter asked if the bicyclist appeared to have escaped injury.

In 82 the evidence is evaluated on the basis of evidence available to the speaker. In 83, on the other hand, the evidence is evaluated on the basis of evidence available to the listener. Examples 84 and 85 display a similar asymmetry.

In English, the marking of finer dimensions of evidentiality is conveyed by particles and adverbs such as well, sure, still, and just. In other languages, these same forms can appear as markings on the verb. Some languages pay close attention to fine distinctions in the source and nature of such evidence. Japanese displays a particularly interesting restriction on evidence, reflected in 86 and 87:

86. You are sick.
87. You seem sick.

In Japanese, one cannot say 86, because it is presumptuous to imagine that one has access to inner states of another person. In fact, it might even be a bit inappropriate to produce 87. This constraint, known as "speaker's territory of knowledge" (Kamio, 1995), involves aspects of both evidentiality and empathy.

The tracking of evidential perspectives occurs primarily on the level of the fuller discourse or narrative. On this level, we use particular constructions to help our listeners locate objects in their own mental models. In effect, we construct mental models of the mental models of our listeners and use these to determine the marking of evidentiality. As Givon (2005) puts it, speakers select grammatical constructions on the basis of their "mental models of the interlocutor's current deontic and epistemic states."

How Mental Models Encode Embodied Linguistic Perspectives

399

89 illustrate some of the typical patterns studied in this vast descriptive and experimental literature. 8 8. 89. 90. 91.

The road runs down to the river. Headline: Congress stumbles in debate on tax reform. Stocks took a plunge at the end of the day. His marriage was like a glacier.

Everyday conversational speech makes very little use of metaphor, but other genres, ranging from financial reporting to pop psychology, rely heavily on extensions of the type illustrated in 89 and 90 to liven up otherwise boring prose. Lakoff and Johnson (1980) have shown how a small set of core metaphors, based on embodied cognition, dominates our construction of mental models. Fauconnier and Turner (1996) have further described the blending of multiple perspectives t hat oc curs i n mental models. Perhaps t he most remarkable of t hese blends a rise i n d rama a nd poe try when we t rack t he perspectives of actors in plays within plays within stories, as in Macbeth and The Story of the Tailor in The Arabian Nights. The processing of perspec tive i n metaphors a nd blends i nvolves conceptual overlays on language, much like the processing of perspective in space, time, empathy, or evidentiality. Like these other systems, the processing of metaphor makes no direct contact with grammar, f unctioning i nstead t hrough s emantic ex tension o f t he meanings of individual words such as stumble or glacier. However, it would be a m istake to ig nore t he ramifications of t hese t ypes of processing for our theories of neuronal encoding of perspectives in mental models. The perspectival complexity of Macbeth or The Arabian Nights marks only the beginning of complex mental models that must be en coded i n verbal form. For t ruly d azzling levels of complexity we can turn to Grigori Perelman’s solution of the Poincaré conjecture or the chains of reasoning in cases argued at the Supreme Court of the United States.

Production and Integration

Having examined the marking of perspective through grammar, we are now ready to consider how language integrates across these various levels or layers of perspective taking.


The best way to explore this issue is to consider snippets of actual narratives and conversations. Consider this passage from an Associated Press release in 2001:

92. A cyclone hammered the Bangladesh coast Monday with the force of "hundreds of demons" leveling entire villages of mud and thatch huts, flooding crops, and killing at least six people.

This passage begins from the perspective of the cyclone. It then uses a metaphorical image to allow the storm to act as an agent that wields a hammer. The past tense suffix on the verb hammered places this action into the past. The object of the hammering is the Bangladeshi coast. The coast is brought on stage, but there is no shift of perspective away from the cyclone and its hammering. Immediately after coast, we have the introduction through Monday of a temporal overlay. Then, the metaphor of hammering is linked to the force of hundreds of demons. The perspective of the cyclone continues with leveling, flooding, and killing. In each case, our attention moves briefly to the objects without really shifting away from the cyclone.

In this sentence, there is a rich combination of images from many modalities. The cyclone is a visual image; the hammer is a visual image linked to motor movements of the arm and perceptions of noise and percussion. The image of the Bangladesh coast brings to mind the position of Bangladesh on a map of the Bay of Bengal, along with a trace of the delta formed by the Brahmaputra. The image of Monday forces us to refer to our recent calendrical memory to locate this event in a spatial analog to time. The images of demons bring to mind scenes from Indian art, with black demon faces flying through the air, and stories from the Vedas. We do not count out the demons in detail, but roughly imagine an array of many demons, perhaps extending over a wide physical space along the coast to accommodate their great number. When we link entire to villages, we have to engage in a mental activity of leveling that is completive and leaves no huts standing. Similarly, when we flood the crops, we have to imagine a whole scene with plants under water, and when we envision the killing, we actually envision six dead bodies, although we are not exactly sure how they died. Together, the construction of the image for just this sentence relies on diverse cognitive systems for motor action, space, time, enumeration, quantifier scope, visual imagery, semantic memory, metaphor, geography, geometry, and biology.
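One way to picture this kind of convergence is as a set of independent contributors posting information to a single shared representation of the sentence. The following sketch is purely illustrative and is not the chapter's model; the four "systems," their keyword tests, and the function build_model are all invented for the example. Each system adds whatever overlay it can recover from sentence 92, while the perspective of the cyclone is simply carried along.

    # Illustrative sketch (not the chapter's model): independent cognitive
    # systems post whatever they can recover from sentence 92 to a shared
    # workspace, and the mental model is the pooled result.

    def spatial_system(clause):
        if "Bangladesh coast" in clause:
            return {"space": "coastline on the Bay of Bengal, with the Brahmaputra delta"}
        return {}

    def temporal_system(clause):
        if "Monday" in clause:
            return {"time": "recent past, located on the calendar (Monday; past tense -ed)"}
        return {}

    def metaphor_system(clause):
        if "hammered" in clause:
            return {"metaphor": "the storm construed as an agent wielding a hammer"}
        return {}

    def enumeration_system(clause):
        if "hundreds of demons" in clause:
            return {"quantity": "a rough, uncounted array of many demons along the coast"}
        return {}

    SYSTEMS = [spatial_system, temporal_system, metaphor_system, enumeration_system]

    def build_model(clause):
        """Pool each system's contribution into one shared sentence-level model."""
        workspace = {"perspective": "the cyclone"}  # maintained throughout the sentence
        for system in SYSTEMS:
            workspace.update(system(clause))
        return workspace

    clause_92 = ('A cyclone hammered the Bangladesh coast Monday '
                 'with the force of "hundreds of demons"')
    for key, value in build_model(clause_92).items():
        print(key + ": " + value)

The keyword tests stand in for genuinely perceptual and memory-based processes; what matters here is only the shape of the integration, in which separate sources write to one sentence-level workspace, anticipating the "blackboard" image discussed below.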


Some might like to think of these systems as cognitive modules (Pinker, 1997). It is certainly true that these diverse cognitions are supported by widely separated brain regions with highly differentiated functions, but it is misleading to think of language as popping together beads produced by encapsulated modules, as suggested by Carruthers (2002). Instead, language allows diverse areas to work in concert and to achieve communication by writing to the "blackboard" of the sentence currently under construction. In this sense, language does indeed promote integration across these nonmodular cognitive systems in a way that can promote cognitive growth (Spelke, 2002). In fact, it makes sense to think of language as providing a springboard for recent human cultural evolution, by allowing us to construct integrated mental models (MacWhinney, 2005; Mithen, 1996) that led to the further elaboration of consciousness (Dennett, 1991), social structure, and religious imagination (Campbell, 1949).

These reflections regarding information integration through perspective taking have been grounded so far on the single example sentence 92. This same passage continues with the material in 93.

93. Three men and two children were crushed under collapsed buildings or hit by flying pieces of tin roofs in the southern port of Chittagong. One man died in Teknaf, about 110 miles down the coast, when he was blown off his roof while trying to secure it. The storm roared in from the Bay of Bengal with wind gusts of 125 mph, forcing a half-million people to flee their huts and huddle in concrete shelters. Many power and telephone lines were down, so a full account of casualties and damage was not available.

In this continuation, we shift from the initial perspective of the cyclone in 92 to the perspective of the people being crushed. This involves a passive perspective, as marked by the -ed suffix. The actual causal agents follow later. The first agent is totally missing, since the fact that it was the cyclone that caused the buildings to collapse is not expressed. For the second passive verb, hit, the agent is flying pieces of tin roofs. Here, we begin with the notion of flying even before we know what might be flying. We soon realize that what is flying are pieces of tin roofs, and we try to imagine how the cyclone pulled these pieces off of their roofs. As we read through a passage quickly, we may decide not to perform the extra mental work needed to fill out this further detail of the mental model.


Even before we shift to the perspective of the flying pieces, we must see the victims crushed under buildings or being hit. We do not know which victims were crushed and which were hit by flying pieces of roof, so we just imagine some of each in both positions without trying to do any actual count.

Next we shift from Chittagong to Teknaf. The shift in space is accompanied by the introduction of the new perspective of one man. For this man, we first imagine him dead, then we see how he is passively blown off his roof by the cyclone as the unmentioned agent, and then we must put him back on his roof and imagine him trying to secure the roof. This order of events is the opposite of the actual order, and building up a mental model in the opposite order can be difficult. An alternative version of this sentence would read as in 94.

94. In Teknaf, 110 miles down the coast, a man was trying to secure his roof when the cyclone blew him off to his death.
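The cost of this reversed presentation can be made concrete with a small sorting sketch. The three sub-events and their time stamps below are hand-assigned for illustration; nothing in the chapter commits to this representation. Sorting the events by their time stamps recovers the chronological ordering used in the easier paraphrase in 94.

    # Toy illustration: the Teknaf sentence mentions its sub-events in roughly
    # the reverse of their actual order; sorting by (hand-assigned) event time
    # recovers the chronological ordering of the paraphrase in example 94.

    events = [
        # (order of mention, event time, description)
        (1, 3, "the man dies"),
        (2, 2, "he is blown off his roof by the cyclone"),
        (3, 1, "he is trying to secure his roof"),
    ]

    print("As narrated:")
    for _, _, description in events:
        print("  " + description)

    print("In event order (as in example 94):")
    for _, _, description in sorted(events, key=lambda event: event[1]):
        print("  " + description)

Reading the original order forces the comprehender to revise earlier commitments as later clauses arrive, which is exactly the extra work that the paraphrase in 94 avoids.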

Next we shift perspective back to the storm, as it now acts not just on six people but on half a million. Finally, we move to the viewpoint of the reporter, who explains that it was not possible to give a full report of the casualties because telephone lines were down.

This analysis of a news story selected at random illustrates the extent to which language weaves together diverse cognitions into the common grid of the sentence. Sentences achieve this integration by mixing together adjectives, lexical metaphors, quantifiers, descriptive verbs, numerals, temporals, prepositional phrases, passives, omitted subjects, participles, adverbs, conjunctions, and a wide variety of other grammatical devices. Some of these devices mark referents, some mark perspective shifts, and others add spatial, temporal, and evaluative overlays to the basic causal grid.

If we move beyond examples like this to the examination of spontaneous conversation, the landscape changes markedly. Instead of weaving together places, times, and actors, conversations weave together diverse viewpoints, understandings, and goals. Conversations are heavily dedicated to the maintenance of interpersonal relations and the establishment of mutual knowledge. Against this background, perspective tracking still plays an important role, but the perspective being tracked is one that is under continual negotiation between the conversational participants. A fuller examination of these issues is currently underway in the context of analyses of conversational interactions in classrooms (Greeno & MacWhinney, 2006).
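The chain of shifts traced above, from the cyclone to the crushed victims, to the man in Teknaf, back to the storm, and finally to the reporter, can be summarized with a minimal bookkeeping loop. The clause list and its perspective labels below are hand-coded for illustration and do not come from any parser; the sketch shows only the distinction between maintaining a perspective and shifting to a new one.

    # Minimal bookkeeping sketch for perspective tracking over the passage in
    # 92 and 93.  The clause texts and perspective labels are hand-coded for
    # illustration; they are not the output of any parser.

    clauses = [
        ("a cyclone hammered the Bangladesh coast", "the cyclone"),
        ("leveling villages, flooding crops, killing at least six people", "the cyclone"),
        ("three men and two children were crushed or hit", "the victims"),
        ("one man died in Teknaf when he was blown off his roof", "the man in Teknaf"),
        ("the storm roared in from the Bay of Bengal", "the storm"),
        ("a full account of casualties was not available", "the reporter"),
    ]

    current = None
    for clause, perspective in clauses:
        operation = "maintain" if perspective == current else "shift to"
        print(operation.ljust(9) + perspective + ": " + clause)
        current = perspective

Each "shift to" line corresponds to one of the grammatical signals, such as the passives, the introduction of a new referent, or the reporting clause, that the passage uses to move the reader from one perspective to the next.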


Cultural Transmission

Vygotsky (1934) believed that children's conversational interactions played a fundamental role in their cognitive development. He viewed these interactions as setting a model for mental structures that children would then internalize as "inner speech." He characterized inner speech in terms of processes such as topic-comment structure and ellipsis, but provided no additional linguistic or cognitive detail. The perspective hypothesis can be viewed as an elaboration of the initial Vygotskyan program. The idea here is that, by tracking perspective in conversations and narratives, children construct mental models that encode perspectival patterns in long-term memory. Because these models are extracted from adult input, the perspective tracking and causal reasoning they contain will reflect the standards of the adult community. For example, if fairy tales begin with phrases such as once upon a time, far, far away, then children will also learn to construct mental models for fairy tales in a cognitive space that is far away or imaginary in space and time. If the efforts of the valiant prince lead eventually to happiness and marriage, then children will build mental models in which these expectations are linked. If the efforts of a determined little locomotive allow it to pull a train up a hill, children will also imagine that they can achieve goals through determination. In effect, children will use perspectivally constructed models extracted from conversation and narrative as a method for learning the systems and values of their culture. In this process, the links developed through perspective taking on all the levels we have discussed are crucial. Without language and the perspective tracking it allows, this level of cultural transmission would not be possible.

Developmentalists have extended the Vygotskyan vision by linking the construction of mental models to play (Eckler & Weininger, 1989), games (Ratner & Bruner, 1978), narration (Bruner, 1992; Nelson, 1998), apprenticeship (Lave, 1991), learning contexts (Rogoff, 2003), and conversational sequencing (Bateson, 1975). What is common in all of these accounts is the idea that children are exposed to interactions in which they track the logical flow of ideas perspectivally. From this process, they extract internalized mental models that are specific to their cultures and social groups (Spradley, 1972) and that they can then transmit to others (Blackmore, 2000).


Conclusion

In this paper we have examined the ways in which the perspective hypothesis can offer new explanations for a variety of patterns in grammar and sentence processing. In this new formulation, the links in mental models are viewed as inherently perspectival and grounded on simulated, embodied cognition. This cognitive system relies on a wide range of neuronal structures for body image matching, spatial projection, empathy, and perspective tracking. Language uses this underlying system to achieve still further cognitive integration. When speakers produce sentences, they use grammatical devices to integrate diverse perspectives and shifts. When listeners process these sentences, they use grammatical markers, constructions, and lexical forms to decode these various shifted and overlaid perspectives. Because perspective taking and shifting are fundamental to communication, language provides a wide array of grammatical devices for specifically marking perspective and perspective shift. Language allows us to integrate information from the domains of direct experience, space, time, plans, causality, evidentiality, evaluation, empathy, and mental acts. Across each of these dimensions, we assume and shift between perspectives in order to construct a fully human, unified conscious awareness.

Acknowledgments

Thanks to William O'Grady, James Greeno, Roberta Klatzky, and Marnie Arkenberg for their comments on this paper. This work was supported by NSF Award SBE-0354420.

References

Adolphs, R. (2003). Cognitive neuroscience of human social behavior. Nature Reviews Neuroscience, 4, 165–178.
Adolphs, R., & Spezio, M. (2006). Role of the amygdala in processing visual social stimuli. Progress in Brain Research, 156, 363–378.
Avenanti, A., Bueti, D., Galati, G., & Aglioti, S. (2005). Transcranial magnetic stimulation highlights the sensorimotor side of empathy for pain. Nature Neuroscience, 8, 955–960.


Bailey, D., Chang, N., Feldman, J., & Narayanan, S. (1998). Extending embodied lexical development. Proceedings of the 20th Annual Meeting of the Cognitive Science Society, 20, 64–69.
Ballard, D. H., Hayhoe, M. M., Pook, P. K., & Rao, R. P. (1997). Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20, 723–767.
Bateson, M. (1975). Mother–infant exchanges: The epigenesis of conversational interaction. In D. Aaronson & R. Rieber (Eds.), Developmental psycholinguistics and communication disorders (pp. 112–140). New York: New York Academy of Sciences.
Blackmore, S. (2000). The power of memes. Scientific American, October, 64–73.
Booth, J. R., MacWhinney, B., Thulborn, K. R., Sacco, K., Voyvodic, J. T., & Feldman, H. M. (2001). Developmental and lesion effects during brain activation for sentence comprehension and mental rotation. Developmental Neuropsychology, 18, 139–169.
Bruner, J. (1992). Acts of meaning. Cambridge, MA: Harvard University Press.
Budiu, R., & Anderson, J. (2004). Interpretation-based processing: A unified theory of semantic sentence comprehension. Cognitive Science, 28, 1–44.
Campbell, J. (1949). The hero with a thousand faces. Princeton, NJ: Princeton University Press.
Cantrall, W. (1974). Viewpoint, reflexives and the nature of noun phrases. The Hague: Mouton.
Carruthers, P. (2002). The cognitive functions of language. Behavioral and Brain Sciences, 33, 657–674.
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
Chomsky, N. (1975). Reflections on language. New York: Random House.
Chomsky, N. (1982). Some concepts and consequences of the theory of government and binding. Cambridge, MA: MIT Press.
Cohen, M. S., Kosslyn, S. M., Breiter, H. C., DiGirolamo, G. J., Thompson, W. L., Anderson, A. K., et al. (1996). Changes in cortical activity during mental rotation. A mapping study using functional MRI. Brain, 119, 89–100.
Dennett, D. (1991). Consciousness explained. New York: Penguin Press.
Donald, M. (1991). Origins of the modern mind. Cambridge, MA: Harvard University Press.
Eckler, J., & Weininger, O. (1989). Structural parallels between pretend play and narratives. Developmental Psychology, 25, 736–743.
Fauconnier, G., & Turner, M. (1996). Blending as a central process of grammar. In A. Goldberg (Ed.), Conceptual structure, discourse, and language (pp. 113–130). Stanford, CA: CSLI.


Franks, S. L., & Connell, P. J. (1996). Knowledge of binding in normal and SLI children. Journal of Child Language, 23, 431–464.
Frazier, L. (1987). Sentence processing: A tutorial review. In M. Coltheart (Ed.), Attention and performance (Vol. 12, pp. 601–681). London: Erlbaum.
Gernsbacher, M. A. (1990). Language comprehension as structure building. Hillsdale, NJ: Erlbaum.
Gibson, E. (1998). Linguistic complexity: Locality of syntactic dependencies. Cognition, 68, 1–76.
Givon, T. (2002). The visual information-processing system as an evolutionary precursor of human language. In T. Givon & B. Malle (Eds.), The evolution of language out of pre-language (pp. 3–51). Amsterdam: Benjamins.
Givon, T. (2005). Context as other minds: The pragmatics of sociality, cognition, and communication. Philadelphia: Benjamins.
Greeno, J., & MacWhinney, B. (2006). Perspective shifting in classroom interactions. Paper presented at the AERA Meeting.
Grodzinsky, Y., & Amunts, K. (2006). Broca's region. Oxford: Oxford University Press.
Grodzinsky, Y., & Reinhart, T. (1993). The innateness of binding and coreference. Linguistic Inquiry, 24, 187–222.
Harris, C. L., & Bates, E. A. (2002). Clausal backgrounding and pronominal coreference: A functionalist alternative to c-command. Language and Cognitive Processes, 17, 237–269.
Hausser, R. (1999). Foundations of computational linguistics: Man-machine communication in natural language. Berlin: Springer.
Haviland, J. B. (1993). Anchoring, iconicity, and orientation in Guugu Yimithirr pointing gestures. Journal of Linguistic Anthropology, 3, 3–45.
Huttenlocher, J., & Presson, C. (1973). Mental rotation and the perspective problem. Cognitive Psychology, 4, 277–299.
Jeannerod, M. (1997). The cognitive neuroscience of action. Cambridge, MA: Blackwell.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
Just, M. A., Carpenter, P. A., Keller, T. A., Eddy, W. F., & Thulborn, K. R. (1996). Brain activation modulated by sentence comprehension. Science, 274, 114–116.
Kakei, S., Hoffman, D. S., & Strick, P. L. (1999). Muscle and movement representations in the primary motor cortex. Science, 285, 2136–2139.
Kamio, A. (1995). Territory of information in English and Japanese and psychological utterances. Journal of Pragmatics, 24, 235–264.


Kempen, G., & Hoenkamp, E. (1987). An incremental procedural grammar for sentence formulation. Cognitive Science, 11, 201–258.
Kintsch, W., & Van Dijk, T. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363–394.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: Chicago University Press.
Lave, J. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press.
MacDonald, M. C., Pearlmutter, N. J., & Seidenberg, M. S. (1994). Lexical nature of syntactic ambiguity resolution. Psychological Review, 101(4), 676–703.
Macrae, C., Heatherton, T., & Kelley, W. (2004). A self less ordinary: The medial prefrontal cortex and you. In M. Gazzaniga (Ed.), The cognitive neurosciences (Vol. 3, pp. 1067–1076). Cambridge: MIT Press.
MacWhinney, B. (1977). Starting points. Language, 53, 152–168.
MacWhinney, B. (1987a). The competition model. In B. MacWhinney (Ed.), Mechanisms of language acquisition (pp. 249–308). Hillsdale, NJ: Erlbaum.
MacWhinney, B. (1987b). Toward a psycholinguistically plausible parser. In S. Thomason (Ed.), Proceedings of the Eastern States Conference on Linguistics (pp. ). Columbus, OH: Ohio State University.
Marslen-Wilson, W. D., & Tyler, L. K. T. (1980). The temporal structure of spoken language understanding. Cognition, 8, 1–71.
Martin, A., Wiggs, C. L., Ungerleider, L. G., & Haxby, J. V. (1996). Neural correlates of category-specific knowledge. Nature, 379, 649–652.
McClelland, J. L. (1979). On the time-relations of mental processes: An examination of systems of processes in cascade. Psychological Review, 86, 287–330.
McDonald, J. L., & MacWhinney, B. J. (1995). The time course of anaphor resolution: Effects of implicit verb causality and gender. Journal of Memory and Language, 34, 543–566.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.
Meltzoff, A. N. (1995). Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children. Developmental Psychology, 31, 838–850.
Meltzoff, A. N., & Decety, J. (2003). What imitation tells us about social cognition: A rapprochement between developmental psychology and cognitive neuroscience. Philosophical Transactions of the Royal Society of London B, 358, 491–500.


Middleton, F. A., & Strick, P. L. (1998). Cerebellar output: Motor and cognitive channels. Trends in Cognitive Sciences, 2, 348–354.
Miller, G., & Johnson-Laird, P. (1976). Language and perception. Cambridge, MA: Harvard University Press.
Mitchell, D. C. (1994). Sentence parsing. In M. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 375–405). San Diego, CA: Academic Press.
Mithen, S. (1996). The prehistory of the mind: The cognitive origins of art, religion, and science. London: Thames & Hudson.
Nelson, K. (1998). Language in cognitive development: The emergence of the mediated mind. New York: Cambridge University Press.
O'Grady, W. (2005). Syntactic carpentry. Mahwah, NJ: Erlbaum.
O'Grady, W. (2006). The syntax of quantification in SLA: An emergentist approach. In M. O'Brien, C. Shea, & J. Archibald (Eds.), Proceedings of the 8th Generative Approaches to Second Language Acquisition Conference (GASLA 2006) (pp. 98–113). Somerville, MA: Cascadilla Press.
Parsons, L. M., Fox, P. T., Downs, J. H., Glass, T., Hirsch, T. B., Martin, C. C., et al. (1995). Use of implicit motor imagery for visual shape discrimination as revealed by PET. Nature, 375, 54–58.
Pecher, D., & Zwaan, R. (Eds.). (2005). Grounding cognition. Cambridge: Cambridge University Press.
Pelphrey, K. A., Mitchell, T. V., McKeown, M. J., Goldstein, J., Allison, T., & McCarthy, G. (2003). Brain activity evoked by the perception of human walking: Controlling for meaningful coherent motion. Journal of Neuroscience, 23, 6819–6825.
Pelphrey, K. A., Morris, J. P., & McCarthy, G. (2005). Neural basis of eye gaze processing deficits in autism. Brain, 128, 1038–1048.
Pelphrey, K. A., Viola, R. J., & McCarthy, G. (2004). When strangers pass. Psychological Science, 15, 598–603.
Pinker, S. (1997). How the mind works. New York: Norton.
Ramachandran, V. S. (2000). Phantom limbs and neural plasticity. Neurological Review, 57, 317–320.
Ramachandran, V. S., & Hubbard, E. M. (2001). Synaesthesia: A window into perception, thought and language. Journal of Consciousness Studies, 8, 3–34.
Ratner, N., & Bruner, J. (1978). Games, social exchange and the acquisition of language. Journal of Child Language, 5, 391–401.
Reinhart, T. (1981). Definite NP anaphora and c-command domains. Linguistic Inquiry, 12, 605–635.
Reinhart, T. (1983). Anaphora and semantic interpretation. Chicago: University of Chicago Press.


Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131–141.
Rogoff, B. (2003). The cultural nature of human development. Oxford: Oxford University Press.
Spelke, E. (2002). Developing knowledge of space: Core systems and new combinations. In S. Kosslyn & A. Galaburda (Eds.), Languages of the brain (pp. 239–258). Cambridge, MA: Harvard University Press.
Spradley, J. (Ed.). (1972). Culture and cognition: Rules, maps, and plans. New York: Chandler.
Stanfield, R. A., & Zwaan, R. A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12, 153–156.
Stein, D., & Wright, S. (Eds.). (1995). Subjectivity and subjectivisation. Cambridge: Cambridge University Press.
Tenny, C., & Speas, P. (2002). Configurational properties of point of view roles. In A. DiSciullo (Ed.), Asymmetry in grammar. Amsterdam: Benjamins.
Vygotsky, L. (1934). Thought and language. Cambridge: MIT Press.
Zwaan, R. A. (1996). Processing narrative time shifts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1196–1207.

Author Index

A Abravanel, E., 86 Adams, F., 209 Adelman, P. K., 96 Adelson, E. H., 116 Adolph, K. E., 277, 278, 279, 281, 283, 285, 286, 288, 289, 290, 293, 294, 295, 297, 298, 299, 300, 301, 302, 306, 307, 308, 310, 311, 312 Adolphs, R., 370, 372 Aguirre, G. K., 32 Ahlström, V., 120 Albright, T. D., 208, 230 Aldridge, J. W., 226, 229 Alexander, G. E., 225, 229 Allen, G. L., 17 Allison, T., 133 Allport, D. A., 218, 231 Alyan, S., 32 Amano, S., 341 Ambady, N., 134 Andersen, R. A., 208, 219, 220, 247 Anderson, A. K., 136 Anderson, J. R., 45 Andre, J., 4, 6, 7, 8, 26 Anisfeld, M., 357 Arbib, M. A., 231 Arkenberg, M., 404 Ash, M. G., 120 Ashby, W. R., 209 Ashmead, D. H., 8, 9 Astafiev, S. V., 256, 267 Atkinson, A. P., 93, 94, 95 Avenanti, A., 371 Averbeck, B. B., 229 Avraamides, M. N., 14, 15, 153, 161, 172 B Bailey, D., 371 Baldwin, D. A., 339

Ballard, D. H., 210, 371 Bangert, M., 54 Baron-Cohen, S., 358 Barresi, J., 47, 48, 83 Barrett, T. E., 162 Barsalou, L. W., 71, 82 Basso, M. A., 221, 222 Bastian, A., 223 Bateson, M., 403 Beardsworth, T., 59, 327 Bechara, A., 226 Beer, R. D., 209 Bekkering, H., 181, 335 Berger, S. E., 304, 305, 312 Bergson, H., 209 Berkowitz, L., 96 Berns, G. S., 225 Bernstein, N., 276, 277 Bertenthal, B. I., 120, 281, 291, 294, 296, 311, 326, 328, 329, 337, 347, 348, 349, 350, 354, 355, 356 Berti, A., 185, 248, 262 Best, J. B., 207 Beurze, S. M., 256, 267 Beusmans, J. M. H., 29 Bhalla, M., 189 Bichot, N. P., 222 Bingham, G. P., 7 Binkofski, F., 253, 258 Blackmore, S., 403 Blake, R., 124 Blakemore, S. J., 46, 48, 58, 84, 102, 325 Blanchard, R. J., 80 Block, N., 207 Bly, L., 281 Bock, O., 265 Bonatti, L., 83 Bonda, E., 128 Böök, A., 2, 14 Booth, A., 351, 352, 353



Author Index

Booth, J. R., 390 Boraud, T., 226 Bosbach, S., 69 Botvinick, M., 132 Boynton, G. M., 217, 218, 232 Brain, W. R., 2 Brass, M., 326, 337 Bremmer, F., 262 Bridgeman, B., 30 Bril, B., 285 Bristow, D., 267 Broadbent, D. E., 218 Brooks, R., 209 Brothers, L., 136 Brown, J. W., 225 Bruner, J. S., 335, 403 Buccino, G., 326 Budiu, R., 374 Bullock, D., 214, 229 Bülthoff, I., 128, 130 Buneo, C. A., 219, 220 Burbaud, P., 219 Burnod, Y., 214 Burt, P., 121 Butterworth, G., 340, 346 Buxbaum, L., 84 C Calton, J. L., 217, 220, 251 Calvo-Merino, B., 54, 55, 126, 325, 336 Caminiti, R., 220 Campbell, J., 401 Campos, J. J., 286, 292, 293, 307 Cantrall, W., 395 Carello, C. D., 221 Carey, D. P., 31, 188 Carlson, V. R., 4 Carruthers, P., 401 Casile, A., 56, 57, 59, 126 Castiello, U., 187, 218, 222, 223, 231, 326 Cavina-Pratesi, C., 253, 259 Chaminade, T., 327, 345 Chan, M. Y., 311, 312 Choamsky, N., 206 Chomsky, N., 375, 378, 379 Chong, R. K., 226 Chouchourelou, A., 134, 136 Churchland, P. S., 209 Cisek, P., 204, 212, 213, 215, 218, 221, 223, 225, 227, 229, 230, 231, 232, 233 Clark, A., 48, 204, 209, 227, 276 Clarke, T. J., 119, 134

Clower, W. T., 230 Coe, B., 221 Cohen, L. R., 92, 120, 132, 265 Cohen, M. S., 370 Colby, C. L., 208, 216, 217, 220, 232, 247, 248, 251, 262 Cole, J. D., 69, 70 Connolly, J. D., 252, 256, 267 Constantinidis, C., 219 Cooke, D. F., 248 Corkum, V., 339 Corlett, J. T., 7 Cowey, A., 185 Crammond, D. J., 221 Creem, S. H., 31, 188, 197 Creem-Regehr, S. H., 2, 7, 189 Cross, E. S., 57 Csibra, G., 324, 358 Cuijpers, R. H., 5 Culham, J. C., 209, 219, 247, 251, 252, 253, 255, 258 Cutting, J. E., 7, 59, 120, 180, 184, 348 D Da Silva, J. A., 4 Darwin, C., 81, 94 Dassonville, P., 31 Dawson, G., 98 de Jong, B. M., 257, 267, 268 de Vries, J. I. P., 277 Deak, G. O., 339 Decety, J., 46, 52, 125, 323, 326, 356 Dechent, P., 256, 267 Dennett, D., 401 DeRenzi, E., 85 Desjardins, J. K., 80 Desmurget, M., 214, 221 DeSouza, J. F. X., 31, 252, 265 DeSperati, C., 50 Dewey, J., 209 di Pellegrino, G., 226, 248 Diamond, A., 331 Diamond, R., 91 Diedrich, F. J., 334 DiFranco, D., 253 Dimberg, U., 97 Dittrich, W. H., 59, 134 Dodd, D. H., 207 Dokic, J., 48 Domini, F., 180 Donald, M., 372 Dorris, M. C., 219, 220, 221

Author Index


Downing, P., 83, 84 Dretske, F., 210, 233 Dulcos, S. E., 96, 97 Durgin, F. H., 20, 21

Frith, C. D., 133, 323 Fukusima, S. S., 12, 13 Funk, M., 51, 85, 127, 131, 132 Fuster, J. M., 226

E Eby, D. W., 7 Eckler, J., 403 Edwards, M. G., 326 Ellard, C. G., 20, 22 Elliott, D., 7, 8 Engel, A. K., 217 Enright, J. T., 148 Eppler, M. A., 292 Epstein, W., 4, 191, 221 Erlhagen, W., 231 Eskandar, E. N., 251 ESPNmag. com, 196 Ewert, J-P., 231

G Gail, A., 251 Gallagher, S., 84 Gallese, V., 46, 210, 233, 325 Galletti, C., 251, 268 Garciaguirre, J. S., 286, 303, 311, 312 Gaunet, F., 173 Gauthier, L., 91 Gazzaniga, M. S., 208, 230 Gentile, A. M., 312 Georgopoulos, A. P., 232 Gergely, G., 345, 358 Gernsbacher, M. A., 377 Gibson, E., 128, 275, 377 Gibson, E. J., 278, 290, 291, 307 Gibson, J. J., 45, 81, 179, 209, 212, 215, 223, 233, 260, 278, 288 Giese, M. A., 128, 130 Gilinsky, A. S., 4, 5, 22 Givon, T., 372, 398 Glenberg, A. M., 71, 82, 210 Glimcher, P. W., 213, 232 Gnadt, J. W., 31 Gogel, W. C., 3, 4, 5, 18 Gold, J. L., 213, 222, 227, 232 Goldman, A., 48 Goltz, H. C., 262 Gonzalez, C. L., 187 Goodale, M. A., 31, 197, 198, 217, 247 Gottlieb, J., 219, 220 Grafton, S. T., 252 Graziano, M. S. A., 220 Greeno, J., 402, 404 Greenwald, A. G., 46, 325 Grefkes, C., 247, 252 Grezes, J., 58, 70, 323, 326, 355 Grodzinsky, Y., 379, 391 Grön, G., 32 Grosjean, M., 52, 53 Grossman, E. D., 84, 102, 128, 133 Grush, R., 48

F Fadiga, L., 215, 326 Fagg, A. H., 231 Fantz, R. L., 83 Farrell, M. J., 19 Farroni, T., 340, 341 Fauconnier, G., 399 Fazendeiro, T., 100 Felleman, D. J., 216, 232 Ferraina, S., 219, 220 Ferrari, F., 339 Fetz, E. E., 210, 211 Field, T., 83 Fitts, P. M., 50, 52, 207 Flach, R., 51, 62, 68, 327 Flanders, M., 156, 174 Fleury, M., 70 Flores d’Arcais, J-P., 61 Fodor, J. A., 113 Fogassi, L., 46, 325 Foley, J. M., 4, 6, 7, 25, 29, 30, 265 Fontaine, R., 357 Fox, R., 86 Frak, V., 252 Franchak, J. M., 278, 279 Frankenburg, W. K., 281, 284, 285 Franklin, N., 173 Franks, S. L., 394 Franz, V. H., 31 Frazier, L., 386 Freedland, R. L., 283 Frey, S. H., 258

H Haffenden, A. M., 31 Haggard, P., 148 Halligan, P. W., 185, 248 Halverson, H. M., 253


Author Index

Hamilton, A., 59, 69, 126 Hardcastle, V. G., 209, 233 Hari, R., 267 Harlow, H. F., 308 Harnad, S., 81, 233 Harris, C. L., 380, 381 Harris, P., 48 Harvey, I., 209 Harway, N. I., 34 Hasebe, H., 263 Haslinger, B., 54 Hasson, U., 133, 261 Hatfield, E., 94, 95 Haueisen, J., 54 Hauser, M. D., 226 Hausser, R., 386 Haviland, J. B., 396 Haxby, J. V., 102 He, Z. J., 7, 9, 30 Heide, W., 31 Hendriks-Jansen, H., 204, 209, 227, 231 Henriques, D. Y., 265 Heptulla-Chatterjee, S., 87, 122, 132 Hertenstein, M. J., 95 Heyes, C., 326 Hildreth, E., 116 Hinde, R. A., 218 Hiris, E., 120 Hobson, P. R., 98, 100 Hofer, T., 338, 358 Hofstadter, D. R., 207 Hommel, B., 46, 48, 210, 347 Hood, B. M., 340, 341 Horak, F. B., 226 Horwitz, G. D., 221 Hoshi, E., 226, 227 Houghton, G., 231 Howard, I. P., 3 Hubbard, T. L., 51 Hubel, D., 115 Humphreys, G. W., 223, 231 Hutchison, J. J., 5, 12, 32, 33 Huttenlocher, J., 394 I Iacobini, M., 47, 124, 133, 325, 326 Iriki, A., 184, 198, 248, 261 Israël, 17 J Jackson, J. H., 209 Jackson, R. E., 195

Jacobs, A., 125, 126, 128, 134, 349 James, W., 45, 46, 81, 325 Janssen, P., 219, 220, 221 Jeannerod, M., 48, 58, 131, 252, 324, 370 Joh, A. S., 290, 306 Johansson, G., 59, 118, 119, 127, 130, 328, 348 Johnson, M. H., 83, 85 Johnson, M. L., 281 Johnson, P. B., 220 Johnson-Laird, P. N., 207, 387 Jordan, M. I., 328 Just, M. A., 391 Juurmaa, J., 17 K Kail, R., 279 Kakei, S., 370 Kalaska, J. F., 212, 213, 215, 217, 218, 219, 220, 221 Kalivas, P. W., 225 Kamio, A., 398 Kandel, S., 51, 67, 327 Karnath, H. O., 253 Kawashima, R., 252 Kawato, M., 328 Keele, S. W., 221 Keller, ??, 68 Kelly, J. W., 4, 5, 6 Kempen, G., 386 Kendrick, K. M., 81 Kermadi, I., 226, 229 Kertzman, C., 252 Kerzel, D., 51 Kieras, D., 45 Kilner, J. M., 327, 337 Kim, J-N., 213, 226 Kinsella-Shaw, J. M., 306 Kintsch, W., 374 Kiraly, I., 324 Klatzky, R. L., 14, 15, 145, 148, 149, 150, 151, 152, 159, 161, 172, 173, 371, 404 Knapp, J. M., 2, 4, 6, 7, 10, 12, 13, 23 Knoblich, G., 48, 51, 58, 60, 65, 66, 130, 132, 137, 327, 345, 372 Koch, C., 2, 183, 198 Kohler, E., 46, 56, 60 Kornblum, S., 231 Korte, A., 51, 120 Koski, L., 47, 326 Kozlowski, L. T., 134 Krebs, J. R., 181

Author Index Kudoh, N., 29 Kuhl, P., 83 Kurata, K., 223 Kusunoki, M., 219, 220, 232 L Lacerda, F., 324 Lacquaniti, ??, 50 Ladavas, E., 248 Lakoff, G., 399 Lampl, M., 281, 282 Land, M. F., 180, 181 Landy, M. S., 180 Langdell, T., 98 Larsen, A., 175 Larsen, R. J., 96 Lavadas, E., 184 Lave, J., 403 Law, I., 266 Lazarus, R. S., 94 Legerstee, M., 83 Lehar, S., 2 Leo, A. J., 297 Leung, E. H., 339, 347 Levin, C. A., 29 Lhermitte, F., 357 Liberman, A. M., 46 Linkenauger, S. A., 186, 198 Loarer, E., 14 Lockman, J. J., 296 Longo, M. R., 325, 326, 331, 332, 333, 336, 347, 358 Loomis, J. M., 2, 3, 5, 6, 7, 8, 9, 12, 13, 14, 15, 16, 17, 18, 19, 23, 26, 27, 28, 29, 30, 33, 34, 159, 371 Louis-Dam, A., 345 Loula, F., 60, 92, 129, 130, 349 Luo, Y., 324, 345 M MacDonald, M. C., 384 Macrae, C., 370 MacWhinney, B., 71, 373, 377, 384, 386, 401 Maguire, E. A., 32 Malach, R., 261 Maratos, O., 357 Maravita, A., 248 Marconi, B., 220 Marcovitch, S., 331, 334 Marey, E. J., 118 Mark, L. S., 306, 313 Marlinsky, V. V., 17


Marr, D. C., 3, 113, 207, 216, 324 Marslen-Wilson, W. D., 386 Martin, A., 370 Martin, R. A., 80 Mast, F. W., 179 Matelli, M., 217 Maturana, H. R., 209 Maurer, D., 85, 88 May, M., 172, 175 Mazzoni, P., 219 McClelland, J. L., 376 McCready, D., 4 McDonald, J. L., 393 McGoldrick, J. E., 88 McGraw, M. B., 357 McIntosh, D. N., 95, 96, 97 McNeill, D., 373 McPeek, R. M., 222 Mead, G. H., 209 Medendorp, W. P., 14, 31 Meltzoff, A. N., 86, 95, 131, 137, 356, 357, 370, 372 Merleau-Ponty, M., 45, 209 Merriam, E. P., 146 Messing, R., 7 Miall, R. C., 328 Middleton, F. A., 225, 229, 232, 370 Miller, E. K., 226 Miller, G. A., 207, 221, 374 Millikan, R. G., 209, 233 Milner, A. D., 30, 31, 189, 197, 198, 217, 232, 247 Milner, D. A., 187 Mink, J. W., 225, 226 Mitchell, D. C., 386 Mithen, S., 401 Mittelstaedt, M. L., 17 Mohler, B. J., 20, 21 Mondschein, E. R., 301 Montepare, J. M., 133 Moody, E. J., 95 Moore, C., 82, 100 Moore, K. L., 277, 280 Moran, J., 217, 218, 232 Morris, J. P., 133 Mountcastle, V. B., 219 Munakata, Y., 334, 337 N Nakagawa, A., 357 Nakamura, K., 220, 230 Neggers, S. F., 265


Author Index

Nelson, K., 403 Neumann, O., 218, 231 Newell, A., 45, 205, 207, 208, 209, 224 Niedenthal, P. M., 96, 101 Noonan, K. J., 281 Núñez, R., 209 O O’Grady, W., 377, 386, 387, 404 O’Regan, J. K., 210 Ogden, J. A., 85 Ohira, H., 96 Ooi, T. L., 3, 5, 17, 18, 19, 20, 29, 30, 32, 33, 34 Oram, M. W., 128 Orliaguet, J-P., 345 Ounsted, M., 280 Oyama, T., 4 Ozonoff, S., 100 P Palmer, C. E., 280 Palmer, C. F., 290 Pani, J. R., 174 Panksepp, J., 93, 94 Paré, M., 220 Parsons, L. M., 370 Passingham, R. E., 217, 218 Paus, T., 266 Pavlov, I., 205 Pavlova, M., 329, 349 Pecher, D., 71, 375 Pegna, A. J., 185 Pellijeff, A., 257, 267 Pelphrey, K. A., 323, 372 Petit, L. S., 162 Philbeck, J. W., 2, 4, 6, 8, 9, 10, 12, 13, 16, 17, 18, 19, 20, 25, 31, 32 Piaget, J., 45, 81, 82, 204, 209, 227 Pinker, S., 207, 401 Pinto, J, 86, 120, 132, 349, 350, 355 Pisella, L., 214 Pitzalis, S., 250, 252, 257, 268 Plamondon, R., 52 Platt, M. L., 208, 213, 219, 220, 221, 222 Plumert, J. M., 314 Pollick, F. E., 59, 119, 129, 134 Portin, K., 256, 267 Posner, M. I., 340 Post, R. B., 31 Powell, K. D., 219 Powers, W. T., 209

Prado, J., 252, 256, 267 Prasad, S., 131, 132 Prechtl, H. F. R., 357 Premack, D., 358 Previc, F. H., 248 Price, E. H., 85 Prinz, W., 45, 46, 57, 60, 71, 124, 125, 126, 130 Proffitt, D. R., 1, 12, 32, 180, 183, 188, 189, 190, 191, 348 Puce, A., 136 Pylyshyn, Z., 113, 207, 208 Q Quinlan, D. J., 262 Quinn, P., 83 Quintana, J., 226 R Rader, N., 292 Rainer, G., 226 Ramachandran, V. S., 370 Ratner, N., 403 Redgrave, P., 225 Redish, A. D., 32 Reed, C. L., 84, 86, 87, 88, 89, 91, 95, 99, 132 Reed, E. S., 277 Reinhart, T., 379, 383 Repp, B. H., 62, 63, 64, 65, 68, 327 Reynolds, J. H., 208, 233 Richards, J. E., 292 Richardson, A. R., 20, 22, 23 Richter, H. O., 263 Riecke, B. E., 172 Riehle, A., 223 Riener, C. R., 196 Rieser, J. J., 7, 8, 14, 20, 21, 34, 159, 173, 175, 192 Riskind, J., 96 Rizzolatti, G., 46, 47, 58, 84, 124, 132, 220, 324, 325, 326, 339, 347, 356, 371 Ro, T., 83 Robinson, S. R., 277, 311, 312 Rocha, C. F. D., 182 Rogoff, B., 403 Romo, R., 221, 232 Rose, J. L., 347 Rossetti, Y., 31 Rowe, J. B., 226 Ruff man, T., 334 Runeson, S., 59, 70, 119, 129, 133 Russell, B., 2 Rutherford, M. D., 99

Author Index S Sahm, C. S., 2, 7, 10, 11, 19, 23, 32 Sawamoto, N., 226 Saxe, R., 84, 102, 323 Saygin, A. P., 355 Scaife, M., 339 Schall, J. D., 221, 222 Schmidt, R. A., 285, 312 Schofield, W. N., 335 Schultz, W., 226 Schwoebel, J., 84 Searle, J., 233 Sebanz, N., 55, 71 Sedgwick, H. A., 4, 180, 191 Sereno, M. I., 31, 262 Shadlen, M. N., 220 Shannon, C. E., 206 Shelton, A. L., 148, 154, 155, 173 Shen, L., 221 Shepard, R. N., 113 Shiff rar, M., 50, 60, 87, 116, 117, 118, 121, 122, 124, 127, 130, 137, 349 Sholl, M. J., 17 Sinai, M. J., 9 Singer, W., 208, 217, 233 Singhal, A., 253 Sirigu, A., 162 Skinner, B. F., 206 Slaughter, V., 83, 85, 87, 88 Smeets, J. B., 253 Smith, L. B., 331, 332 Smith, P. C., 7 Smythies, J. R., 2 Snyder, L. H., 208, 219, 220, 232, 251 Sommerville, J. A., 358 Speigle, J. M., 8, 9 Spelke, E., 401 Spivey, M., 48 Spradley, J., 403 Stanfield, R. A., 375 Stankowich, T., 182 Steenhuis, R. E., 7, 8 Stefanucci, J. K., 191, 195 Stein, D., 375 Stein, J. F., 208, 216, 219, 220 Stekelenberg, J. J., 89 Stengel, E., 357 Stepper, S., 96 Sterelny, K., 209 Stetten, G., 166, 167 Stevens, J. A., 51, 124, 125, 349 Stevenson, H. W., 308

417

Still, A., 209 Strack, F., 96 Sugrue, L. P., 219, 220 Sumi, S., 329, 349 Swendsen, R., 164 T Tai, Y. F., 327 Takikawa, Y., 226 Tanaka, J. W., 88, 91 Tanji, J., 226 Tenny, C., 383, 384, 397 Thelen, E., 82, 204, 209, 227, 277, 331, 354 Thompson, E., 209 Thompson, W. B., 2, 10, 12, 13, 21, 23 Thomson, J. A., 7, 8, 14, 18, 159 Thorndike, E., 205 Thornton, I. M., 49, 57, 59, 122 Tipper, S. P., 222, 223, 231 Titchener, E., 204 Titzer, R., 292 Toates, F., 231 Tolman, E. C., 206 Tomasello, M., 330, 356 Toye, R. C., 29 Tracy, J. L., 95 Treue, S., 217, 218, 232 Trevarthen, C., 82 Turati, C., 85 Turing, A. M., 206, 207 Tversky, B., 155 U Umiltà, M. A., 46 Ungerleider, L. G., 208, 216, 232 V Valenza, E., 85 Valyear, K. F., 187, 198 van Donkelaar, P., 265 van Hof, P., 335 Van Sommers, P., 60 Vanni, S., 267 Vereijken, B., 285 Verfaillie, K., 329 Virji-Babul, N., 123, 126 Viviani, P., 50, 125, 130, 329 Vogt, S., 326 Volkmar, F., 97 von der Malsburg, C., 208, 233 von Hofsten, C., 253, 324, 340 Vygotsky, L., 403

418

Author Index

W Wagner, M., 29, 148 Walk, R. D., 291, 292 Wall, J., 192 Wallach, H., 116 Wapner, S., 335 Warren, W. H., 213, 278, 290, 306 Watson, J. B., 206 Weinstein, S., 85 Weiskrantz, L., 30 Weiss, P. H., 249 Welsh, T. N., 222 Wenger, K. K., 226 Went, F. W., 195 Wertheimer, M., 120 Wesp, R., 197 Whitshaw, I. Q., 32 Willbarger, J., 97 Wilson, H., 116, 124, 125 Wilson, M., 45, 47, 48, 49, 52, 54, 65, 71, 82, 83, 84 Winston, J. S., 133 Wise, S. P., 220

Witherington, D. c., 292 Witt, J. K., 1, 20, 32, 183, 184, 191, 193, 196, 198 Wolff, W., 59 Wolpert, D. M., 48, 214, 328 Wong, E., 31 Woodward, A. L., 324, 339, 345 Worsley, C. L., 32 Wraga, M. J., 173 Wray, R. E., 45 Wu, B., 3, 5, 7, 8, 19, 29, 30, 32, 167, 168 Wundt, W., 204 Y Yarbus, A., 180 Ydenberg, R. C., 182 Yin, R. K., 88 Z Zahorik, P., 8 Zelazo, P. D., 331 Zwaan, R. A., 397

Subject Index Numbers in italic refer to figures or tables

A Action, 204 characterized, 45 cognitively mediated vs. perceptually guided, 165, 165–169, 166 distance perception, 6–10 intention, 55–56 Action coordination, action prediction, 67–69 Action identification, self-generated actions, 58–65 Action perception, 45–71 body sense, 69–70 continuous, graded representations, 48 expertise, 53–58 motor laws, 49–53 apparent body motion, 51–52 Fitt’s law, 52–53 two-thirds power law, 50–51 new skill, 56–58 spatial perception, 6–10 Action prediction action coordination, 67–69 handwriting, 66–67 own vs. other’s action, 65–66 Action representations, 46 Action selection, 212–213, 213 evolutionary elaborations, 215 parameters, 212–213, 213 Action simulation, 48, 326–326 Action specification, 212–213, 213 affordance competition hypothesis, 205, 218–230 cognitive ability, 227–247 decision making through distributed consensus, 224–227 simultaneous processing of potential actions, 221–224 fronto-parietal network, 205, 218–230 parameters, 212–213, 213

Action understanding deafferentation, 69–70 defined, 324 human movement structure perception, 328–329, 329 modes, 324–325 motor knowledge development perspective, 323–359 perseverative errors in searching for hidden objects, 331–338, 332, 333, 336, 338 point-light display perception of biological motion, 348–356, 349, 351, 352, 353, 354 visual orienting in response to deictic gestures, 339–348, 342, 343, 344, 346 prediction of effects of action, 327–328 covert imitation, 327–328 proprioception, 69–70 sensory neuronopathy, 69–70 touch, 69–70 Adaptation path integration, 20 perceptually directed action, 20–25, 22 distance-specific adaptation, 22–23 Affordance competition hypothesis, 211–234 action specification, 205, 218–230 cognitive ability, 227–247 decision making through distributed consensus, 224–227 simultaneous processing of potential actions, 221–224 cerebral cortex, 214–215, 215 fronto-parietal network, 205, 218–230 visual processing, 216–218 Affordances, 278–280 defined, 278 development, 280–288

419

420

Subject Index

Affordances (continued) body growth, 280–281, 282 environment, 287 motor proficiency, 281–287, 284, 285 new perception-action systems, 281– 287, 284, 285 infant perception, 288 bridges, 303–306 bridges using handrail, 303–306, 304 bridges with wobbly and wooden handrails, 305, 305–306 cliffs, 290–293, 292 gaps, 293–297, 294, 298 perceptual problem, 288–290 slopes, 297–380, 299 slopes crawling vs. walking, 300–301 walking with weights, 302, 302–303 motor action, 278–280 thresholds, 278–279, 279 Alignment allocentric layer, 172–173, 174, 175 accessibility ordering, 173 bridging from actor’s representation to other coordinate systems, 169–170 frames of reference, 145–175 alignment load taxonomy, 171–175, 174 coordinate transformation, 157–158 imaginal walking, 170–171 parameter remapping, 157–158 right-hand rule, 171 ultrasound, 171 obliqueness, 173–174, 174, 175 Allocentric layer, 172 alignment, 172–173, 174, 175 accessibility ordering, 173 body-in-environment frame, 172 defined, 146 environmental frame of reference, 172 frames of reference, 172–173, 174, 175 accessibility ordering, 173 identification, 150 set of measures, 149 imagined frame, 172 Ambiguity, perspective tracking, 384–387 relativization, 382–384, 389–392 scope, 387–389 Aperture problem, 115, 115–118, 117 Attractor space, 48 Audition, blind walking, 8–9, 9 Auditory distance perception, 23–25, 24 Auditory stimulus, self-identification, 62–65

Autism, social perception, 97–100 atypical face and configural processing, 98–99 body perception, 99 difficulties in social adjustment, 98 mimicry, 98 perceiving emotion from movement, 100 rule-based approach of emotional perception, 99–100 social-emotional perception, 99–100 template-based processes, 99, 100 B Balance, infant, 281–282 Ball or bean bag throwing, 7 Behavior cognitive processes, 205, 212 motor processes, 205, 212 perceptual processes, 211–212 Behaviorism, 206 schematic functional architectures, 205 Belly crawling, 283, 284 Blind walking, 7–11, 8, 18 audition, 8–9, 9 effect of feedback, 21–22 full-cue conditions, 9–10, 10 recalibration, 21–22 reduced-cue conditions, 9–10, 10 visually-directed, 9–10, 10 visual perception, gain, 21 Body growth, 280–281, 282 fetus, 280 infant, 280–281 Body image matching, projection, 370–371 Body-in-environment frame, allocentric layer, 172 Body inversion paradigm, 88–89, 89, 90 Body movement, see also Specific type anatomically plausible movement path, 51 multimodal body schema, 51–52 Body perception configural processing continuum, 88–89 creating self-other correspondences, 83–93 multimodal body schema, 51–52 spatial perception, 181–183 action-specific perception, 181 behavioral ecology, 180–182 specialized body processing, 87–90 sources, 90–92 specialized body representations, 84–86 using one’s own body to organize information from others, 86–87

Subject Index Body posture, 80–81 feelings, 96–97 importance, 81 understanding others, 94–97 Body processing, specialized, 87–90 sources, 90–92 Body representations, specialized, 84–86 Body sense, action perception, 69–70 Brain, 205–206, see also Specific part macaque monkey brain, 247–249 peripersonal space, 248 Bridges, 303–306 C Central executive, 224 Cerebral cortex, 208 affordance competition hypothesis, 214–215, 215 Clapping, identification of one’s own, 62–63 Cliffs, 290–293, 292 Cognition embodiment, relationship, 203–204 general architecture, 45 schematic functional architectures, 205 Cognitive development, language, 403 Cognitive neuroscience, 208 Cognitive psychology, 208, 209 Color vision, 3 Common coding theory of perception and actions, 46 drawing, 60–62 embodiment, 47 handwriting, 60–62 motor skill acquisition, 58 representation, 47–48 vs. radical interactionism, 47 Computation, 206 Computer metaphor, 205, 207 Concept-measurement relationship, perception, 3–4 Converging measures, distance perception, 32–33 Coreference, 379–380 Cortical visual processing, two visual streams framework, 30–32 Crawling, infants, 283–285, 284, 285 Cue integration, 180 current models, 180 Domini et al.’s model, 180 perceiver’s goals, 180–181 perceiver’s intent, 180–181 weighted averaging models, 180

421

D Dancers, expertise in action perception, 54–55, 57–58 Deafferentation, action understanding, 69–70 Deictic gestures, visual orienting in response to deictic gestures, 339– 348, 342, 343, 344, 346 Delaying commitment, 380–382 Descriptive representations, 210–211 Development affordances, 280–288 body growth, 280–281, 282 environment, 287 motor proficiency, 281–287, 284, 285 new perception-action systems, 281– 287, 284, 285 afte r infancy, 312–314 evidence for observation–execution matching system, 330–356 motor knowledge and action understanding, 323–359 Direct matching system, 325, 327 developmental evidence, 330–356 Distance perception, 1–33 action, 6–10 converging measures, 32–33 indirect methods, 4 intention to act, 1 judgments of collinearity, 5 judgments of perceived exocentric extent, 5 meaning of, 1 methods for measuring, 4–12 perceived exocentric distance, distortions, 29–30 perceived shape, distortions, 29–30 perception of exocentric direction, 5 percept-percept couplings, 4 perceptually directed action model, 12–19 calibration role, 12–13, 19–25 processing stages, 13–15, 14–15 reasons for error, 1 scale construction, 5 spatial updating, 1, 2, 6–10 accuracy, 1–2 triangulation methods, 10–12, 11, 13 two visual systems, action-specific representations, 30–32 verbal report, 4, 5 Dorsal processing, spatial perception, 188, 189

422

Subject Index

Dorsal stream, 247–248 Drawing common coding theory of perception and actions, 60–62 mirror systems, 60–62 point-light displays, 60–62 Dualism, 204 schematic functional architectures, 205 Dynamical theory, 209 E Effector-specific areas, 248–249 Egocentric distance, verbal reports, spatial updating to correct for bias, 25–29, 27 Egocentric frames identification, 150 set of measures, 149 Embedded action fetus, 276–277 infant perception, 276–277 Embodied action fetus, 276–277 infant perception, 276–277 Embodied cognition, 203–204, 209 action context, 82 action goal, 82 sensorimotor roots, 81–82 social perception, added complexities, 81–83 in social world, 81 Embodied framework for behavior, 210–215 Embodied linguistic perspectives, mental model encoding, 369–404 Embodied motion perception, visual sensitivity, 113–137 Embodiment, 91 characterized, 45–46 common coding theory of perception and actions, 47 defined, 203 meaning, 203–204 mirror systems, 47 theoretical assumptions, 45–46 Emmert’s Law, size-distance invariance, 5 Emotional body perception, 93–97 Emotional contagion, 95–96 Empathy, 397 projection, 372 Environmental frame of reference, allocentric layer, 172 Evidentiality, 397–399

Expertise, 91 action perception, 53–58 Extinction, peripersonal space, 248 F Face, 88 relative location of features, 85 Face inversion effect, 88–89, 90 Facial expression, 80 effect of facial movement on affective experience, 96 induced corresponding emotional states, 96 understanding others, 94–97 Falling, spatial perception, 194–195 cost of injury, 195 slant perception, 195 vertical distance overestimation, 195–196 Fetus body growth, 280 embedded action, 276–277 embodied action, 276–277 Fitt’s law, 52–53 Frames of reference, see also Specific type alignment, 145–175, 157–158 alignment load taxonomy, 171–175, 174 coordinate transformation, 157–158 imaginal walking, 170–171 parameter remapping, 157–158 right-hand rule, 171 ultrasound, 171 allocentric frames, 148–151, 149 identification, 150 set of measures, 149 allocentric layer, 172–173, 174, 175 accessibility ordering, 173 coordination across body-defined, 146 defined, 146 egocentric frames, 148–151, 149 identification, 150 set of measures, 149 mechanisms, intermediate levels of description, 169–170 methods to identify parameters, 151 errors spatially biased by frame, 155–156 parameter values easiest to report, 154–155, 155 what is reported?, 151–154, 152, 153 multiple, 145–175

Subject Index changes in egocentric parameters under imagined locomotion, 158– 161, 160, 161 cognitively mediated action vs. perceptually guided action, 165, 165–169, 166 coordinating physical frames of reference, 169 current positional cues, 159 embodied actor in, 158–169 imagined updating, 159–161, 160, 161 ongoing movement cues, 159 right-hand rule, 162–164, 163 spatial thinking through action, 161–164 obliqueness, 173–174, 174, 175 parameters, 147 processes, 147–148 reference-frame parameterization studies implications, 156–157 task demands, 145 Fronto-parietal network action specification, 205, 218–230 affordance competition hypothesis, 205, 218–230 Full-cue conditions, blind walking, 9–10, 10 Functional magnetic resonance imaging, 247–268 superior parieto-occipital cortex, 247–268 activation from arm transport during reaching actions, 249–258, 250, 254, 256–257 in humans, 247–268 preference for near gaze, 262–266, 264 preference for objects within arm’slength, 259–262, 260 G Gait, 56–57 infant perception, 350–354, 351, 352, 353, 354 point-light displays, 59–60 self-generated actions, 59–60 self-recognition, individualistic styles, 60 symmetrical patterning, 350–354, 351, 352, 353, 354 Gaps, 293–297, 294, 298 Grammar perspective hypothesis coreference, 379–380

423

delaying commitment, 380–382 noncentrality, 380 structure vs. function, 378–384 perspective taking clitic assimilation, 392 empathy, 397 evidentiality, 397–399 implicit causality, 393–395 metaphor, 399 perspectival overlays, 395–399 space, 395–396 time, 396–397 perspective tracking, 373 maintenance, 378 modification, 378 shift, 378 Grasping neurophysiology, 187–188 spatial perception, 186–188 left-handed people, 187 right-handed people, 186–187 H Handwriting action prediction, 66–67 common coding theory of perception and actions, 60–62 mirror systems, 60–62 own vs. other’s recognition, 66–67 point-light displays, 60–62 trajectory, 51 velocity, 61–62 Heading, 147 Hidden objects, perseverative errors in searching for, 331–338, 332, 333, 336, 338 Hitting, spatial perception, 196–197 Human motion object motion comparing perception, 114–123 visual system in, compared, 114–137 visual analysis, 113–137 visual system action perception vs. object perception, 121–137 aperture problem, 115, 115–118, 117 bodily form, 131–133 controlling for viewpoint-dependent visual experience, 130–131 level of analysis, 116–117, 117 local measurements inherently ambiguous, 116

424

Subject Index

Human motion (continued) motion integration across space, 115–123 motion integration across time, 120–123, 123 motor experience vs. visual experience, 128–133 motor expertise, 124–127 multiple apertures, 118 perceptual sensitivity to emotional actions, 136–137 point-light displays, 118–120, 119 social context and apparent human motion, 134–136, 135 social-emotional processes, 133–137 visual expertise, 127–128 I Ideomotor principle, voluntary action, 46 Imaginal walking, 170–171 Imagined frame, allocentric layer, 172 Infant perception, 275–315 affordances, 288 bridges, 303–306 bridges using handrail, 303–306, 304 bridges with wobbly and wooden handrails, 305, 305–306 cliffs, 290–293, 292 gaps, 293–297, 294, 298 perceptual problem, 288–290 slopes, 297–380, 299 slopes crawling vs. walking, 300–301 walking with weights, 302, 302–303 embedded action, 276–277 embodied action, 276–277 everyday experience, 311–312 gait, 350–354, 351, 352, 353, 354 learning in development, 306–314 learning sets, 308–310 perception of point-light displays of biological motion, 348–356, 349, 351, 352, 353, 354 perseverative errors in searching for hidden objects, 331–338, 332, 333, 336, 338 visual orienting in response to deictic gestures, 339–348, 342, 343, 344, 346 Infants balance, 281–282 body growth, 280–281 crawling, 283–285, 284, 285

    locomotion, 283–287
        everyday experience, 311–312
        learning in development, 306–314
        learning sets, 308–310
    perceptually guided action, 275–315
    perception-action studies
        development, 276
        learning, 276
    perception-action systems, 286–287
    spatial body representation, 85–86
    supramodal body scheme, 86
    walking, 285–286
Information, 206–207
Information processing system, schematic functional architectures, 205
Intention, 323
    action, 55–56
    distance perception, 1
    self-other relationships, 82
Inter-location relations, parameters, 147
Inversion effects, 92

J

Joint attention, visual orienting in response to deictic gestures, 339–348, 342, 343, 344, 346

L

Language, 206, 373–404
    cognitive development, 403
    integration, 400–403
    mental model encoding, 369–404
    production, 400–403
Lateral intraparietal area, 219–220
Left-handed people, 187
Listening, 54
Localization, projection, 371
Locomotion, infants, 283–287
    everyday experience, 311–312
    learning in development, 306–314
    learning sets, 308–310
Luminous figures, judgments of shapes, 18

M

Machine theory, 206
Magnetoencephalography, superior parieto-occipital cortex, 256–257
Metaphor, 399
Mimicry, 95–96, 98
Mirror neurons, 325–326
Mirror systems, 46
    drawing, 60–62
    embodiment, 47
    handwriting, 60–62
    motor skill acquisition, 58
    parietal cortex, 46–47
    premotor cortex, 47
Motor action
    affordances, 278–280
    perceptual guidance, 275–276
Motor knowledge, action understanding
    development perspective, 323–359
    perception of point-light displays of biological motion, 348–356, 349, 351, 352, 353, 354
    perseverative errors in searching for hidden objects, 331–338, 332, 333, 336, 338
    visual orienting in response to deictic gestures, 339–348, 342, 343, 344, 346
Motor laws, action perception, 49–53
    apparent body motion, 51–52
    Fitts' law, 52–53
    two-thirds power law, 50–51
Motor skill acquisition, 57–58
    common coding theory of perception and actions, 58
    mirror systems, 58
Multimodal spatial body representation, 86–87
Musicians, self-identification, 64–65

N

Naïve realism, 2
Near space, characterized, 184
Neural data, pragmatic perspective interpretation, 216
Neuroimaging studies, peripersonal space, 247–268
Neurophysiology, 209
    grasping, 187–188
New skill, action perception, 56–58
Noncentrality, 380

O

Object, defined, 147
Object motion
    human motion
        comparing perception, 114–123
        visual system in, compared, 114–137
    visual system
        action perception vs. object perception, 121–137
        aperture problem, 115, 115–118, 117
        bodily form, 131–133
        controlling for viewpoint-dependent visual experience, 130–131
        level of analysis, 116–117, 117
        local measurements inherently ambiguous, 116
        motion integration across space, 115–123
        motion integration across time, 120–123, 123
        motor experience vs. visual experience, 128–133
        motor expertise, 124–127
        multiple apertures, 118
        perceptual sensitivity to emotional actions, 136–137
        point-light displays, 118–120, 119
        social context and apparent human motion, 134–136, 135
        social-emotional processes, 133–137
        visual expertise, 127–128
Obliqueness
    alignment, 173–174, 174, 175
    defined, 146
    frames of reference, 173–174, 174, 175
Observation-execution matching system, 325
    developmental evidence, 330–356
Occipito-parietal cortex, 247–268
Open-loop behavior, 7

P

Parameters
    frames of reference, 147
    inter-location relations, 147
    types, 147
Parietal cortex, 209
    mirror systems, 46–47
Path integration, adaptation, 20
Perceived egocentric distance, measurement, 1–33
Perceived exocentric distance, distance perception, distortions, 29–30
Perceived shape, distance perception, distortions, 29–30
Perception, 204, see also Specific type
    action
        continuous, graded representations, 48
        expertise, 53–58
    characterized, 45
    concept-measurement relationship, 3–4
    representational nature, 2–3

Perception-action studies, infant perception
    development, 276
    learning, 276
Perception-action systems, infants, 286–287
Perception of others, inherently social, 79
Percept-percept couplings, distance perception, 4
Perceptually directed action, 7, 12–19
    adaptation, 20–25, 22
        distance-specific adaptation, 22–23
    angular declination, 17, 20
    calibration, 12–13, 19–25
    infant locomotion, 275–315
    on-line modifications, 18–19
    spatial updating, 19
    systematic error, 19–20
Perceptual representations, 46
Peripersonal space, 248
    brain, 248
    characterized, 184
    extinction, 248
    neuroimaging studies, 247–268
    spatial encoding, 265–268
Personal space, characterized, 184
Perspective hypothesis, 373–376, 403
    claims, 373–374
    grammar
        coreference, 379–380
        delaying commitment, 380–382
        noncentrality, 380
        structure vs. function, 378–384
Perspective taking, grammar
    clitic assimilation, 392
    empathy, 397
    evidentiality, 397–399
    implicit causality, 393–395
    metaphor, 399
    perspectival overlays, 395–399
    space, 395–396
    time, 396–397
Perspective tracking, 376
    ambiguity, 384–387
        relativization, 382–384, 389–392
        scope, 387–389
    grammar, 373
        maintenance, 378
        modification, 378
        shift, 378
    projection, 372–373
    sentence comprehension
        competition, 377
        cues, 377
        incremental interpretation, 376
        load reduction, 376–377
        principles, 376–377
        role slots, 377
        starting points, 377
Phenomenal world, 2
Philosophy, psychology, relationship, 204
Pitch, 147
Point-light displays, 118–120, 119
    drawing, 60–62
    gait, 59–60
    handwriting, 60–62
Positron emission tomography, superior parieto-occipital cortex, 256–257
Posterior parietal cortex, 218–220
Postperceptual processes, 12
Pragmatic representations, 210–211
Premotor cortex, mirror systems, 47
Projection, 370–373
    body image matching, 370–371
    empathy, 372
    localization, 371
    perspective tracking, 372–373
Proprioception, action understanding, 69–70
Psyche, 204
Psychology
    history of, 204–210
    neglected body, 204
    philosophy, relationship, 204
Purpose, spatial perception, 183
Putting, spatial perception, 196–197

R

Radical interactionism, 45–46
    vs. common coding theory of perception and actions, 47
Rats in maze, 206
Reaching
    spatial perception, 184–186
        extending reach with tool, 184
        tool effect, 184, 185
    superior parieto-occipital cortex, 249–258, 250, 254, 256–257
        preferential response to objects within reachable space, 259–262, 260
Recalibration
    blind walking, 21–22
    general form, 23
    verbal reports, 21–22

Reduced-cue conditions, blind walking, 9–10, 10
Relativization, 382–384, 389–392
Representation, common coding theory of perception and actions, 47–48
Representative realism, 2
Right-hand rule, 162–164, 163, 171
Roll, 147

S

Scale, virtual reality systems, uniform scale compression, 2
Scale construction, distance perception, 5
Searching, perseverative errors in searching for hidden objects, 331–338, 332, 333, 336, 338
Self-generated actions
    action identification, 58–65
    gait, self-recognition, 59–60
Self-identification, auditory stimulus, 62–65
Self-other mapping, 83–93
Self-other relationships, intentionality, 82
Self-recognition, gait, individualistic styles, 60
Sensory neuronopathy, action understanding, 69–70
Sentence comprehension, perspective tracking
    competition, 377
    cues, 377
    incremental interpretation, 376
    load reduction, 376–377
    principles, 376–377
    role slots, 377
    starting points, 377
Shared representation, 355
Simulation, 48
Situated cognition, 46
Situated robotics, 209
Size-distance invariance, 4–5
    defined, 4–5
    Emmert's Law, 5
Slant perception, 189–191, 195
Slopes, 289–290, 297–380, 299
Social-emotional processes, 133–137
Social perception, 79–102
    autism, 97–100
        atypical face and configural processing, 98–99
        body perception, 99
        difficulties in social adjustment, 98
        mimicry, 98
        perceiving emotion from movement, 100
        rule-based approach of emotional perception, 99–100
        social-emotional perception, 99–100
        template-based processes, 99, 100
    body-specific representations and processes, 81
    creating self-other correspondences, 83–93
    deficits in understanding others, 97–100
    embodied cognition, added complexities, 81–83
    importance, 80–81
    specialized body processing, 87–90
    specialized body processing sources, 90–92
    specialized body representations, 84–86
    using one's own body to organize information from others, 86–87
Spatial image
    nonperceptual input, 15
    path integration, 16
    updating, 16, 16
Spatial perception
    action, 6–10
    action-specific approach, 183–199
    body, 181–183
        action-specific perception, 181
        behavioral ecology, 180–182
    characterized, 179
    dorsal processing, 188, 189
    falling, 194–195
        cost of injury, 195
        slant perception, 195
        vertical distance overestimation, 195–196
    grasping, 186–188
        left-handed people, 187
        right-handed people, 186–187
    hitting, 196–197
    purpose, 183
    putting, 196–197
    putting together what, where, and how, 197–199
    reaching, 184–186
        extending reach with tool, 184
        tool effect, 184, 185
    spatial updating, 6–10
    throwing, 193–194
        anticipated effort, 194
        effort associated with, 193
        intention of throwing, 193
    ventral processing, 188, 189
    visually specified environment, 179–181
    walking, 188–193
        behavioral ecology, 192–193
        perceiving distances, 192
        psychophysical response compression, 190
        slant perception, 189–191
        surface layout of ground, 188–189
Spatial thinking, 170
Spatial updating
    distance perception, 1, 2, 6–10
        accuracy, 1–2
    perceptually directed action, 19
    spatial perception, 6–10
Split treadmill, 354–355
Structuralism, 204
Substance dualism, 204
Superior parieto-occipital cortex
    activation foci, 266–268, 267
    functional magnetic resonance imaging, 247–268
        activation from arm transport during reaching actions, 249–258, 250, 254, 256–257
        in humans, 247–268
        preference for near gaze, 262–266, 264
        preference for objects within arm's length, 259–262, 260
    magnetoencephalography, 256–257
    positron emission tomography, 256–257
    reaching, 249–258, 250, 254, 256–257
        preferential response to objects within reachable space, 259–262, 260
    visual perception, preference for near gaze, 262–266, 264

T

Tasks, frames of reference, 145
    own intrinsic frame of representation, 146
Teleoperator systems, 3
Throwing, 7
    spatial perception, 193–194
        anticipated effort, 194
        effort associated with, 193
        intention of throwing, 193
Timing information, 62–64
Tools, 186
Touch, action understanding, 69–70

Trajectory, velocity, 50–51
Triad judgments, 154–155, 155
Triangulation methods, distance perception, 10–12, 11, 13
Two-thirds power law, 50–51
Two visual systems
    debate about, 1
    distance perception, action-specific representations, 30–32

U

Ultrasound, 171
Understanding others
    body posture, 94–97
    facial expression, 94–97

V

Velocity
    handwriting, 61–62
    trajectory, 50–51
Ventral processing, spatial perception, 188, 189
Ventral visual pathway, 324–325
Verbal reports
    distance perception, 4, 5
    effect of feedback, 21–22
    egocentric distance, spatial updating to correct for bias, 25–29, 27
    recalibration, 21–22
    systematic underreporting bias, 26–29, 27, 28
Vertical distance overestimation, 195–196
Virtual reality, 172–173
    uniform scale compression, 2
Visual attention, action-specific, 181
Visual distance perception, 23–25, 24
Visual expertise, 91
Visually directed pointing, 7
Visually specified environment, spatial perception, 179–181
Visual perception, see also Visual system
    blind walking, gain, 21
    learning new motor task, 56–57
    superior parieto-occipital cortex, preference for near gaze, 262–266, 264
Visual processing, 216–218
    affordance competition hypothesis, 216–218
Visual space perception, 3
Visual stimulus, above-ground plane, 17, 17–18

Visual system, 113, see also Visual perception
    human motion
        action perception vs. object perception, 121–137
        aperture problem, 115, 115–118, 117
        bodily form, 131–133
        controlling for viewpoint-dependent visual experience, 130–131
        level of analysis, 116–117, 117
        local measurements inherently ambiguous, 116
        motion integration across space, 115–123
        motion integration across time, 120–123, 123
        motor experience vs. visual experience, 128–133
        motor expertise, 124–127
        multiple apertures, 118
        perceptual sensitivity to emotional actions, 136–137
        point-light displays, 118–120, 119
        social context and apparent human motion, 134–136, 135
        social-emotional processes, 133–137
        visual expertise, 127–128
    modular understanding, 113
    object motion
        action perception vs. object perception, 121–137
        aperture problem, 115, 115–118, 117
        bodily form, 131–133
        controlling for viewpoint-dependent visual experience, 130–131
        level of analysis, 116–117, 117
        local measurements inherently ambiguous, 116
        motion integration across space, 115–123
        motion integration across time, 120–123, 123
        motor experience vs. visual experience, 128–133
        motor expertise, 124–127
        multiple apertures, 118
        perceptual sensitivity to emotional actions, 136–137
        point-light displays, 118–120, 119
        social context and apparent human motion, 134–136, 135
        social-emotional processes, 133–137
        visual expertise, 127–128
Visual virtual reality, perceptual errors, 10, 11
Voluntary action, ideomotor principle, 46

W

Walking
    infants, 285–286
    spatial perception, 188–193
        behavioral ecology, 192–193
        perceiving distances, 192
        psychophysical response compression, 190
        slant perception, 189–191
        surface layout of ground, 188–189

Y

Yaw, 147